Remove the about-to-be-dropped CF from the UUID lookup table before
truncating and stopping it. This closes a race window in which new
operations based on the UUID could be initiated after truncate
completes.
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
From Pekka:
This patch series implements support for CQL DROP TABLE. It uses the newly
added truncate infrastructure under the hood. After this series, the
table_test CQL test in dtest passes:
[penberg@nero urchin-dtest]$ nosetests -v cql_tests.py:TestCQL.table_test
table_test (cql_tests.TestCQL) ... ok
----------------------------------------------------------------------
Ran 1 test in 23.841s
OK
For drop_column_family(), we want to first remove the column_family from
the lookup tables and only truncate afterwards, to avoid races. Introduce a
truncate() variant that takes keyspace and column_family references.
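The ordering above can be sketched as follows. This is a minimal model, not Scylla's actual API: the map layout, `drop_column_family` signature, and the integer UUID stand-in are assumptions for illustration.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical model of the ordering described above: unregister the
// column family from the lookup tables first, so no new operation can
// resolve it by name or UUID, then truncate the instance we still hold.
struct column_family {
    bool truncated = false;
    void truncate() { truncated = true; }
};

struct database {
    std::unordered_map<std::string, std::shared_ptr<column_family>> by_name;
    std::unordered_map<int, std::shared_ptr<column_family>> by_uuid;

    void drop_column_family(const std::string& name, int uuid) {
        auto cf = by_name.at(name);
        // 1. Close the race window: remove from the lookup tables first.
        by_name.erase(name);
        by_uuid.erase(uuid);
        // 2. Truncate via the reference we kept, after lookups fail.
        cf->truncate();
    }
};
```

Because the lookup entries are gone before truncation starts, any operation arriving after this point simply fails to resolve the table instead of racing with the drop.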
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
Currently, we control incremental backup behavior from the storage service.
This creates some very concrete problems, since the storage service is not
always available and initialized.
The solution is to move it to the column family (and to the keyspace, so we
can properly propagate the config file value). When we change this from the
API, we have to iterate over all of them, changing the value accordingly.
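The propagation can be sketched like this; all names here are illustrative stand-ins, not Scylla's actual types:

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical sketch: the flag lives on each column family, so an
// API-level change walks every keyspace and every column family and
// updates them all, rather than consulting the storage service.
struct column_family {
    bool incremental_backups = false;
};

struct keyspace_state {
    std::unordered_map<int, column_family> column_families;
};

struct database {
    std::unordered_map<int, keyspace_state> keyspaces;

    void set_incremental_backups(bool enabled) {
        for (auto& ks : keyspaces) {
            for (auto& cf : ks.second.column_families) {
                cf.second.incremental_backups = enabled;
            }
        }
    }
};
```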
Signed-off-by: Glauber Costa <glommer@scylladb.com>
This patch contains the following changes: in the definition of the read
and write latency histograms, it removes the mask value, so the default
value will be used.
To support gathering the read latency histogram, the query method can no
longer be const, as it modifies the histogram statistics.
The read statistic is sample based and should have no real impact on
performance; if there is an impact, we can always change it in the future
to a lower sampling rate.
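Sample-based recording can be sketched as below. The bucket scheme and the sampling rate are assumptions for illustration, not the actual histogram implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch of sample-based latency recording: only every Nth
// query updates the histogram, which is why the cost stays negligible and
// why the sampling rate can be lowered later if needed.
struct latency_histogram {
    std::vector<uint64_t> buckets = std::vector<uint64_t>(64, 0);
    uint64_t sample_every = 16; // assumed sampling rate
    uint64_t seen = 0;

    // Non-const, as noted above: recording a sample mutates the statistics.
    void record(uint64_t latency_us) {
        if (++seen % sample_every != 0) {
            return; // unsampled queries skip the histogram entirely
        }
        unsigned bucket = 0;
        while (latency_us >>= 1) {
            ++bucket; // log2 bucketing
        }
        ++buckets[bucket];
    }
};
```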
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
Only sstables that arise from flushes are backed up; compacted sstables are
not. Therefore, the place for the backup to happen is right after our flush.
Note that due to our sharded architecture, it is possible that in the face
of a value change some shards will back up sstables while others won't.
This is, in theory, possible to mitigate through a rwlock. However, it
doesn't differ from the situation where all sstables come from a single
shard and the toggle happens in the middle of them.
The code as is guarantees that we'll never partially back up a single
sstable, so that is enough of a guarantee.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
When we convert exceptions into CQL server errors, type information is
not preserved. Therefore, improve exception error messages to make
debugging dtest failures, for example, slightly easier.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
When we write an SSTable, all its components are already in memory, so
load() is too big a hammer.
We still want to keep the write operation separate from the preparation to
read, but in the case of a newly written SSTable, all we need to do is open
the index and data files.
Fixes #300
Signed-off-by: Glauber Costa <glommer@scylladb.com>
WARN level is for messages which should draw the log reader's attention;
journalctl highlights them, for example. Populating a keyspace is a
fairly normal thing, so it should be logged at a lower level.
The race condition happens when two or more shards try to delete the same
partial sstable, so the problem doesn't affect Scylla when it boots with a
single shard.
To fix this, make shard 0 responsible for deleting a partial sstable.
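The fix can be modeled minimally as below; the type and file name are illustrative stand-ins, not the real code:

```cpp
#include <cassert>
#include <set>
#include <string>

// Minimal model of the fix: during population every shard may discover the
// same partial sstable, but only shard 0 acts on it, so the files are
// deleted exactly once and the shards cannot race on the same path.
struct partial_sstable_cleaner {
    std::set<std::string> deleted;
    int delete_calls = 0;

    void maybe_delete(unsigned shard_id, const std::string& toc_name) {
        if (shard_id != 0) {
            return; // non-zero shards ignore the partial sstable
        }
        ++delete_calls;
        deleted.insert(toc_name);
    }
};
```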
Fixes #359.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
If an sstable is irrelevant for a shard, delete it. The deletion will
only complete when all shards agree (either ignore the sstable or
delete it after compaction).
In the event of a compaction failure, run_compaction would be called
more than once for a request, which could result in an underflow of the
pending_compactions stat.
Let's fix that by only decreasing it if compaction succeeded.
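A minimal sketch of the fix, with hypothetical names standing in for the real compaction manager:

```cpp
#include <cassert>
#include <stdexcept>

// Sketch of the fix described above: a request increments the counter once
// when submitted, and only a *successful* run decrements it. A failed run
// leaves the counter alone, so a retried request can no longer drive
// pending_compactions below zero.
struct compaction_manager {
    long pending_compactions = 0;

    void submit() { ++pending_compactions; }

    // May be invoked again for the same request after a failure.
    template <typename Job>
    bool run_compaction(Job job) {
        try {
            job();
        } catch (...) {
            return false; // failure: counter untouched, request stays pending
        }
        --pending_compactions; // only a successful run retires the request
        return true;
    }
};
```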
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
When populating a column family, we will now delete all components
of an sstable that has a temporary TOC file. An sstable with a temporary
TOC file was only partially written, and can be safely deleted because the
respective data is either saved in the commit log or, if the partial
sstable is the result of a compaction, in the sstables that were compacted.
The deletion procedure is guarded against power failure by deleting the
temporary TOC file only after all other components have been deleted.
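The ordering can be sketched as a deletion plan. File names are illustrative; the point is only that the temporary TOC goes last, so a crash mid-deletion leaves the sstable still recognizable as partial and the cleanup can be re-run on the next boot:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the power-failure-safe ordering described above: every regular
// component is deleted first, and removing the temporary TOC last is what
// "commits" the deletion.
std::vector<std::string> plan_partial_sstable_deletion(
        const std::vector<std::string>& components,
        const std::string& temporary_toc) {
    std::vector<std::string> order;
    for (const auto& c : components) {
        if (c != temporary_toc) {
            order.push_back(c); // regular components go first
        }
    }
    order.push_back(temporary_toc); // the temporary TOC is deleted last
    return order;
}
```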
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
When populating a cf, we should also check for an sstable with a
temporary TOC file and act accordingly. For the time being,
we will only refuse to boot. Subsequent work will gather all
files of an sstable with a temporary TOC file and delete them.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
This patch adds a getter for the dirty_memory_region_group to the
database object and adds an occupancy method to column_family that
returns the total occupancy of all the memtables in the column family.
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
Fixes #309.
When scanning, memtable readers detect that the memtable was flushed,
which means it has started to be moved to the cache, and fall back to
reading from the memtable's sstable.
Eventually, what we should do is combine memtable and cache contents
so that as long as data is not evicted we won't do IO. We do not
support scanning in the cache yet though, so there is no point in doing
this now, and it is not trivial.
Deleting sstables is tricky, since they can be shared across shards.
This patchset introduces an sstable deletion agreement table, that records
the agreement of shards to delete an sstable. Sstables are only deleted
after all shards have agreed.
With this, we can change core count across boots.
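The agreement mechanism can be modeled minimally as below. The shard count, the use of the sstable generation as a key, and the API names are assumptions for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <unordered_map>

// Minimal model of the agreement table described above: each shard that
// finds an sstable irrelevant (or finishes compacting it) records its
// vote, and the file may be physically deleted only once every shard has
// agreed.
struct deletion_agreement {
    std::size_t shard_count;
    std::unordered_map<long, std::set<unsigned>> votes; // generation -> shards

    explicit deletion_agreement(std::size_t shards) : shard_count(shards) {}

    // Returns true when the last shard agrees and the sstable may go away.
    bool agree_to_delete(long generation, unsigned shard) {
        auto& v = votes[generation];
        v.insert(shard);
        return v.size() == shard_count;
    }
};
```

Because the set deduplicates votes, a shard agreeing twice (e.g. after a retry) cannot trigger a premature deletion.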
Fixes #53.
All database code was converted to use it when storage_proxy was made
distributed, but then new code was written to use storage_proxy& again.
Passing the distributed<> object is safer, since it can be passed between
shards safely. There was a patch to fix one such case yesterday; I found
one more while converting.
"Refs #293
* Add a commitlog::sync_all_segments that explicitly forces all pending
disk writes.
* Only delete segments from disk IFF they are marked clean. Thus, on partial
shutdown or whatnot, even if the CL is destroyed (destructor runs), disk
files not yet clean vis-à-vis sstables are preserved and replayable.
* Do a sync_all_segments first of all in database::stop.
Exactly what not to stop in main I leave up to others' discretion, or at
least another patch."
From Pawel:
This series makes compaction remove items that are no longer needed:
- expired cells are changed into tombstones
- items covered by higher-level tombstones are removed
- expired tombstones are removed if possible
Fixes #70.
Fixes #71.
Refs #293
IFF one desires to _not_ shut things down cleanly, still running this first
in database::stop will at least ensure that mutations already in CL transit
will end up on disk and be replayable.
Also at seastar-dev: calle/commitlog_flush_v3
(And, yes, this time I _did_ update the remote!)
Refs #262
Commit of the original series was done on a stale version (v2) due to the
author's inability to multitask and update git repos.
v3:
* Removed future<> return value from callbacks. I.e. the flush callback is
now fully synchronous over the actual call.
"Fixes #262
Handles CL disk size exceeding the configured max size by calling flush
handlers for each dirty CF id / high replay_position mark (instead of
uncontrolled deletion, as previously).
* Increased default max disk size to 8GB. Same as Origin/scylla.yaml (so no
real change, but synced).
* Divide the max disk size by cpus (so sum of all shards == max)
* Abstract flush callbacks in CL
* Handler in DB that initiates memtable->sstable writes when called.
Note that the flush request is done "synchronously" in new_segment() (i.e.
when getting a new segment and crossing the threshold). This is however more
or less congruent with Origin, which will do a request-sync in the
corresponding case.
Actually handling the request should, at least in production code, be done
asynchronously, and in DB it is, i.e. we initiate sstable writes. Hopefully
they finish soon, and CL segments will be released (before the next segment
is allocated).
If the flush request does _not_ eventually result in any CFs becoming
clean and segments being released, we could potentially be issuing flushes
repeatedly, but never more often than on every new segment."
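The threshold mechanism can be sketched as follows. This is a toy model: the names, the per-shard division of the configured maximum, and the flush-handler signature are assumptions for illustration, not the actual commitlog code:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

// Rough sketch of the mechanism described above: the configured maximum is
// divided by the number of shards (so the sum across all shards equals the
// max), and when a new segment crosses the per-shard threshold the
// commitlog invokes the registered flush handler for every dirty CF,
// passing the high replay position to flush up to.
struct commitlog_model {
    uint64_t max_disk_size_per_shard;
    uint64_t disk_size = 0;
    std::unordered_map<int, uint64_t> dirty; // cf id -> high replay position
    std::function<void(int, uint64_t)> flush_handler;

    commitlog_model(uint64_t max_total, unsigned shards)
        : max_disk_size_per_shard(max_total / shards) {}

    void new_segment(uint64_t segment_size) {
        disk_size += segment_size;
        if (disk_size <= max_disk_size_per_shard) {
            return;
        }
        // Over the limit: ask the database to flush each dirty CF instead
        // of deleting segments uncontrolled.
        for (auto& d : dirty) {
            flush_handler(d.first, d.second);
        }
    }
};
```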
The reader has a field for the sstable, but we are not initializing it, so
the sstable can be destroyed before we finish our job. It seems to work
here, but transposing this code to the test case crashed it, so at some
point we will crash here as well.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Read-ahead will require that we close input_streams. As part of that
we have to close sstables, and mutation_readers (which encapsulate
input_streams). This is part 1 of a patchset series to do that.
(The overarching goal is to enable read-ahead for sstables, see #244)
Conflicts:
sstables/compaction.cc
Using a lambda for implementing a mutation_reader is nifty, but does not
allow us to add methods.
Switch to a class-based implementation in anticipation of adding a close()
method.
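The motivation can be sketched as below. The types are deliberately simplified stand-ins (the real reader returns futures of mutations); the point is only that a class can expose close() alongside the call operator, which a bare lambda cannot:

```cpp
#include <cassert>
#include <functional>
#include <optional>

// Sketch of the switch described above: a lambda-based reader can only be
// invoked, while a class-based reader keeps the same call shape and can
// additionally expose close() for releasing underlying input_streams.
class mutation_reader {
    std::function<std::optional<int>()> next_; // stand-in for the real signature
    bool closed_ = false;
public:
    explicit mutation_reader(std::function<std::optional<int>()> next)
        : next_(std::move(next)) {}

    // Same call shape the lambda had.
    std::optional<int> operator()() { return next_(); }

    // The method a plain lambda could not grow: release underlying streams.
    void close() { closed_ = true; }

    bool is_closed() const { return closed_; }
};
```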