"* Runs the batchlog loop on only main cpu, but round-robins the actual work
to each available shard in round-robin fashion.
* Use a gate to guard the work loop instead of a semaphore (better shutdown,
eventually)
* Actually _start_ the batch loop (not done previously)
* Rename logger + add cpu# hint"
Fixes #424
Since replay is a "node global" operation, we should not attempt to
do it in parallel on each shard. It will just overlap/interfere.
We could just run this on cpu 0, but since this _could_ be a
lengthy operation, each timer callback is round-robined across the shards just in case...
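A minimal sketch of that round-robin dispatch, assuming a Seastar-style smp::submit_to(); the batchlog_manager shape and on_timer() name here are illustrative, not the actual code:

    // Sketch only: the timer fires on the main cpu, and each tick's replay
    // work is handed to the next shard in turn. Names are illustrative.
    #include <seastar/core/smp.hh>
    #include <seastar/core/future.hh>

    class batchlog_manager {
        unsigned _next_shard = 0;   // only ever touched from the main cpu
    public:
        seastar::future<> on_timer() {
            auto shard = _next_shard;
            _next_shard = (_next_shard + 1) % seastar::smp::count;
            // run this tick's batchlog replay on the chosen shard
            return seastar::smp::submit_to(shard, [] {
                // ... do the actual replay work here ...
                return seastar::make_ready_future<>();
            });
        }
    };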
Fixes #423
"Changes the "truncated_at" blob contents of system.local table. It now stores
N replay_positions, where N == # shards.
The system.local table schema remains unchanged, and older truncation data
is accepted, though it will for obvious reasons still be insufficient.
Since the data is opaque to the running instance, blob compatibility with
origin should be irrelevant (and we are not really compatible now anyway).
Note that, technically, changing the shard count in between runs could make us
hold on to RP data "longer than required", but this is:
a.) An insignificant amount of data
b.) Data that is valid exactly once: when restarting a failed node and
replaying. The "shards" only refer to the "last run", and after that we don't
care. At worst, we get less-than-fresh data (not all shards manage
to save truncation records before the crash).
It is worth noting (and I've done so in the code) that the system.local table
+ sharding cause some rather silly inefficiencies, since for this (and other
things) we store a value for each shard on each save, which causes a global
flush of the system table, in turn delegated to all cores. So the op is N^2 in
"db complexity".
At some point we should maybe consider whether operations like "drop table" and
"truncate" should not be done at shard level, but at machine level, so it can
coordinate itself. But on the other hand, it is rare and not _very_ expensive either."
We call the conversion function that expects a NUL-terminated string,
but provide a string view, which is not NUL-terminated.
Fix by using the begin/end variant, which doesn't require a NUL terminator.
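As a generic illustration of the bug pattern (not the actual call site), assuming strtol-style parsing for the broken case and boost::lexical_cast's (chars, count) overload for the fix:

    #include <string_view>
    #include <cstdlib>
    #include <boost/lexical_cast.hpp>

    long parse_broken(std::string_view sv) {
        // WRONG: strtol() keeps reading until a NUL byte, but a string view
        // is just a (pointer, length) pair into a larger, non-terminated buffer.
        return std::strtol(sv.data(), nullptr, 10);
    }

    long parse_fixed(std::string_view sv) {
        // OK: the (chars, count) overload only reads the given range, so no
        // NUL terminator is needed.
        return boost::lexical_cast<long>(sv.data(), sv.size());
    }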
Fixes #437.
Fixes #423
* CF ID now maps to a truncation record comprising a set of
per-shard RP:s and a high-mark timestamp
* Retrieving RP:s is done in "bulk"
* Truncation time is calculated as the max across all shards.
This version of the patch will accept "old" truncation data, though the
result of applying it will most likely not be correct (just one shard).
The record is still kept as a blob; the "new" format is indicated by
record size.
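Roughly, the record described above could look like the following; the field and type names are illustrative, and the real data is serialized into an opaque blob in system.local:

    #include <vector>
    #include <cstdint>

    struct replay_position {
        uint64_t segment_id;   // commitlog segment
        uint32_t position;     // offset within the segment
    };

    struct truncation_record {
        // one replay position per shard of the run that did the truncation
        std::vector<replay_position> per_shard_rp;
        // high-mark truncation timestamp, the max across all shards
        int64_t truncated_at;
    };

    // An "old" single-shard record is recognized purely by its (smaller)
    // size; it is accepted but cannot describe more than one shard.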
Must ensure we find a chunk/entry boundary even when run
with a start offset, since file navigation is chunk based.
This was not observed as broken previously because
1.) We did not run with offsets
2.) The exception never reached the caller.
Also make the reader silently ignore empty files.
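A minimal sketch of the boundary handling, assuming fixed-size chunks; the real reader walks actual chunk headers and the constant below is made up:

    #include <cstdint>

    constexpr uint64_t chunk_size = 64 * 1024;   // assumed, not the real value

    // Round a caller-supplied start offset up to the next chunk boundary so
    // parsing always begins at a chunk header, never in the middle of an entry.
    uint64_t align_to_chunk(uint64_t offset) {
        return (offset + chunk_size - 1) / chunk_size * chunk_size;
    }

    // Empty files: a reader handed a zero-length file simply returns
    // without reporting an error.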
type
Allow providing hash/equal etc. for the resulting map, as well
as explicit data_types for the deserialization.
Also allow direct extraction of kv-pairs to an iterator, for more advanced
unpacking.
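A hypothetical sketch of what such an interface could look like; bytes_view and data_type below are stand-ins, and none of these declarations are the actual ones in the tree:

    #include <string_view>
    #include <unordered_map>
    #include <functional>

    using bytes_view = std::string_view;        // stand-in for the real bytes_view
    struct data_type_impl;                      // stand-in
    using data_type = const data_type_impl*;    // stand-in for the real data_type

    // Deserialize into a map with caller-chosen hash/equal and explicit
    // key/value data_types.
    template <typename K, typename V,
              typename Hash = std::hash<K>, typename Equal = std::equal_to<K>>
    std::unordered_map<K, V, Hash, Equal>
    deserialize_map(bytes_view in, data_type key_type, data_type value_type,
                    Hash hash = {}, Equal equal = {});

    // Or extract raw kv-pairs straight into an output iterator for callers
    // that want to do their own unpacking.
    template <typename K, typename V, typename OutputIterator>
    OutputIterator deserialize_pairs(bytes_view in, data_type key_type,
                                     data_type value_type, OutputIterator out);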
Remove the about to be dropped CF from the UUID lookup table before
truncating and stopping it. This closes a race window where new
operations based on the UUID might be initiated after truncate
completes.
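A toy model of that ordering only (the real code goes through the database's lookup tables and seastar futures):

    #include <unordered_map>
    #include <memory>
    #include <string>

    struct column_family {
        void truncate() { /* discard data */ }
        void stop()     { /* shut the CF down */ }
    };

    struct toy_database {
        std::unordered_map<std::string, std::shared_ptr<column_family>> by_uuid;

        void drop(const std::string& uuid) {
            auto cf = by_uuid.at(uuid);
            by_uuid.erase(uuid);   // 1. remove from the lookup table first, so
                                   //    no new operation can resolve the UUID
            cf->truncate();        // 2. then truncate
            cf->stop();            // 3. then stop the column family
        }
    };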
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
Almost the whole file is (accidentally) indented four spaces to the
right for no reason. Fix that up because it's annoying as hell.
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
"This patch series implements support for CQL DROP KEYSPACE and makes the
test_keyspace CQL test in dtest pass:
[penberg@nero urchin-dtest]$ nosetests -v cql_tests.py:TestCQL.keyspace_test
keyspace_test (cql_tests.TestCQL) ... ok
----------------------------------------------------------------------
Ran 1 test in 12.166s
OK
[penberg@nero urchin-dtest]$ nosetests -v cql_tests.py:TestCQL.table_test
table_test (cql_tests.TestCQL) ... ok
----------------------------------------------------------------------
Ran 1 test in 23.841s
OK"
When we query schema keyspaces after we have applied a delete mutation,
the dropped keyspace does not exist in the "after" result set. Fix the
merge_keyspaces() algorithm to take that into account.
Makes merge_keyspaces() really call database::drop_keyspace() when a
keyspace is dropped.
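The diff step, as a sketch over plain name sets (the real merge_keyspaces() works on schema mutation result sets):

    #include <set>
    #include <string>
    #include <vector>

    struct keyspace_diff {
        std::vector<std::string> created;
        std::vector<std::string> dropped;
    };

    keyspace_diff diff_keyspaces(const std::set<std::string>& before,
                                 const std::set<std::string>& after) {
        keyspace_diff d;
        for (const auto& ks : after) {
            if (!before.count(ks)) {
                d.created.push_back(ks);
            }
        }
        // A keyspace present in "before" but missing from "after" was deleted
        // by the mutation; for each one database::drop_keyspace() must be called.
        for (const auto& ks : before) {
            if (!after.count(ks)) {
                d.dropped.push_back(ks);
            }
        }
        return d;
    }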
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
We need to capture the "is_local_only" boolean by value because it's an
argument to the function. Fixes an annoying bug where we failed to update
the schema version because we accidentally passed "true".
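A minimal illustration of the capture bug, with stand-in function names:

    #include <functional>

    void update_schema_version(bool local_only) { /* stand-in */ }

    std::function<void()> make_updater(bool is_local_only) {
        // WRONG: [&] captures the parameter by reference; once make_updater()
        // returns, the reference dangles and the deferred read is garbage
        // (which in practice tended to look like "true"):
        //   return [&] { update_schema_version(is_local_only); };

        // RIGHT: capture the argument by value so the deferred work sees the
        // caller's actual flag.
        return [is_local_only] { update_schema_version(is_local_only); };
    }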
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
From Pekka:
This patch series implements support for CQL DROP TABLE. It uses the newly
added truncate infrastructure under the hood. After this series, the
test_table CQL test in dtest passes:
[penberg@nero urchin-dtest]$ nosetests -v cql_tests.py:TestCQL.table_test
table_test (cql_tests.TestCQL) ... ok
----------------------------------------------------------------------
Ran 1 test in 23.841s
OK
We are generating huge output xml files with the --jenkins flag. Update
the printout level from "all" to "test_suite" to reduce the size and include
the info we need.
Error messages / failed assertions are still printed.
Signed-off-by: Shlomi Livne <shlomi@cloudius-systems.com>
When we query schema tables after we have applied a delete mutation, the
dropped table does not exist in the "after" result set. Fix the
merge_tables() algorithm to take that into account.
Makes merge_tables() really call database::drop_column_family() when
a table is dropped.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
For drop_column_family(), we want to first remove the column_family from
lookup tables and truncate after that to avoid races. Introduce a
truncate() variant that takes keyspace and column_family references.
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
We need to capture the "is_local_only" boolean by value because it's an
argument to the function. Fixes an annoying bug where we failed to update
the schema version because we accidentally passed "true". Spotted by ASan.
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
"The control over backups is now moved to the CF itself, from the storage
service. That allows us to simplify the code (while making it correct) for cases
in which the storage service is not available.
With this change, we no longer need the database config passed down to the
storage_service object. So that patch is reverted."
Currently, we control incremental backups behavior from the storage service.
This creates some very concrete problems, since the storage service is not
always available and initialized.
The solution is to move it to the column family (and to the keyspace so we can
properly propagate the conf file value). When we change this from the API, we will
have to iterate over all of them, changing the value accordingly.
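A sketch of the per-CF flag and the API fan-out, with simplified names:

    #include <memory>
    #include <string>
    #include <unordered_map>

    class column_family {
        bool _incremental_backups = false;
    public:
        void set_incremental_backups(bool v) { _incremental_backups = v; }
        bool incremental_backups_enabled() const { return _incremental_backups; }
    };

    class database {
        std::unordered_map<std::string, std::shared_ptr<column_family>> _column_families;
    public:
        // Called from the REST API: fan the new value out to every column
        // family instead of flipping a single global in the storage service.
        void set_incremental_backups(bool v) {
            for (auto& cf : _column_families) {
                cf.second->set_incremental_backups(v);
            }
        }
    };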
Signed-off-by: Glauber Costa <glommer@scylladb.com>
We will need to change some properties of the keyspace / cf. We need an accessor
that is not marked as const.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
"This series adds the functionality that is required so the nodetool cfstats
would work.
It complete the histogram support for read and write latency and add stub for
functionality that is needed but is not supported yet."
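A rough sketch of per-CF read/write latency tracking; the bucketing and names are illustrative, not the actual estimated-histogram code:

    #include <array>
    #include <cstdint>

    struct latency_histogram {
        std::array<uint64_t, 64> counts{};

        // crude log2 bucketing of a latency sample in microseconds
        void add(uint64_t micros) {
            unsigned bucket = 0;
            while (micros > 1 && bucket < counts.size() - 1) {
                micros >>= 1;
                ++bucket;
            }
            ++counts[bucket];
        }
    };

    struct cf_stats {
        latency_histogram read_latency;    // fed from timed reads
        latency_histogram write_latency;   // fed from timed writes
    };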