read_indexes was one of the first functions coded in the sstable read path. At
the time, I made the (now obviously) wrong decision to make it generic enough
that we could specify the number of items to be read, instead of an upper
bound in the file.
The main reason for that was that without the Summary, we have no way to know
where to stop reading, and the Summary is a relatively new addition to the C*
codebase: while I didn't check exactly when it got in, the code is full of
tests for its presence.
That turned out to be totally useless: we always read the indexes with the help
of the Summary. While the Summary is a relatively new addition to C*, it is
present in all versions we aim to support, meaning that reads without the
Summary will never happen in our codebase.
Even if, in the future, we happen to ditch the Summary file, we are very likely
to do so in favor of some other structure that also allows us to work with
precise boundaries in the Index.
The code as it is, however, would not be too big a problem if it weren't
causing us performance problems. But it is: most of the cost comes from the
fact that our underlying read_indexes does not know in advance how many bytes
to read, forcing us to do an element-per-element read.
It's time for a change.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
The CQL tokenizer recognizes "COUNTER" token but the parser rule for
counter type is disabled. This causes users to see the following error
in cqlsh, for example:
CREATE TABLE count (u int PRIMARY KEY, c counter);
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message=" : cannot match to any predicted input... ">
We cannot disable the "COUNTER" token because it's also used in batch
statements. Instead, fix the issue by implementing a stub counter type.
Fixes #195.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Fix cf_prop_defs::apply_to_builder() to actually apply the given
compaction strategy to the schema that is being created.
Fixes #192.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Fix compaction_strategy::type() to throw configuration_exception which
is what Origin throws from CFMetaData.createCompactionStrategy(). This
ensures that the CQL error we send back to the client is the same.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
make_reader_returning() is used by the single-key query path, and is slowed
down by needlessly allocating a vector, which is initialized by copying
the mutation (as initializer_list<> cannot be moved from).
Fix by giving it its own implementation instead of relying on
make_reader_returning_many().
Before:
host_id in system.local is empty
After:
host_id in system.local is inserted correctly
This fixes a nasty problem where we would always get a new host_id when
booting up a node with existing data.
Initial implementation/transposition of commit log replay.
* Changes replay position to be shard aware.
* Commit log segment IDs now follow basically the same scheme as Origin:
max(previous ID, wall clock time in ms), plus shard info (for us).
* SSTables now use the DB definition of replay_position.
* Stores and propagates (compaction) flush replay positions in sstables.
* If CL segments are left over from a previous run, they and existing
sstables are inspected for the high water mark, and the segments are then
replayed from those marks to recover mutations potentially lost in a crash.
* Note that a CPU count change is "handled" only insofar as shard matching
is done against the _previous_ run's shards, not the current ones.
Known limitations:
* Mutations deserialized from old CL segments are _not_ fully validated
against existing schemas.
* System::truncated_at (not currently used) does not handle sharding as far
as I know, so watermark IDs coming from there are dubious.
* Mutations that fail to apply (invalid, broken) are not placed in blob files
like in Origin. Partly because I am lazy, but also partly because our
serialization format differs, and we currently have no tools to do anything
useful with such files.
* No replay filtering (Origin allows a system property to designate a filter
file, detailing which keyspace/cfs to replay). Partly because we have no
system properties.
There is no unit test for the commit log replayer (yet), because I could not
really come up with a good one given the existing test infrastructure (it is
tricky to kill things just "right").
The functionality is verified by manual testing: running scylla, building up
data (cassandra-stress), then kill -9 + restart.
This of course does not fully validate that the resulting DB is 100% identical
to the one at the time of the kill, but it at least verifies that replay took
place and mutations were applied.
(Note that Origin also lacks validity testing.)
* seastar 23f4fae...d1fa2d7 (4):
> memory: provide some statistic for total memory in debug mode
> core/thread: Introduce yield()
> future-util: Move later() into future-util.hh
> tests: make alloc_test work with many --memory sizes
Example run of perf_sstable_index:
64k: 1401296.23 +- 5461.20 partitions / sec (30 runs, 1 concurrent ops)
128k: 1459283.89 +- 6674.87 partitions / sec (30 runs, 1 concurrent ops)
This is about 4% higher, with a 0.45% error margin.
For larger buffers, like 256k, this doesn't yield a consistent gain, and
sometimes yields a loss.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We'll pay the price of having this as a variable now instead of a constexpr,
but that pales in comparison with the rest of the operation.
By paying this cost, we gain the ability to actually specify it during test
runs, making it easy to automate scripts that measure the performance over
various buffer sizes.
I am also providing a new constructor that allows setting the buffer size.
This constructor is private, meaning that only the test class will be able
to use it.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Not doing that would include the smp communication costs in the total cost of
the operation. This is not very significant when comparing one run against
another whose results clearly differ, but the proposed way yields much lower
error figures. So results are generally better.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
As we have discussed recently, the sstable writer can't even handle intra-core
parallelism - it has only one writer thread per core - and for reads, this
affects the final throughput a lot.
We don't want to get rid of it, because in real scenarios intra-core
parallelism will be there, especially for reads. So let's make it a tunable so
we can easily test its effect on the final result.
The iterations are now all sequential, and we will run x parallel invocations
at each iteration.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
$ curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" "http://127.0.0.1:10000/storage_service/gossiping"
btw, the description looks incorrect:
POST /storage_service/gossiping
allows a user to recover a forcibly 'killed' node