Before:
host_id in system.local is empty
After:
host_id in system.local is inserted correctly
This fixes a nasty problem where we always got a new host_id when
booting up a node with existing data.
"Initial implementation/transposition of commit log replay.
* Changes replay position to be shard aware
* Commit log segment IDs now follow basically the same scheme as Origin:
max(previous ID, wall clock time in ms) + shard info (for us)
* SSTables now use the DB definition of replay_position.
* Stores and propagates (compaction) flush replay positions in sstables
* If CL segments are left over from a previous run, they, and existing
sstables, are inspected for a high water mark, and then replayed from
those marks to recover mutations potentially lost in a crash
* Note that a CPU count change is "handled" only in so much that shard matching
is per the _previous_ run's shards, not the current ones.
Known limitations:
* Mutations deserialized from old CL segments are _not_ fully validated
against existing schemas.
* System::truncated_at (not currently used) does not handle sharding as far
as I know, so watermark IDs coming from there are dubious.
* Mutations that fail to apply (invalid, broken) are not placed in blob files
like in Origin. Partly because I am lazy, but also partly because our serial
format differs, and we currently have no tools to do anything useful with it
* No replay filtering (Origin allows a system property to designate a filter
file detailing which keyspaces/CFs to replay). Partly because we have no
system properties.
There is no unit test for the commit log replayer (yet), because I could not
really come up with a good one given the existing test infrastructure (it is
tricky to kill things just "right").
The functionality is verified by manual testing, i.e. running scylla,
building up data (cassandra-stress), then kill -9 + restart.
This of course does not fully validate that the resulting DB is 100%
identical to the one at kill -9 time, but it at least verifies that replay
took place and mutations were applied.
(Note that Origin also lacks validity testing)"
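The replay cut-off described above (only replaying mutations past the flushed high water marks) can be sketched roughly as follows. This is a minimal illustration, not the actual Scylla types; the struct layout and function names are assumptions.

```cpp
#include <cstdint>

// Hypothetical sketch: a replay position is (segment ID, offset within
// segment); a mutation needs replay only if it lies past the position
// already flushed to sstables.
struct replay_position {
    uint64_t segment_id;
    uint32_t pos;
    bool operator<=(const replay_position& o) const {
        return segment_id < o.segment_id ||
               (segment_id == o.segment_id && pos <= o.pos);
    }
};

bool needs_replay(const replay_position& mutation_rp,
                  const replay_position& flushed_high_water_mark) {
    // Anything at or below the mark is already in an sstable.
    return !(mutation_rp <= flushed_high_water_mark);
}
```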
* Make it more like Origin, i.e. based on the wall clock time at app start
* Encode the shard ID in the RP segment ID, to ensure RPs and segment names
are unique per shard
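The ID scheme described above can be sketched as follows. This is a hypothetical illustration: the function names and the shard-bit width are assumptions, not the actual Scylla code.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>

// Base IDs are seeded from max(previous ID + 1, wall clock ms), so a node
// restarting with old segments never reuses or goes below an earlier ID,
// even if the clock moved backwards.
uint64_t next_base_id(uint64_t previous_id) {
    using namespace std::chrono;
    auto now_ms = duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();
    return std::max(previous_id + 1, static_cast<uint64_t>(now_ms));
}

// The shard ID is packed into the low bits, making replay positions and
// segment names unique per shard.
constexpr unsigned shard_bits = 10;                    // assumed width
constexpr uint64_t shard_mask = (1ull << shard_bits) - 1;

uint64_t make_segment_id(uint64_t base_id, unsigned shard) {
    return (base_id << shard_bits) | (shard & shard_mask);
}

unsigned shard_of(uint64_t segment_id) {
    return static_cast<unsigned>(segment_id & shard_mask);
}
```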
We set status to COMPLETED in join_token_ring
set_bootstrap_state(db::system_keyspace::bootstrap_state::COMPLETED)
but
cqlsh 127.0.0.$i -e "SELECT * from system.local;"
shows
bootstrapped -> IN_PROGRESS
The static sstring state_name is the culprit.
To avoid spreading futures all over, we resort to a cache here, the same way
we did for the dc/rack information.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
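The caching approach described above can be sketched roughly like this: the setter updates an in-memory copy alongside the persistent write, so readers get the current state without chaining futures through every call site. The class and method names here are hypothetical, not the actual Scylla code.

```cpp
// Hypothetical sketch of caching the bootstrap state next to the
// persistent store.
enum class bootstrap_state { NEEDS_BOOTSTRAP, IN_PROGRESS, COMPLETED };

class bootstrap_state_cache {
    bootstrap_state _state = bootstrap_state::NEEDS_BOOTSTRAP;
public:
    void set(bootstrap_state s) {
        // The real code would also write system.local here (asynchronously);
        // updating the cache in the same place keeps reads consistent.
        _state = s;
    }
    bootstrap_state get() const { return _state; }
};
```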
"Related to 108
Does not fix the problem (fully at least), but at least:
* Throws exceptions instead of crashing
* Tries to back off slightly (allocate less) if possible
* Logs it
Also recycles segments to keep them from being fragmented by the memory system"
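The back-off described above can be sketched roughly as follows: on allocation failure, retry with a smaller size instead of aborting, and only report an error when even the minimum fails. This is a hypothetical illustration (the real code is asynchronous and throws); names are assumptions.

```cpp
#include <cstddef>
#include <memory>
#include <new>

// Hypothetical sketch: try the requested segment size, halving on failure
// down to min_size; return nullptr (instead of crashing) if all attempts
// fail, so the caller can throw and log.
std::unique_ptr<char[]> allocate_segment(size_t& size, size_t min_size) {
    while (size >= min_size) {
        if (auto p = std::unique_ptr<char[]>(new (std::nothrow) char[size])) {
            return p;     // got a (possibly smaller) segment
        }
        size /= 2;        // back off: ask for less
    }
    return nullptr;       // caller turns this into an exception / log entry
}
```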
I'm not sure what happened. We have the same commented-out code in both .hh
and .cc, which is very confusing when enabling some of it. Let's remove the
duplicated code in .cc and leave only the copy in .hh.
There is another field I missed, index_interval. It is not actually used in
2.1.8, which is why it was easy to miss, but it at least exists.
2.1.8 already has "min_index_interval" and "max_index_interval". If we see a
table that contains index_interval, it will become "min_index_interval".
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
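The legacy-field upgrade described above can be sketched like this. The function name and the string-keyed options map are assumptions for illustration, not the actual schema-loading code.

```cpp
#include <map>
#include <string>

// Hypothetical sketch: when loading a schema row that still carries the
// legacy index_interval field, fold its value into min_index_interval.
void upgrade_index_interval(std::map<std::string, int>& options) {
    auto it = options.find("index_interval");
    if (it != options.end()) {
        // emplace does not overwrite, so an existing min_index_interval wins.
        options.emplace("min_index_interval", it->second);
        options.erase(it);
    }
}
```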
They are multi-cell in Origin. This has nothing to do with 2.2 vs 2.1,
and it is just a plain bug.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>