Commit Graph

558 Commits

Author SHA1 Message Date
Pavel Emelyanov
d755fdc1f4 storage_service: Remove global proxy call
Storage service needs it to calculate the schema version on join. The proxy
at this point can be passed as an argument to the joining helper.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-23 12:55:30 +03:00
Pavel Emelyanov
bc051387c5 storage_service: Remove sys_dist_ks from storage_service dependencies
The service in question is only needed at join_cluster time, so there is
no need to keep it in the dependencies list. This also solves the
dependency trouble -- the distributed keyspace is sharded::start-ed after
it's passed to storage_service initialization.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-23 12:55:30 +03:00
Pavel Emelyanov
5a97ba7121 storage_service: Remove cdc_gen_service from storage_service dependencies
This service is only needed at join time, so it's better to pass it as an
argument to join_cluster(). This also solves the current reversed
dependency issue -- the cdc_gen_svc is now started after it's passed to
storage service initialization.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-23 12:55:30 +03:00
Pavel Emelyanov
282cc070bc storage_service: Merge init-server and join-cluster
Now they always follow one another, both in main and in cql-test-env.
Also, despite the name, init-server does join the cluster when it's
just a normal node restarting, so join-cluster is called when the
cluster is already joined. This merge makes the function name reflect
what it really does.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-23 12:55:30 +03:00
Pavel Emelyanov
b2b86b0c83 main, storage_service: Move wait for gossip to settle
And configure cql-test-env to skip it, so as not to slow down tests in
vain. Another side effect is that cql-test-env now triggers feature
enabling at this point, but that's OK -- they are enabled anyway.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-23 12:55:30 +03:00
Gleb Natapov
c2ef390a52 service: raft: move group0 write path into a separate file
Writing into the group0 raft group on the client side involves locking
the state machine, choosing a state id and checking for its presence
after the operation completes. The code that does this currently resides
in the migration manager, since it is the only user of group0 at the
moment. In the near future we will have more clients for group0 and they
will all need the same logic, so this patch moves it into a separate
class, raft_group0_client, that any future user of group0 can use to
write into it.

Message-Id: <YoYAJwdTdbX+iCUn@scylladb.com>
2022-05-19 17:21:35 +03:00
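
A minimal sketch of the client-side write flow that such a class factors
out (the class name raft_group0_client comes from the commit; everything
else below -- member names, helpers, signatures -- is illustrative, not
the actual Scylla API):

    // Illustrative sketch only; models "lock, pick a state id, submit,
    // then verify the state id was applied".
    #include <atomic>
    #include <cstdint>
    #include <functional>
    #include <mutex>
    #include <stdexcept>
    #include <utility>

    class group0_client_sketch {
        std::mutex _write_mutex;                    // "locking the state machine"
        std::atomic<uint64_t> _next_state_id{1};    // stand-in for a real state-id source
        std::function<void(uint64_t)> _submit;      // performs the group0 write, tagged with a state id
        std::function<bool(uint64_t)> _is_applied;  // checks the state id after the operation completes
    public:
        group0_client_sketch(std::function<void(uint64_t)> submit,
                             std::function<bool(uint64_t)> is_applied)
            : _submit(std::move(submit)), _is_applied(std::move(is_applied)) {}

        void write() {
            std::lock_guard<std::mutex> guard(_write_mutex);
            uint64_t state_id = _next_state_id.fetch_add(1);  // "choosing a state id"
            _submit(state_id);
            if (!_is_applied(state_id)) {                     // "checking for its presence"
                throw std::runtime_error("group0 command was not applied");
            }
        }
    };
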
Eliran Sinvani
c5e5692a01 compaction_manager: Make invoking the empty constructor more explicit
The compaction manager's empty constructor is supposed to be invoked
only in a testing environment; however, it is easy to invoke it by
mistake from production code.
Here we add a more verbose constructor and make the default constructor
private. The verbose constructor needs to be invoked with a
for_testing_tag tag, which ensures that it is only invoked when
intended.
The unit tests were changed according to this new paradigm.

Tests: unit (dev)

Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
2022-05-18 14:57:10 +03:00
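
A minimal sketch of the tag-dispatch idiom this describes (class and
member names here are illustrative, not the exact Scylla declarations):

    // Illustrative sketch of making the test-only constructor explicit via a tag.
    class compaction_manager_sketch {
    public:
        struct for_testing_tag {};                      // must be spelled out at the call site

        explicit compaction_manager_sketch(for_testing_tag) {}  // test-only constructor

    private:
        compaction_manager_sketch() = default;          // default constructor is now private
    };

    // In a unit test:
    //     compaction_manager_sketch cm{compaction_manager_sketch::for_testing_tag{}};
    // In production code, `compaction_manager_sketch cm;` no longer compiles.
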
Botond Dénes
1f0d3d57eb Merge 'Convert (almost) all uses of flat_mutation_reader_assertions to v2' from Michael Livshin
"Almost" because 2 uses of the v1 asserter remain (as they are deliberate).

Closes #10518

* github.com:scylladb/scylla:
  tests: remove obsolete utility functions
  tests: less trivial flat_reader_assertions{,_v2} conversions
  tests: trivial flat_reader_assertions{,_v2} conversions
  flat_mutation_reader_assertions_v2: improve range tombstone support
2022-05-13 08:04:20 +03:00
Tomasz Grabiec
f703e8ded5 Merge 'New failure detector for Raft' from Kamil Braun
We introduce a new service that performs failure detection by periodically
pinging endpoints. The set of pinged endpoints can be dynamically extended
and shrunk. To learn about liveness of endpoints, a user of the service
registers a listener and chooses a threshold -- a duration of time which
has to pass since the last successful ping in order to mark an endpoint
as dead. When an endpoint responds, it is immediately marked as alive.

Endpoints are identified using abstract integer identifiers.
The method of performing a ping is a dependency of the service, provided
by the user through the `pinger` interface. The implementation of `pinger`
is responsible for translating the abstract endpoint IDs to 'real'
addresses. For example, a production implementation may map endpoint IDs
to IP addresses and use TCP/IP to perform the ping, while a test/simulation
implementation may use a simulated network that also operates on
abstract identifiers.

Similarly, the method of measuring time is a dependency provided by the
user using the `clock` interface. The service operates on abstract time
intervals and timepoints. So, for example, in a production
implementation time can be measured using a stopwatch, while in
test/simulation we can use a logical clock.

The service distributes work across different shards. When an endpoint
is added to the set of detected endpoints, the service will choose the
shard with the smallest number of workers and create a worker on that
shard that is responsible for periodically pinging this endpoint and
sending notifications to listeners.

We modify the randomized nemesis test to use the new service.
The service is sharded, but for simplicity of implementation, in the test
we implement RPCs and sleeps by routing the requests to shard 0, where the
logical timers and the network live. RPCs use the existing simulated
network, and the clock uses the existing logical timers.

We also integrate the service with production code. There,
`pinger` is implemented using the existing GOSSIP_ECHO verb. The gossip
echo message requires the node's gossip generation number. We handle this
by embedding the pinger implementation inside `gossiper`, and making
`gossiper` update the generation number (cached inside the pinger class)
periodically.

Production `clock` is a simple implementation which uses
`std::chrono::steady_clock` and `seastar::sleep_until` underneath.
Translating `steady_clock` durations to `direct_fd::clock` durations happens
by taking the number of ticks.

We connect the group 0 raft server rpc implementation to the new service,
so that when servers are added to or removed from the group 0 configuration,
corresponding endpoints are added to the direct failure detector service.
Thus the set of detected endpoints will be equal to the group 0 configuration.

On each shard, we register a listener for the service.
The listener maintains a set of live addresses; on mark_alive it adds a
server to the set and on mark_dead it removes it. This set is then used
to implement the `raft::failure_detector` interface, which consists of the
`is_alive()` function that simply checks set membership.
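
A minimal sketch of the abstract dependencies described above (the
interface and method names follow this description; the exact Scylla
declarations may differ):

    // Illustrative sketch of the service's user-provided dependencies.
    #include <seastar/core/future.hh>
    #include <cstdint>

    namespace direct_failure_detector_sketch {

    using endpoint_id = uint64_t;   // endpoints are abstract integer identifiers

    // Translates abstract endpoint ids to 'real' addresses and performs the
    // ping (IP + GOSSIP_ECHO in production, a simulated network in tests).
    struct pinger {
        virtual seastar::future<bool> ping(endpoint_id) = 0;
    protected:
        ~pinger() = default;        // not owned or destroyed by the failure detector
    };

    // Abstract time source: steady_clock in production, a logical clock in tests.
    struct clock {
        using timepoint_t = int64_t;
        virtual timepoint_t now() noexcept = 0;
        virtual seastar::future<> sleep_until(timepoint_t) = 0;  // aborts via sleep_aborted
    protected:
        ~clock() = default;
    };

    // Registered by the user together with a threshold; notified by the service.
    struct listener {
        virtual seastar::future<> mark_alive(endpoint_id) = 0;
        virtual seastar::future<> mark_dead(endpoint_id) = 0;
    protected:
        ~listener() = default;
    };

    } // namespace direct_failure_detector_sketch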

---

v6:
- remove `_alive_start_index`. Instead, keep a map of `bool`s to track liveness of each endpoint. See the code for details (`listeners_liveness` struct and its usage in `ping_fiber()`, `notify_fiber()`, `add/remove_worker`, `add/remove_listener`). The diff is easy to read: f617aeca62..d4b225437c

v5:
- renamed `rpc` to `pinger`
- replaced `bool` with `enum class endpoint_update` (with values `added` and `removed`) in `_endpoint_updates`
- replaced `unsigned` with `shard_id`
- fixed definition of `threshold(size_t n)` (it didn't use `n`, but `_alive_start`; fortunately all uses passed `_alive_start` as `n` so the bug wouldn't affect the behavior)
- improve `_num_workers` assertions
- signal `_alive_start_changed` only when `_alive_start` indeed changed
- renamed `{_marked}_alive_start` to `{_marked}_alive_start_index`

v4:
- rearrange ping_fiber(). Remove the loop at the end of the big `while`
  which was timing out listeners (after the sleep). Instead:
    - rely on the loop before the sleep for timing out listeners
    - before calling ping(), check if there is a timed out listener,
      if so abandon the ping, immediately proceed to the timing-out-listeners
      loop, and then immediately proceed to the next iteration of the big `while`
      (without sleeping)
- inline send_mark_dead() and send_mark_alive(); each was used in
  exactly one place after the rearrangement
- when marking alive, instead of repeatedly doing `--_alive_start` and
  signalling the condition variable, just do `_alive_start = 0` and signal
  the condition variable once
- fix the condition for stopping `endpoint_worker::notify_fiber()`: before, it was
  `_as.abort_requested()`, now it is `_as.abort_requested() && _alive_start == _fd._listeners.size()`.
  Indeed, we want to wait for the stopping code (`destroy_worker()`)
  to set `_alive_start = _fd._listeners.size()` before `notify_fiber()`
  finishes so `notify_fiber()` can send the final `mark_dead`
  notifications for this endpoint. There was a race before where
  `notify_fiber()` could finish before it sent those notifications
  (because it finished as soon as it noticed `_as.abort_requested()`)
- fix some waits in the unit test; they depended on particular ordering
  of tasks by the Scylla reactor, the test could sometimes hang in debug
  mode which randomizes task order
- fix `rpc::ping()` in randomized_nemesis_test so it doesn't give an
  exceptional discarded future in some cases

v3:
- fix a race in failure_detector::stop(): we must first wait for the _destroy_subscriptions fiber to finish on all shards; only then can we set _impl to nullptr on any shard
- invoke_abortable_on was moved from randomized_nemesis_test to raft/helpers
- add a unit test (second patch)

v2:
- rename `direct_fd` namespace to `direct_failure_detector`
- move gms/direct_failure_detector.{cc,hh} to direct_failure_detector/failure_detector.{cc,hh}
- cleaned license comments
- removed _mark_queue for sending notifications from ping_fiber() to notify_fiber(). Instead:
    - _listeners is now a boost::container::flat_multimap (previously it was std::multimap)
    - _alive_start is no longer an iterator to _listeners, but an index (size_t)
    - _mark_queue was replaced with a second index to _listeners, _marked_alive_start, together with a condition variable, _alive_start_changed
    - ping_fiber() signals _alive_start_changed when it changes _alive_start
    - notify_fiber() waits on _alive_start_changed. When it wakes up, it compares _marked_alive_start to _alive_start, sends notifications to listeners appropriately, and updates _marked_alive_start
- replacing _mark_queue with index + condition variable allowed some better exception specifications: send_mark_alive and send_mark_dead are now noexcept, ping_fiber() is specified to not return exceptional futures other than sleep_aborted which can only happen when we destroy the worker (previously, ping_fiber() could silently stop due to exception happening when we insert to _mark_queue - it could probably only be bad_alloc, but still)
- _shard_workers is now unordered_map<endpoint_id, endpoint_worker> instead of unordered_map<endpoint_id, unique_ptr<endpoint_worker>> (after learning how to construct map values in place - using either `emplace`+`forward_as_tuple` or `try_emplace`)
- `failure_detector::impl::add_endpoint` now gives strong exception guarantee: if an exception is thrown, no state changes
- same for `failure_detector::impl::remove_endpoint`
- `failure_detector::impl::create_worker` now uses `on_internal_error` when it detects that there is a worker for this endpoint already - thanks to the strong exception guarantees of `add_endpoint` and `remove_endpoint` this should never happen
- comment at _num_workers definition why we maintain this statistic (to pick a shard with smallest number of workers)
- remove unnecessary `if (_as.abort_requested())` in `ping_fiber()`
- in ping_fiber(), after a ping, we send notifications to listeners which we know will time-out before the next ping starts. Before, we would sleep until the threshold is actually passed by the clock. Now we send it immediately - we know ahead of time that the listener will time-out and we can notify it immediately.
- due to above, comment at `register_listener` was adjusted, with the following note added: "Note: the `mark_dead` notification may be sent earlier if we know ahead of time that `threshold` will be crossed before the next `ping()` can start."
- `register_listener` now takes a `listener&`, not `listener*`
- at `register_listener` comment why we allow different thresholds (second to last paragraph)
- at `register_listener` mention that listeners can be registered on any shard (last paragraph)
- add protected destructors to rpc, clock, listener, and mention that these objects are not owned/destroyed by `failure_detector`.
- replaced _endpoint_queue (seastar::queue<pair<endpoint_id, bool>>) with unordered_map<endpoint_id, bool> + condition variable. When the user calls add/remove_endpoint, an entry is inserted into this map, or the existing entry is updated, and the condition variable is signaled. update_endpoint_fiber() waits on the condition variable, performs the add/remove operation, and removes entries from this map (see the sketch after this list). Compared to the previous solution:
    - the new solution has at most one entry for a given endpoint, so the number of entries is bounded by the number of different endpoints (so in the main Scylla use case, by the number of different nodes that ever exist); the previous solution could in theory have a backlog of unprocessed events, with updates for a given endpoint appearing multiple times in the queue at once
    - when the add/remove operation fails in update_endpoint_fiber(), we don't remove the entry from the map so the operation can be retried later. Previously we would always remove the entry from the queue so it doesn't grow too big in presence of failures.
    - when the add/remove operation fails in update_endpoint_fiber(), we sleep for 10*ping_period before retrying. Note that this codepath should not be reached in practice, it can basically only happen on bad_alloc
- commented that `clock::sleep_until` should signalize aborts using `sleep_aborted`
- `clock::now()` is `noexcept`
- `add/remove_endpoint` can be called after `stop()`, they just won't do anything in that case. Reason: next item
- in randomized_nemesis_test, stop the failure detector before the raft server (it was the other way around before), so it stops using the server's RPC before the server is aborted. Before, the log was spammed with errors from the failure detector because it was getting gate_closed_exceptions from the RPC when the server was stopped. A side effect is that the raft server may continue adding/removing endpoints when the failure detector is stopped, which is fine due to the item above
- randomized_nemesis_test: direct_fd_clock::sleep_until translates abort_requested_exception to sleep_aborted (so sleep_until satisfies the interface specification)
- message/rpc_protocol_impl: send_message_abortable: if abort_source::subscribe returns null, immediately throw abort_requested_exception (before we would send the message out and not react to an abort if it happened before we were called)
- rebase
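
A minimal sketch of the map-plus-condition-variable pattern used to replace
_endpoint_queue (member and helper names below are illustrative, not the
actual failure_detector members):

    // Illustrative sketch: coalesce add/remove requests in a map drained by one fiber.
    #include <seastar/core/condition-variable.hh>
    #include <seastar/core/coroutine.hh>
    #include <seastar/core/sleep.hh>
    #include <chrono>
    #include <cstdint>
    #include <unordered_map>

    struct endpoint_update_fiber_sketch {
        using endpoint_id = uint64_t;
        std::unordered_map<endpoint_id, bool> _pending;   // true = add, false = remove
        seastar::condition_variable _pending_changed;
        bool _stopped = false;

        // Called by users; at most one pending entry per endpoint, newest request wins.
        void request(endpoint_id ep, bool add) {
            _pending[ep] = add;
            _pending_changed.signal();
        }

        seastar::future<> update_endpoint_fiber() {
            while (!_stopped) {
                co_await _pending_changed.wait([this] { return _stopped || !_pending.empty(); });
                while (!_stopped && !_pending.empty()) {
                    auto [ep, add] = *_pending.begin();
                    try {
                        co_await apply(ep, add);   // create or destroy the worker (assumed helper)
                        _pending.erase(ep);        // erased only after the operation succeeds
                    } catch (...) {
                        co_await seastar::sleep(std::chrono::seconds(1));  // keep the entry, retry later
                    }
                }
            }
        }

        seastar::future<> apply(endpoint_id, bool) { co_return; }  // stand-in for the real operation
    };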

Closes #10437

* github.com:scylladb/scylla:
  service: raft: remove `raft_gossip_failure_detector`
  service: raft: raft_group_registry: use direct failure detector notifications for raft server liveness
  service: raft: add/remove direct failure detector endpoints on group 0 configuration changes
  main: start direct failure detector service
  messaging_service: abortable version of `send_gossip_echo`
  message: abortable version of `send_message`
  test: raft: randomized_nemesis_test: remove old failure_detector
  test: raft: randomized_nemesis_test: use `direct_failure_detector::failure_detector`
  test: raft: randomized_nemesis_test: ping all shards on each tick
  test: unit test for new failure detector service
  direct_failure_detector: introduce new failure detector service
2022-05-11 14:46:27 +02:00
Michael Livshin
1e690e6773 tests: remove obsolete utility functions
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
2022-05-10 22:10:40 +03:00
Michael Livshin
864882253a tests: less trivial flat_reader_assertions{,_v2} conversions
Dealing with the handful of tests that check range tombstones in
interesting ways and need more than search-and-replace.

Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
2022-05-10 22:10:40 +03:00
Michael Livshin
3cc2343775 tests: trivial flat_reader_assertions{,_v2} conversions
(Which entails temporarily cut-and-pasting some utility functions)

Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
2022-05-10 22:10:40 +03:00
Michael Livshin
51cf84e8c9 flat_mutation_reader_assertions_v2: improve range tombstone support
* Track the active range tombstone.

* Add `may_produce_tombstones()`.

* Flesh out `produces_row_with_key()`.

* Add more trace logs.

Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
2022-05-10 22:10:40 +03:00
Raphael S. Carvalho
48e3117ebc compaction: move propagate_replacement() into private namespace
propagate_replacement() is an internal function that shouldn't be in
the public interface. No one besides a unit test for incremental
compaction needs it. In the future, I want to revisit the incremental
compaction unit test to stop using it and rely only on public
interfaces.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20220506171647.81063-1-raphaelsc@scylladb.com>
2022-05-09 16:49:50 +03:00
Kamil Braun
7e4bb68061 service: raft: add/remove direct failure detector endpoints on group 0 configuration changes
We connect the group 0 raft server rpc implementation to the new direct
failure detector service, so that when servers are added to or removed
from the group 0 configuration, corresponding endpoints are added to the
direct failure detector service. Thus the set of detected endpoints will
be equal to the group 0 configuration.

This causes the failure detector service to start pinging endpoints,
but no listeners are registered yet. The following commit changes that.
2022-05-09 15:31:19 +02:00
Kamil Braun
38f65e5a2e main: start direct failure detector service
We add the new direct failure detector to the list of services started
in the Scylla process.

To start the service, we need an implementation of `pinger` and `clock`.

`pinger` is implemented using the existing GOSSIP_ECHO verb. The gossip echo
message requires the node's gossip generation number. We handle this by
embedding the pinger implementation inside `gossiper`, and making
`gossiper` update the generation number (cached inside the pinger class)
periodically.

`clock` is a simple implementation which uses `std::chrono::steady_clock`
and `seastar::sleep_until` underneath. Translating `steady_clock`
durations to `direct_failure_detector::clock` durations happens by taking
the number of ticks.

The service is currently not used, just initialized; no endpoints are
added and no listeners are registered yet, but the following commits
change that.
2022-05-09 13:14:42 +02:00
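
A minimal sketch of such a clock adapter, assuming the abstract tick-based
clock interface from the failure detector description (the adapter below is
illustrative; it uses seastar::sleep to approximate the sleep_until
behavior described above):

    // Illustrative sketch: adapt std::chrono::steady_clock to an abstract
    // tick-based clock by taking the raw tick count of the time point.
    #include <seastar/core/future.hh>
    #include <seastar/core/sleep.hh>
    #include <chrono>
    #include <cstdint>

    struct direct_fd_clock_sketch {
        using timepoint_t = int64_t;   // abstract "ticks"

        static std::chrono::steady_clock::time_point to_steady(timepoint_t t) {
            return std::chrono::steady_clock::time_point(std::chrono::steady_clock::duration(t));
        }

        // Translate steady_clock time into ticks by taking the number of ticks.
        timepoint_t now() noexcept {
            return std::chrono::steady_clock::now().time_since_epoch().count();
        }

        seastar::future<> sleep_until(timepoint_t tp) {
            auto target = to_steady(tp);
            auto current = std::chrono::steady_clock::now();
            if (target <= current) {
                return seastar::make_ready_future<>();
            }
            return seastar::sleep(target - current);
        }
    };
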
Botond Dénes
98f3d516a2 test/lib/random_schema: add a simpler overload for fixed partition count
Some tests want to generate a fixed number of random partitions; make
their life easier.
2022-05-05 14:33:37 +03:00
Pavel Emelyanov
e80adbade3 code: De-globalize gossiper
No code uses the global gossiper instance, so it can be removed. The main
and cql-test-env code now have their own real local instances.

This change also requires adding the debug:: pointer and fixing
scylla-gdb.py to find the correct global location.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-03 10:57:40 +03:00
Pavel Emelyanov
b0544ba7bd cql test env: Keep gossiper reference on board
The reference is already available at env initialization, but it's
not kept on the env instance itself. It will be used by the next patch.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-03 10:57:40 +03:00
Pavel Emelyanov
4bea0b7491 code: Use gossiper reference where possible
Some places in the code have a function-local gossiper reference but
continue to use the global instance. Re-use the local reference (it's
going to become a sharded<> instance soon).

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-03 10:57:40 +03:00
Pavel Emelyanov
38c77d0d85 snitch: Keep gossiper reference
The reference is put on the snitch_ptr because that is the sharded<>
object and because the gossiper reference is the same for all snitch
drivers. Also, having a driver get the gossiper from snitch_ptr will look
simpler than getting it from any base class.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-03 10:57:40 +03:00
Pavel Emelyanov
2d32c47d0d main, cql_test_env: Start snitch later
The snitch depends on the gossiper and the system keyspace, so it needs
to be started after those two.

fixes #10402

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-05-03 10:57:32 +03:00
Avi Kivity
5169ce40ef Merge 'loading_cache: force minimum size of unprivileged ' from Piotr Grabowski
This series enforces a minimum size of the unprivileged section when
performing the `shrink()` operation.

When the cache is shrunk, we still drop entries first from the
unprivileged section (as before this commit); however, if this section is
already small (smaller than `max_size / 2`), we drop entries from the
privileged section instead.

This is necessary, as before this change the unprivileged section could
be starved. For example, if the cache could store at most 50 entries and
there were 49 entries in the privileged section, then after adding 5
entries (which would go to the unprivileged section) 4 of them would get
evicted and only the 5th one would stay. This caused problems with BATCH
statements, where all prepared statements in the batch have to stay in
the cache at the same time for the batch to execute correctly.

To correctly check whether the unprivileged section might get too small
after dropping an entry, the `_current_size` variable, which tracked the
overall size of the cache, is split into two variables:
`_unprivileged_section_size` and `_privileged_section_size`, tracking the
section sizes separately.

New tests are added to check this new behavior and the bookkeeping of the
section sizes. A test is added that sets up a CQL environment with a very
small prepared statement cache, reproduces the issue in #10440, and
stresses the cache.

Fixes #10440.

Closes #10456

* github.com:scylladb/scylla:
  loading_cache_test: test prepared stmts cache
  loading_cache: force minimum size of unprivileged
  loading_cache: extract dropping entries to lambdas
  loading_cache: separately track size of sections
  loading_cache: fix typo in 'privileged'
2022-05-01 19:36:35 +03:00
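
A minimal sketch of the eviction rule described above, assuming per-section
size counters (names below are illustrative, not the actual loading_cache
members):

    // Illustrative sketch of the shrink policy.
    #include <cstddef>
    #include <functional>

    struct section_sizes {
        size_t max_size;
        size_t privileged_section_size;     // tracked separately instead of a
        size_t unprivileged_section_size;   // single combined _current_size
    };

    // The evict_* callbacks drop one LRU entry from their section and update the sizes.
    inline void shrink(section_sizes& s,
                       const std::function<void()>& evict_unprivileged_lru,
                       const std::function<void()>& evict_privileged_lru) {
        while (s.privileged_section_size + s.unprivileged_section_size > s.max_size) {
            // Drop from the unprivileged section while it is still "big"; once
            // it falls below max_size / 2, evict from the privileged section
            // instead, so the unprivileged section cannot be starved.
            if (s.unprivileged_section_size >= s.max_size / 2) {
                evict_unprivileged_lru();
            } else {
                evict_privileged_lru();
            }
        }
    }
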
Piotr Grabowski
6537dc6126 loading_cache_test: test prepared stmts cache
Add a new test that sets up a CQL environment with a very small prepared
statements cache. The test reproduces a scenario described in #10440,
where the privileged section of the prepared statement cache gets large
and could possibly starve the unprivileged section, making it
impossible to execute BATCH statements. Additionally, at the end of the
test, prepared statements/"simulated batches" with prepared statements
are executed a random number of times, stressing the cache.

To create a CQL environment with a small prepared statement cache,
cql_test_config is extended to allow setting a custom memory_config value.
2022-04-29 19:22:55 +02:00
Botond Dénes
272da51f80 test/lib/mutation_source_test: remove upgrade_to_v2 tests
We don't have any upgrade_to_v2() left in production code, so no need to
keep testing it. Removing it from this test paves the way for removing
it for good (not in this series).
2022-04-28 14:12:24 +03:00
Pavel Emelyanov
828a951886 snitch: Remove create_snitch/stop_snitch
After the previous patches, both create_snitch() and stop_snitch() now
look like the classical sharded service start/stop sequence. Finally,
both helpers can be removed, and the remaining users can just call
start/stop on locally obtained sharded references.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-04-11 14:43:25 +03:00
Pavel Emelyanov
633746b87d snitch: Make config-based construction of all drivers
Currently snitch drivers register themselves in the class-registry with
all sorts of construction options. All those different constructors are
in fact "config options".

When the snitch later declares its dependencies (gossiper and system
keyspace), that would require patching all these registrations, which is
very inconvenient.

This patch introduces the snitch_config struct and replaces all the
snitch constructors with the snitch_driver(snitch_config cfg) one.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-04-11 14:38:34 +03:00
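
A minimal sketch of folding the constructor variations into a single config
struct (the fields shown are illustrative; only the snitch_config name and
the snitch_driver(snitch_config cfg) shape come from the commit message):

    // Illustrative sketch of "one config struct instead of many constructors".
    #include <string>
    #include <utility>

    struct snitch_config_sketch {
        // Each former constructor overload becomes a field with a default,
        // so callers only set what they care about.
        std::string name = "SimpleSnitch";
        std::string properties_file_name;
        bool listen_on_broadcast_address = false;
    };

    class snitch_driver_sketch {
    public:
        explicit snitch_driver_sketch(snitch_config_sketch cfg) : _cfg(std::move(cfg)) {}
    private:
        snitch_config_sketch _cfg;
    };

    // Usage:
    //     snitch_driver_sketch s(snitch_config_sketch{.name = "GossipingPropertyFileSnitch"});
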
Botond Dénes
ad075b27a4 test/lib/mutation_diff: s/colordiff/diff/
Colordiff is problematic when writing the diff into a file for later
examination. Use regular diff instead. One can still get syntax
highlighting by writing the output into a `.diff` file (which most editors
will recognize).

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20220407080944.324108-1-bdenes@scylladb.com>
2022-04-07 12:07:24 +03:00
Michael Livshin
da7c7fd3dc delete code of the unused normalizing_reader class
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
Message-Id: <20220406161107.2376568-3-michael.livshin@scylladb.com>
2022-04-07 09:29:41 +03:00
Pavel Emelyanov
9066224cf4 table: Don't export compaction manager reference
There's a public call on replica::table to get back the compaction
manager reference. It's not actually needed. The users of the call are
the distributed loader, which already has the database at hand, and a
test that creates its own instance of the compaction manager for its
testing tables and thus also has it available.

tests: unit(dev)

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20220406171351.3050-1-xemul@scylladb.com>
2022-04-07 09:27:45 +03:00
Benny Halevy
b3e2bbe5bd test: random_mutation_generator: make more interesting range tombstones
Also include singular-prefix and semi-bounded range tombstones.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-04-04 17:34:49 +03:00
Avi Kivity
af07519928 Merge "Remove reader from mutations v1" from Botond
"
First migrate all users to the v2 variant, all of which are tests.
However, to be able to properly migrate all tests off it, a v2 variant
of the restricted reader is also needed. All restricted reader users are
then migrated to the freshly introduced v2 variant and the v1 variant is
removed.
Users include:
* replica::table::make_reader_v2()
* streaming_virtual_table::as_mutation_source()
* sstables::make_reader()
* tests

This allows us to get rid of a bunch of conversions on the query path,
which was mostly v2 already.

With a few tests we did kick the can down the road by wrapping the v2
reader in `downgrade_to_v1()`, but this series is long enough already.

Tests: unit(dev), unit(boost/flat_mutation_reader_test:debug)
"

* 'remove-reader-from-mutations-v1/v3' of https://github.com/denesb/scylla:
  readers: remove now unused v1 reader from mutations
  test: move away from v1 reader from mutations
  test/boost/mutation_reader_test: use fragment_scatterer
  test/boost/mutation_fragment_test: extract fragment_scatterer into a separate hh
  test/boost: mutation_fragment_test: refactor fragment_scatterer
  readers: remove now unused v1 reversing reader
  test/boost/flat_mutation_reader_test: convert to v2
  frozen_mutation: fragment_and_freeze(): convert to v2
  frozen_mutation: coroutinize fragment_and_freeze()
  readers: migrate away from v1 reversing reader
  db/virtual_table: use v2 variant of reversing and forwardable readers
  replica/table: use v2 variant of reversing reader
  sstables/sstable: remove unused make_crawling_reader_v1()
  sstables/sstable: remove make_reader_v1()
  readers: add v2 variant of reversing reader
  readers/reversing: remove FIXME
  readers: reader from mutations: use mutation's own schema when slicing
2022-03-31 13:29:11 +03:00
Botond Dénes
fd69add579 test: move away from v1 reader from mutations
Use the v2 variant instead.
2022-03-31 10:36:23 +03:00
Botond Dénes
feecc19d5b test/boost/mutation_fragment_test: extract fragment_scatterer into a separate hh
We want to use it in test/boost/mutation_reader_test.cc too.
2022-03-31 10:25:45 +03:00
Botond Dénes
b029bd3db7 tree: remove mutation_reader.hh include
In most files it was unused. We should have moved these changes to the
patch which moved the last interesting reader out of mutation_reader.hh
(and added the corresponding new header include), but it's probably not
worth the effort.
Some other files still relied on mutation_reader.hh to provide the reader
concurrency semaphore and some other miscellaneous reader-related
definitions.
2022-03-30 15:42:51 +03:00
Botond Dénes
d0ea895671 readers: move multishard reader & friends to reader/multishard.cc
Since the multishard reader family weighs more than 1K SLOC, it gets
its own .cc file.
2022-03-30 15:42:51 +03:00
Botond Dénes
f8015d9c26 readers: move combined reader into readers/
Since the combined reader family weighs more than 1K SLOC, it gets its
own .cc file.
2022-03-30 15:42:51 +03:00
Avi Kivity
3c2271af52 Merge "De-globalize system keyspace local cache" from Pavel E
"
There's a static global sharded<local_cache> variable in system keyspace
that keeps several bits on board which other subsystems need to get from
the system keyspace, but want to have in a future<>-less manner.

Some time ago the system_keyspace became a classical sharded<> service
that references the qctx and the local cache. This set removes the global
cache variable and makes its instances unique_ptr's sitting on the
system_keyspace instances.

The biggest obstacle on this route is the local_host_id that was cached,
but at some point was copied onto db::config to simplify getting the value
from the sstables manager (there's no system keyspace at hand there at all).
So the first thing this set does is remove the cached host_id and make
all the users get it from db::config.

(There's a BUG with the config copy of the host id -- replacing a node
 doesn't update it. This set also fixes that.)

De-globalizing the cache is a prerequisite for untangling the
snitch-messaging-gossiper-system_keyspace knot. Currently the cache is
initialized too late -- when main calls system_keyspace.start() on all
shards -- but before that time messaging should already have access to it
in order to store its preferred IP mappings.

tests: unit(dev), dtest.simple_boot_shutdown(dev)
"

* 'br-trade-local-hostid-for-global-cache' of https://github.com/xemul/scylla:
  system_keyspace: Make set_local_host_id non-static
  system_keyspace: Make load_local_host_id non-static
  system_keyspace: Remove global cache instance
  system_keyspace: Make it peering service
  system_keyspace,snitch: Make load_dc_rack_info non-static
  system_keyspace,cdc,storage_service: Make bootstrap manipulations non-static
  system_keyspace: Coroutinize set_bootstrap_state
  gossiper: Add system keyspace dependency
  cdc_generation_service: Add system keyspace dependency
  system_keyspace: Remove local host id from local cache
  storage_service: Update config.host_id on replace
  storage_service: Indentation fix after previous patch
  storage_service: Coroutinize prepare_replacement_info()
  system_distributed_keyspace: Indentation fix after previous patch
  code,system_keyspace: Relax system_keyspace::load_local_host_id() usage
  code,system_keyspace: Remove system_keyspace::get_local_host_id()
2022-03-27 17:19:24 +03:00
Pavel Emelyanov
3da5f6ac30 gossiper: Add system keyspace dependency
The gossiper reads peer features from the system keyspace. Also, the
snitch code needs the system keyspace, and since it currently gets all its
dependencies from the gossiper (this will be fixed some day, but not now),
it will do the same for the system keyspace. Thus it's worth having an
explicit gossiper->system_keyspace dependency.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-25 15:08:13 +03:00
Pavel Emelyanov
62417577ab cdc_generation_service: Add system keyspace dependency
The service uses the system keyspace to, e.g., manage the generation id;
thus it depends on the system_keyspace instance and deserves an
explicit reference.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-25 13:39:32 +03:00
Raphael S. Carvalho
c25d8f6770 compaction: Move decision of garbage collection from strategy to task type
For compaction to be able to purge expired data, like tombstones, an
sstable set snapshot is set in the compaction descriptor.

That's a decision that belongs to the task type. For example, all regular
compactions enable GC, whereas scrub doesn't, for safety reasons.

The problem is that the decision is being made by every instantiation
of compaction_descriptor in the strategies, which is both unnecessary
and adds lots of boilerplate to the code, making it hard to
understand and work with.

As the sstable set snapshot is an implementation detail, a new method
is being added to compaction_descriptor to make the intention
clearer, making the interface easier to understand.

can_purge_tombstones, previously used only by the rewrite task, is being
reused for communicating the GC intention to task::compact_sstables().

The boilerplate was a pain when adding a new strategy method for
the ongoing work on cleanup, described in issue #10097.
Another benefit is that we'll now only create a set snapshot when
compaction will actually run. Before, it could happen that the snapshot
was discarded if the compaction attempt had to be postponed, which
is a waste of cpu cycles.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2022-03-21 12:14:04 -03:00
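
A minimal sketch of moving the GC decision to the task level (the method
and member names below are illustrative, not the exact
compaction_descriptor interface; only can_purge_tombstones is named in the
commit message):

    // Illustrative sketch; not the actual Scylla compaction_descriptor.
    struct sstable_set_snapshot_sketch {};   // stand-in for the real sstable set snapshot

    struct compaction_descriptor_sketch {
        bool can_purge_tombstones = false;

        // Called by the task type right before compaction actually runs, so
        // strategies no longer create (and possibly waste) a snapshot when
        // they build descriptors.
        void enable_garbage_collection(sstable_set_snapshot_sketch snapshot) {
            _snapshot = snapshot;
            can_purge_tombstones = true;
        }

    private:
        sstable_set_snapshot_sketch _snapshot;
    };

    // A regular-compaction task would enable GC, while a scrub task would not:
    //     if (task_allows_gc) { descriptor.enable_garbage_collection(make_snapshot()); }
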
Tomasz Grabiec
cd5fec8a23 Merge "raft: re-advertise gossiper features when raft feature support changes" from Pavel
Prior to the change, the `USES_RAFT_CLUSTER_MANAGEMENT` feature wasn't
properly advertised upon enabling the `SUPPORTS_RAFT_CLUSTER_MANAGEMENT`
raft feature.

This small series consists of 3 parts to fix the handling of supported
features for raft:
1. Move subscription for `SUPPORTS_RAFT_CLUSTER_MANAGEMENT` to the
   `raft_group_registry`.
2. Update `system.local#supported_features` directly in the
   `feature_service::support()` method.
3. Re-advertise gossiper state for `SUPPORTED_FEATURES` gossiper
   value in the support callback within `raft_group_registry`.

* manmanson/track_supported_set_recalculation_v7:
  raft: re-advertise gossiper features when raft feature support changes
  raft: move tracking `SUPPORTS_RAFT_CLUSTER_MANAGEMENT` feature to raft
  gms: feature_service: update `system.local#supported_features` when feature support changes
  test: cql_test_env: enable features in a `seastar::thread`
2022-03-18 12:34:17 +01:00
Pavel Solodovnikov
011942dcce raft: move tracking SUPPORTS_RAFT_CLUSTER_MANAGEMENT feature to raft
Move the listener from feature service to the `raft_group_registry`.

Enable support for the `USES_RAFT_CLUSTER_MANAGEMENT`
feature when `SUPPORTS_RAFT_CLUSTER_MANAGEMENT` is enabled.

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
2022-03-18 09:54:25 +03:00
Pavel Solodovnikov
724ea7aa38 test: cql_test_env: enable features in a seastar::thread
Each feature can have an associated `when_enabled` callback
registered, which is assumed to run in the thread context,
so wrap the `enable()` call in a seastar thread.

Tests: unit(dev)

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
2022-03-18 09:54:15 +03:00
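
A minimal sketch of wrapping the call, assuming a feature_service-like
enable() (the helper below is illustrative; only the thread-context
requirement comes from the commit message -- seastar::async is the usual
way to run code in a seastar thread):

    // Illustrative sketch: run enable() inside a seastar thread so that any
    // when_enabled callbacks can safely block (e.g. call .get() on futures).
    #include <seastar/core/future.hh>
    #include <seastar/core/thread.hh>
    #include <string>
    #include <vector>

    template <typename FeatureService>
    seastar::future<> enable_features_in_thread(FeatureService& fs,
                                                std::vector<std::string> features) {
        return seastar::async([&fs, features = std::move(features)] {
            for (const auto& name : features) {
                fs.enable(name);   // may invoke when_enabled callbacks assuming thread context
            }
        });
    }
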
Avi Kivity
77f330f393 Merge "readers: retire v1 generating reader implementation" from Botond
"
The generating reader is a reader which converts a functor returning
mutation fragments to a mutation reader.
We currently have 2 generating reader implementations: one operating
with a v1 functor and one with a v2 one. This patch-set converts the v1
functor based one to a v2 reader, by adapting the v1 functor to a v2
functor and reusing the v2 reader implementation.
Tests are also added to both variants.

Tests: unit(dev)
"

* 'generating-reader-v2/v1' of https://github.com/denesb/scylla:
  test/boost: mutation_reader_test: add tests for generating reader
  test: export squash_mutations() into lib/mutation_source_test.hh
  readers: add next partition adaptor
  readers: implement generating_reader from v1 generator via adaptor
  readers: upgrade_to_v2(): reimplement in terms of upgrading_consumer
  readers: add upgrading_consumer
  readers: generating_reader: use noncopyable_function<>
  readers: merge generating.hh into generating_v2.hh
  readers/generating.hh: return v2 reader from make_generating_reader()
2022-03-17 12:19:25 +02:00
Botond Dénes
4243cd395d test: export squash_mutations() into lib/mutation_source_test.hh
This method used to be a static one in
boost/flat_mutation_reader_test.cc. Turns out it is useful for other
tests based on the mutation source test suite, so move it into the
header of the latter to make it accessible.
2022-03-17 08:08:01 +02:00
Pavel Emelyanov
009c449cc3 replica: Push sharded<system_keyspace> down to parse_system_tables
The method needs to call merge_schema(), which will need a system keyspace
instance at hand. The parse_system_tables method is a boot-time one, so
pushing the main-local instance through it is fine.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-16 14:24:40 +03:00
Pavel Emelyanov
f18a80852e storage_service: Keep sharded<system_keyspace> reference
The storage service uses the system keyspace heavily on boot.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-16 14:24:40 +03:00
Pavel Emelyanov
42e733bdf7 migration_manager: Keep sharded<system_keyspace> reference
The main target here is system_keyspace::update_schema_version(), which
is currently static but needs to have a system_keyspace as its "this".
The migration manager is one of the places that call that method
indirectly.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-16 14:24:40 +03:00
Pavel Emelyanov
b761e558b5 system_keyspace: Keep local_cache reference on board
For now it's a reference, but all users of the cache will eventually be
switched to using system_keyspace.

In cql-test-env the cache now starts earlier than it did before, but
that's OK -- it just initializes empty instances.
In main the cache starts at the same time as before this patch.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-03-16 13:57:59 +03:00