Tablet streaming involves asynchronous RPCs to other replicas which transfer writes. We want side effects from streaming to occur only within the migration stage in which the streaming was started. This is currently not guaranteed on failure: when the streaming master fails (e.g. due to an RPC failing), some streaming work may still be alive somewhere (e.g. an RPC on the wire) and produce side effects at some point later.
This PR implements tracking of all operations involved in streaming which may have side-effects, which allows the topology change coordinator to fence them and wait for them to complete if they were already admitted.
The tracking and fencing is implemented using global "sessions", created for the streaming of a single tablet. A session is globally identified by a UUID. The identifier is assigned by the topology change coordinator and stored in system.tablets. Sessions are created and closed based on group0 state (tablet metadata) by the barrier command sent to each replica, which we already do on transitions between stages. Also, each barrier waits for sessions which have been closed to be drained.
The barrier is blocked only if some session has work which was left behind by unsuccessful streaming, in which case it should not be blocked for long, because the streaming process frequently checks whether the guard was left behind and stops if it was.
This tracking mechanism is fault-tolerant: the session id is stored in group0, so the coordinator can make progress on failover. The barriers guarantee that the session exists on all replicas, and that it will be closed on all replicas.
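The lifecycle described above can be sketched single-threaded, with hypothetical names (the real implementation is asynchronous and distributed via group0 and RPC):

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Per-tablet streaming session on one replica (illustrative sketch; the
// real sessions are created and closed by the group0 barrier command).
struct session {
    bool closed = false; // set when the barrier closes the session
    int in_flight = 0;   // streaming operations already admitted
};

class session_registry {
    std::map<std::string, session> _sessions; // keyed by session UUID
public:
    void create(const std::string& id) { _sessions[id]; }

    // Streaming work must enter the session before producing side effects;
    // entering a closed (fenced) session fails.
    void enter(const std::string& id) {
        auto it = _sessions.find(id);
        if (it == _sessions.end() || it->second.closed) {
            throw std::runtime_error("session fenced: " + id);
        }
        ++it->second.in_flight;
    }
    void leave(const std::string& id) { --_sessions.at(id).in_flight; }

    // The barrier first closes the session, then waits until all
    // already-admitted work has drained.
    void close(const std::string& id) { _sessions.at(id).closed = true; }
    bool drained(const std::string& id) const { return _sessions.at(id).in_flight == 0; }
};
```

The key property is the order of operations at the barrier: close first (so no new work is admitted), then wait for the in-flight count to reach zero.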
Closes scylladb/scylladb#15847
* github.com:scylladb/scylladb:
test: tablets: Add test for failed streaming being fenced away
error_injection: Introduce poll_for_message()
error_injection: Make is_enabled() public
api: Add API to kill connection to a particular host
range_streamer: Do not block topology change barriers around streaming
range_streamer, tablets: Do not keep token metadata around streaming
tablets: Fail gracefully when migrating tablet has no pending replica
storage_service, api: Add API to disable tablet balancing
storage_service, api: Add API to migrate a tablet
storage_service, raft topology: Run streaming under session topology guard
storage_service, tablets: Use session to guard tablet streaming
tablets: Add per-tablet session id field to tablet metadata
service: range_streamer: Propagate topology_guard to receivers
streaming: Always close the rpc::sink
storage_service: Introduce concept of a topology_guard
storage_service: Introduce session concept
tablets: Fix topology_metadata_guard holding on to the old erm
docs: Document the topology_guard mechanism
This series adds preparation patches for file stream tablet implementation in enterprise branch. It minimizes the differences between those two branches.
Closes scylladb/scylladb#16297
* github.com:scylladb/scylladb:
messaging_service: Introduce STREAM_BLOB and TABLET_STREAM_FILES verb
compaction_group_for_token: Handle minimum_token and maximum_token token
serializer: Add temporary_buffer support
cql_test_env: Allow messaging_service to start listen
Fixes #16298
The adjusted buffer position calculation in buffer_position(), introduced in https://github.com/scylladb/scylladb/pull/15494,
was in fact broken. It calculated (as before) a "position" based on the difference between
the underlying buffer size and the ostream size() (i.e. avail), then adjusted this according to
sector overhead rules.
However, the underlying buffer size is in unadjusted terms, while the ostream is adjusted.
The two cannot be compared as such, which means the "positions" we get here are borked.
Luckily for us (sarcasm), the position calculation in the replayer made a similar error,
in that it adjusts the current position up by one sector overhead too much, leading to
more or less the same erroneous results at both ends.
However, when/if one needs to adjust the segment file format further, one might very
quickly realize that this does not work well if, say, one needs to be able to safely
read some extra bytes before the first chunk in a segment. Conversely, trying to adjust
this also exposes a latent potential error in the skip mechanism, manifesting here.
The issue is fixed by keeping track of the initial ostream capacity for the segment buffer
and using it for the position calculation, and, in the case of the replayer, by moving the
file position adjustment from read_data() to a subroutine (shared with skipping) that better
handles the data stream position vs. file position adjustment. In implementation terms, we
first increment the "data stream" pos (i.e. the position in the data without overhead),
then adjust for overhead.
Also fix replayer::skip so that we handle the buffer/pos relation correctly now.
Added a test for the initial entry position, as well as data replay consistency for single
entry_writer paths.
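The unadjusted-vs-adjusted confusion can be illustrated with a small conversion helper. Sector size and overhead here are assumed values, not the real commitlog constants; the point is that a "data stream" position (payload bytes only) lives in a different domain than a file position (payload plus per-sector overhead), and the two must never be compared directly:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t sector_size = 512;     // assumed sector size
constexpr uint64_t sector_overhead = 8;   // assumed overhead per sector
constexpr uint64_t payload_per_sector = sector_size - sector_overhead;

// First take the data-stream position, then adjust for overhead,
// mirroring the order described in the fix: every full sector of
// payload that precedes the position adds one sector's overhead.
uint64_t file_position(uint64_t data_pos) {
    uint64_t full_sectors = data_pos / payload_per_sector;
    return data_pos + full_sectors * sector_overhead;
}
```

Mixing the domains (e.g. subtracting an adjusted size from an unadjusted one) yields "positions" that are off by a multiple of the overhead, which is exactly the class of bug described above.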
Fixes #16301
The calculation of whether data may be added is based on position vs. the size of the incoming data.
However, it did not take sector overhead into account, which led to writing past the allowed
segment end, which in turn also leads to metrics overflows.
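A sketch of the corrected admission check, with illustrative constants and names (not the actual commitlog code): the bytes a write occupies on disk include the per-sector overhead, so the overhead-adjusted size is what must fit in the remaining segment space:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t sector_size = 512;    // assumed
constexpr uint64_t sector_overhead = 8;  // assumed
constexpr uint64_t payload_per_sector = sector_size - sector_overhead;

// On-disk footprint of a payload: round up to whole sectors, each
// carrying its overhead.
uint64_t on_disk_size(uint64_t payload_bytes) {
    uint64_t sectors = (payload_bytes + payload_per_sector - 1) / payload_per_sector;
    return payload_bytes + sectors * sector_overhead;
}

// Corrected check: compare the adjusted size, not the raw payload size,
// against what remains of the segment.
bool can_add(uint64_t file_pos, uint64_t segment_size, uint64_t payload_bytes) {
    return file_pos + on_disk_size(payload_bytes) <= segment_size;
}
```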
Closes scylladb/scylladb#16302
* github.com:scylladb/scylladb:
commitlog: Fix allocation size check to take sector overhead into account.
commitlog: Fix commitlog_segment::buffer_position() calculation and replay counterpart
The test test_many_partitions is very slow, as it tests a slow scan over
a lot of partitions. This was observed to time out on the slower ARM
machines, making the test flaky. To prevent this, create an
extra-patient cql connection with a 10-minute timeout for the scan
itself.
Fixes: #16145
Closes scylladb/scylladb#16303
This commit updates the configuration for
ScyllaDB documentation so that:
- 5.4 is the latest version.
- 5.4 is removed from the list of unstable versions.
It must be merged when ScyllaDB 5.4 is released.
No backport is required.
Closes scylladb/scylladb#16308
Fixes #16301
The calculation of whether data may be added is based on position vs. the size of the incoming data.
However, it did not take sector overhead into account, which led to writing past the allowed
segment end, which in turn also leads to metrics overflows.
Fixes #16298
The adjusted buffer position calculation in buffer_position(), introduced in #15494,
was in fact broken. It calculated (as before) a "position" based on the difference between
the underlying buffer size and the ostream size() (i.e. avail), then adjusted this according to
sector overhead rules.
However, the underlying buffer size is in unadjusted terms, while the ostream is adjusted.
The two cannot be compared as such, which means the "positions" we get here are borked.
Luckily for us (sarcasm), the position calculation in the replayer made a similar error,
in that it adjusts the current position up by one sector overhead too much, leading to
more or less the same erroneous results at both ends.
However, when/if one needs to adjust the segment file format further, one might very
quickly realize that this does not work well if, say, one needs to be able to safely
read some extra bytes before the first chunk in a segment. Conversely, trying to adjust
this also exposes a latent potential error in the skip mechanism, manifesting here.
The issue is fixed by keeping track of the initial ostream capacity for the segment buffer
and using it for the position calculation, and, in the case of the replayer, by moving the
file position adjustment from read_data() to a subroutine (shared with skipping) that better
handles the data stream position vs. file position adjustment. In implementation terms, we
first increment the "data stream" pos (i.e. the position in the data without overhead),
then adjust for overhead.
Also fix replayer::skip so that we handle the buffer/pos relation correctly now.
Added a test for the initial entry position, as well as data replay consistency for single
entry_writer paths.
The following error was seen:
[shard 0] table - compaction_group_for_token: compaction_group idx=0 range=(minimum
token,-6917529027641081857] does not contain token=minimum token
Since minimum_token and maximum_token will never be inside a token range, skip
the in-range check for them.
This is needed for rpc calls to work in the tests. With this patch,
messaging_service still does not listen by default, as before.
This is useful for the tablet file stream test.
This patch fixes an error check and speeds up swap allocation.
The following patches are included:
- scylla_swap_setup: run error check before allocating swap
avoid creating the swapfile before running the error check
- scylla_swap_setup: use fallocate on ext4
this increases swap allocation speed on ext4
Closes scylladb/scylladb#12668
* github.com:scylladb/scylladb:
scylla_swap_setup: use fallocate on ext4
scylla_swap_setup: run error check before allocating swap
The current implementation starts in sstables_manager, which gets the deletion function from storage; that function, in turn, should atomically do sst.unlink() over a list of sstables (the s3 driver is still not atomic, though: #13567).
This PR generalizes the atomic deletion into an sstables_manager method and removes the atomic deleter function that nobody liked when it was introduced (#13562).
Closes scylladb/scylladb#16290
* github.com:scylladb/scylladb:
sstables/storage: Drop atomic deleter
sstables/storage: Reimplement atomic deletion in sstables_manager
sstables/storage: Add prepare/complete scaffold for atomic deletion
Streaming was keeping an effective_replication_map_ptr around for the whole
process, which blocks topology change barriers.
This inhibits progress of the tablet load balancer or concurrent
migrations, resulting in worse performance.
Fix by switching to the most recent erm on sharder calls;
multishard_writer calls shard_of() for each new partition.
A better approach would be to switch immediately when the topology version
changes, but this is left for later.
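The change in lifetime can be sketched like this (hypothetical names and toy sharding; the real code resolves the erm through the table on each sharder call):

```cpp
#include <cassert>
#include <memory>

// Stand-in for effective_replication_map; versions change with topology.
struct erm { int version; };

// Stand-in for the table's current effective replication map.
std::shared_ptr<erm> current_erm = std::make_shared<erm>(erm{1});

// After the fix: take a short-lived reference per shard_of() call instead
// of pinning one erm for the whole streaming run, so topology barriers
// are not blocked by a long-lived reference.
int shard_of(int token) {
    auto e = current_erm;   // most recent erm; released when e goes out of scope
    (void)e->version;
    return token % 2;       // toy sharding, illustrative only
}
```

Because nothing outside the call holds the pointer, replacing `current_erm` on a topology change leaves the old map with no streaming-side owners.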
Load balancing needs to be disabled before making a series of manual
migrations so that we don't fight with the load balancer.
It will also be used in tests to ensure tablets stick to expected locations.
Prevents stale streaming operations from running beyond the topology
operation they were started in. After the session field is cleared, or
changed to something else, the old topology_guard used by streaming is
interrupted and fenced, and the next barrier will join with any
remaining work.
rpc::sink::~sink aborts if the sink was not closed. There is a try/catch
clause which ensures that close() is called, but there was code after the
sink is created which is not covered by it. Move sink construction past
that code.
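The fix can be sketched with a toy sink (illustrative; the real rpc::sink is templated and asynchronous):

```cpp
#include <cassert>
#include <stdexcept>

struct sink {
    bool closed = false;
    void close() { closed = true; }
    ~sink() { assert(closed); }  // mirrors rpc::sink aborting if not closed
};

void may_throw(bool do_throw) {
    if (do_throw) throw std::runtime_error("boom");
}

bool stream(bool fail_early) {
    may_throw(fail_early);  // work that used to run after sink creation
    sink s;                 // construct the sink only now...
    try {
        // ...so the try/catch covers the sink's whole lifetime
        may_throw(false);   // the actual streaming work
        s.close();
        return true;
    } catch (...) {
        s.close();
        throw;
    }
}
```

Before the fix, an exception thrown between sink construction and the try block would destroy an unclosed sink and abort; moving construction past that code removes the uncovered window.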
A write to a base table can generate one or more writes to a materialized
view. The write to RF base replicas needs to cause writes to RF view
replicas. Our MV implementation, based on Cassandra's implementation,
does this via "pairing": each one of the base replicas involved in this
write sends each view update to exactly one view replica. The function
get_view_natural_endpoint() tells a base replica which of the view
replicas it should send the update to.
The standard pairing is based on the ring order: the first owner of the
base token sends to the first owner of the view token, the second to the
second, and so on. However, the existing code also uses an optimization
we call self-pairing: if a single node is both a base replica and a view
replica, the pairing is modified so this node sends the update to itself.
This patch *disables* the self-pairing optimization in keyspaces that
use tablets:
The self-pairing optimization can cause the pairing to change after
token ranges are moved between nodes, so it can break base-view consistency
in some edge cases, leading to "ghost rows". With tablets, these range
movements become even more frequent - they can happen even if the
cluster doesn't grow. This is why we want to solve this problem for tablets.
For backward compatibility and to avoid sudden inconsistencies emerging
during upgrades, we decided to continue using the self-pairing optimization
for keyspaces that are *not* using tablets (i.e., using vnodes).
Currently, we don't introduce a "CREATE MATERIALIZED VIEW" option to
override these defaults - i.e., we don't provide a way to disable
self-pairing with vnodes or to enable them with tablets. We could introduce
such a schema flag later, if we ever want to (and I'm not sure we want to).
It's important to note that, in the tablets case, this change has
implications for when view updates become synchronous.
For example:
* If we have 3 nodes and RF=3, with the self-pairing optimization each
node is paired with itself, the view update is local, and is
implicitly synchronous (without requiring a "synchronous_updates"
flag).
* In the same setup with tablets, without the self-pairing optimization
(due to this patch), this is not guaranteed. Some view updates may not
be synchronous, i.e., the base write will not wait for the view
write. If the user really wants synchronous updates, they should
be requested explicitly, with the "synchronous_updates" view option.
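Ring-order pairing without self-pairing can be sketched as follows (illustrative; the real get_view_natural_endpoint() also accounts for datacenters and the replication strategy):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Ring-order pairing: the i-th base replica (in ring order) sends its
// view update to the i-th view replica (in ring order), regardless of
// whether the sender is itself a view replica.
std::string view_natural_endpoint(const std::vector<std::string>& base_replicas,
                                  const std::vector<std::string>& view_replicas,
                                  const std::string& me) {
    for (std::size_t i = 0; i < base_replicas.size() && i < view_replicas.size(); ++i) {
        if (base_replicas[i] == me) {
            return view_replicas[i]; // pair by ring position, never by identity
        }
    }
    return {}; // not a paired base replica
}
```

Because the pairing depends only on ring position, moving a token range does not silently re-route updates the way identity-based self-pairing can.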
Fixes #16260.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#16272
run_on_existing_tables() is not used at all, and we have two of them.
In this change, let's drop them.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#16304
Add CAP_PERFMON to AmbientCapabilities in capabilities.conf, to enable
perf_event based stall detector in Seastar.
However, on Debian/Ubuntu CAP_PERFMON does not work for non-root users,
because the distro sets kernel.perf_event_paranoid=4, which disallows all
non-root access.
(On Debian it is kernel.perf_event_paranoid=3.)
So we need to configure kernel.perf_event_paranoid=2 on these distros.
see: https://askubuntu.com/questions/1400874/what-does-perf-paranoia-level-four-do
Also, CAP_PERFMON is only available on Linux 5.8+; older kernels do not
have this capability.
To support older kernel environments such as CentOS 7, we need to configure
kernel.perf_event_paranoid=1 to allow non-root access even without
the capability.
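A sysctl drop-in matching the values above might look like this (file path illustrative):

```
# /etc/sysctl.d/99-scylla-perf-event.conf (illustrative path)
# 2: allow non-root perf_event access with CAP_PERFMON (Debian/Ubuntu, kernel >= 5.8)
# 1: allow non-root perf_event access even without CAP_PERFMON (older kernels, e.g. CentOS 7)
kernel.perf_event_paranoid = 2
```

Applied with `sysctl --system` or on the next boot.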
Fixes #15743
Closes scylladb/scylladb#16070
* seastar 55a821524d...ae8449e04f (22):
> Revert "Merge 'reactor: merge pollfn on I/O paths into a single one' from Kefu Chai"
> http/exception: Make unexpected status message more informative
> docker: bump up to clang {16,17} and gcc {12,13}
> doc: replace space (0xA0) in unicode with ASCII space (0x20)
> file: Remove reactor class friendship
> dpdk: adjust for poller in internal namespace
> http: make_requests accept optional expected
> Merge 'future: future_state_base: assert owner shard in debug mode' from Benny Halevy
> Merge 'Keep pollers in internal/poll.hh' from Pavel Emelyanov
> sharded: access instance promise only on instance shard
> test: network_interface_test: add tests for format and parse
> Merge 'reactor: merge pollfn on I/O paths into a single one' from Kefu Chai
> reactor/scheduling_group: Handle at_destroy queue special in init_new_scheduling_group_key etc (v2)
> reactor: set local_engine after it is fully initialized
> build: do not error when running into GCC BZ-1017852
> Merge 'shared_future: make available() immediate after set_value()' from Piotr Dulikowski
> tls: add format_as(subject_alt_name_type) overload
> tls: linearize small packets on send
> shared_future: remove unused #include
> shared_ptr: add fmt::formatter for shared_ptr types
> lazy: add fmt::formatter for lazy_eval types
> Merge 'file: use unbuffered generator in experimental_list_directory()' from Kefu Chai
Closes scylladb/scylladb#16274
This PR removes the incorrect information that the ScyllaDB Rust Driver is not GA.
In addition, it replaces "Scylla" with "ScyllaDB".
Fixes https://github.com/scylladb/scylladb/issues/16178
(nobackport)
Closes scylladb/scylladb#16199
* github.com:scylladb/scylladb:
doc: remove the "preview" label from Rust driver
doc: fix Rust Driver release information
Fixes some more typos found by running codespell on the code. This commit covers the more user-visible errors.
Refs: https://github.com/scylladb/scylladb/issues/16255
Closes scylladb/scylladb#16289
* github.com:scylladb/scylladb:
Update unified/build_unified.sh
Update main.cc
Update dist/common/scripts/scylla-housekeeping
Typos: fix typos in code
utils::fb_utilities is a global in-memory registry for storing and retrieving broadcast_address and broadcast_rpc_address.
As part of the effort to get rid of all global state, this series gets rid of fb_utilities.
This will eventually allow e.g. cql_test_env to instantiate multiple scylla server nodes, each serving on its own address.
Closes scylladb/scylladb#16250
* github.com:scylladb/scylladb:
treewide: get rid of now unused fb_utilities
tracing: use locator::topology rather than fb_utilities
streaming: use locator::topology rather than fb_utilities
raft: use locator::topology/messaging rather than fb_utilities
storage_service: use locator::topology rather than fb_utilities
storage_proxy: use locator::topology rather than fb_utilities
service_level_controller: use locator::topology rather than fb_utilities
misc_services: use locator::topology rather than fb_utilities
migration_manager: use messaging rather than fb_utilities
forward_service: use messaging rather than fb_utilities
messaging_service: accept broadcast_addr in config rather than via fb_utilities
messaging_service: move listen_address and port getters inline
test: manual: modernize message test
table: use gossiper rather than fb_utilities
repair: use locator::topology rather than fb_utilities
dht/range_streamer: use locator::topology rather than fb_utilities
db/view: use locator::topology rather than fb_utilities
database: use locator::topology rather than fb_utilities
db/system_keyspace: use topology via db rather than fb_utilities
db/system_keyspace: save_local_info: get broadcast addresses from caller
db/hints/manager: use locator::topology rather than fb_utilities
db/consistency_level: use locator::topology rather than fb_utilities
api: use locator::topology rather than fb_utilities
alternator: ttl: use locator::topology rather than fb_utilities
gossiper: use locator::topology rather than fb_utilities
gossiper: add get_this_endpoint_state_ptr
test: lib: cql_test_env: pass broadcast_address in cql_test_config
init: get_seeds_from_db_config: accept broadcast_address
locator: replication strategies: use locator::topology rather than fb_utilities
locator: topology: add helpers to retrieve this host_id and address
snitch: pass broadcast_address in snitch_config
snitch: add optional get_broadcast_address method
locator: ec2_multi_region_snitch: keep local public address as member
ec2_multi_region_snitch: reindent load_config
ec2_multi_region_snitch: coroutinize load_config
ec2_snitch: reindent load_config
ec2_snitch: coroutinize load_config
thrift: thrift_validation: use std::numeric_limits rather than fb_utilities
install-dependencies.sh includes a list of pip packages that the build
environment requires.
This functionality was added in
729d0feef0; however, the actual use of the
list is missing, and instead the `pip install` commands are hard-coded
into the logic.
This change completes the transition to the pip-packages list.
It also modifies the `pip_packages` array to include a
constraint (if needed) for every package.
Fixes #16269
Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
Closes scylladb/scylladb#16282
Get my_address via query_processor->proxy and pass it
to all static make_ methods, instead of getting it from
utils::fb_utilities.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
This commit adds a short paragraph to the Raft
page to explain how to enable consistent
topology updates with Raft - an experimental
feature in version 5.4.
The paragraph should satisfy the requirements
for version 5.4. The Raft page will be
rewritten in the next release when consistent
topology changes with Raft become GA.
Fixes https://github.com/scylladb/scylladb/issues/15080
Requires backport to branch-5.4.
Closes scylladb/scylladb#16273
Right now the atomic deletion is called on the manager, but it gets the
actual deletion function from storage and off-loads the deletion to it.
This patch makes the manager fully responsible for the deletion by
implementing the sequence
auto ctx = storage.prepare()
for sst in sstables:
sst.unlink()
storage.complete(ctx)
Storage implementations provide the prepare/complete methods. The
filesystem storage does it via deletion log and the s3 storage is still
not atomic :(
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The atomic deletion is going to look like
auto ctx = storage.prepare()
for sst in sstables:
sst.unlink()
storage.complete(ctx)
and this patch prepares the class storage for that by extending it with
prepare and complete methods. The opaque ctx object is also introduced here.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
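The prepare/unlink/complete sequence from the messages above can be sketched with toy types (hypothetical; the real storage backends are asynchronous and the filesystem backend's deletion log is persistent):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

struct deletion_ctx {
    std::vector<std::string> names; // sstables covered by this deletion
};

struct storage {
    std::vector<std::string> log;   // stands in for the deletion log
    deletion_ctx prepare(std::vector<std::string> names) {
        log.push_back("pending");   // record intent before any unlink
        return deletion_ctx{std::move(names)};
    }
    void unlink(const std::string& name) { log.push_back("unlink " + name); }
    void complete(deletion_ctx) { log.push_back("done"); }
};

// The manager-side sequence: the manager drives the protocol, the
// storage backend supplies the prepare() and complete() hooks.
void delete_atomically(storage& s, std::vector<std::string> ssts) {
    auto ctx = s.prepare(std::move(ssts));
    for (auto& sst : ctx.names) {
        s.unlink(sst);
    }
    s.complete(std::move(ctx));
}
```

The intent recorded in prepare() is what lets a crash between unlinks be rolled forward on recovery; for s3 storage that guarantee does not hold yet.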
topology_guard is used to track distributed operations started by the
topology change coordinator, e.g. streaming, to make sure that those
operations have no side effects after the topology change coordinator
has moved to the next migration stage, whether of a given tablet or of
the whole ring.
topology_guard can be sent over the wire in the form of
frozen_topology_guard and materialized again on the other
side. While in transit, it doesn't block the coordinator barriers; but
if the coordinator has moved on, materialization of the guard will
fail, so tracking safety is preserved.
In this patch, the guard implementation is based on tracking work
under global sessions, but the concept is flexible and other
mechanisms can be used without changing user code.
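A single-node sketch of the freeze/materialize cycle (hypothetical names; the real guard is asynchronous and spans nodes):

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

// What travels over the wire: just the session id, which does not by
// itself block any barrier.
using frozen_topology_guard = std::string;

class session_manager {
    std::map<std::string, int> _open; // session id -> admitted work count
public:
    // RAII guard for one unit of tracked work.
    struct topology_guard {
        session_manager* mgr = nullptr;
        std::string id;
        topology_guard(session_manager* m, std::string i) : mgr(m), id(std::move(i)) {}
        topology_guard(topology_guard&& o) noexcept : mgr(o.mgr), id(std::move(o.id)) {
            o.mgr = nullptr;
        }
        ~topology_guard() {
            if (!mgr) return;
            auto it = mgr->_open.find(id);
            if (it != mgr->_open.end()) --it->second;
        }
    };

    void create(const std::string& id) { _open.emplace(id, 0); }
    void close(const std::string& id) { _open.erase(id); } // fences future work

    // Materializing the frozen guard re-enters the session; it fails if
    // the coordinator has already closed (fenced) the session.
    topology_guard enter(const frozen_topology_guard& f) {
        auto it = _open.find(f);
        if (it == _open.end()) {
            throw std::runtime_error("topology guard fenced: " + f);
        }
        ++it->second;
        return topology_guard(this, f);
    }
    bool drained(const std::string& id) const { return _open.at(id) == 0; }
};
```

This keeps the property stated above: an in-transit guard blocks nothing, but a late materialization after the coordinator has moved on fails rather than producing untracked side effects.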