Commit Graph

5555 Commits

Author SHA1 Message Date
Petr Gusev
ed6bec2cac storage_proxy: node_local_only: always use my_host_id
The previous implementation did not handle topology changes well:
* In node_local_only mode with CL=1, if the current node is pending,
  the CL is raised to 2, causing unavailable_exception.
* If the current tablet is in write_both_read_old and we read with
  node_local_only on the new node, the replica list is empty.

This patch changes node_local_only mode to always use my_host_id as
the replica list. An explicit check ensures the current node is a
replica for the operation; otherwise on_internal_error is called.
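For illustration, a minimal standalone sketch of the selection rule (types and helper here are hypothetical stand-ins, not Scylla's actual API):

```cpp
#include <algorithm>
#include <stdexcept>
#include <string>
#include <vector>

using host_id = std::string;  // stand-in for locator::host_id

// Hypothetical helper: node_local_only always yields {my_host_id}, after
// verifying that this node is actually a replica for the operation.
std::vector<host_id> node_local_only_replicas(const std::vector<host_id>& natural_replicas,
                                              const host_id& my_host_id) {
    if (std::find(natural_replicas.begin(), natural_replicas.end(), my_host_id)
            == natural_replicas.end()) {
        // The real code calls on_internal_error() here.
        throw std::logic_error("node_local_only used on a node that is not a replica");
    }
    return {my_host_id};
}
```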
2025-08-19 16:11:49 +02:00
Avi Kivity
1ef6697949 Merge 'service/vector_store_client: Add live configuration update support' from Karol Nowacki
Enable runtime updates of vector_store_uri configuration without
requiring a server restart.
This allows the vector search service endpoint to be dynamically enabled, disabled, or switched on the fly.

To improve clarity, the seastar::http::experimental::client is now wrapped in a private http_client class that also holds the host, address, and port information.

Tests have been added to verify that the client correctly handles transitions between enabled/disabled states and successfully switches traffic to a new endpoint after a configuration update.

Closes: VECTOR-102

No backport is needed as this is a new feature.

Closes scylladb/scylladb#25208

* github.com:scylladb/scylladb:
  service/vector_store_client: Add live configuration update support
  test/boost/vector_store_client_test.cc: Refactor vector store client test
  service/vector_store_client: Refactor host_port struct created
  service/vector_store_client: Refactor HTTP request creation
2025-08-14 19:45:06 +03:00
Avi Kivity
fe6e1071d3 Merge 'locator: util: optimize describe_ring' from Benny Halevy
This change includes basic optimizations to
locator::describe_ring, mainly caching the per-endpoint information in an unordered_map instead of looking it up in every inner loop.

This yields a 20% improvement in CPU time.
With 45 nodes organized as 3 DCs, 3 racks per DC, 5 nodes per rack, and 256 tokens per node, yielding 11520 ranges and 9 replicas per range, describe_ring took
Before: 30 milliseconds (2.6 microseconds per range)
After:  24 milliseconds (2.1 microseconds per range)

Add respective unit tests for a vnode keyspace
and for tablets.

Fixes #24887

* backport up to 2025.1 as describe_ring slowness was hit in the field with large clusters

Closes scylladb/scylladb#24889

* github.com:scylladb/scylladb:
  locator: util: optimize describe_ring
  locator: util: construct_range_to_endpoint_map: pass is_vnode=true to get_natural_replicas
  vnode_effective_replication_map: do_get_replicas: throw internal error if token not found in map
  locator: effective_replication_map: get_natural_replicas: get is_vnode param
  test: cluster: test_repair: add test_vnode_keyspace_describe_ring
2025-08-14 19:39:17 +03:00
Petr Gusev
3f287275b8 test_tablets_lwt: add test_error_message_for_timeout_due_to_uncertainty 2025-08-13 14:03:57 +02:00
Petr Gusev
8bd936b72c storage_proxy: preserve accept error messages 2025-08-13 13:43:12 +02:00
Petr Gusev
00c25d396f storage_proxy: preserve prepare error message 2025-08-13 13:43:12 +02:00
Petr Gusev
0724fafe47 storage_proxy: fix log message 2025-08-13 13:40:09 +02:00
Benny Halevy
50abeb1270 locator: util: optimize describe_ring
This change includes basic optimizations to
locator::describe_ring, mainly caching the per-endpoint
information in an unordered_map instead of looking
it up in every inner loop.
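
A rough standalone sketch of the caching pattern (illustrative types, not the actual locator code): the per-endpoint details are computed once and reused for every range that references the endpoint.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct endpoint_details {          // stand-in for the per-endpoint info
    std::string dc;
    std::string rack;
};

endpoint_details lookup_details(const std::string& endpoint);  // the formerly per-range lookup

std::vector<endpoint_details> describe_replicas(const std::vector<std::string>& replicas,
        std::unordered_map<std::string, endpoint_details>& cache) {
    std::vector<endpoint_details> out;
    out.reserve(replicas.size());
    for (const auto& ep : replicas) {
        auto [it, inserted] = cache.try_emplace(ep);
        if (inserted) {
            it->second = lookup_details(ep);   // computed once per endpoint, not once per range
        }
        out.push_back(it->second);
    }
    return out;
}
```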

This yields a 20% improvement in CPU time.
With 45 nodes organized as 3 DCs, 3 racks per DC, 5 nodes per rack, 256 tokens per
node, yielding 11520 ranges and 9 replicas per range, describe_ring took
Before: 30 milliseconds (2.6 microseconds per range)
After:  24 milliseconds (2.1 microseconds per range)

Add respective unit test of describe_ring for tablets.
A unit test for vnodes already exists in
test/nodetool/test_describering.py

Fixes #24887

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-13 12:42:25 +03:00
Petr Gusev
ff89c03c7f exceptions: add constructors that accept explicit error messages
To improve debuggability, we need to propagate original error messages
from Paxos verbs to the user. This change adds constructors that take
an error message directly, enabling better error reporting.

Additionally, functions such as write_timeout_to_read,
write_failure_to_read, etc. are updated to use these message-based
constructors. These functions are used in storage_proxy::cas to
convert between different error types, and without this change,
they could lose the original error message during conversion.
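A standalone sketch of the idea (hypothetical exception names, not the real hierarchy): the message-taking constructor lets a conversion helper such as write_timeout_to_read carry the original text forward.

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical stand-ins for the Paxos-related exception types.
struct write_timeout_error : std::runtime_error {
    explicit write_timeout_error(std::string msg) : std::runtime_error(std::move(msg)) {}
};
struct read_timeout_error : std::runtime_error {
    // New-style constructor that accepts an explicit error message.
    explicit read_timeout_error(std::string msg) : std::runtime_error(std::move(msg)) {}
};

// Conversion helper in the spirit of write_timeout_to_read(): the original
// message is preserved instead of being replaced by a generic description.
read_timeout_error write_timeout_to_read(const write_timeout_error& e) {
    return read_timeout_error(e.what());
}
```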
2025-08-12 16:31:05 +02:00
Karol Nowacki
22a133df9b service/vector_store_client: Add live configuration update support
Enable runtime updates of vector_store_uri configuration without
requiring a server restart.
This allows the vector search node endpoint to be dynamically enabled, disabled, or switched on the fly.
2025-08-12 08:12:53 +02:00
Karol Nowacki
858c423501 service/vector_store_client: Refactor host_port struct created
This new struct groups the host and port.
2025-08-12 08:12:53 +02:00
Karol Nowacki
dd147cd8e5 service/vector_store_client: Refactor HTTP request creation
Introduce lightweight wrapper for seastar::http::experimental::client
This wrapper simplifies request creation by automatically injecting the host name.
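A standalone sketch of the wrapper idea (purely illustrative types; the real code wraps seastar::http::experimental::client): request creation is funneled through one place that fills in the host automatically.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>

struct http_request {              // stand-in for seastar's request type
    std::string method;
    std::string path;
    std::unordered_map<std::string, std::string> headers;
};

// Hypothetical wrapper that keeps the endpoint details next to the client.
class http_client {
    std::string _host;
    std::uint16_t _port;
public:
    http_client(std::string host, std::uint16_t port) : _host(std::move(host)), _port(port) {}

    // Every request built here gets its Host header injected automatically.
    http_request make_request(std::string method, std::string path) const {
        http_request req{std::move(method), std::move(path), {}};
        req.headers["Host"] = _host + ":" + std::to_string(_port);
        return req;
    }
};
```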
2025-08-12 08:12:53 +02:00
Tomasz Grabiec
9fd312d157 Merge 'row_cache: add memtable overlap checks elision optimization for tombstone gc' from Botond Dénes
https://github.com/scylladb/scylladb/issues/24962 introduced memtable overlap checks to cache tombstone GC. This was observed to be very strict and to greatly reduce the effectiveness of tombstone GC in the cache, especially for MV workloads, which regularly recycle old timestamps into new writes, so the memtable often has a smaller min live timestamp than the timestamps of the tombstones in the cache.

When creating a new memtable, save a snapshot of the tombstone gc state. This snapshot is used later to exclude this memtable from overlap checks for tombstones whose token has an expiry time (per the snapshot) larger than that of the tombstone, meaning: all writes in this memtable were produced at a point in time when the current tombstone had already expired. This has the following implications:
* The partition the tombstone is part of was already repaired at the time the memtable was created.
* All writes in the memtable were produced *after* this tombstone's expiry time, so they cannot possibly be relevant for this tombstone.

Based on this, such memtables are excluded from the overlap checks. With adequately frequent memtable flushes -- so that the tombstone gc state snapshot is refreshed -- most memtables should be excluded from overlap checks, greatly helping the cache's tombstone GC efficiency.
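
A condensed standalone sketch of the exclusion rule (hypothetical names, timestamps reduced to plain integers):

```cpp
#include <cstdint>

using timestamp = std::int64_t;   // simplified stand-in for the gc clock

struct tombstone_info {
    timestamp expiry;             // when the tombstone became purgeable
};

struct memtable_info {
    // Expiry threshold for the tombstone's token, taken from the tombstone gc
    // state snapshot saved when the memtable was created.
    timestamp snapshot_expiry_threshold;
};

// All writes in the memtable are newer than its gc-state snapshot, so if the
// tombstone had already expired by then, the memtable cannot hold data relevant
// to it and the overlap check can be elided.
bool can_skip_overlap_check(const memtable_info& mt, const tombstone_info& t) {
    return t.expiry < mt.snapshot_expiry_threshold;
}
```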

Fixes: https://github.com/scylladb/scylladb/issues/24962

Fixes a regression introduced by https://github.com/scylladb/scylladb/pull/23255, which was backported to all releases; this needs to be backported to all releases as well.

Closes scylladb/scylladb#25033

* github.com:scylladb/scylladb:
  docs/dev/tombstone.md: document the memtable overlap check elision optimization
  test/boost/row_cache_test: add test for memtable overlap check elision
  db/cache_mutation_reader: obtain gc-before and min-live-ts lazily
  mutation/mutation_compactor: use max_purgeable::can_purge and max_purgeable::purge_result
  db/cache_mutation_reader: use max_purgeable::can_purge()
  replica/table: get_max_purgeable_fn_for_cache_underlying_reader(): use max_purgable::combine()
  replica/database: memtable_list::get_max_purgeable(): set expiry-treshold
  compaction/compaction_garbage_collector: max_purgeable: add expiry_treshold
  replica/table: propagate gc_state to memtable_list
  replica/memtable_list: add tombstone_gc_state* member
  replica/memtable: add tombstone_gc_state_snapshot
  tombstone_gc: introduce tombstone_gc_state_snapshot
  tombstone_gc: extract shared state into shared_tombstone_gc_state
  tombstone_gc: per_table_history_maps::_group0_gc_time: make it a value
  tombstone_gc: fold get_group0_gc_time() into its caller
  tombstone_gc: fold get_or_create_group0_gc_time() into update_group0_refresh_time()
  tombstone_gc: fold get_or_create_repair_history_for_table() into update_repair_time()
  tombstone_gc: refactor get_or_greate_repair_history_for_table()
  replica/memtable_list: s/min_live_timestamp()/get_max_purgeable()/
  db/read_context: return max_purgeable from get_max_purgeable()
  compaction/compaction_garbage_collector: add formatter for max_purgeable
  mutation: move definition of gc symbols to compaction.cc
  compaction/compaction_garbage_collector: refactor max_purgeable into a class
  test/boost/row_cache_test: refactor test_populating_reader_tombstone_gc_with_data_in_memtable
  test: rewrite test_compacting_reader_tombstone_gc_with_data_in_memtable in C++
  test/boost/row_cache_test: refactor cache tombstone GC with memtable overlap tests
2025-08-11 23:54:59 +02:00
Botond Dénes
614d17347a tombstone_gc: extract shared state into shared_tombstone_gc_state
Instead of storing it partially in tombstone_gc and partially in an
external map. Move all external parts into the new
shared_tombstone_gc_state. This new class is responsible for
keeping and updating the repair history. tombstone_gc_state just keeps
const pointers to the shared state as before and is only responsible for
querying the tombstone gc-before times.
This separation makes the code easier to follow and also enables further
patching of tombstone_gc_state.
2025-08-11 07:09:14 +03:00
Emil Maskovsky
7c54401d3d raft: enforce odd number of voters in group0
Implement odd number voter enforcement in the group0 voter calculator to
ensure proper Raft consensus behavior. Raft consensus requires a majority
of voters to make decisions, and an odd number of voters is preferred
because an even number doesn't add reliability but introduces
the risk of scenarios where no group can make progress. If an even number
of voters is divided into two groups of equal size during a network
partition, neither group will have a majority and both will be unable to
commit new entries. With an odd number of voters, such equal partition
scenarios are impossible (unless the network is partitioned into at least
three groups).
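
The core of the rule is small enough to sketch standalone (illustrative helper only; the real calculator also weighs DC and rack placement when choosing which candidate to drop):

```cpp
#include <string>
#include <vector>

using node_id = std::string;   // stand-in for the node/raft id type

// Keep the voter count odd by leaving one candidate out when it would
// otherwise be even.
std::vector<node_id> enforce_odd_voters(std::vector<node_id> candidates) {
    if (!candidates.empty() && candidates.size() % 2 == 0) {
        candidates.pop_back();
    }
    return candidates;
}
```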

Fixes: scylladb/scylladb#23266
2025-08-08 19:49:20 +02:00
Avi Kivity
8164f72f6e Merge 'Separate local_effective_replication_map from vnode_effective_replication_map' from Benny Halevy
Derive both vnode_effective_replication_map
and local_effective_replication_map from
static_effective_replication_map as both are static and per-keyspace.

However, local_effective_replication_map does not need vnodes
for the mapping of all tokens to the local node.

Refs #22733

* No backport required

Closes scylladb/scylladb#25222

* github.com:scylladb/scylladb:
  locator: abstract_replication_strategy: implement local_replication_strategy
  locator: vnode_effective_replication_map: convert clone_data_gently to clone_gently
  locator: abstract_replication_map: rename make_effective_replication_map
  locator: abstract_replication_map: rename calculate_effective_replication_map
  replica: database: keyspace: rename {create,update}_effective_replication_map
  locator: effective_replication_map_factory: rename create_effective_replication_map
  locator: abstract_replication_strategy: rename vnode_effective_replication_map_ptr et. al
  locator: abstract_replication_strategy: rename global_vnode_effective_replication_map
  keyspace: rename get_vnode_effective_replication_map
  dht: range_streamer: use naked e_r_m pointers
  storage_service: use naked e_r_m pointers
  alternator: ttl: use naked e_r_m pointers
  locator: abstract_replication_strategy: define is_local
2025-08-07 12:51:43 +03:00
Benny Halevy
6dbbb80aae locator: abstract_replication_strategy: implement local_replication_strategy
Derive both vnode_effective_replication_map
and local_effective_replication_map from
static_effective_replication_map as both are static and per-keyspace.

However, local_effective_replication_map does not need vnodes
for the mapping of all tokens to the local node.

Note that everywhere_replication_strategy is not abstracted in a similar
way, although it could be, since the plan is to get rid of it
once all system keyspaces are converted to local or tablets replication
(and propagated everywhere if needed using raft group0)

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 16:05:11 +03:00
Benny Halevy
34b223f6f9 replica: database: keyspace: rename {create,update}_effective_replication_map
to *_static_effective_replication_map, in preparation
for separating local_effective_replication_map from
vnode_effective_replication_map.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 16:03:53 +03:00
Benny Halevy
688bd4fd43 locator: effective_replication_map_factory: rename create_effective_replication_map
to create_static_effective_replication_map, in preparation
for separating local_effective_replication_map from
vnode_effective_replication_map.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 16:03:53 +03:00
Benny Halevy
cbad497859 locator: abstract_replication_strategy: rename vnode_effective_replication_map_ptr et. al
to static_effective_replication_map_ptr, in preparation
for separating local_effective_replication_map from
vnode_effective_replication_map.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 16:03:53 +03:00
Benny Halevy
bd62421c05 keyspace: rename get_vnode_effective_replication_map
to get_static_effective_replication_map, in preparation
for separating local_effective_replication_map from
vnode_effective_replication_map (both are per-keyspace).

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 13:40:43 +03:00
Benny Halevy
33f34c8c32 dht: range_streamer: use naked e_r_m pointers
Prepare for following patch that will separate
the local effective replication map from
vnode_effective_replication_map.

The caller is responsible to keep the
effective_replication_map_ptr alive while
in use by low-level async functions.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 13:34:23 +03:00
Benny Halevy
d6d434b1c2 storage_service: use naked e_r_m pointers
Prepare for following patch that will separate
the local effective replication map from
vnode_effective_replication_map.

The caller is responsible to keep the
effective_replication_map_ptr alive while
in use by low-level async functions.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 13:34:23 +03:00
Benny Halevy
ec85678de1 locator: abstract_replication_strategy: define is_local
Prepare for specializing the local replication strategy,
local effective replication map, et al. by defining
an is_local() predicate, similar to uses_tablets().

Note that is_vnode_based() still applies to local replication
strategy.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-08-06 13:34:23 +03:00
Avi Kivity
630b3d31bb storage_proxy: reduce allocations in send_to_live_endpoints()
send_to_live_endpoints() computes sets of endpoints to
which we send mutations - remote endpoints (where we send
to each set as a whole, using forwarding), and local endpoints,
where we send directly. To make handling regular, each local
endpoint is treated as its own set. Thus, each local endpoint
and each datacenter receive one RPC call (or local call if the
coordinator is also a replica).

These sets are maintained in a std::unordered_map (for remote endpoints)
and a vector with the same value_type as the map (for local endpoints).
The key part of the vector payload is initialized to the empty string.

We simplify this by noting that the datacenter name is never used
after this computation, so the vector can hold just the replica sets,
without the fake datacenter name. The downstream variable `all` is
adjusted to point just to the replica set as well.

As a reward for our efforts, the vector's contents become nothrow
move-constructible (no string), and we can convert it to a small_vector,
which reduces allocations in the common case of RF<=3.
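
Roughly, the reshaped containers look like this standalone sketch (boost's small_vector stands in for the in-tree type; names are illustrative):

```cpp
#include <string>
#include <unordered_map>
#include <vector>
#include <boost/container/small_vector.hpp>

using endpoint = std::string;                 // stand-in for the replica id type
using replica_set = std::vector<endpoint>;    // one forwarding target set

struct write_targets {
    // Remote endpoints stay grouped per datacenter, one RPC per set.
    std::unordered_map<std::string, replica_set> remote_by_dc;

    // Local endpoints previously reused the map's value_type, carrying an
    // always-empty datacenter string. Holding just the replica set makes the
    // element nothrow move constructible, so a small_vector keeps the common
    // RF<=3 case free of extra allocations.
    boost::container::small_vector<replica_set, 3> local_sets;
};
```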

The reduction in allocations is visible in perf-simple-query --write
results:

```
before 165080.62 tps ( 60.3 allocs/op,  16.0 logallocs/op,  14.2 tasks/op,   53438 insns/op,   26705 cycles/op,        0 errors)

after  164513.83 tps ( 59.3 allocs/op,  16.0 logallocs/op,  14.2 tasks/op,   53347 insns/op,   26761 cycles/op,        0 errors)
```

The instruction count reduction is a not very impressive 70/op:

before
```
instructions_per_op:
	mean=   53412.22 standard-deviation=32.12
	median= 53420.53 median-absolute-deviation=20.32
	maximum=53462.23 minimum=53290.06
```

after
```
instructions_per_op:
	mean=   53350.32 standard-deviation=32.38
	median= 53353.71 median-absolute-deviation=13.60
	maximum=53415.20 minimum=53222.24
```

Perhaps the extra code from small_vector defeated some inlining,
which negated some of the gain from the reduced allocations. Perhaps
a build with full profiling will gain it back (my builds were without
pgo).

Closes scylladb/scylladb#25270
2025-08-06 11:28:20 +03:00
Nadav Har'El
d46dda0840 Merge 'cql, vector_search: implement read path' from null
This pull request adds ANN OF queries.

The patch contains:

- CQL syntax for the ORDER BY `vector_column_name` ANN OF `vector_literal` clause of SELECT statements.
- implementation of external ANN queries (using vector-store service)
- tests

Example syntax:

```
SELECT comment
    FROM cycling.comments_vs
    ORDER BY comment_vector ANN OF [0.1, 0.15, 0.3, 0.12, 0.05]
    LIMIT 3;
```
The LIMIT can be between 1 and 1000, the same as for Cassandra.

Co-authored-by: @janpiotrlakomy @smoczy123
Fixes: VECTOR-48
Fixes: VECTOR-46

Closes scylladb/scylladb#24444

* github.com:scylladb/scylladb:
  cql3/statements: implement external `ANN OF` queries
  vector_store_client: implement ann_error_visitor
  test/cqlpy: check ANN queries disallow filtering properly
  cassandra_tests: translate vector_invalid_query_test
  cassandra_tests: copy vector_invalid_query_test from Cassandra
  vector_index: make parameter names case insensitive
  cql3/statements: add `ANN OF` queries support to select statements
  cql/Cql.g: extend the grammar to allow for `ANN OF` queries
  cql3/raw: add ANN ordering to the raw statement layer
2025-08-06 09:53:38 +03:00
Dawid Pawlik
7a826b79d9 vector_store_client: implement ann_error_visitor
Implement ann_error_visitor, which manages error messages depending on
the type of ANN error received.
2025-08-05 12:34:48 +02:00
Piotr Dulikowski
ec7832cc84 Merge 'Raft-based recovery procedure: simplify rolling restart with recovery_leader' from Patryk Jędrzejczak
The following steps are performed in sequence as part of the
Raft-based recovery procedure:
- set `recovery_leader` to the host ID of the recovery leader in
  `scylla.yaml` on all live nodes,
- send the `SIGHUP` signal to all Scylla processes to reload the config,
- perform a rolling restart (with the recovery leader being restarted
  first).

These steps are not intuitive and more complicated than they could be.

In this PR, we simplify these steps. From now on, we will be able to
simply set `recovery_leader` on each node just before restarting it.

Apart from making necessary changes in the code, we also update all
tests of the Raft-based recovery procedure and the user-facing
documentation.

Fixes scylladb/scylladb#25015

The Raft-based procedure was added in 2025.2. This PR makes the
procedure simpler and less error-prone, so it should be backported
to 2025.2 and 2025.3.

Closes scylladb/scylladb#25032

* github.com:scylladb/scylladb:
  docs: document the option to set recovery_leader later
  test: delay setting recovery_leader in the recovery procedure tests
  gossip: add recovery_leader to gossip_digest_syn
  db: system_keyspace: peers_table_read_fixup: remove rows with null host_id
  db/config, gms/gossiper: change recovery_leader to UUID
  db/config, utils: allow using UUID as a config option
2025-08-04 08:29:32 +02:00
Avi Kivity
1c25aa891b Merge 'storage_proxy.cc: get_cas_shard: fallback to the primary replica shard' from Petr Gusev
Currently, `get_cas_shard` uses `sharder.shard_for_reads` to decide which shard to use for LWT execution—both on replicas and the coordinator.

If the coordinator is not a replica, `shard_for_reads` returns a default shard (shard 0). There are at least two problems with this:
* shard 0 can become overloaded, because all LWT coordinators-but-not-replicas are served on it.
* mismatch with replicas: the default shard doesn't match what `shard_for_reads` returns on replicas. This hinders the "same shard for client and server" RPC level optimization.

In this PR we change `get_cas_shard` to use a primary replica shard if the current node is not a replica. This guarantees that all LWT coordinators for the same tablet will be served on the same shard. This is important for LWT coordinator locks (`paxos::paxos_state::get_cas_lock`). Also, if all tablet replicas on different nodes live on the same shard, RPC optimization will make sure that no additional `smp::submit_to` will be needed on the server side.

backport: not needed, since this fix applies only to LWT over tablets, and this feature is not released yet

Closes scylladb/scylladb#25224

* github.com:scylladb/scylladb:
  test_tablets_lwt.py: make tests rf_rack_valid
  test_tablets_lwt: add test_lwt_coordinator_shard
  storage_proxy.cc: get_cas_shard: fallback to the primary replica shard
  sharder: add try_get_shard_for_reads method
2025-08-01 23:07:25 +03:00
Michał Jadwiszczak
10214e13bd storage_service, group0_state_machine: move SL cache update from topology_state_load() to load_snapshot()
Currently, the service levels cache is unnecessarily updated on every
call of `topology_state_load()`, but it is enough to reload it
only when a snapshot is loaded.
(The cache is also already updated when there is a change to one of
`service_levels_v2`, `role_members`, `role_attributes` tables.)

Fixes scylladb/scylladb#25114
Fixes scylladb/scylladb#23065

Closes scylladb/scylladb#25116
2025-08-01 13:41:08 +02:00
Taras Veretilnyk
1d6808aec4 topology_coordinator: Make tablet_load_stats_refresh_interval configurable
This commit introduces a config option, 'tablet_load_stats_refresh_interval_in_seconds',
that allows overriding the default value without using error injection.

Fixes scylladb/scylladb#24641

Closes scylladb/scylladb#24746
2025-07-31 14:31:55 +03:00
Pavel Emelyanov
34608450c5 Merge 'qos: don't populate effective service level cache until auth is migrated to raft' from Piotr Dulikowski
Right now, service levels are migrated in one group0 command and auth is migrated in the next one. This has a bad effect on the group0 state reload logic - modifying service levels in group0 causes the effective service levels cache to be recalculated, and to do so we need to fetch information about all roles. If the reload happens after SL upgrade and before auth upgrade, the query for roles will be directed to the legacy auth tables in system_auth - and the query, being a potentially remote query, has a timeout. If the query times out, it will throw an exception which will break the group0 apply fiber and the node will need to be restarted to bring it back to work.

In order to solve this issue, make sure that the service level module does not start populating and using the service level cache until both service levels and auth are migrated to raft. This is achieved by adding the check both to the cache population logic and the effective service level getter - they now look at the service level accessor's new method, `can_use_effective_service_level_cache`, which takes a look at the auth version.

Fixes: scylladb/scylladb#24963

Should be backported to all versions which support upgrade to topology over raft - the issue described here may put the cluster into a state which is difficult to get out of (group0 apply fiber can break on multiple nodes, which necessitates their restart).

Closes scylladb/scylladb#25188

* github.com:scylladb/scylladb:
  test: sl: verify that legacy auth is not queried in sl to raft upgrade
  qos: don't populate effective service level cache until auth is migrated to raft
2025-07-31 13:05:27 +03:00
Petr Gusev
e120ee6d32 storage_proxy.cc: get_cas_shard: fallback to the primary replica shard
Currently, get_cas_shard uses shard_for_reads to decide which
shard to use for LWT execution—both on replicas and the coordinator.

If the coordinator is not a replica, shard_for_reads returns a default
shard (shard 0). There are at least two problems with this:
* shard 0 can become overloaded, because all LWT
coordinators-but-not-replicas are served on it.
* mismatch with replicas: the default shard doesn't match what
shard_for_reads returns on replicas. This hinders the "same shard for
client and server" RPC level optimization.

In this commit we change get_cas_shard to use a primary replica
shard if the current node is not a replica. This guarantees that all
LWT coordinators for the same tablet will be served on the same shard.
This is important for LWT coordinator locks
(paxos::paxos_state::get_cas_lock). Also, if all tablet replicas on
different nodes live on the same shard, RPC
optimization will make sure that no additional smp::submit_to will
be needed on the server side.
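
A condensed standalone sketch of the new selection rule (illustrative names; try_get_shard_for_reads models the sharder method added in this series):

```cpp
#include <optional>

using shard_id = unsigned;

struct cas_shard_inputs {
    // Empty when the current node is not a replica for the token, mirroring
    // what a try_get_shard_for_reads()-style lookup would report.
    std::optional<shard_id> local_read_shard;
    // Same value on every coordinator of this tablet.
    shard_id primary_replica_shard;
};

// All coordinators of one tablet end up on the same shard, so the per-tablet
// CAS lock and the "same shard for client and server" RPC optimization keep
// working even when the coordinator is not a replica.
shard_id get_cas_shard(const cas_shard_inputs& in) {
    return in.local_read_shard.value_or(in.primary_replica_shard);
}
```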

Fixes scylladb/scylladb#20497
2025-07-29 17:07:04 +02:00
Piotr Dulikowski
2bb800c004 qos: don't populate effective service level cache until auth is migrated to raft
Right now, service levels are migrated in one group0 command and auth
is migrated in the next one. This has a bad effect on the group0 state
reload logic - modifying service levels in group0 causes the effective
service levels cache to be recalculated, and to do so we need to fetch
information about all roles. If the reload happens after SL upgrade and
before auth upgrade, the query for roles will be directed to the legacy
auth tables in system_auth - and the query, being a potentially remote
query, has a timeout. If the query times out, it will throw
an exception which will break the group0 apply fiber and the node will
need to be restarted to bring it back to work.

In order to solve this issue, make sure that the service level module
does not start populating and using the service level cache until both
service levels and auth are migrated to raft. This is achieved by adding
the check both to the cache population logic and the effective service
level getter - they now look at the service level accessor's new method,
`can_use_effective_service_level_cache`, which takes a look at the auth
version.
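
A standalone sketch of the guard (hypothetical surrounding types; the real check is the accessor's `can_use_effective_service_level_cache`):

```cpp
#include <optional>
#include <string>
#include <unordered_map>

struct service_level { std::string name; };

// Stand-in for the accessor's check: true only once both service levels and
// auth have been migrated to raft.
bool can_use_effective_service_level_cache();

class effective_sl_cache {
    std::unordered_map<std::string, service_level> _by_role;
public:
    void maybe_populate() {
        if (!can_use_effective_service_level_cache()) {
            return;   // querying roles now could hit the legacy system_auth tables
        }
        // ... recompute _by_role from role and service level data ...
    }

    std::optional<service_level> effective_for(const std::string& role) const {
        if (!can_use_effective_service_level_cache()) {
            return std::nullopt;   // fall back until the upgrade completes
        }
        auto it = _by_role.find(role);
        if (it == _by_role.end()) {
            return std::nullopt;
        }
        return it->second;
    }
};
```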

Fixes: scylladb/scylladb#24963
2025-07-29 11:37:37 +02:00
Patryk Jędrzejczak
3299ffba51 Merge 'raft_group0: split shutdown into abort-and-drain and destroy' from Petr Gusev
Previously, `raft_group0::abort()` was called in `storage_service::do_drain` (introduced in #24418) to stop the group0 Raft server before destroying local storage. This was necessary because `raft::server` depends on storage (via `raft_sys_table_storage` and `group0_state_machine`).

However, this caused issues: services like `sstable_dict_autotrainer` and `auth::service`, which use `group0_client` but are not stopped by `storage_service`, could trigger use-after-free if `raft_group0` was destroyed too early. This can happen both during normal shutdown and when 'nodetool drain' is used.

This PR reworks the shutdown logic:
* Introduces `abort_and_drain()`, which aborts the server and waits for background tasks to finish, but keeps the server object alive. Clients will see `raft::stopped_error` if they try to access group0 after this method is called.
* Final destruction now happens in `abort_and_destroy()`, called later from `main.cc`, ensuring safe cleanup.

The `raft_server_for_group::aborted` is changed to a `shared_future`, as it is now awaited in both abort methods.

Node startup can fail before reaching `storage_service`, in which case `drain_on_shutdown()` and `abort_and_drain()` are never called. To ensure proper cleanup, `raft_group0` deinitialization logic must be included in both `abort_and_drain()` and `abort_and_destroy()`.

Refs #25115

Fixes #24625

Backport: the changes are complicated and not safe to backport, we'll backport a revert of the original patch (#24418) in a separate PR.

Closes scylladb/scylladb#25151

* https://github.com/scylladb/scylladb:
  raft_group0: split shutdown into abort_and_drain and destroy
  Revert "main.cc: fix group0 shutdown order"
2025-07-29 10:39:00 +02:00
Gleb Natapov
198cfc6fe7 migration manager: do not use group0 on non zero shard
Commit ddc3b6dcf5 added a check of group0 state in
get_schema_for_write(), but the group0 client can only be used on shard 0,
and get_schema_for_write() can be called on any shard, so we cannot use
_group0_client there directly. Move the assert to where we already use
another group0 function, where it is guaranteed to run on shard 0.

Closes scylladb/scylladb#25204
2025-07-28 14:10:01 +02:00
Nadav Har'El
b4fc3578fc Merge 'LWT: enable for tablet-based tables' from Petr Gusev
This PR enables **LWT (Lightweight Transactions)** support for tablet-based tables by leveraging **colocated tables**.

Currently, storing Paxos state in system tables causes two major issues:
* **Loss of Paxos state during tablet migration or base table rebuilds**
  * When a tablet is migrated or the base table is rebuilt, system tables don't retain Paxos state.
  * This breaks LWT correctness in certain scenarios.
  * Failing test cases demonstrating this:
      * test_lwt_state_is_preserved_on_tablet_migration
      * test_lwt_state_is_preserved_on_rebuild
* **Shard misalignment and performance overhead**
  * Tablets may be placed on arbitrary shards by the tablet balancer.
  * Accessing Paxos state in system tables could require a shard jump, degrading performance.

We move Paxos state into a dedicated Paxos table, colocated with the base table:
  * Each base table gets its own Paxos state table.
  * This table is lazily created on the first LWT operation.
  * Its tablets are colocated with those of the base table, ensuring:
    * Co-migration during tablet movement
    * Co-rebuilding with the base table
    * Shard alignment for local access to Paxos state

Some reasoning for why this is sufficient to preserve LWT correctness is discussed in [2].

This PR addresses two issues from the "Why doesn't it work for tablets" section  in [1]:
  * Tablet migration vs LWT correctness
  * Paxos table sharding

Other issues ("bounce to shard" and "locking for intranode_migration") have already been resolved in previous PRs.

References
[1] - [LWT over tablets design](https://docs.google.com/document/d/1CPm0N9XFUcZ8zILpTkfP5O4EtlwGsXg_TU4-1m7dTuM/edit?tab=t.0#heading=h.goufx7gx24yu)
[2] - [LWT: Paxos state and tablet balancer](https://docs.google.com/document/d/1-xubDo612GGgguc0khCj5ukmMGgLGCLWLIeG6GtHTY4/edit?tab=t.0)
[3] - [Colocated tables PR](https://github.com/scylladb/scylladb/pull/22906#issuecomment-3027123886)
[4] - [Possible LWT consistency violations after a topology change](https://github.com/scylladb/scylladb/issues/5251)

Backport: not needed because this is a new feature.

Closes scylladb/scylladb#24819

* github.com:scylladb/scylladb:
  create_keyspace: fix warning for tablets
  docs: fix lwt.rst
  docs: fix tablets.rst
  alternator: enable LWT
  random_failures: enable execute_lwt_transaction
  test_tablets_lwt: add test_paxos_state_table_permissions
  test_tablets_lwt: add test_lwt_for_tablets_is_not_supported_without_raft
  test_tablets_lwt: test timeout creating paxos state table
  test_tablets_lwt: add test_lwt_concurrent_base_table_recreation
  test_tablets_lwt: add test_lwt_state_is_preserved_on_rebuild
  test_tablets_lwt: migrate test_lwt_support_with_tablets
  test_tablets_lwt: add test_lwt_state_is_preserved_on_tablet_migration
  test_tablets_lwt: add simple test for LWT
  check_internal_table_permissions: handle Paxos state tables
  client_state: extract check_internal_table_permissions
  paxos_store: handle base table removal
  database: get_base_table_for_tablet_colocation: handle paxos state table
  paxos_state: use node_local_only mode to access paxos state
  query_options: add node_local_only mode
  storage_proxy: handle node_local_only in query
  storage_proxy: handle node_local_only in mutate
  storage_proxy: introduce node_local_only flag
  abstract_replication_strategy: remove unused using
  storage_proxy: add coordinator_mutate_options
  storage_proxy: rename create_write_response_handler -> make_write_response_handler
  storage_proxy: simplify mutate_prepare
  paxos_state: lazily create paxos state table
  migration_manager: add timeout to start_group0_operation and announce
  paxos_store: use non-internal queries
  qp: make make_internal_options public
  paxos_store: conditional cf_id filter
  paxos_store: coroutinize
  feature_service: add LWT_WITH_TABLETS feature
  paxos_state: inline system_keyspace functions into paxos_store
  paxos_state: extract state access functions into paxos_store
2025-07-28 13:19:23 +03:00
Tomasz Grabiec
55116ee660 topology_coordinator: Trigger load stats refresh after replace
Otherwise, tablet rebuild will be delayed for up to 60s, as the tablet
scheduler needs load stats for the new (replacing) node to make
decisions.

Fixes #25163

Closes scylladb/scylladb#25181
2025-07-28 11:07:17 +02:00
Avi Kivity
8180cbcf48 Merge 'tablets: prevent accidental copy of tablets_map' from Benny Halevy
As such copies are wasteful in many cases, it is better
to move the tablet_map if possible, or clone
it gently in an async fiber.

Add clone() and clone_gently() methods to
allow explicit copies.

* minor optimization, no backport needed

Closes scylladb/scylladb#24978

* github.com:scylladb/scylladb:
  tablets: prevent accidental copy of tablets_map
  locator: tablets: get rid of synchronous mutate_tablet_map
2025-07-27 16:48:27 +03:00
Lakshmi Narayanan Sreethar
0c5fa8e154 locator/token_metadata.cc: use chunked_vector to store _sorted_tokens
The `token_metadata_impl` stores the sorted tokens in an `std::vector`.
With a large number of nodes, the size of this vector can grow quickly,
and updating it might lead to oversized allocations.

This commit changes `_sorted_tokens` to a `chunked_vector` to avoid such
issues. It also updates all related code to use `chunked_vector` instead
of `std::vector`.

Fixes #24876

Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>

Closes scylladb/scylladb#25027
2025-07-27 11:29:22 +03:00
Michael Litvak
3ff388cd94 storage service: drain view builder before group0
The view builder uses group0 operations to coordinate view building, so
we should drain the view builder before stopping group0.

Fixes scylladb/scylladb#25096

Closes scylladb/scylladb#25101
2025-07-27 09:25:42 +02:00
Petr Gusev
8b8b7adbe5 raft_group0: split shutdown into abort_and_drain and destroy
Previously, raft_group0::abort() was called in
storage_service::do_drain (introduced in #24418) to
stop the group0 Raft server before destroying local storage.
This was necessary because raft::server depends on storage
(via raft_sys_table_storage and group0_state_machine).

However, this caused issues: services like
sstable_dict_autotrainer and auth::service, which use
group0_client but are not stopped by storage_service,
could trigger use-after-free if raft_group0 was destroyed
too early. This can happen both during normal shutdown
and when 'nodetool drain' is used.

This commit reworks the shutdown logic:
* Introduces abort_and_drain(), which aborts the server
and waits for background tasks to finish, but keeps the
server object alive. Clients will see raft::stopped_error if
they try to access group0 after abort_and_drain().
* Final destruction happens in a separate method destroy(),
called later from main.cc.

The raft_server_for_group::aborted is changed to a
shared_future -- abort_server now returns a future so that
we can wait for it in abort_and_drain(); it should return
the future from the previous abort_server call, which can
happen in the on_background_error callback.

Node startup can fail before reaching storage_service,
in which case ss.drain_on_shutdown() and abort_and_drain()
are never called. To ensure proper cleanup,
abort_and_drain() is called from main.cc before destroy().

Clients of raft_group_registry are expected to call
destroy_server() for the servers they own. Currently,
the only such client is raft_group0, which satisfies
this requirement. As a result,
raft_group_registry::stop_servers() is no longer needed.
Instead, raft_group_registry::stop() now verifies that all
servers have been properly destroyed.
If any remain, it calls on_internal_error().

The call to drain_on_shutdown() in cql_test_env.cc
appears redundant. The only source of raft::server
instances in raft_group_registry is group0_service, and
if group0_service.start() succeeds, both abort_and_drain()
and destroy() are guaranteed to be called during shutdown.
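
The shape of the two-phase shutdown can be sketched standalone (greatly simplified, synchronous stand-ins for the real seastar/raft types):

```cpp
#include <memory>

struct raft_server {                 // stand-in for the real raft::server
    void abort() {}                  // stop the server; later callers get stopped_error
    void drain_background_work() {}  // wait for in-flight background tasks
};

// Hypothetical illustration of the split described above.
class group0_holder {
    std::unique_ptr<raft_server> _server;
public:
    // Phase 1: runs before local storage is torn down. The server object stays
    // alive, so services that still hold group0_client see an error instead of
    // a use-after-free.
    void abort_and_drain() {
        if (_server) {
            _server->abort();
            _server->drain_background_work();
        }
    }
    // Phase 2: runs later from main.cc, once nothing can touch the server.
    void destroy() {
        _server.reset();
    }
};
```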
2025-07-25 17:16:14 +02:00
Ran Regev
7c68ee06bf cleanup: remove partition_slice_builder from include
Refs: #22099 (issue)
Refs: #25079 (pr)

Remove the include of partition_slice_builder
that is not used. This makes it clear that
group0_state_machine.cc does not depend on
partition_slice_builder.

Closes scylladb/scylladb#25125
2025-07-25 10:45:32 +03:00
Ernest Zaslavsky
d2c5765a6b treewide: Move keys related files to a new keys directory
As requested in #22102, #22103 and #22105, move the files and fix other includes and the build system.

Moved files:
- clustering_bounds_comparator.hh
- keys.cc
- keys.hh
- clustering_interval_set.hh
- clustering_key_filter.hh
- clustering_ranges_walker.hh
- compound_compat.hh
- compound.hh
- full_position.hh

Fixes: #22102
Fixes: #22103
Fixes: #22105

Closes scylladb/scylladb#25082
2025-07-25 10:45:32 +03:00
Petr Gusev
a9ef221ae8 test_tablets_lwt: add test_lwt_concurrent_base_table_recreation
The test checks that we correctly handle the case when the base table
is recreated during LWT execution.
2025-07-24 19:48:08 +02:00
Petr Gusev
78aa36b257 check_internal_table_permissions: handle Paxos state tables
CDC and $paxos tables are managed internally by Scylla. Users are
already prohibited from running ALTER and DROP commands on CDC tables.
In this commit, we extend the same restrictions to $paxos tables to
prevent users from shooting themselves in the foot.

Other commands are generally allowed for CDC and $paxos tables. An
important distinction is that CDC tables are meant to be accessed
directly by users, so appropriate permissions must be set for
non-superusers. In contrast, $paxos tables are not intended for direct
access by users. Therefore, this commit explicitly disallows
non-superusers from accessing them. Superusers are still allowed
access for debugging and troubleshooting purposes.

Note that these restrictions apply even if explicit permissions have
been granted. For example, a non-superuser may be granted SELECT
permissions on a $paxos table, but the restriction above will
still take precedence. For simplicity, we don't try to restrict users
from granting permissions on $paxos tables.
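
A simplified standalone sketch of the rule (hypothetical helper; the real logic lives in client_state's check_internal_table_permissions):

```cpp
enum class statement_kind { select, modify, alter, drop };

struct table_ref {
    bool is_cdc_log = false;
    bool is_paxos_state = false;   // the "$paxos" state tables described above
};

// Returns false when the operation must be rejected even if explicit
// permissions were granted.
bool internal_table_access_allowed(const table_ref& t, statement_kind stmt, bool superuser) {
    const bool internal = t.is_cdc_log || t.is_paxos_state;
    if (!internal) {
        return true;
    }
    if (stmt == statement_kind::alter || stmt == statement_kind::drop) {
        return false;              // managed by Scylla, never altered or dropped by users
    }
    if (t.is_paxos_state && !superuser) {
        return false;              // $paxos tables are not for direct user access
    }
    return true;                   // CDC tables remain accessible per normal permissions
}
```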
2025-07-24 19:48:08 +02:00
Petr Gusev
ec3c5f4cbc client_state: extract check_internal_table_permissions
This is a refactoring commit — it extracts the CDC permissions handling
logic into a separate function: check_internal_table_permissions.

This is a preparatory step for the next commit, where we'll handle
paxos state tables similarly to CDC tables.
2025-07-24 19:48:08 +02:00
Petr Gusev
bb4e7a669f paxos_store: handle base table removal
Subscribe to on_before_drop_column_family to drop the associated
Paxos state table when the corresponding user table is dropped.
2025-07-24 19:48:08 +02:00
Petr Gusev
1b70623908 database: get_base_table_for_tablet_colocation: handle paxos state table
We need to mark paxos state table as colocated with the user table, so
that the corresponding tablets are migrated/repaired together.
2025-07-24 19:48:08 +02:00
Petr Gusev
03aa2e4823 paxos_state: use node_local_only mode to access paxos state 2025-07-24 19:48:08 +02:00