Commit Graph

4905 Commits

Author SHA1 Message Date
Karol Nowacki
c643f321af vector_search: decrease default connection timeout to 3s
Decrease the default connection timeout to 3s to better align with the
default CQL query timeout of 10s.

The previous timeout allowed only one failover request in a high-availability
scenario before hitting the CQL query timeout.
By decreasing the timeout to 3s, we can perform up to three failover requests
within the CQL query timeout, which significantly improves the chances of
successfully completing the query in high availability scenarios.

Fixes: SCYLLADB-95
2026-04-17 12:26:39 +03:00
Karol Nowacki
9269ca9cf7 vector_search: add unreachable node detection time config
Add option `vector_store_unreachable_node_detection_time_in_ms` to
control parameters related to detecting unreachable vector store nodes.
This parameter is used to set the TCP connect timeout, keepalive
parameters, and TCP_USER_TIMEOUT. By configuring these parameters,
we can detect unreachable vector store nodes faster and trigger
failover mechanisms in a timely manner.
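
A minimal sketch of the kind of socket tuning such a detection-time budget maps to, using the plain Linux socket API (the helper name and values are illustrative, not the actual Scylla implementation):

```
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Apply an unreachable-node detection budget (e.g. 3000 ms) to a TCP socket:
// bound how long unacknowledged data may stay in flight (TCP_USER_TIMEOUT)
// and probe idle connections with keepalive so a dead peer is noticed quickly.
void apply_detection_timeout(int fd, unsigned timeout_ms) {
    setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &timeout_ms, sizeof(timeout_ms));

    int enable = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enable, sizeof(enable));

    int idle_s = timeout_ms >= 1000 ? timeout_ms / 1000 : 1; // idle time before the first probe
    int interval_s = 1;                                      // interval between probes
    int probes = 3;                                          // failed probes before giving up
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle_s, sizeof(idle_s));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval_s, sizeof(interval_s));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &probes, sizeof(probes));
}
```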
2026-04-17 12:26:38 +03:00
Botond Dénes
d006c4c476 Merge 'Untie (partially) cql3/statements from db::config' from Pavel Emelyanov
There is a bunch of db::config options used by the cql3/statements/ code. To access them, the statements use data_dictionary/database as a proxy to get a db::config reference. This PR moves most of these options onto cql_config.

Options migrated to cql_config:

   1. select_internal_page_size
   2. strict_allow_filtering
   3. enable_parallelized_aggregation
   4. batch_size_warn_threshold_in_kb
   5. batch_size_fail_threshold_in_kb
   6. 7 keyspace replication restriction options
   7. 2 TWCS restriction options
   8. restrict_future_timestamp
   9. strict_is_not_null_in_views (with view_restrictions struct)
   10. enable_create_table_with_compact_storage

Some options need special treatment and are still abused via database, namely:

  1. enable_logstor
  2. cluster_name
  3. partitioner
  4. endpoint_snitch

Fixing component inter-dependencies, not backporting

Closes scylladb/scylladb#29424

* github.com:scylladb/scylladb:
  cql3: Move enable_create_table_with_compact_storage to cql_config
  cql3: Move strict_is_not_null_in_views to cql_config
  cql3: Move restrict_future_timestamp to cql_config
  cql3: Move TWCS restriction options to cql_config
  cql3: Move keyspace restriction options to cql_config
  cql3: Move batch_size_fail_threshold_in_kb to cql_config
  cql3: Move batch_size_warn_threshold_in_kb to cql_config
  cql3: Move enable_parallelized_aggregation to cql_config
  cql3: Move strict_allow_filtering to cql_config
  cql3: Move select_internal_page_size to cql_config
  test: Fix cql_test_env to use updateable cql_config from db::config
  cql3: Add cql_config parameter to parsed_statement::prepare()
2026-04-16 14:04:43 +03:00
Botond Dénes
88a8324e68 Merge 'db: store large data records in SSTable metadata and serve via virtual tables' from Benny Halevy
`system.large_partitions`, `system.large_rows`, and `system.large_cells` store records keyed by SSTable name. When SSTables are migrated between shards or nodes (resharding, streaming, decommission), the records are lost because the destination never writes entries for the migrated SSTables.

This patch series moves the source of truth for large data records into the SSTable's scylla metadata component (new `LargeDataRecords` tag 13) and reimplements the three `system.large_*` tables as virtual tables that query live SSTables on demand. A cluster feature flag (`LARGE_DATA_VIRTUAL_TABLES`) gates the transition for safe rolling upgrades.

When the cluster feature is enabled, each node drops the old system large_* tables and starts serving the corresponding tables via virtual tables that expose the large data records now stored in the SSTables.
Note that the virtual tables will be empty after the upgrade until the SSTables that contained large data are rewritten; it is therefore recommended to run an upgrade-sstables or major compaction to repopulate the SSTables' scylla metadata with large data records.

1. **keys: move key_to_str() to keys/keys.hh** — make the helper reusable across large_data_handler, virtual tables, and scylla-sstable
2. **sstables: add LargeDataRecords metadata type (tag 13)** — new struct with binary-serialized key fields, scylla-sstable JSON support, format documentation
3. **large_data_handler: rename partition_above_threshold to above_threshold_result** — generalize the struct for reuse
4. **large_data_handler: return above_threshold_result from maybe_record_large_cells** — separate booleans for cell size vs collection elements thresholds
5. **sstables: populate LargeDataRecords from writer** — bounded min-heaps (one per large_data_type), configurable top-N via `compaction_large_data_records_per_sstable`
6. **test: add LargeDataRecords round-trip unit tests** — verify write/read, top-N bounding, below-threshold behavior
7. **db: call initialize_virtual_tables from shard 0 only** — preparatory refactoring to enable cross-shard coordination
8. **db: implement large_data virtual tables with feature flag gating** — three virtual table classes, feature flag activation, legacy SSTable fallback, dual-threshold dedup, cross-shard collection

Fixes: https://scylladb.atlassian.net/browse/SCYLLADB-1276

* Although this fixes a bug where large data entries are effectively lost when sstables are renamed or migrated, the changes are intrusive and do not warrant a backport

Closes scylladb/scylladb#29257

* github.com:scylladb/scylladb:
  db: implement large_data virtual tables with feature flag gating
  db: call initialize_virtual_tables from shard 0 only
  test: add LargeDataRecords round-trip unit tests
  sstables: populate LargeDataRecords from writer
  large_data_handler: return above_threshold_result from maybe_record_large_cells
  large_data_handler: rename partition_above_threshold to above_threshold_result
  sstables: add LargeDataRecords metadata type (tag 13)
  sstables: add fmt::formatter for large_data_type
  keys: move key_to_str() to keys/keys.hh
2026-04-16 14:03:31 +03:00
Botond Dénes
33682fd14e Merge 'sstables/storage_manager: fix race between object storage config update and keyspace creation' from Dimitrios Symonidis
Previously, config_updater used a serialized_action to trigger update_config() when object_storage_endpoints changed. Because serialized_action::trigger() always schedules the action as a new reactor task (via semaphore::wait().then()), there was a window between the config value becoming visible to the REST API and update_config() actually running. This allowed a concurrent CREATE KEYSPACE to see the new endpoint via is_known_endpoint() before storage_manager had registered it in _object_storage_endpoints.

Config observers now run synchronously within a reactor turn and must not suspend. The previous monolithic async update_config() coroutine is therefore split into two phases (see the sketch after the list):

- Sync (in the observer, never suspends): storage_manager::_object_storage_endpoints is updated in place; for already-instantiated clients, update_config_sync swaps the new config atomically
- Async (per-client gate): background fibers finish the work that can't run in the observer — S3 refreshes credentials under _creds_sem; GCS drains and closes the replaced client.
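
A minimal sketch of the two-phase shape using Seastar primitives (the class and member names are illustrative, not the actual storage_manager code):

```
#include <seastar/core/future.hh>
#include <seastar/core/gate.hh>

#include <algorithm>
#include <string>
#include <vector>

class storage_manager_sketch {
    std::vector<std::string> _object_storage_endpoints;
    seastar::gate _bg_gate; // keeps background fibers from outliving the service
public:
    // Phase 1 -- runs inside the config observer, within the reactor turn that
    // made the new value visible, and never suspends: by the time any other
    // task (e.g. a concurrent CREATE KEYSPACE) runs, the endpoint is registered.
    void on_config_changed(std::vector<std::string> new_endpoints) {
        _object_storage_endpoints = std::move(new_endpoints);
        // Phase 2 -- work that may suspend (refreshing S3 credentials, draining
        // a replaced GCS client) continues in a background fiber under a gate.
        (void)seastar::with_gate(_bg_gate, [this] {
            return finish_reconfiguration();
        });
    }

    bool is_known_endpoint(const std::string& ep) const {
        return std::find(_object_storage_endpoints.begin(),
                         _object_storage_endpoints.end(), ep)
               != _object_storage_endpoints.end();
    }

private:
    seastar::future<> finish_reconfiguration(); // async part, per-client details elided
};
```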

Config reloads triggered by SIGHUP are applied on shard 0 and then broadcast to all other shards. An rwlock has also been introduced to make sure the configuration has been propagated to all cores. This guarantees that a client requesting the config via the REST API will see a consistent snapshot.

Fixes: https://scylladb.atlassian.net/browse/SCYLLADB-757
Fixes: [28141](https://github.com/scylladb/scylladb/issues/28141)

Closes scylladb/scylladb#28950

* github.com:scylladb/scylladb:
  test/object_store: verify object storage client creation and live reconfiguration
  sstables/utils/s3: split config update into sync and async parts
  test_config: improve logging for wait_for_config API
  db: introduce read-write lock to synchronize config updates with REST API
2026-04-16 10:20:43 +03:00
Botond Dénes
8e7ba7efe2 Merge 'commitlog: fix segment replay order by using ordered map per shard' from Sergey Zolotukhin
The commitlog replayer groups segments by shard using a
std::unordered_multimap, then iterates per-shard segments via
equal_range(). However, equal_range() does not guarantee iteration
order for elements with the same key, so segments could be replayed
out of order within a shard.

Correct segment ordering is required for:
- Fragmented entry reconstruction, which accumulates fragments across
  segments and depends on ascending order for efficient processing.
- Commitlog-based storage used by the strongly consistent tables
  feature, which relies on replayed raft items being stored in order.

Fix by changing the data structure from
  std::unordered_multimap<unsigned, commitlog::descriptor>
to
  std::unordered_map<unsigned, utils::chunked_vector<commitlog::descriptor>>

Since the descriptors are inserted from a std::set ordered by ID, the
vector preserves insertion (and thus ID) order. The per-shard iteration
now simply iterates the vector, guaranteeing correct replay order.
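
A minimal sketch of the data-structure change with standard containers (`descriptor` stands in for `commitlog::descriptor`, and std::vector for utils::chunked_vector):

```
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct descriptor {
    uint64_t id;          // segment id; segments must replay in ascending-id order
    std::string filename;
};

// Before: std::unordered_multimap<unsigned, descriptor>, where equal_range()
// gives no ordering guarantee for one shard's segments.
// After: one vector per shard; descriptors arrive in ascending-id order (they
// come from a std::set keyed by id) and the vector preserves insertion order.
using segments_by_shard = std::unordered_map<unsigned, std::vector<descriptor>>;

void replay_shard(const segments_by_shard& segments, unsigned shard) {
    if (auto it = segments.find(shard); it != segments.end()) {
        for (const auto& d : it->second) {
            // replay_segment(d);  // strictly in ascending-id order
            (void)d;
        }
    }
}
```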

Fixes: SCYLLADB-1411

Backport: it looks like this issue doesn't cause any trouble and the ordering is required only by strongly consistent tables, so no backporting is required.

Closes scylladb/scylladb#29372

* github.com:scylladb/scylladb:
  commitlog: add test to verify segment replay order
  commitlog: fix replay order by using ordered map per shard
2026-04-16 09:55:27 +03:00
Benny Halevy
ce00d61917 db: implement large_data virtual tables with feature flag gating
Replace the physical system.large_partitions, system.large_rows, and
system.large_cells CQL tables with virtual tables that read from
LargeDataRecords stored in SSTable scylla metadata (tag 13).

The transition is gated by a new LARGE_DATA_VIRTUAL_TABLES cluster
feature flag:

- Before the feature is enabled: the old physical tables remain in
  all_tables(), CQL writes are active, no virtual tables are registered.
  This ensures safe rollback during rolling upgrades.

- After the feature is enabled: old physical tables are dropped from
  disk via legacy_drop_table_on_all_shards(), virtual tables are
  registered on all shards, and CQL writes are skipped via
  skip_cql_writes() in cql_table_large_data_handler.

Key implementation details:

- Three virtual table classes (large_partitions_virtual_table,
  large_rows_virtual_table, large_cells_virtual_table) extend
  streaming_virtual_table with cross-shard record collection.

- generate_legacy_id() gains a version parameter; virtual tables
  use version 1 to get different UUIDs than the old physical tables.

- compaction_time is derived from SSTable generation UUID at display
  time via UUID_gen::unix_timestamp().

- Legacy SSTables without LargeDataRecords emit synthetic summary
  rows based on above_threshold > 0 in LargeDataStats.

- The activation logic uses two paths: when the feature is already
  enabled (test env, restart), it runs as a coroutine; when not yet
  enabled, it registers a when_enabled callback that runs inside
  seastar::async from feature_service::enable().

- sstable_3_x_test updated to use a simplified large_data_test_handler
  and validate LargeDataRecords in SSTable metadata directly.
2026-04-16 08:49:02 +03:00
Benny Halevy
cb6004b625 db: call initialize_virtual_tables from shard 0 only
Move the smp::invoke_on_all dispatch from the callers into
initialize_virtual_tables() itself, so the function is called
once from shard 0 and internally distributes the per-shard
virtual table setup to all shards.

This simplifies the callers and allows a single place to add
cross-shard coordination logic (e.g. feature-gated table
registration) in future commits.
2026-04-16 08:49:02 +03:00
Benny Halevy
1f7faeef57 sstables: populate LargeDataRecords from writer
During compaction (SSTable writing), maintain bounded min-heaps (one per
large_data_type) that collect the top-N above-threshold records.  On
stream end, drain all five heaps into a single LargeDataRecords array
and write it into the SSTable's scylla metadata component.

Five separate heaps are used:
- partition_size, row_size, cell_size: ordered by value (size bytes)
- rows_in_partition, elements_in_collection: ordered by elements_count

A new config option 'compaction_large_data_records_per_sstable' (default
10) controls the maximum number of records kept per type.
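
A minimal sketch of the bounded top-N collection described above, using std::priority_queue as a min-heap (the record type and the N=10 default are simplified; the real writer keeps one such heap per large_data_type):

```
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct large_data_record {
    uint64_t value;   // size in bytes or element count, depending on the type
    // key fields elided
};

// Keep only the N largest records: a min-heap of size <= N whose top is the
// smallest retained record, evicted when a larger record arrives.
class bounded_top_n {
    struct min_by_value {
        bool operator()(const large_data_record& a, const large_data_record& b) const {
            return a.value > b.value; // with this comparator the heap's top is the smallest
        }
    };
    std::priority_queue<large_data_record, std::vector<large_data_record>, min_by_value> _heap;
    size_t _max;
public:
    explicit bounded_top_n(size_t max_records = 10) : _max(max_records) {}

    void maybe_add(large_data_record r) {
        if (_heap.size() < _max) {
            _heap.push(r);
        } else if (r.value > _heap.top().value) {
            _heap.pop();
            _heap.push(r);
        }
    }

    // Drained on stream end into the LargeDataRecords array.
    std::vector<large_data_record> drain() {
        std::vector<large_data_record> out;
        while (!_heap.empty()) {
            out.push_back(_heap.top());
            _heap.pop();
        }
        return out; // ascending by value
    }
};
```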
2026-04-16 08:49:02 +03:00
Benny Halevy
8f4976f65d large_data_handler: return above_threshold_result from maybe_record_large_cells
Change maybe_record_large_cells to return above_threshold_result with
separate booleans for cell size (.size) and collection elements
(.elements) thresholds.  This allows the writer to track above_threshold
counts for cell_size and elements_in_collection independently.
2026-04-16 08:49:02 +03:00
Benny Halevy
c1b797f288 large_data_handler: rename partition_above_threshold to above_threshold_result
Rename partition_above_threshold to above_threshold_result and its
'rows' field to 'elements', making it a generic struct that can be
reused for other large data types (e.g., cells with collection
elements).

Use designated initializers for clarity.
2026-04-16 08:49:02 +03:00
Benny Halevy
d4283d0ffc keys: move key_to_str() to keys/keys.hh
Move the key_to_str() template function from a file-local static in
db/large_data_handler.cc to keys/keys.hh so it can be reused by:
- large_data_handler.cc for log messages
- virtual tables (db/virtual_tables.cc) for converting binary keys
  to human-readable CQL display
- scylla-sstable for JSON output of LargeDataRecords

No functional change.
2026-04-16 08:42:54 +03:00
Pavel Emelyanov
60a834d9fa cql3: Add cql_config parameter to parsed_statement::prepare()
Pass cql_config to prepare() so that statement preparation can use
CQL-specific configuration rather than reaching into db::config
directly.
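
A minimal, hypothetical sketch of the shape of the change (the types, members, and signature below are illustrative, not the actual Scylla declarations):

```
#include <memory>

// Illustrative subset of the CQL-level options that used to be fetched
// from db::config through the database object.
struct cql_config {
    bool strict_allow_filtering = false;            // illustrative default
    unsigned batch_size_warn_threshold_in_kb = 0;   // illustrative default
};

struct prepared_statement {};
struct data_dictionary_database {};

struct parsed_statement {
    // Before: prepare(db) reached into db.get_config() for these values.
    // After: the relevant options arrive via the cql_config parameter.
    virtual std::unique_ptr<prepared_statement>
    prepare(data_dictionary_database& db, const cql_config& cfg) const = 0;

    virtual ~parsed_statement() = default;
};
```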

Callers that use default_cql_config:
- db/view/view.cc: builds a SELECT statement internally to compute view
  restrictions, not in response to a user query
- cql3/statements/create_view_statement.cc: same -- parses the view's
  WHERE clause as a synthetic SELECT to extract restrictions
- tools/schema_loader.cc: offline schema loading tool, no runtime
  config available
- tools/scylla-sstable.cc: offline sstable inspection tool, no runtime
  config available

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-16 07:57:25 +03:00
Avi Kivity
59ec93b86b Merge 'Allow arbitrary tablet boundaries and count' from Tomasz Grabiec
There are several reasons we want to do that.

One is that it will give us more flexibility in distributing the
load. We can subdivide tablets at any token, and achieve more
evenly-sized tablets. In particular, we can isolate large partitions
into separate tablets.

We can also split and merge individual tablets incrementally.
Currently we do it for the whole table or not at all, which makes
splits and merges take longer and causes wide swings in the count.
This is not implemented in this PR yet; we still split/merge the whole table.

Another reason is vnode-to-tablets migration. We can now construct a
tablet map which matches the vnode boundaries exactly, so migration
can happen transparently from the CQL coordinator's point of view.

Tablet count is still a power of two by default for newly created tables.
It may be different if the tablet map is created by non-standard means,
or if the per-table tablet option "pow2_count" is set to "false".

build/release/scylla perf-tablets:

Memory footprint for 131k tablets increased from 56 MiB to 58.1 MiB (+3.5%)

Before:
```
Generating tablet metadata
Total tablet count: 131072
Size of tablet_metadata in memory: 57456 KiB
Copied in 0.014346 [ms]
Cleared in 0.002698 [ms]
Saved in 1234.685303 [ms]
Read in 445.577881 [ms]
Read mutations in 299.596313 [ms] 128 mutations
Read required hosts in 247.482742 [ms]
Size of canonical mutations: 33.945053 [MiB]
Disk space used by system.tablets: 1.456761 [MiB]
Tablet metadata reload:
full      407.69ms
partial     2.65ms
```

After:
```
Generating tablet metadata
Total tablet count: 131072
Size of tablet_metadata in memory: 59504 KiB
Copied in 0.032475 [ms]
Cleared in 0.002965 [ms]
Saved in 1093.877441 [ms]
Read in 387.027100 [ms]
Read mutations in 255.752121 [ms] 128 mutations
Read required hosts in 211.202805 [ms]
Size of canonical mutations: 33.954453 [MiB]
Disk space used by system.tablets: 1.450162 [MiB]
Tablet metadata reload:
full      354.50ms
partial     2.19ms
```

Closes scylladb/scylladb#28459

* github.com:scylladb/scylladb:
  test: boost: tablets: Add test for merge with arbitrary tablet count
  tablets, database: Advertise 'arbitrary' layout in snapshot manifest
  tablets: Introduce pow2_count per-table tablet option
  tablets: Prepare for non-power-of-two tablet count
  tablets: Implement merged tablet_map constructor on top of for_each_sibling_tablets()
  tablets: Prepare resize_decision to hold data in decisions
  tablets: table: Make storage_group handle arbitrary merge boundaries
  tablets: Make stats update post-merge work with arbitrary merge boundaries
  locator: tablets: Support arbitrary tablet boundaries
  locator: tablets: Introduce tablet_map::get_split_token()
  dht: Introduce get_uniform_tokens()
2026-04-15 18:57:22 +03:00
Dimitrios Symonidis
71714fdc0e db: introduce read-write lock to synchronize config updates with REST API
Config is reloaded from SIGHUP on shard 0 and broadcast to all shards
under a write lock. REST API callers reading find_config_id acquire a
read lock via value_as_json_string_for_name() and are guaranteed a
consistent snapshot even when a reload is in progress.
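
A minimal sketch of the locking pattern, assuming Seastar's rwlock (member and helper names are illustrative):

```
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/core/rwlock.hh>
#include <seastar/core/sstring.hh>

class config_service_sketch {
    seastar::rwlock _config_lock; // writer: SIGHUP reload; readers: REST API
public:
    // SIGHUP path: apply and broadcast the reloaded values under the write
    // lock so readers never observe a half-propagated reload.
    seastar::future<> reload_config() {
        auto holder = co_await _config_lock.hold_write_lock();
        co_await apply_and_broadcast_new_values(); // hypothetical helper
    }

    // REST API path: read under the read lock to get a consistent snapshot.
    seastar::future<seastar::sstring> value_as_json_string_for_name(seastar::sstring name) {
        auto holder = co_await _config_lock.hold_read_lock();
        co_return render_value_as_json(name); // hypothetical helper
    }

private:
    seastar::future<> apply_and_broadcast_new_values();
    seastar::sstring render_value_as_json(const seastar::sstring& name);
};
```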
2026-04-15 14:28:31 +02:00
Botond Dénes
280fe7cfb7 Merge 'Make inclusion of config.hh cheaper' from Nadav Har'El
This is an attempt (mostly suggested and implemented by AI, but with a few hours of human babysitting...) to somewhat reduce compilation time by picking one template, named_value<T>, which is used in more than a hundred source files through the config.hh header, and making it use external instantiation: the various methods of named_value<T> for each T are instantiated only once (in config.cc), so the individual translation units don't need to compile them a hundred times.

The resulting saving is a little underwhelming: the total object-file size goes down about 1% (from 346,200 before the patch to 343,488 after the patch), and previous experience shows that object-file size is proportional to compilation time, most of which goes into code generation. But I haven't been able to measure a speedup of the build itself.

1% is not nothing, but not a huge saving either. Though arguably, with 50 more patches like this, we can make the build twice as fast :-)

Refs #1.

Closes scylladb/scylladb#28992

* github.com:scylladb/scylladb:
  config: move named_value<T> method bodies out-of-line
  config: suppress named_value<T> instantiation in every source file
2026-04-15 14:40:15 +03:00
Botond Dénes
4a2d032c6f Merge 'query: result_set: change row member to a chunked vector' from Benny Halevy
To prevent large memory allocations.

This series shows over 3% improvement in perf-simple-query throughput.
```
$ build/release/scylla perf-simple-query --default-log-level=error --smp=1 --random-seed=1855519715
random-seed=1855519715
enable-cache=1
Running test with config: {partitions=10000, concurrency=100, mode=read, query_single_key=no, counters=no}
Disabling auto compaction
Creating 10000 partitions...

Before:
random-seed=1775976514
enable-cache=1
enable-index-cache=1
sstable-summary-ratio=0.0005
sstable-format=me
Running test with config: {partitions=10000, concurrency=100, mode=read, query_single_key=no, counters=no}
Disabling auto compaction
Creating 10000 partitions...
336345.11 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32788 insns/op,   12430 cycles/op,        0 errors)
348748.14 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32794 insns/op,   12335 cycles/op,        0 errors)
349012.63 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32800 insns/op,   12326 cycles/op,        0 errors)
350629.97 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32770 insns/op,   12270 cycles/op,        0 errors)
348585.00 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32804 insns/op,   12338 cycles/op,        0 errors)
throughput:
        mean=   346664.17 standard-deviation=5825.77
        median= 348748.14 median-absolute-deviation=2348.46
        maximum=350629.97 minimum=336345.11
instructions_per_op:
        mean=   32791.35 standard-deviation=13.60
        median= 32794.47 median-absolute-deviation=8.65
        maximum=32804.45 minimum=32769.57
cpu_cycles_per_op:
        mean=   12340.05 standard-deviation=57.57
        median= 12335.05 median-absolute-deviation=13.94
        maximum=12430.42 minimum=12270.28

After:
random-seed=1775976514
enable-cache=1
enable-index-cache=1
sstable-summary-ratio=0.0005
sstable-format=me
Running test with config: {partitions=10000, concurrency=100, mode=read, query_single_key=no, counters=no}
Disabling auto compaction
Creating 10000 partitions...
353770.85 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32762 insns/op,   11893 cycles/op,        0 errors)
364447.98 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32738 insns/op,   11818 cycles/op,        0 errors)
365268.97 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32734 insns/op,   11788 cycles/op,        0 errors)
344304.87 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32746 insns/op,   12506 cycles/op,        0 errors)
362263.57 tps ( 58.1 allocs/op,   0.0 logallocs/op,  14.1 tasks/op,   32756 insns/op,   11888 cycles/op,        0 errors)
throughput:
        mean=   358011.25 standard-deviation=8916.76
        median= 362263.57 median-absolute-deviation=6436.74
        maximum=365268.97 minimum=344304.87
instructions_per_op:
        mean=   32747.06 standard-deviation=11.85
        median= 32745.80 median-absolute-deviation=9.36
        maximum=32762.18 minimum=32734.01
cpu_cycles_per_op:
        mean=   11978.65 standard-deviation=298.06
        median= 11887.96 median-absolute-deviation=160.96
        maximum=12505.72 minimum=11788.49
```

Refs #28511
(Refs rather than Fixes for the lack of a reproducer unit test)

* No backport needed as the issue is rare and not severe

Closes scylladb/scylladb#28631

* github.com:scylladb/scylladb:
  query: result_set: change row member to a chunked vector
  query: result_set_row: make noexcept
  query: non_null_data_value: assert is_nothrow_move_constructible and assignable
  types: data_value: assert is_nothrow_move_constructible and assignable
2026-04-15 14:40:15 +03:00
Pavel Emelyanov
a428472e50 db: Remove redundant enable_logstor config option
The enable_logstor configuration option is redundant with the 'logstor'
experimental feature flag. Consolidate to a single gate: use the
experimental feature to control both whether logstor is available for
table creation and whether it is initialized at database startup.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#29427
2026-04-15 14:40:15 +03:00
Tomasz Grabiec
50fbac6ea6 tablets: Introduce pow2_count per-table tablet option
By default it's true, in which case the table's tablet count is
rounded up to a power of two. Setting it to false lifts this
restriction, allowing an arbitrary count. This will allow testing
the arbitrary-tablet-count logic.
2026-04-15 10:40:56 +02:00
Tomasz Grabiec
a58243bc1e Merge 'hint_sender: send hints to all tablet replicas if the tablet is leaving due to RF--' from Ferenc Szili
Currently, hints that are sent to tablet replicas which are leaving due to RF-- can be lost, because `hint_sender` only checks whether the destination host is leaving. To avoid this, we add a new method `effective_replication_map::is_leaving(host, token)` which checks whether the tablet identified by the given token is leaving the host. This method is called by the `hint_sender` to decide whether the hint should be sent only to the destination host or to all the replicas. This way, we increase consistency. For vnode-based ERMs, `is_leaving()` calls `token_metadata::is_leaving(host)`.

Fixes: SCYLLADB-287

This is an improvement, and backport is not needed.

Closes scylladb/scylladb#28770

* github.com:scylladb/scylladb:
  test: verify hints are delivered during tablet RF reduction
  hint_sender: use per-tablet is_leaving() to avoid losing hints on RF reduction
  erm: add is_leaving() to effective_replication_map
2026-04-14 22:51:34 +02:00
Avi Kivity
21d9f54a9a partition_snapshot_row_cursor: fix reversed maybe_refresh() losing latest version entry
In partition_snapshot_row_cursor::maybe_refresh(), the !is_in_latest_version()
path calls lower_bound(_position) on the latest version's rows to find the
cursor's position in that version. When lower_bound returns null (the cursor
is positioned above all entries in the latest version in table order), the code
unconditionally sets _background_continuity = true and allows the subsequent
if(!it) block to erase the latest version's entry from the heap.

This is correct for forward traversal: null means there are no more entries
ahead, so removing the version from the heap is safe.

However, in reversed mode, null from lower_bound means the cursor is above
all entries in table order -- those entries are BELOW the cursor in query
order and will be visited LATER during reversed traversal. Erasing the heap
entry permanently loses them, causing live rows to be skipped.

The fix mirrors what prepare_heap() already does correctly: when lower_bound
returns null in reversed mode, use std::prev(rows.end()) to keep the last
entry in the heap instead of erasing it.
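
A small self-contained illustration of the iterator logic (std::set stands in for the latest version's rows; the cursor and heap machinery are elided):

```
#include <cassert>
#include <optional>
#include <set>

// lower_bound(pos) == end() means "no entry at or after pos in table order".
// Forward traversal: nothing lies ahead, so the version can leave the heap.
// Reversed traversal: every entry is before pos in table order, i.e. still
// ahead of the cursor in query order -- keep the last one instead of erasing.
std::optional<int> entry_to_keep(const std::set<int>& rows, int pos, bool reversed) {
    auto it = rows.lower_bound(pos);
    if (it != rows.end()) {
        return *it;
    }
    if (!reversed || rows.empty()) {
        return std::nullopt;           // safe to drop the version from the heap
    }
    return *std::prev(rows.end());     // the fix: keep the last entry
}

int main() {
    std::set<int> rows{1, 2, 3};
    assert(!entry_to_keep(rows, 10, /*reversed=*/false));     // forward: nothing ahead
    assert(*entry_to_keep(rows, 10, /*reversed=*/true) == 3); // reversed: 3 is visited later
}
```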

Add test_reversed_maybe_refresh_keeps_latest_version_entry to mvcc_test,
alongside the existing reversed cursor tests. The test creates a two-version
partition snapshot (v0 with range tombstones, v1 with a live row positioned
below all v0 entries in table order), and
traverses in reverse calling maybe_refresh() at each step -- directly
exercising the buggy code path. The test fails without the fix.

The bug was introduced by 6b7473be53 ("Handle non-evictable snapshots",
2022-11-21), which added null-iterator handling for non-evictable snapshots
(memtable snapshots lack the trailing dummy entry that evictable snapshots
have). prepare_heap() got correct reversed-mode handling at that time, but
maybe_refresh() received only forward-mode logic.

The bug is intermittent because multiple mechanisms cause iterators_valid()
to return false, forcing maybe_refresh() to take the full rebuild path via
prepare_heap() (which handles reversed mode correctly):
  - Mutation cleaner merging versions in the background (changes change_mark)
  - LSA segment compaction during reserve() (invalidates references)
  - B-tree rebalancing on partition insertion (invalidates references)
  - Debug mode's always-true need_preempt() creating many multi-version
    partitions via preempted apply_monotonically()

A dtest reproducer confirmed the same root cause: with 100K overlapping range
tombstones creating a massively multi-version memtable partition (287K preemption
events), the reversed scan's latest_iterator was observed jumping discontinuously
during a version transition -- the latest version's heap entry was erased --
causing the query to walk the entire partition without finding the live row.

Fixes: SCYLLADB-1253

Closes scylladb/scylladb#29368
2026-04-14 21:50:25 +02:00
bitpathfinder
c06adffd6a commitlog: fix replay order by using ordered map per shard
The commitlog replayer groups segments by shard using a
std::unordered_multimap, then iterates per-shard segments via
equal_range(). However, equal_range() does not guarantee iteration
order for elements with the same key, so segments could be replayed
out of order within a shard. This can increase memory and disk
consumption during fragmented entry reconstruction, which accumulates
fragments across segments and benefits from ascending ID order.

This is also required by the strongly consistent tables feature,
particularly commitlog-based storage that relies on replayed raft
items being stored in order.

Fix by changing the data structure from
  std::unordered_multimap<unsigned, commitlog::descriptor>
to
  std::unordered_map<unsigned, utils::chunked_vector<commitlog::descriptor>>

Since the descriptors are inserted from a std::set ordered by ID, the
vector preserves insertion (and thus ID) order. The per-shard iteration
now simply iterates the vector, guaranteeing correct replay order.

Fixes SCYLLADB-1411.
2026-04-14 16:05:17 +02:00
Avi Kivity
0ae22a09d4 LICENSE: Update to version 1.1
Updated terms of non-commercial use (must be a never-customer).
2026-04-12 19:46:33 +03:00
Avi Kivity
22949bae52 Merge 'logstor: implement tablet split/merge and migration' from Michael Litvak
Implement tablet split, tablet merge, and tablet migration for tables that use the experimental logstor storage engine.

* tablet merge simply merges the histograms of segments of one compaction group with another.
* for tablet split we take the segments from the source compaction group, read them and write all live records to separate segments according to the split classifier, and move separated segments to the target compaction groups.
* for tablet migration we use stream_blob, similarly to file streaming of sstables. we add a new op type for streaming a logstor segment. on the source we take a snapshot of the segments with an input stream that reads the segment, and on the target we create a sink that allocates a new segment on the target shard and writes to it.
* we also make some improvements to recovery and loading of segments: we add a segment header that contains useful information for non-mixed segments, such as the table and token range.

Refs SCYLLADB-770

no backport - still a new and experimental feature

Closes scylladb/scylladb#29207

* github.com:scylladb/scylladb:
  test: logstor: additional logstor tests
  docs/dev: add logstor on-disk format section
  logstor: add version and crc to buffer header
  test: logstor: tablet split/merge and migration
  logstor: enable tablet balancing
  logstor: streaming of logstor segments using stream_blob
  logstor: add take_logstor_snapshot
  logstor: segment input/output stream
  logstor: implement compaction_group::cleanup
  logstor: tablet split
  logstor: tablet merge
  logstor: add compaction reenabler
  logstor: add segment header
  logstor: serialize writes to active segment
  replica: extend compaction_group functions for logstor
  replica: add compaction_group_for_logstor_segment
  logstor: code cleanup
2026-04-12 16:11:12 +03:00
Benny Halevy
e4f0539acf query: result_set: change row member to a chunked vector
To prevent large memory allocations.
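
A minimal sketch of the change, assuming Scylla's utils::chunked_vector (which stores elements in fixed-size chunks instead of one contiguous buffer); the surrounding result_set types are elided:

```
#include "utils/chunked_vector.hh"

struct result_set_row { /* column values elided */ };

struct result_set {
    // Before: std::vector<result_set_row> _rows -- a large result forces one
    // big contiguous allocation that grows and relocates as rows are appended.
    // After: chunked storage bounds every allocation to a fixed chunk size.
    utils::chunked_vector<result_set_row> _rows;
};
```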

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2026-04-12 10:00:49 +03:00
Avi Kivity
8ccee6803e Merge 'Remove upgrade view builder' from Gleb Natapov
Since we no longer support upgrades from versions that do not support
v2 of the "view building status" code (building status is managed by raft), we can remove the v1 code and the upgrade code, and make sure we do not boot with the old "builder status" version.

The v2 version was introduced by 8d25a4d678, which is included in scylla-2025.1.0.

No backport needed since this is code removal.

Closes scylladb/scylladb#29105

* github.com:scylladb/scylladb:
  view: drop unused v1 builder code
  view: remove upgrade to raft code
2026-04-12 00:39:26 +03:00
Avi Kivity
ca80ee8586 Merge 'Introduce maintenance scheduling supergroup and do initial population' from Pavel Emelyanov
The supergroup replaces the streaming (a.k.a. maintenance) group, inherits its 200 shares, and consists of four sub-groups (all with equal shares of 200 within the new supergroup):

* maintenance_compaction. This group configures `compaction_manager::maintenance_sg()` group. User-triggered compaction runs in it
* backup. This group configures `snapshot_ctl::config::backup_sched_group`. Native backup activity runs there
* maintenance. It's a new "visible" name: everything that was called "maintenance" in the code used to run in the "streaming" group; now it runs in "maintenance". The activities include those that don't communicate over RPC (see below for why)
  * `tablet_allocator::balance_tablets()`
  * `sstables_manager::components_reclaim_reload_fiber()`
  * `tablet_storage_group_manager::merge_completion_fiber()`
  * metrics exporting http server altogether
* streaming. This is purely the existing streaming group that just moves under the new supergroup. Everything else that was running there continues to do so, including
  * hints sender
  * all view building related components (update generator, builder, workers)
  * repair
  * stream_manager
  * messaging service (except for verb handlers that switch groups)
  * join_cluster() activity
  * REST API
  * ... something else I forgot

The `--maintenance_io_throughput_mb_per_sec` option is introduced. It controls the IO throughput limit applied to the maintenance supergroup. If not set, the `--stream_io_throughput_mb_per_sec` option is used to preserve backward compatibility.

All new sched groups inherit `request_class::maintenance` (however, "backup" seems not to make any requests yet).

Moving more activities from "streaming" into "maintenance" (or into their own groups) is possible, but one will need to take care of RPC group switching. The thing is that when a client makes an RPC call, the server may switch to one of the pre-negotiated scheduling groups. Verbs for existing activities that run in the "streaming" group are routed through an RPC index that negotiates the "streaming" group on the server side. If any of that client code moves to some other group, the server will still run the handlers in "streaming", which is not quite expected. That's one of the main reasons why only the selected fibers were moved to their own "maintenance" group. Similar for backup -- this code doesn't use RPC, so it can be moved. Restore code uses load-and-stream and the corresponding RPCs, so it cannot simply be moved into its own new group.

Fixes SCYLLADB-351

New feature, not backporting

Closes scylladb/scylladb#28542

* github.com:scylladb/scylladb:
  code: Add maintenance/maintenance group
  backup: Add maintenance/backup group
  compaction: Add maintenance/maintenance_compaction group
  main: Introduce maintenance supergroup
  main: Move all maintenance sched group into streaming one
  database: Use local variable for current_scheduling_group
  code: Live-update IO throughputs from main
2026-04-12 00:34:48 +03:00
Patryk Jędrzejczak
751bf31273 Merge 'More gossiper cleanups' from Gleb Natapov
The PR contains more code cleanups, mostly in the gossiper. It drops more gossiper states, leaving only NORMAL and SHUTDOWN; all other states are checked against the topology state. Those two are left because the SHUTDOWN state is propagated through the gossiper only, and when the node is not in SHUTDOWN it should be in some other state.

No need to backport. Cleanups.

Closes scylladb/scylladb#29129

* https://github.com/scylladb/scylladb:
  storage_service: cleanup unused code
  storage_service: simplify get_peer_info_for_update
  gossiper: send shutdown notifications in parallel
  gms: remove unused code
  virtual_tables: no need to call gossiper if we already know that the node is in shutdown
  gossiper: print node state from raft topology in the logs
  gossiper: use is_shutdown instead of code it manually
  gossiper: mark endpoint_state(inet_address ip) constructor as explicit
  gossiper: remove unused code
  gossiper: drop last use of LEFT state and drop the state
  gossiper: drop unused STATUS_BOOTSTRAPPING state
  gossiper: rename is_dead_state to is_left since this is all that the function checks now.
  gossiper: use raft topology state instead of gossiper one when checking node's state
  storage_service: drop check_for_endpoint_collision function
  storage_service: drop is_first_node function
  gossiper: remove unused REMOVED_TOKEN state
  gossiper: remove unused advertise_token_removed function
2026-04-10 09:56:20 +02:00
Yaniv Michael Kaul
5c8b4a003e db: set_skip_when_empty() for rare error-path metrics
Add .set_skip_when_empty() to four metrics in the db module that are
only incremented on very rare error paths and are almost always zero:

- cache::pinned_dirty_memory_overload: described as 'should sit
  constantly at 0, nonzero is indicative of a bug'
- corrupt_data::entries_reported: only fires on actual data corruption
- hints::corrupted_files: only fires on on-disk hint file corruption
- rate_limiter::failed_allocations: only fires when the rate limiter
  hash table is completely full and gives up allocating, requiring
  extreme cardinality pressure

These metrics create unnecessary reporting overhead when they are
perpetually zero. set_skip_when_empty() suppresses them from metrics
output until they become non-zero.
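
A minimal sketch of the registration pattern, assuming the Seastar metrics API (the group, metric name, and counter member are illustrative):

```
#include <seastar/core/metrics.hh>

#include <cstdint>

namespace sm = seastar::metrics;

struct cache_stats_sketch {
    uint64_t pinned_dirty_memory_overload = 0;
    sm::metric_groups metrics;

    void register_metrics() {
        metrics.add_group("cache", {
            // Almost always zero: suppress it from the metrics output until
            // it becomes non-zero, avoiding per-scrape reporting overhead.
            sm::make_counter("pinned_dirty_memory_overload",
                             pinned_dirty_memory_overload,
                             sm::description("Should sit constantly at 0; nonzero indicates a bug"))
                .set_skip_when_empty(),
        });
    }
};
```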

AI-Assisted: yes
Signed-off-by: Yaniv Kaul <yaniv.kaul@scylladb.com>

Closes scylladb/scylladb#29344
2026-04-09 13:32:09 +03:00
Gleb Natapov
b2e35c538f virtual_tables: no need to call gossiper if we already know that the node is in shutdown 2026-04-09 13:31:40 +03:00
Dawid Mędrek
a8bc90a375 Merge 'cql3: fix DESCRIBE INDEX WITH INTERNALS name' from Piotr Smaron
This series fixes two related inconsistencies around secondary-index
names.
1. `DESCRIBE INDEX ... WITH INTERNALS` returned the backing
   materialized-view name in the `name` column instead of the logical
   index name.
2. The snapshot REST API accepted backing table names for MV-backed
   secondary indexes, but not the logical index names exposed to users.

The snapshot side now resolves logical secondary-index names to backing
table names where applicable, reports logical index names in snapshot
details, rejects vector index names with HTTP 400, and keeps multi-keyspace
DELETE atomic by resolving all keyspaces before deleting anything.
The tests were also extended accordingly, and the snapshot test helper
was fixed to clean up multi-table snapshots using one DELETE per table.

Fixes: SCYLLADB-1122

Minor bugfix, no need to backport.

Closes scylladb/scylladb#29083

* github.com:scylladb/scylladb:
  cql3: fix DESCRIBE INDEX WITH INTERNALS name
  test: add snapshot REST API tests for logical index names
  test: fix snapshot cleanup helper
  api: clarify snapshot REST parameter descriptions
  api: surface no_such_column_family as HTTP 400
  db: fix clear_snapshot() atomicity and use C++23 lambda form
  db: normalize index names in get_snapshot_details()
  db: add resolve_table_name() to snapshot_ctl
2026-04-09 08:37:51 +03:00
Piotr Dulikowski
ec0231c36c Merge 'db/view/view_building_worker: lock staging sstables mutex for all necessary shards when creating tasks' from Michał Jadwiszczak
To create `process_staging` view building tasks, we first need to collect information about them on shard 0, create the necessary mutations, commit them to group0, and move the staging sstable objects to their original shards.

But there is a possible race after committing the group0 command and before moving the staging sstables to their shards. Between those two events, the coordinator may schedule the freshly created tasks and dispatch them to the worker, but the worker won't have the sstable objects because they haven't been moved yet.

This patch fixes the race by holding the `_staging_sstables_mutex` locks from all necessary shards when executing `create_staging_sstable_tasks()`. With this, even if a task is scheduled and dispatched quickly, the worker will wait to execute it until the sstable objects are moved and the locks are released.

Fixes SCYLLADB-816

This PR should be backported to all versions containing the view building coordinator (2025.4 and newer).

Closes scylladb/scylladb#29174

* github.com:scylladb/scylladb:
  db/view/view_building_worker: fix indentation
  db/view/view_building_worker: lock staging sstables mutex for necessary shards when creating tasks
2026-04-09 08:37:51 +03:00
Piotr Smaron
7d83a264ac db: fix clear_snapshot() atomicity and use C++23 lambda form
clear_snapshot() applies a table filter independently in
each keyspace, so logical index names must be resolved
per keyspace on the delete path as well.

Resolve all keyspaces before deleting anything so a later
failure cannot partially remove a snapshot, and use the
explicit-object-parameter coroutine lambda form for the
asynchronous implementation.
2026-04-08 13:36:27 +02:00
Piotr Smaron
39baa1870e db: normalize index names in get_snapshot_details()
Snapshot details exposed backing secondary-index view
names instead of logical index names.

Normalize index entries in get_snapshot_details() so the
REST API reports the user-facing name, and update the
existing REST test to assert that behavior directly.
2026-04-08 13:36:27 +02:00
Piotr Smaron
9c37f1def2 db: add resolve_table_name() to snapshot_ctl
The snapshot REST API accepted backing secondary-index
table names, but not logical index names.

Introduce resolve_table_name() so snapshot creation can
translate a logical index name to the backing table when
the index is materialized as a view.
2026-04-08 13:36:27 +02:00
Pavel Emelyanov
7f854c0255 hints: Use shorter fault-injection overload
In order to apply a fault-injected delay, there's the inject(duration)
overload. This results in shorter code.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#29168
2026-04-08 10:51:37 +03:00
Botond Dénes
aeefbda304 Merge 'Simplify and improve API describe_ring code flow' from Pavel Emelyanov
The endpoint in question has some places worth fixing, in particular

- the keyspace parameter is not validated
- the validated table name is resolved into table_id, but the id is unused
- two ugly static helpers to stream obtained token ranges into json

Improving the API code flow, not backporting

Closes scylladb/scylladb#29154

* github.com:scylladb/scylladb:
  api: Inline describe_ring JSON handling
  storage_service: Make describe_ring_for_table() take table_id
2026-04-08 10:50:07 +03:00
Michał Jadwiszczak
9cf94116c2 db/view/view_building_worker: fix indentation 2026-04-07 16:12:04 +02:00
Michał Jadwiszczak
c9aa5bb09c db/view/view_building_worker: lock staging sstables mutex for necessary shards when creating tasks
To create `process_staging` view building tasks, we first need to
collect information about them on shard 0, create the necessary
mutations, commit them to group0, and move the staging sstable objects
to their original shards.

But there is a possible race after committing the group0 command
and before moving the staging sstables to their shards.
Between those two events, the coordinator may schedule the freshly
created tasks and dispatch them to the worker, but the worker won't
have the sstable objects because they haven't been moved yet.

This patch fixes the race by holding the `_staging_sstables_mutex` locks
from the necessary shards when executing `create_staging_sstable_tasks()`.
With this, even if a task is scheduled and dispatched quickly,
the worker will wait to execute it until the sstable objects are
moved and the locks are released.

Fixes SCYLLADB-816
2026-04-07 16:11:45 +02:00
Avi Kivity
00409b61f1 Merge 'Add Vnodes to Tablets Migration Procedure' from Nikos Dragazis
This PR introduces the vnodes-to-tablets migration procedure, which enables converting an existing vnode-based keyspace to tablets.

The migration is implemented as a manual, operator-driven process executed in several stages. The core idea is to first create tablet maps with the same token boundaries and replica hosts as the vnodes, and then incrementally convert the storage of each node to the tablets layout. At a high level, the procedure is the following:
1. Create tablet maps for all tables in the keyspace.
2. Sequentially upgrade all nodes from vnodes to tablets:
    1. Mark a node for upgrade in the topology state.
    2. Restart the node. During startup, while the node is offline, it reshards the SSTables on vnode boundaries and switches to a tablet ERM.
    3. Wait for the node to return online before proceeding to the next node.
3. Finalize the migration:
    1. Update the keyspace schema to mark it as tablet-based.
    2. Clear the group0 state related to the migration.

From the client's perspective, the migration is online; the cluster can still serve requests on that keyspace, although performance may be temporarily degraded.

During the migration, some nodes use vnode ERMs while others use tablet ERMs. Cluster-level algorithms such as load balancing will treat the keyspace's tables as vnode-based. Once migration is finalized, the keyspace is permanently switched to tablets and cannot be reverted back to vnodes. However, a rollback procedure is available before finalization.

The patch series consists of:
* Load balancer adjustments to ignore tablets belonging to a migrating keyspace.
* A new vnode-based resharding mode, where SSTables are segregated on vnode boundaries rather than with the static sharder.
* A new per-node `intended_storage_mode` column in `system.topology`. Represents migration intent (whether migration should occur on restart) and direction.
* Four new REST endpoints for driving the migration (start, node upgrade/downgrade, finalize, status), along with `nodetool` wrappers. The finalization is implemented as a global topology request.
* Wiring of the migration process into the startup logic: the `distributed_loader` determines a migrating table's ERM flavor from the `intended_storage_mode` and the ERM flavor determines the `table_populator`'s resharding mode. Token metadata changes have been adjusted to preserve the ERM flavor.
* Cluster tests for the migration process.

Fixes SCYLLADB-722.
Fixes SCYLLADB-723.
Fixes SCYLLADB-725.
Fixes SCYLLADB-779.
Fixes SCYLLADB-948.

New feature, no backport is needed.

Closes scylladb/scylladb#29065

* github.com:scylladb/scylladb:
  docs: Add ops guide for vnodes-to-tablets migration
  test: cluster: Add test for migration of multiple keyspaces
  test: cluster: Add test for error conditions
  test: cluster: Add vnodes->tablets migration test (rollback)
  test: cluster: Add vnodes->tablets migration test (1 table, 3 nodes)
  test: cluster: Add vnodes->tablets migration test (1 table, 1 node)
  scylla-nodetool: Add migrate-to-tablets subcommand
  api: Add REST endpoint for vnode-to-tablet migration status
  api: Add REST endpoint for migration finalization
  topology_coordinator: Add `finalize_migration` request
  database: Construct migrating tables with tablet ERMs
  api: Add REST endpoint for upgrading nodes to tablets
  api: Add REST endpoint for starting vnodes-to-tablets migration
  topology_state_machine: Add intended_storage_mode to system.topology
  distributed_loader: Wire vnode-based resharding into table populator
  replica: Pick any compaction group for resharding
  compaction: resharding_compaction: add vnodes_resharding option
  storage_service: Preserve ERM flavor of migrating tables
  tablet_allocator: Exclude migrating tables from load balancing
  feature_service: Add vnodes_to_tablets_migrations feature
2026-04-07 14:32:22 +03:00
Łukasz Paszkowski
6f364fd3b7 db: fix system.size_estimates to aggregate sstable estimates across all shards
The estimate() function in the size_estimates virtual reader only
considered sstables local to the shard that happened to own the
keyspace's partition key token. Since sstables are distributed across
shards, this caused partition count estimates to be approximately
1/smp_count of the actual value.

This bug has been present since the virtual reader was introduced in
225648780d.

Use db.container().map_reduce0() to aggregate sstable estimates
across all shards. Each shard contributes its local count and
estimated_histogram, which are then merged to produce the correct
total.
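
A minimal sketch of the cross-shard aggregation, assuming Seastar's sharded<>::map_reduce0 (the estimate type and the shard-local computation are illustrative):

```
#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>

#include <cstdint>

struct estimate {
    int64_t partition_count = 0;
    // estimated_histogram elided

    estimate& merge(const estimate& other) {
        partition_count += other.partition_count;
        return *this;
    }
};

struct database_stub {}; // stands in for the shard-local replica::database

// Ask every shard for its local sstable-based estimate and merge the results,
// instead of reading only the shard that owns the keyspace's partition key token.
seastar::future<estimate> estimate_across_shards(seastar::sharded<database_stub>& db) {
    return db.map_reduce0(
        [] (database_stub& local_db) {
            estimate e;
            // compute e from this shard's local sstables (elided)
            (void)local_db;
            return e;
        },
        estimate{},
        [] (estimate acc, estimate shard_result) {
            acc.merge(shard_result);
            return acc;
        });
}
```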

Also fix the `test_partitions_estimate_full_overlap` test which becomes
flaky (xpassing ~1% of runs) because autocompaction could merge the
two overlapping sstables before the size estimate was read. Wrap the
test body in nodetool.no_autocompaction_context to prevent this race.

Fixes https://scylladb.atlassian.net/browse/SCYLLADB-1179
Refs https://github.com/scylladb/scylladb/issues/9083

Closes scylladb/scylladb#29286
2026-04-07 14:13:26 +03:00
Michał Jadwiszczak
51c164c8d2 view_building_worker: extract starting a new batch to state's method
Following the previous commit, a new batch cannot be started if the
state was already drained.
This commit also adds a check that only one batch is running at a time.
2026-04-07 08:39:05 +02:00
Michał Jadwiszczak
639aa223f3 view_building_worker: distinguish between state's clear() and drain()
While both of these methods do the same thing (abort the current batch,
clear data), we can clear the state multiple times during the
view_building_worker's lifetime (for instance when the processed base
table changes), but `view_building_worker::state::drain()` should be
called only once, and after it no other work on the state should be done.
2026-04-07 08:39:05 +02:00
Michał Jadwiszczak
7aea524f52 view_building_worker: lock mutexes before breaking them in drain()
Not doing this may lead to races like SCYLLADB-844.
If some consumer is holding a lock on a mutex and `drain()`
just breaks the mutex without locking it beforehand,
then the consumer may keep executing code that should have been aborted.

An example of the race is SCYLLADB-844, where `work_on_tasks()` is
holding `_state._mutex` while it is broken by `drain()`.
This causes a new batch to be started after the `_state` is cleared.
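
A minimal sketch of the ordering, modelling the mutex as a Seastar semaphore with a single unit (as the "broken" terminology suggests; names are illustrative):

```
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/core/semaphore.hh>

class worker_state_sketch {
    seastar::semaphore _mutex{1};
public:
    // Racy: breaking the semaphore while a consumer still holds the unit lets
    // that consumer finish its critical section (e.g. start a new batch) even
    // though it should have been aborted.
    void drain_unsafe() {
        _mutex.broken();
    }

    // Fixed: acquire the unit first, so any in-flight critical section has
    // finished before the semaphore is broken and the state is cleared.
    seastar::future<> drain() {
        auto units = co_await seastar::get_units(_mutex, 1);
        _mutex.broken();   // subsequent waiters fail with broken_semaphore
        // clear_state();  // safe: no consumer is inside the critical section
    }
};
```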
2026-04-07 08:39:00 +02:00
Michał Jadwiszczak
91c7ac1fb2 view_building_worker: execute drain() once
Future changes will require that the view building worker is drained
only once per its lifetime.
2026-04-07 08:35:02 +02:00
Michael Litvak
996d623ab4 logstor: enable tablet balancing
enable tablet balancing with the logstor feature now that it works
2026-03-31 18:45:08 +02:00
Ferenc Szili
7b308f3aa0 test: verify hints are delivered during tablet RF reduction
Add test_hint_to_leaving_when_reducing_rf which verifies that mutations
stored as hints are delivered to the correct replicas when a tablet is
removed due to RF reduction. The test sets up a 3-node cluster with
RF=2, drops the hint for one replica via error injection, then reduces
RF to 1 while hints are pending. It asserts that the mutation is
readable after the topology change completes.

Also adds a "drop_hint_for_host" error injection point in
hint_endpoint_manager to selectively drop hints for a specific host.
2026-03-31 09:18:42 +02:00
Ferenc Szili
1d64ddbdd3 hint_sender: use per-tablet is_leaving() to avoid losing hints on RF reduction
hint_sender decides whether to send a hint directly to its destination
or to re-mutate from scratch based on token_metadata::is_leaving(),
which only checks whether the *host* is leaving the cluster. When a
tablet is dropped from a host due to RF reduction (RF--), the host
is still alive and is_leaving() returns false, so hint_sender sends
directly to a replica that will no longer own the data -- effectively
losing the hint.

Switch to the new ermp->is_leaving(host, token) which is tablet-aware.
When the destination's tablet is being migrated away *and* there are
pending endpoints, send directly (the pending endpoints will receive
the data via streaming); otherwise fall through to the re-mutate path
so all current replicas receive the mutation.
2026-03-30 15:49:59 +02:00
Andrzej Jackowski
181ad9f476 Revert "audit: disable DDL by default"
This reverts commit c30607d80b.

With the default configuration, enabling DDL has no effect because
no `audit_keyspaces` or `audit_tables` are specified. Including DDL
in the default categories can be misleading for some customers, and
ideally we would like to avoid it.

However, DDL has been one of the default audit categories for years,
and removing it risks silently breaking existing deployments that
depend on it. Therefore, the recent change to disable DDL by default
is reverted.

Fixes: SCYLLADB-1155

Closes scylladb/scylladb#29169
2026-03-27 09:55:11 +01:00
Nikos Dragazis
c88ddecfca topology_coordinator: Add finalize_migration request
Vnodes-to-tablets migration needs a finalization step to finish or
roll back the migration. Finishing the migration involves switching the
keyspace schema to tablets and clearing the `intended_storage_mode` from
system.topology. Rolling back the migration involves deleting the tablet
maps and clearing the `intended_storage_mode`.

The finalization needs to be done as a topology request so that it is
mutually exclusive with other operations such as repair and TRUNCATE.

This patch introduces the `finalize_migration` global topology request
for this purpose. The request takes a keyspace name as an argument.
The direction of the finalization (i.e., forward path vs rollback) is
inferred from the `intended_storage_mode` of all nodes (not ideal,
should be made explicit).

Signed-off-by: Nikos Dragazis <nikolaos.dragazis@scylladb.com>
2026-03-24 13:20:39 +02:00