So that a multi-dc/multi-rack cluster can be populated in a single call.
Original PR: scylladb/scylladb#23341
Fixes https://github.com/scylladb/scylladb/issues/23551
(cherry picked from commit 0fdf2a2090)
Closes scylladb/scylladb#23549
* github.com:scylladb/scylladb:
Merge 'test/pylib: servers_add: support list of property_files' from Benny Halevy
test.py: Remove reuse cluster in cluster tests
test.py: apply prepare_3_nodes_cluster in topology
test.py: introduce prepare_3_nodes_cluster marker
An empty object of the json::json_list type has its internal
_set variable assigned to false, which results in such objects
being skipped by the json::json_builder.
Hence, the JSON returned by the API GET /compaction_manager/compaction_history
does not contain the field `rows_merged` if the corresponding cell in the
system.compaction_history table is null or an empty list.
In such cases, executing the command `nodetool compactionhistory`
crashes with the following error message:
`error running operation: rjson::error (JSON assert failed on condition 'false'`
The patch fixes this by checking whether the JSON object contains the
`rows_merged` element before processing it. If the element does
not exist, nodetool now produces an empty list.
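A hedged sketch of such a presence check, written directly against rapidjson
(Scylla's rjson is a wrapper over it); the function name and element type
here are assumptions, not the actual nodetool code:
```
#include <rapidjson/document.h>
#include <cstdint>
#include <vector>

// Sketch: tolerate a missing or non-array `rows_merged` field instead of
// asserting, and report an empty list in that case.
std::vector<int64_t> parse_rows_merged(const rapidjson::Value& entry) {
    std::vector<int64_t> rows_merged;
    if (!entry.HasMember("rows_merged") || !entry["rows_merged"].IsArray()) {
        return rows_merged; // field absent or null: produce an empty list
    }
    for (const auto& item : entry["rows_merged"].GetArray()) {
        rows_merged.push_back(item.GetInt64());
    }
    return rows_merged;
}
```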
Fixes https://github.com/scylladb/scylladb/issues/23540
Closes scylladb/scylladb#23514
(cherry picked from commit 113647550f)
Closes scylladb/scylladb#24113
test_tablet_repair_hosts_filter checks whether the host filter
specified for tablet repair is correctly persisted. To check this,
we need to ensure that the repair is still ongoing and its data
is kept. The test achieves that by failing the repair on the replica
side, as the failed repair is going to be retried.
However, if the filter does not contain any host (included_host_count = 0),
the repair is started on no replicas, so the request succeeds
and its data is deleted. The test fails if it checks the filter
after the repair request data is removed.
Fail the repair on the topology coordinator side instead, so the request
stays ongoing regardless of the specified hosts.
Fixes: #23986.
Closes scylladb/scylladb#24003
(cherry picked from commit 2549f5e16b)
Closes scylladb/scylladb#24075
The test sometimes fails in CI due to performance issues.
There are at least two problems:
1. The initial 500ms (wall time) sleep might be too short. If the reclaimer
doesn't manage to evict enough memory during this time, the test will fail.
2. During the 100ms (thread CPU time) window given by the test to background
reclaim, the `background_reclaim` scheduling group isn't actually
guaranteed to get any CPU, regardless of shares. If the process is
switched out inside the `background_reclaim` group, it might
accumulate so much vruntime that it won't get any more CPU again
for a long time.
We have seen both.
This kind of timing test can't be run reliably on overcommitted machines
without modifying the Seastar scheduler to support that (by e.g. using
thread clock instead of wall time clock in the scheduler), and that would
require an amount of effort disproportionate to the value of the test.
So for now, to unflake the test, this patch removes the performance test
part. (And the tradeoff is a weakening of the test). After the patch,
we only check that the background reclaim happens *eventually*.
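For illustration, the "eventually" pattern the test now relies on can be
sketched in plain C++ (a generic sketch, not the actual test framework
helper):
```
#include <chrono>
#include <functional>
#include <thread>

// Generic sketch of the "eventually" pattern: poll the condition until a
// deadline instead of asserting after a fixed sleep.
bool eventually(const std::function<bool()>& cond,
                std::chrono::seconds timeout = std::chrono::seconds(60)) {
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (std::chrono::steady_clock::now() < deadline) {
        if (cond()) {
            return true; // e.g. background reclaim freed the expected memory
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return false;
}
```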
Fixes https://github.com/scylladb/scylladb/issues/15677
Backporting this is optional. The test is flaky even in stable branches, but the failure is rare.
- (cherry picked from commit c47f438db3)
- (cherry picked from commit 1c1741cfbc)
Parent PR: #24030
Closes scylladb/scylladb#24092
* github.com:scylladb/scylladb:
logalloc_test: don't test performance in test `background_reclaim`
logalloc: make background_reclaimer::free_memory_threshold publicly visible
Currently, stream_manager is initialized after storage_service and
so it is stopped before storage_service is. In its stop method,
storage_service accesses stream_manager, which is already uninitialized
at that time.
Move the stream_manager initialization ahead of the storage_service initialization.
Fixes: #23207.
Closes scylladb/scylladb#24008
(cherry picked from commit 9c03255fd2)
Closes scylladb/scylladb#24189
The test sometimes fails in CI due to performance issues.
There are at least two problems:
1. The initial 500ms (wall time) sleep might be too short. If the reclaimer
doesn't manage to evict enough memory during this time, the test will fail.
2. During the 100ms (thread CPU time) window given by the test to background
reclaim, the `background_reclaim` scheduling group isn't actually
guaranteed to get any CPU, regardless of shares. If the process is
switched out inside the `background_reclaim` group, it might
accumulate so much vruntime that it won't get any more CPU again
for a long time.
We have seen both.
This kind of timing test can't be run reliably on overcommitted machines
without modifying the Seastar scheduler to support that (by e.g. using
thread clock instead of wall time clock in the scheduler), and that would
require an amount of effort disproportionate to the value of the test.
So for now, to unflake the test, this patch removes the performance test
part. (And the tradeoff is a weakening of the test).
(cherry picked from commit 1c1741cfbc)
Each view update is correlated to a write that generates it (aside from view
building, which is throttled separately). These writes are limited by a throttling
mechanism, which effectively works by performing the writes with CL=ALL if
ongoing writes exceed some memory usage limit.
When writes generate view updates, they usually also need to perform a read. This read
goes through a read concurrency semaphore where it can get delayed or killed. The
semaphore allows up to 100 concurrent reads and puts all remaining reads in a queue.
If the number of queued reads exceeds a specific limit, the view update will fail on
the replica, causing inconsistencies.
This limit is not necessary. When a read gets queued on the semaphore, the write that's
causing the view update is paused, so the write takes part in the regular write throttling.
If too many writes get stuck on view update reads, they get throttled, so their
number is limited, and the number of queued reads is thereby limited to the same amount.
In this patch we remove the queue length limit for the view update read concurrency
semaphore. Instead, the queue is now limited indirectly, by the base write
throttling mechanism. This may allow the queue to grow longer than with the previous limit, but
it shouldn't ever cause issues - we only perform up to 100 actual reads at once, and the
remaining ones that get queued use a tiny amount of memory, less than the writes that generated
them and which are being limited directly.
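As a rough sketch of the resulting shape (hypothetical names; the real
limiter is Scylla's reader concurrency semaphore, not a bare Seastar one):
```
#include <seastar/core/coroutine.hh>
#include <seastar/core/semaphore.hh>
#include <seastar/util/noncopyable_function.hh>

// Sketch: concurrency stays capped at 100, but the waiter queue no longer
// has a length limit -- backpressure now comes from the base-write
// throttling described above.
seastar::semaphore view_update_read_concurrency{100};

seastar::future<> with_view_update_read_permit(seastar::noncopyable_function<seastar::future<>()> read) {
    auto units = co_await seastar::get_units(view_update_read_concurrency, 1);
    co_await read();
}
```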
Fixes https://github.com/scylladb/scylladb/issues/23319
Closes scylladb/scylladb#24112
(cherry picked from commit 5920647617)
Closes scylladb/scylladb#24168
We're reducing the log level in case the provided property file is incomplete.
The rationale behind this change is related to how CCM interacts with Scylla:
* The `GossipingPropertyFileSnitch` reloads the `cassandra-rackdc.properties`
configuration every 60 seconds.
* When a new node is added to the cluster, CCM recreates the
`cassandra-rackdc.properties` file for EVERY node.
If those two processes start happening at about the same time, it may lead
to Scylla trying to read a not-completely-recreated file, and an error will
be produced.
Although we would normally fix this issue and try to avoid the race, that
behavior will no longer be relevant as we're making the rack and DC values
immutable (cf. scylladb/scylladb#23278). What's more, trying to fix the problem
in the older versions of Scylla could introduce a more serious regression. With
that in mind, this commit is a compromise between making CI less flaky and
having minimal impact when backported.
We do the same for when the format of the file is invalid: the rationale
is the same.
We also do that for when there is a double declaration. Although it seems
impossible that this can stem from the same scenario the other two errors
can (since if the format of the file is valid, the error is justified;
if the format is invalid, it should be detected sooner than a doubled
declaration), let's stay consistent with the logging level.
Fixes scylladb/scylladb#20092
Closes scylladb/scylladb#23956
(cherry picked from commit 9ebd6df43a)
Closes scylladb/scylladb#24142
Materialized Views and Secondary Indexes are two more features that
keyspaces with tablets do not support, but they were not listed in the
warning message returned to the user on a CREATE KEYSPACE statement. This
commit adds the two missing features.
Fixes: #24006
Closes scylladb/scylladb#23902
(cherry picked from commit f740f9f0e1)
Closes scylladb/scylladb#24079
When the topology coordinator is shut down while doing a long-running
operation, the current operation might throw a raft::request_aborted
exception. This is not a critical issue and should not be logged with
ERROR verbosity level.
Make sure that all the try..catch blocks in the topology coordinator
which:
- May try to acquire a new group0 guard in the `try` part
- Have a `catch (...)` block that prints an ERROR-level message
...have a pass-through `catch (raft::request_aborted&)` block which does
not log the exception.
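A minimal sketch of the pattern, with a stand-in exception type and logging
(not the coordinator's actual code):
```
#include <exception>
#include <iostream>

namespace raft { struct request_aborted : std::exception {}; } // stand-in for the real type

// Sketch: shutdown-driven aborts pass through without an ERROR-level log;
// anything unexpected is still logged.
void run_guarded(void (*operation)()) try {
    operation(); // may throw raft::request_aborted during shutdown
} catch (raft::request_aborted&) {
    throw; // expected during shutdown; re-thrown without an ERROR log
} catch (...) {
    std::cerr << "ERROR: topology operation failed\n";
}
```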
Fixes: scylladb/scylladb#22649
Closes scylladb/scylladb#23962
(cherry picked from commit 156ff8798b)
Closes scylladb/scylladb#24078
This PR improves and refactors the test.topology.util new_test_keyspace generator
and adds a corresponding create_new_test_keyspace function to be used by most if not
all topology unit tests in order to standardize the way the tests create keyspaces
and to mitigate the python driver create keyspace retry issue: https://github.com/scylladb/python-driver/issues/317
Fixes #22342
Fixes #21905
Refs https://github.com/scylladb/scylla-enterprise/issues/5060
Fixes #23699
- (cherry picked from commit 50ce0aaf1c)
- (cherry picked from commit 5d448f721e)
- (cherry picked from commit f946302369)
- (cherry picked from commit 0fd1b846fe)
- (cherry picked from commit a66ddb7c04)
- (cherry picked from commit df84097a4b)
- (cherry picked from commit 59687c25e0)
- (cherry picked from commit fdb339bf28)
- (cherry picked from commit 205ed113dd)
- (cherry picked from commit 57faab9ffa)
- (cherry picked from commit 4fefffe335)
- (cherry picked from commit 480a5837ab)
- (cherry picked from commit fed078a38a)
- (cherry picked from commit c6653e65ba)
- (cherry picked from commit 9c095b622b)
- (cherry picked from commit 0668c642a2)
- (cherry picked from commit 0e11aad9c5)
- (cherry picked from commit ef85c4b27e)
- (cherry picked from commit b13e48b648)
- (cherry picked from commit a82e734110)
- (cherry picked from commit 629ee3cb46)
- (cherry picked from commit 42a104038d)
- (cherry picked from commit d5e3c578f5)
- (cherry picked from commit c05794c156)
- (cherry picked from commit 966cf82dae)
- (cherry picked from commit 11005b10db)
- (cherry picked from commit ff9c8428df)
- (cherry picked from commit 55b35eb21c)
- (cherry picked from commit 5759a97eb4)
- (cherry picked from commit c68d2a471c)
- (cherry picked from commit e05372afa4)
- (cherry picked from commit 380c5e5ac8)
- (cherry picked from commit 3f35491264)
- (cherry picked from commit e72a9d3faa)
- (cherry picked from commit 47326d01b7)
- (cherry picked from commit 72bc4016e7)
- (cherry picked from commit 4fd6c2d24e)
- (cherry picked from commit 50a8f5c1c0)
- (cherry picked from commit 005ceb77d3)
- (cherry picked from commit 649e68c6db)
- (cherry picked from commit 0b88ea9798)
- (cherry picked from commit 6b37d04aa9)
- (cherry picked from commit e59aca66bf)
- (cherry picked from commit 5ff3153912)
- (cherry picked from commit 20f7eda16e)
- (cherry picked from commit f30e4c6917)
- (cherry picked from commit 96d327fb83)
- (cherry picked from commit 16ef78075c)
- (cherry picked from commit 2d4af01281)
- (cherry picked from commit b810791fbb)
- (cherry picked from commit 46b1850f0c)
- (cherry picked from commit 0564e95c51)
- (cherry picked from commit 12f85ce57c)
- (cherry picked from commit 9829b1594f)
- (cherry picked from commit cbe79b20f7)
- (cherry picked from commit cc281ff88d)
Parent PR: #22399
Closes scylladb/scylladb#23408
* github.com:scylladb/scylladb:
test_tablet_repair_scheduler: prepare_multi_dc_repair: use create_new_test_keyspace
test/repair: create_table_insert_data_for_repair: create keyspace with unique name
topology_tasks/test_tablet_tasks: use new_test_keyspace
topology_tasks/test_node_ops_tasks: use new_test_keyspace
topology_custom/test_zero_token_nodes_no_replication: use create_new_test_keyspace
topology_custom/test_zero_token_nodes_multidc: use create_new_test_keyspace
topology_custom/test_view_build_status: use new_test_keyspace
topology_custom/test_truncate_with_tablets: use new_test_keyspace
topology_custom/test_topology_failure_recovery: use new_test_keyspace
topology_custom/test_tablets_removenode: use create_new_test_keyspace
topology_custom/test_tablets_migration: use new_test_keyspace
topology_custom/test_tablets_merge: use new_test_keyspace
topology_custom/test_tablets_intranode: use new_test_keyspace
topology_custom/test_tablets_cql: use new_test_keyspace
topology_custom/test_tablets2: use new_test_keyspace
topology_custom/test_tablets2: test_schema_change_during_cleanup: drop unused check function
test/cluster/test_tablets.py: Fix erroneous test indentation
topology_custom/test_tablets: use new_test_keyspace
topology_custom/test_table_desc_read_barrier: use new_test_keyspace
topology_custom/test_shutdown_hang: use new_test_keyspace
topology_custom/test_select_from_mutation_fragments: use new_test_keyspace
topology_custom/test_rpc_compression: use new_test_keyspace
topology_custom/test_reversed_queries_during_simulated_upgrade_process: use new_test_keyspace
topology_custom/test_raft_snapshot_truncation: use create_new_test_keyspace
topology_custom/test_raft_no_quorum: use new_test_keyspace
topology_custom/test_raft_fix_broken_snapshot: use new_test_keyspace
topology_custom/test_query_rebounce: use new_test_keyspace
topology_custom/test_not_enough_token_owners: use new_test_keyspace
topology_custom/test_node_shutdown_waits_for_pending_requests: use new_test_keyspace
topology_custom/test_node_isolation: use create_new_test_keyspace
topology_custom/test_mv_topology_change: use new_test_keyspace
topology_custom/test_mv_tablets_replace: use new_test_keyspace
topology_custom/test_mv_tablets_empty_ip: use new_test_keyspace
topology_custom/test_mv_tablets: use new_test_keyspace
topology_custom/test_mv_read_concurrency: use new_test_keyspace
topology_custom/test_mv_fail_building: use new_test_keyspace
topology_custom/test_mv_delete_partitions: use new_test_keyspace
topology_custom/test_mv_building: use new_test_keyspace
topology_custom/test_mv_backlog: use new_test_keyspace
topology_custom/test_mv_admission_control: use new_test_keyspace
topology_custom/test_major_compaction: use new_test_keyspace
topology_custom/test_maintenance_mode: use new_test_keyspace
topology_custom/test_lwt_semaphore: use new_test_keyspace
topology_custom/test_ip_mappings: use new_test_keyspace
topology_custom/test_hints: use new_test_keyspace
topology_custom/test_group0_schema_versioning: use new_test_keyspace
topology_custom/test_data_resurrection_after_cleanup: use new_test_keyspace
topology_custom/test_read_repair_with_conflicting_hash_keys: use new_test_keyspace
topology_custom/test_read_repair: use new_test_keyspace
topology_custom/test_compacting_reader_tombstone_gc_with_data_in_memtable: use new_test_keyspace
topology_custom/test_commitlog_segment_data_resurrection: use new_test_keyspace
topology_custom/test_change_replication_factor_1_to_0: use new_test_keyspace
topology/test_tls: test_upgrade_to_ssl: use new_test_keyspace
test/topology/util: new_test_keyspace: drop keyspace only on success
test/topology/util: refactor new_test_keyspace
test/topology/util: CREATE KEYSPACE IF NOT EXISTS
test/topology/util: new_test_keyspace: accept ManagerClient
And create_new_test_keyspace for when we need the drop
to be explicit.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit e59aca66bf)
Some of the statements in the test are not indented properly
and, as a result, are never run. It's most likely a small mistake,
so let's fix it.
Closes scylladb/scylladb#23659
(cherry picked from commit 0ed21d9cc1)
Using the new_test_keyspace fixture is awkward for this test
as it is written to explicitly drop the created keyspaces
at certain points.
Therefore, just use create_new_test_keyspace to standardize the
creation procedure.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit e72a9d3faa)
new_test_keyspace is problematic here since
the presence of the banned node can fail the automatic drop of
the test keyspace due to NoHostAvailable (in debug mode for
some reason)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 55b35eb21c)
When the test fails with an exception, keep the keyspace
intact for post-mortem analysis.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 0fd1b846fe)
Define create_new_test_keyspace that can be used in
cases where we cannot automatically drop the newly created keyspace
due to e.g. loss of raft majority at the end of the test.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit f946302369)
The following patch will convert topology tests to use
new_test_keyspace and friends.
Some tests restart a server and reset the driver connection,
so we cannot use the original cql Session for
dropping the created keyspace in the `finally` block.
Pass the ManagerClient instead to get a new cql
session for dropping the keyspace.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 50ce0aaf1c)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Use the host id to check if the update is for the node itself. Using the IP is unreliable since, if a node is restarted with a different IP, a gossiper message with the previous IP can be misinterpreted as belonging to a different node.
Fixes: #22777
Backport to 2025.1 since this fixes a crash. Older versions do not have the code.
- (cherry picked from commit a2178b7c31)
- (cherry picked from commit ecd14753c0)
- (cherry picked from commit 7403de241c)
Parent PR: #24000
Closes scylladb/scylladb#24088
* https://github.com/scylladb/scylladb:
test: add reproducer for #22777
storage_service: use id to check for local node
Add a sleep before starting the gossiper to increase the chance of getting the old
gossiper entry about yourself before updating the local gossiper info with the
new IP address.
(cherry picked from commit 7403de241c)
The test checks that merging the partition versions on-the-fly using the
cursor gives the same results as merging them destructively with apply_monotonically.
In particular, it tests that the continuity of both results is equal.
However, there's a subtlety which makes this not true.
The cursor puts empty dummy rows (i.e. dummies shadowed by the partition
tombstone) in the output.
But the destructive merge is allowed (as an exception to the general
rule, for optimization reasons) to remove those dummies and thus reduce
the continuity.
So after this patch we instead check that the output of the cursor
has continuity equal to the merged continuities of the versions.
(Rather than to the continuity of merged versions, which can be
smaller as described above).
Refs https://github.com/scylladb/scylladb/pull/21459, a patch which did
the same in a different test.
Fixes https://github.com/scylladb/scylladb/issues/13642
Closes scylladb/scylladb#24044
(cherry picked from commit 746ec1d4e4)
Closes scylladb/scylladb#24077
This change resolves an issue where selecting a version from the multiversion dropdown on Markdown pages (e.g. https://docs.scylladb.com/manual/stable/alternator/getting-started.html) incorrectly redirected users to the main page instead of the corresponding versioned page.
The underlying cause was that the `multiversion` extension relies on `source_suffix` to identify available pages for URL mapping. Without this configuration, proper redirection fails for `.md` files.
This fix should be backported to `2025.1` to ensure correct behavior. Otherwise, the fix will only take effect in future releases.
Testing locally is non-trivial: clone the repository, apply the changes to each relevant branch, set `smv_remote_whitelist` to "", then run `make multiversionpreview`. Afterward, switch between versions in the dropdown to verify behavior. I've tested it locally, so the best next step is to merge and confirm that it works as expected in the live environment.
Closes scylladb/scylladb#23957
(cherry picked from commit 4ba7182515)
Closes scylladb/scylladb#24010
Currently, flush throws no_such_column_family if a table is dropped. Skip the flush of a dropped table instead.
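A toy sketch of the new behavior, with a stand-in table registry (the real
logic lives in the replica layer):
```
#include <map>
#include <string>

struct table {}; // stand-in for replica::table

std::map<std::string, table> tables; // stand-in registry keyed by "ks.cf"

// Previously: an absent table made the flush throw no_such_column_family.
// Now: a concurrently dropped table simply makes the flush a no-op.
void flush_table(const std::string& ks, const std::string& cf) {
    auto it = tables.find(ks + "." + cf);
    if (it == tables.end()) {
        return; // table was dropped: skip the flush instead of throwing
    }
    // ... flush it->second ...
}
```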
Fixes: #16095.
Needs backport to 2025.1 and 6.2 as they contain the bug
- (cherry picked from commit 91b57e79f3)
- (cherry picked from commit c1618c7de5)
Parent PR: #23876
Closes scylladb/scylladb#23905
* github.com:scylladb/scylladb:
test: test table drop during flush
replica: skip flush of dropped table
Currently, stream_session::prepare throws when a table in requests
or summaries is dropped. However, we do not want to fail streaming
if the table is dropped.
Remove the table checks from stream_session::prepare. Further streaming
steps can handle the dropped table and finish the streaming successfully.
Fixes: #15257.
Closes scylladb/scylladb#23915
(cherry picked from commit 20c2d6210e)
Closes scylladb/scylladb#24052
When an sstable is identified by sstable_directory as remote-unshared,
it will at some point be moved to the target shard. When it happens a
log-message appears:
sstable_directory - Moving 1 unshared SSTables to shard 1
Processing of tables by sstable_directory often happens in parallel, and
messages from sstable_directory are intermixed. Having a message like
above is not very informative, as it tells nothing about the sstables that
are being moved.
Equip the message with the ks:cf pair to make it more informative.
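For example (a sketch using Seastar's logger; the function and parameters
are placeholders, not the actual sstable_directory code):
```
#include <seastar/core/sstring.hh>
#include <seastar/util/log.hh>

static seastar::logger dirlog("sstable_directory"); // same logger name as the message

// Sketch: include the ks:cf pair so intermixed parallel messages can be
// attributed to a table, e.g.
//   before: "Moving 1 unshared SSTables to shard 1"
//   after:  "ks1.cf1: Moving 1 unshared SSTables to shard 1"
void log_move(const seastar::sstring& ks, const seastar::sstring& cf, size_t count, unsigned shard) {
    dirlog.info("{}.{}: Moving {} unshared SSTables to shard {}", ks, cf, count, shard);
}
```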
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes scylladb/scylladb#23912
(cherry picked from commit d40d6801b0)
Closes scylladb/scylladb#24015
This is passed by reference to the constructor, but a copy is saved into
the _table_shared_data member. A reference to this member is passed down
to all memtable readers. Because of the copy, the memtable readers save
a reference to the memtable_list's member, which goes away together with
the memtable_list when the storage_group is destroyed.
This causes use-after-free when a storage group is destroyed while a
memtable read is still ongoing. The memtable reader keeps the memtable
alive, but its reference to the memtable_table_shared_data becomes
stale.
Fix by saving a reference in the memtable_list too, so memtable readers
receive a reference pointing to the original replica::table member,
which is stable across tablet migrations and merges.
The copy was introduced by 2a76065e3d.
There was a copy even before this commit, but in the previous vnode-only
world this was fine -- there was one memtable_list per table and it was
around until the table itself was. In the tablet world, this is no
longer given, but the above commit didn't account for this.
A test is included, which reproduces the use-after-free on memtable
migration. The test is somewhat artificial in that the use-after-free
would be prevented by holding on to an ERM, but this is done
intentionally to keep the test simple. Migration -- unlike merge, where
this use-after-free was originally observed -- is easy to trigger from
unit tests.
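A distilled sketch of the bug and the fix, using stand-in types (the real
members live in replica::table and the memtable_list):
```
// Stand-in for the per-table shared data owned by replica::table.
struct table_shared_data {};

// Buggy shape: the constructor takes a reference but stores a copy, so
// readers that keep a reference to _table_shared_data dangle once the
// memtable_list (and its owning storage_group) is destroyed.
struct memtable_list_buggy {
    table_shared_data _table_shared_data; // copy: dies with the list
    explicit memtable_list_buggy(table_shared_data& d) : _table_shared_data(d) {}
};

// Fixed shape: hold a reference to the table-owned instance, which outlives
// tablet migrations and merges.
struct memtable_list_fixed {
    table_shared_data& _table_shared_data; // refers to replica::table's member
    explicit memtable_list_fixed(table_shared_data& d) : _table_shared_data(d) {}
};
```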
Fixes: #23762
Closes scylladb/scylladb#23984
(cherry picked from commit 0a9ca52cfd)
Closes scylladb/scylladb#24033
The test populates a table with 50k rows, creates a view on that table
and then compares the time spent in streaming vs. gossip scheduling
groups. It only takes 10s in dev mode on my machine, but is much slower
in debug mode in CI - building the view doesn't finish within 2 minutes.
The bigger the view to build, the more accurate the measurement;
moreover, the test scenario isn't interesting enough to be worth running
in debug mode, as this should be covered by other tests. Therefore,
just skip this test in debug mode.
Fixes: scylladb/scylladb#23862
Closes scylladb/scylladb#23866
(cherry picked from commit 3d73c79a72)
Closes scylladb/scylladb#23881
The loading_cache has a periodic timer which acquires the
_timer_reads_gate. The stop() method first closes the gate and then
cancels the timer - this order is necessary because the timer is
re-armed under the gate. However, the timer callback does not check
whether the gate was closed but tries to acquire it, which might result
in an unhandled exception that is logged with ERROR severity.
Fix the timer callback by acquiring access to the gate at the beginning
and gracefully returning if the gate is closed. Even though the gate
used to be entered in the middle of the callback, it does not make sense
to execute the timer's logic at all if the cache is being stopped.
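A sketch of the fixed callback shape, using seastar::gate's non-throwing
try_enter() (the surrounding type is a stand-in, not loading_cache itself):
```
#include <seastar/core/gate.hh>
#include <seastar/util/defer.hh>

// Sketch of the fixed callback (member names follow the description above).
struct loading_cache_sketch {
    seastar::gate _timer_reads_gate;

    void on_timer() {
        // Acquire the gate up front; if stop() already closed it, return
        // quietly instead of throwing gate_closed_exception mid-callback.
        if (!_timer_reads_gate.try_enter()) {
            return;
        }
        auto leave = seastar::defer([this]() noexcept { _timer_reads_gate.leave(); });
        // ... periodic reload logic; the timer is re-armed under the gate ...
    }
};
```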
Fixes: scylladb/scylladb#23951
Closes scylladb/scylladb#23952
(cherry picked from commit 8ffe4b0308)
Closes scylladb/scylladb#23981
Currently, test_tablet_resize_revoked tries to trigger split revoke
by deleting some rows. This method isn't deterministic, so the test
is flaky.
Use error injection to trigger resize revoke.
Fixes: #22570.
Closes scylladb/scylladb#23966
(cherry picked from commit 1f4edd8683)
Closes scylladb/scylladb#23974
In case dht::boot_strapper::get_bootstrap_tokens fails to parse the
tokens, the topology coordinator handles the exception and schedules a
rollback. However, the current code tries to continue with the topology
coordinator logic even if an exception occurs, leaving bootstrap_tokens
empty. This does not make sense and can actually cause issues,
specifically in prepare_and_broadcast_cdc_generation_data, which
implicitly expects that the bootstrap_tokens of the first node in the
cluster will not be empty.
Fix this by adding the missing break.
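A self-contained toy of the control-flow bug (all helpers are hypothetical;
the real loop is the topology coordinator's state machine):
```
#include <cstdio>
#include <set>
#include <stdexcept>

// Hypothetical helpers standing in for the coordinator's real logic.
std::set<long> parse_bootstrap_tokens() { throw std::runtime_error("bad initial_token"); }
void schedule_rollback() { std::puts("rollback scheduled"); }
void proceed_with_bootstrap(const std::set<long>&) {}

int main() {
    for (;;) {
        std::set<long> bootstrap_tokens;
        try {
            bootstrap_tokens = parse_bootstrap_tokens();
        } catch (const std::exception&) {
            schedule_rollback();
            break; // the missing break: without it, we fell through below
                   // with an empty bootstrap_tokens set
        }
        proceed_with_bootstrap(bootstrap_tokens);
        break;
    }
}
```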
Fixes: scylladb/scylladb#23897
From code inspection alone, it looks like 2025.1 and 6.2 have this problem, so marking for backport to both of them.
- (cherry picked from commit 66acaa1bf8)
- (cherry picked from commit 845cedea7f)
- (cherry picked from commit 670a69007e)
Parent PR: #23914
Closes scylladb/scylladb#23949
* github.com:scylladb/scylladb:
test: cluster: add test_bad_initial_token
topology coordinator: do not proceed further on invalid bootstrap tokens
cdc: add sanity check for generating an empty generation
Check whether a node is alive before making an rpc that gathers children
info from the whole cluster in virtual_task::impl::get_children.
Fixes: https://github.com/scylladb/scylladb/issues/22514.
Needs backport to 2025.1 and 6.2 as they contain the bug.
- (cherry picked from commit 53e0f79947)
- (cherry picked from commit e178bd7847)
Parent PR: #23787
Closes scylladb/scylladb#23943
* github.com:scylladb/scylladb:
test: add test for getting tasks children
tasks: check whether a node is alive before rpc
Add missing awaits for the rebuild_repair and repair background actions.
Although the background actions hold the _async_gate
which is closed in topology_coordinator::run(),
stop() still needs to await all background action futures
and handle any errors they may have left behind.
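A sketch of the helper named in the commit subjects below; the signature
and logger are assumptions:
```
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/util/log.hh>
#include <optional>

static seastar::logger tlogger("topology_coordinator_sketch"); // placeholder

// Sketch: await a background action's future and downgrade any error it
// left behind to a warning, so stop() never drops an exception.
seastar::future<> stop_background_action(const char* name, std::optional<seastar::future<>>& f) {
    if (!f) {
        co_return;
    }
    try {
        co_await std::move(*f);
    } catch (...) {
        tlogger.warn("{} failed during stop: {}", name, std::current_exception());
    }
}
```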
Fixes #23755
* The issue exists since 6.2
- (cherry picked from commit d624795fda)
- (cherry picked from commit 6de79d0dd3)
- (cherry picked from commit 7a0f5e0a54)
Parent PR: #17712
Closes scylladb/scylladb#23799
* github.com:scylladb/scylladb:
topology_coordinator: stop: await all background_action_holder:s
topology_coordinator: stop: improve error messages
topology_coordinator: stop: define stop_background_action helper
Check whether a node is alive before making an rpc that gathers children
info from the whole cluster in virtual_task::impl::get_children.
(cherry picked from commit 53e0f79947)
Scylla operations use concurrency semaphores to limit the number of concurrent operations and prevent resource exhaustion. The semaphore is selected based on the current scheduling group.
For RAFT group operations, it is essential to use a system semaphore to avoid queuing behind user operations. This patch ensures that RAFT operations use the `gossip` scheduling group to leverage the system semaphore.
Fixes scylladb/scylladb#21637
Backport: 6.2 and 6.1
- (cherry picked from commit 60f1053087)
- (cherry picked from commit e05c082002)
Parent PR: #22779
Closes scylladb/scylladb#23844
* https://github.com/scylladb/scylladb:
Ensure raft group0 RPCs use the gossip scheduling group
Move RAFT operations verbs to GOSSIP group.
Adds a test which checks that rollback works properly in case a bad
value of the initial_token option is provided.
(cherry picked from commit 670a69007e)
In case dht::boot_strapper::get_bootstrap_tokens fails to parse the
tokens, the topology coordinator handles the exception and schedules a
rollback. However, the current code tries to continue with the topology
coordinator logic even if an exception occurs, leaving bootstrap_tokens
empty. This does not make sense and can actually cause issues,
specifically in prepare_and_broadcast_cdc_generation_data, which
implicitly expects that the bootstrap_tokens of the first node in the
cluster will not be empty.
Fix this by adding the missing break.
Fixes: scylladb/scylladb#23897
(cherry picked from commit 845cedea7f)
It doesn't make sense to create an empty CDC generation because it does
not make sense to have a cluster with no tokens. Add a sanity check to
cdc::make_new_generation_description which fails if somebody attempts to
do that (i.e. when the set of current tokens + optionally bootstrapping
node's tokens is empty).
The function does not work correctly if it is misused, as we saw in
scylladb/scylladb#23897. While the function should not be misused in the
first place, it's better to throw an exception rather than crash -
especially since the crash could happen on the topology coordinator.
(cherry picked from commit 66acaa1bf8)
Fixes the following scenario:
1. Scale out adds new nodes to each rack
2. Table is created - all tablets are allocated to new nodes because they have low load
3. Rebalancing moves tablets from old nodes to new nodes - table balance for the new table is not fixed
We're wrong to try to equalize global load when allocating tablets,
and we should equalize per-table load instead, and let background load
balancing fix it in a fair way. It will add to the allocated storage
imbalance, but:
1. The table is initially empty, so doesn't impact actual storage imbalance.
2. It's more important to avoid overloading CPU on the nodes - imbalance hurts this aspect immediately.
3. If the table was created before imbalance was formed, we would end up in the same situation as in the problematic scenario after the patch.
4. It's the job of the load balancing to keep up with storage growing, and if it's not, scale out should kick in.
Before we have CPU-aware tablet allocation, and thus can prove we have
CPU capacity on the small nodes, we should respect per-table balance
as this is the way in which we achieve full CPU utilization.
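A toy sketch of the allocation choice, with stand-in types: pick the
replica with the fewest tablets of the table being allocated, not the
fewest tablets overall:
```
#include <algorithm>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in node: per-table tablet counts plus a global total.
struct node {
    std::string name;
    std::unordered_map<std::string, int> tablets_per_table;
    int total_tablets = 0;
};

static int count_for(const node& n, const std::string& table) {
    auto it = n.tablets_per_table.find(table);
    return it == n.tablets_per_table.end() ? 0 : it->second;
}

// Sketch: equalize per-table load -- choose the node with the fewest
// tablets of *this* table, ignoring total_tablets.
node* pick_replica(std::vector<node>& nodes, const std::string& table) {
    auto it = std::min_element(nodes.begin(), nodes.end(),
        [&](const node& a, const node& b) {
            return count_for(a, table) < count_for(b, table);
        });
    return it == nodes.end() ? nullptr : &*it;
}
```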
Fixes #23631
Backport to 2025.1 because load imbalance is a serious problem in production.
- (cherry picked from commit d493a8d736)
- (cherry picked from commit 2597a7e980)
- (cherry picked from commit 1e407ab4d2)
Parent PR: #23708
Closes scylladb/scylladb#23873
* github.com:scylladb/scylladb:
tablets: Equalize per-table balance when allocating tablets for a new table
load_sketch: Tolerate missing tablet_map when selecting for a given table
tests: tablets: Simplify tests by moving common code to topology_builder
This series exposes a Clock template parameter for loading_cache so that the test can use
the manual_clock rather than the lowres_clock, since relying on the latter is flaky.
In addition, the test load function is simplified to sleep some small random time and co_return the expected string,
rather than reading it from a real file, since the latter's timing might also be flaky, and it is out of scope for this test.
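The shape of the change can be sketched with a generic cache skeleton (not
loading_cache's real interface):
```
#include <chrono>

// Sketch: the clock becomes a template parameter so unit tests can drive
// time manually (e.g. with seastar::manual_clock) instead of sleeping.
template <typename Key, typename Value, typename Clock = std::chrono::steady_clock>
class expiring_cache {
    typename Clock::time_point _last_reload = Clock::now();
public:
    bool needs_reload(std::chrono::milliseconds period) const {
        return Clock::now() - _last_reload >= period;
    }
};
```
A test can then instantiate the cache with seastar::manual_clock and advance
time deterministically via manual_clock::advance().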
Fixes #20322
* The test was flaky forever, so backport is required for all live versions.
- (cherry picked from commit b509644972)
- (cherry picked from commit 934a9d3fd6)
- (cherry picked from commit d68829243f)
- (cherry picked from commit b258f8cc69)
- (cherry picked from commit 0841483d68)
- (cherry picked from commit 32b7cab917)
Parent PR: #22064
Closes scylladb/scylladb#23926
* github.com:scylladb/scylladb:
tests: loading_cache_test: use manual_clock
utils: loading_cache: make clock_type a template parameter
test: loading_cache_test: use function-scope loader
test: loading_cache_test: simulate loader using sleep
test: lib: eventually: add sleep function param
test: lib: eventually: make *EVENTUALLY_EQUAL inline functions
Relying on a real-time clock like lowres_clock
can be flaky (in particular in debug mode).
Use manual_clock instead to harden the test against
timing issues.
Fixes #20322
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 32b7cab917)
So the unit test can use manual_clock rather than lowres_clock
which can be flaky (in particular in debug mode).
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 0841483d68)
Rather than a global function, accessing a thread-local `load_count`.
The thread-local load_count cannot be used when multiple test
cases run in parallel.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit b258f8cc69)
This test isn't about reading values from a file,
but rather about the loading_cache.
Reading from the file can sometimes take longer than
the expected refresh times, causing flakiness (see #20322).
Rather than reading a string from a real file, just
sleep a random, short time, and co_return the string.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit d68829243f)
rather than macros.
This is a first cleanup step before adding a sleep function
parameter to support also manual_clock.
Also, add a call to BOOST_REQUIRE_EQUAL/BOOST_CHECK_EQUAL,
respectively, to make an error more visible in the test log
since those entry points print the offending values
when not equal.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit b509644972)
Fixes the following scenario:
1. Scale out adds new nodes to each rack
2. Table is created - all tablets are allocated to new nodes because they have low load
3. Rebalancing moves tablets from old nodes to new nodes - table balance for the new table is not fixed
We're wrong to try to equalize global load when allocating tablets,
and we should equalize per-table load instead, and let background load
balancing fix it in a fair way. It will add to the allocated storage
imbalance, but:
1. The table is initially empty, so doesn't impact actual storage imbalance.
2. It's more important to avoid overloading CPU on the nodes - imbalance hurts this aspect immediately.
3. If the table was created before imbalance was formed, we would end up in the same situation as in the problematic scenario after the patch.
4. It's the job of the load balancing to keep up with storage growing, and if it's not, scale out should kick in.
Before we have CPU-aware tablet allocation, and thus can prove we have
CPU capacity on the small nodes, we should respect per-table balance
as this is the way in which we achieve full CPU utilization.
Fixes #23631
(cherry picked from commit 1e407ab4d2)
To simplify future usage in
network_topology_strategy::add_tablets_in_dc(), which invokes
populate() for a given table, which may be either new or preexisting.
(cherry picked from commit 2597a7e980)
The query may also fail on a no_such_keyspace
exception, which generates the following cql error:
```
Error from server: code=2200 [Invalid query] message="Can\'t find a keyspace test_1745198244144_qoohq"
```
Extend the pytest.raises match expression to include
this error as well.
Fixes#23812
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes scylladb/scylladb#23875
(cherry picked from commit f279625f59)
Closes scylladb/scylladb#23887
The test `test_mv_write_to_dead_node` currently uses a timeout of 60
seconds for remove_node, after it was increased from 30 seconds to fix
scylladb/scylladb#22953. Apparently it is still too low, and it was
observed to fail in debug mode.
Normally remove_node uses a default timeout of TOPOLOGY_TIMEOUT = 1000
seconds, but the test requires a timeout which is shorter than 5
minutes, because it is a regression test for an issue where MV updates
hold topology changes for more than 5 minutes, and we want to verify in
the test that the topology change completes in less than 5 minutes.
To resolve the issue, we set the test to skip in debug mode, because the
remove node operation is unpredictably slow, and we increase the timeout
to 180 seconds which is hopefully enough time for remove_node in
non-debug modes, and still sufficient to satisfy the test requirements.
Fixes scylladb/scylladb#22530
Closes scylladb/scylladb#23833
(cherry picked from commit 5c1d24f983)
Closes scylladb/scylladb#23874
So that a multi-dc/multi-rack cluster can be populated
in a single call.
* Enhancement, no backport required
Closes scylladb/scylladb#23341
* github.com:scylladb/scylladb:
test/pylib: servers_add: add auto_rack_dc parameter
test/pylib: servers_add: support list of property_files
(cherry picked from commit 0fdf2a2090)
The pool is not aware of the cluster configuration, so it can return a cluster
to the test that is not suitable for it. Removing reuse removes this
possibility, so there will be fewer flaky tests.
Closes scylladb/scylladb#23277
(cherry picked from commit d68e54c26d)
Scylla operations use concurrency semaphores to limit the number
of concurrent operations and prevent resource exhaustion. The
semaphore is selected based on the current scheduling group.
For Raft group operations, it is essential to use a system semaphore to
avoid queuing behind user operations.
This commit adds a check to ensure that the raft group0 RPCs are
executed with the `gossiper` scheduling group.
(cherry picked from commit e05c082002)
In order for RAFT operations to use the gossip system semaphore, move the RAFT
verbs to the gossip group in `do_get_rpc_client_idx` in messaging_service.
Fixes scylladb/scylladb#21637
(cherry picked from commit 60f1053087)
* seastar 6d8fccf14c...ed31c1ce82 (1):
> Merge 'Share IO queues between mountpoints' from Pavel Emelyanov
Changes in io-queue call for scylla-gdb update as well -- now the
reactor map of device to io-queue uses seastar::shared_ptr, not
std::unique_ptr.
Closes scylladb/scylladb#23733
Ref 70ac5828a8
Fixes #23820
Add missing awaits for the rebuild_repair and repair background actions.
Although the background actions hold the _async_gate
which is closed in topology_coordinator::run(),
stop() still needs to await all background action futures
and handle any errors they may have left behind.
Fixes #23755
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 7a0f5e0a54)
"when cleanup" is ill-formed. Change "when XYZ"
to "during XYZ" instead.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 6de79d0dd3)
Refactor the code to use a helper to await background_action_holder
and handle any errors by printing a warning.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit d624795fda)
Commit 14bf09f447 added a single-chunk layout to `managed_bytes`, which makes the overhead of `managed_bytes` smaller in the common case of a small buffer.
But there was a bug in it. In the copy constructor of `managed_bytes`, a copy of a single-chunk `managed_bytes` is made single-chunk too.
But this is wrong, because the source of the copy and the target of the copy might have different preferred max contiguous allocation sizes.
In particular, if a `managed_bytes` of size between 13 kiB and 128 kiB is copied from the standard allocator into LSA, the resulting `managed_bytes` is a single chunk which violates LSA's preferred allocation size. (And therefore is placed by LSA in the standard allocator).
In other words, since Scylla 6.0, cache and memtable cells between 13 kiB and 128 kiB are getting allocated in the standard allocator rather than inside LSA segments.
Consequences of the bug:
1. Effective memory consumption of an affected cell is rounded up to the nearest power of 2.
2. With a pathological-enough allocation pattern (for example, one which somehow ends up placing a single 16 kiB memtable-owned allocation in every aligned 128 kiB span), memtable flushing could theoretically deadlock, because the allocator might be too fragmented to let the memtable grow by another 128 kiB segment, while keeping the sum of all allocations small enough to avoid triggering a flush. (Such an allocation pattern probably wouldn't happen in practice though).
3. It triggers a bug in reclaim which results in spurious allocation failures despite ample evictable memory.
There is a path in the reclaimer procedure where we check whether reclamation succeeded by checking that the number of free LSA segments grew.
But in the presence of evictable non-LSA allocations, this is wrong because the reclaim might have met its target by evicting the non-LSA allocations, in which case memory is returned directly to the standard allocator, rather than to the pool of free segments.
If that happens, the reclaimer wrongly returns `reclaimed_nothing` to Seastar, which fails the allocation.
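The essence of the fix as a self-contained toy: size the chunks of a copy by
the target's preferred limit (here just a parameter; in the real code it
comes from the current LSA allocator):
```
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy version of the fix: split the copy into chunks bounded by the *target*
// allocator's preferred maximum contiguous allocation, instead of blindly
// mirroring the source's single-chunk layout.
std::vector<std::vector<char>> copy_in_chunks(const std::vector<char>& src,
                                              size_t preferred_max_chunk) {
    std::vector<std::vector<char>> out;
    for (size_t off = 0; off < src.size(); off += preferred_max_chunk) {
        const size_t len = std::min(preferred_max_chunk, src.size() - off);
        out.emplace_back(src.begin() + off, src.begin() + off + len);
    }
    return out;
}
```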
Refs (possibly fixes) https://github.com/scylladb/scylladb/issues/21072
Fixes https://github.com/scylladb/scylladb/issues/22941
Fixes https://github.com/scylladb/scylladb/issues/22389
Fixes https://github.com/scylladb/scylladb/issues/23781
This is a regression fix; it should be backported to all affected releases.
- (cherry picked from commit 4e2f62143b)
- (cherry picked from commit 6c1889f65c)
Parent PR: #23782
Closes scylladb/scylladb#23810
* github.com:scylladb/scylladb:
managed_bytes_test: add a reproducer for #23781
managed_bytes: in the copy constructor, respect the target preferred allocation size
Said method passes down its diff input to mutate_internal(), after some std::ranges massaging. Said massaging is destructive -- it moves items from the diff. If the output range is iterated-over multiple times, only the first time will see the actual output, further iterations will get an empty range.
When trace-level logging is enabled, this is exactly what happens: mutate_internal() iterates over the range multiple times, first to log its content, then to pass it down the stack. This ends up resulting in an empty range being passed down and write handlers being created with nullopt optionals.
Fixes: scylladb/scylladb#21907
Fixes: scylladb/scylladb#21714
A follow-up stability fix for the test is also included.
Fixes: https://github.com/scylladb/scylladb/issues/23513
Fixes: https://github.com/scylladb/scylladb/issues/23512
Based on code inspection, all versions are vulnerable, although <=6.2 use boost::ranges, not std::ranges.
- (cherry picked from commit 7150442f6a)
Parent PR: #21910
Closes scylladb/scylladb#23791
* github.com:scylladb/scylladb:
test/cluster/test_read_repair.py: increase read request timeout
service/storage_proxy: schedule_repair(): materialize the range into a vector
Currently, when we rebuild a tablet, we stream data from all
replicas. This creates a lot of redundancy, wastes bandwidth
and CPU resources.
In this series, we split the streaming stage of tablet rebuild into
two phases: first we stream tablet's data from only one replica
and then repair the tablet.
Fixes: https://github.com/scylladb/scylladb/issues/17174.
Needs backport to 2025.1 to prevent running out of space during streaming.
- (cherry picked from commit b80e957a40)
- (cherry picked from commit ed7b8bb787)
- (cherry picked from commit 5d6041617b)
- (cherry picked from commit 4a847df55c)
- (cherry picked from commit eb17af6143)
- (cherry picked from commit acd32b24d3)
- (cherry picked from commit 372b562f5e)
Parent PR: #23187
Closes scylladb/scylladb#23682
* github.com:scylladb/scylladb:
test: add test for rebuild with repair
locator: service: move to rebuild_v2 transition if cluster is upgraded
locator: service: add transition to rebuild_repair stage for rebuild_v2
locator: service: add rebuild_repair tablet transition stage
locator: add maybe_get_primary_replica
locator: service: add rebuild_v2 tablet transition kind
gms: add REPAIR_BASED_TABLET_REBUILD cluster feature
Commit 14bf09f447 added a single-chunk
layout to `managed_bytes`, which makes the overhead of `managed_bytes`
smaller in the common case of a small buffer.
But there was a bug in it. In the copy constructor of `managed_bytes`,
a copy of a single-chunk `managed_bytes` is made single-chunk too.
But this is wrong, because the source of the copy and the target
of the copy might have different preferred max contiguous allocation
sizes.
In particular, if a `managed_bytes` of size between 13 kiB and 128 kiB
is copied from the standard allocator into LSA, the resulting
`managed_bytes` is a single chunk which violates LSA's preferred
allocation size. (And therefore is placed by LSA in the standard
allocator).
In other words, since Scylla 6.0, cache and memtable cells
between 13 kiB and 128 kiB are getting allocated in the standard allocator
rather than inside LSA segments.
Consequences of the bug:
1. Effective memory consumption of an affected cell is rounded up to the nearest
power of 2.
2. With a pathological-enough allocation pattern
(for example, one which somehow ends up placing a single 16 kiB
memtable-owned allocation in every aligned 128 kiB span),
memtable flushing could theoretically deadlock,
because the allocator might be too fragmented to let the memtable
grow by another 128 kiB segment, while keeping the sum of all
allocations small enough to avoid triggering a flush.
(Such an allocation pattern probably wouldn't happen in practice though).
3. It triggers a bug in reclaim which results in spurious
allocation failures despite ample evictable memory.
There is a path in the reclaimer procedure where we check whether
reclamation succeeded by checking that the number of free LSA
segments grew.
But in the presence of evictable non-LSA allocations, this is wrong
because the reclaim might have met its target by evicting the non-LSA
allocations, in which case memory is returned directly to the
standard allocator, rather than to the pool of free segments.
If that happens, the reclaimer wrongly returns `reclaimed_nothing`
to Seastar, which fails the allocation.
Refs (possibly fixes) https://github.com/scylladb/scylladb/issues/21072
Fixes https://github.com/scylladb/scylladb/issues/22941
Fixes https://github.com/scylladb/scylladb/issues/22389
Fixes https://github.com/scylladb/scylladb/issues/23781
(cherry picked from commit 4e2f62143b)
This test enables trace-level logging for the mutation_data logger,
which seems to be too much in debug mode, and the test read times out.
Increase the timeout to 1 minute to avoid this.
Fixes: #23513
Closes scylladb/scylladb#23558
(cherry picked from commit 7bbfa5293f)
This series adds support for reporting consumed capacity in BatchGetItem operations in Alternator.
It includes changes to the RCU accounting logic, exposing internal functionality to support batch-specific behavior, and adds corresponding tests for both simple and complex use cases involving multiple tables and consistency modes.
Needs backporting to 2025.1, as RCU and WCU are not fully supported
Fixes #23690
- (cherry picked from commit 0eabf8b388)
- (cherry picked from commit 88095919d0)
- (cherry picked from commit 3acde5f904)
Parent PR: #23691
Closes scylladb/scylladb#23790
* github.com:scylladb/scylladb:
test_returnconsumedcapacity.py: test RCU for batch get item
alternator/executor: Add RCU support for batch get items
alternator/consumed_capacity: make functionality public
Said method passes down its `diff` input to `mutate_internal()`, after
some std::ranges massaging. Said massaging is destructive -- it moves
items from the diff. If the output range is iterated-over multiple
times, only the first time will see the actual output, further
iterations will get an empty range.
When trace-level logging is enabled, this is exactly what happens:
`mutate_internal()` iterates over the range multiple times, first to log
its content, then to pass it down the stack. This ends up resulting in
a range with moved-from elements being passed down and consequently write
handlers being created with nullopt mutations.
Make the range re-entrant by materializing it into a vector before
passing it to `mutate_internal()`.
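A self-contained illustration of the underlying hazard, unrelated to
Scylla's actual types: a std::ranges view that moves from its input yields
its elements only on the first pass:
```
#include <iostream>
#include <ranges>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::string> diff = {"mutation-a", "mutation-b"};
    // Destructive massaging: each element is moved out when first visited.
    auto view = diff | std::views::transform([](std::string& s) { return std::move(s); });
    // Iterating twice -- e.g. once to log, once to consume -- leaves the
    // second pass with moved-from (typically empty) strings:
    for (const auto& s : view) { std::cout << '[' << s << ']'; } // [mutation-a][mutation-b]
    std::cout << '\n';
    for (const auto& s : view) { std::cout << '[' << s << ']'; } // [][]
    std::cout << '\n';
    // The patch's remedy: materialize the range into a std::vector once,
    // then iterate the vector as many times as needed.
}
```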
Fixes: scylladb/scylladb#21907
Fixes: scylladb/scylladb#21714
Closes scylladb/scylladb#21910
(cherry picked from commit 7150442f6a)
Currently, maybe_switch_to_new_writer resets _current_writer
only in a continuation after closing the current writer.
This leaves a window of vulnerability if close() yields,
and token_group_based_splitting_mutation_writer::close()
is called. Seeing the engaged _current_writer, close()
will call _current_writer->close() - which must be called
exactly once.
Solve this when switching to a new writer by resetting
_current_writer before closing it and potentially yielding.
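The fix boils down to a disengage-then-close ordering; a sketch with
stand-in types (the real code is the splitting mutation writer):
```
#include <optional>
#include <utility>

struct writer { // stand-in for the real mutation writer
    void close() {} // must be called exactly once; the real one may yield
};

struct splitting_writer_sketch {
    std::optional<writer> _current_writer;

    void maybe_switch_to_new_writer() {
        // Disengage *before* the potentially-yielding close(), so a concurrent
        // close() of the splitting writer cannot see an engaged _current_writer
        // and close it a second time.
        auto old = std::exchange(_current_writer, std::nullopt);
        if (old) {
            old->close();
        }
        _current_writer.emplace(); // switch to the new writer
    }
};
```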
Fixes #22715
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes scylladb/scylladb#22922
(cherry picked from commit 29b795709b)
Closes scylladb/scylladb#22965
This patch adds tests for consumed capacity in batch get item. It tests
both the simple case and the multi-item, multi-table case that combines
consistent and non-consistent reads.
(cherry picked from commit 3acde5f904)
This patch adds RCU support for batch get items. With batch requests,
multiple objects are read from multiple tables. While the criterion for
adding the units is per the batch request, the units are calculated per
table, and so is the read consistency.
(cherry picked from commit 88095919d0)
The consumed_capacity_counter is not completely applicable for batch
operations. This patch makes some of its functionality public so that
batch get item can use the components to decide if it needs to send
consumed capacity in the reply, to get the half units used by the
metrics and returned result, and to allow an empty constructor for the
RCU counter.
(cherry picked from commit 0eabf8b388)
Because of rounding and alignment, there are multiple pools for small
sizes (e.g. 4 for size 32). Because the pool selection algorithm
ignores alignment, different pools can be chosen for different object
sizes. For example, an object size of 29 will choose the first pool
of size 32, while an object size of 32 will choose the fourth pool of
size 32.
The small-objects command doesn't know about this and always considers
just the first pool for a given size. This causes it to miss out on
sister pools.
While it's possible to adjust pool selection to always choose one of the
pools, it may eat a precious cycle. So instead let's compensate in the
small-objects command. Instead of finding one pool for a given size,
find all of them, and iterate over all those pools.
Fixes #23603
Closes scylladb/scylladb#23604
(cherry picked from commit b4d4e48381)
Closes scylladb/scylladb#23749
Fixes #23225
Fixes #23185
Adds a "wrap_sink" (with default implementation) to sstables::file_io_extension, and moves
extension wrapping of file and sink objects to storage level.
(Wrapping/handling on sstable level would be problematic, because for file storage we typically re-use the sstable file objects for sinks, whereas for S3 we do not).
This ensures we apply encryption on both read and write, whereas we previously only did so on read, which led to failures.
Adds io wrapper objects for adapting file/sink for default implementation, as well as a proper encrypted sink implementation for EAR.
Unit tests for io objects and a macro test for S3 encrypted storage included.
- (cherry picked from commit 98a6d0f79c)
- (cherry picked from commit e100af5280)
- (cherry picked from commit d46dcbb769)
- (cherry picked from commit e02be77af7)
- (cherry picked from commit 9ac9813c62)
- (cherry picked from commit 5c6337b887)
Parent PR: #23261
Closes scylladb/scylladb#23424
* github.com:scylladb/scylladb:
encryption: Add "wrap_sink" to encryption sstable extension
encrypted_file_impl: Add encrypted_data_sink
sstables::storage: Move wrapping sstable components to storage provider
sstables::file_io_extension: Add a "wrap_sink" method.
sstables::file_io_extension: Make sstable argument to "wrap" const
utils: Add "io-wrappers", useful IO helper types
Adds a sibling type to the encrypted file, a data_sink, that
will write a data stream in the same block format as a file
object would, including end padding.
This makes encrypted data sink writing less cumbersome.
(cherry picked from commit 9ac9813c62)
Fixes #23225
Fixes #23185
Moved wrapping of component files/sinks to the storage provider. Also ensures
that data_sinks are wrapped as well as actual files, so that we actually
write encrypted data when encryption is active.
(cherry picked from commit e02be77af7)
Similar to wrap file, should wrap a data_sink (used for
sstable writers), in obvious write-only, simple stream
mode.
Default impl will detect if we wrap files for this component,
and if so, generate a file wrapper for the input sink, wrap
this, and then wrap it in a file_data_sink_impl.
This is obviously not efficient, so extensions used in actual
non-test code should implement the method.
(cherry picked from commit d46dcbb769)
This matches the signature of call sites. Since the only "real"
extension to actually make a marker in the sstable will do so in
the scylla component, which is writable even in a const sstable,
this is ok.
(cherry picked from commit e100af5280)
Mainly to add a somewhat functional file-impl wrapping
a data_sink. This can implement a rudimentary, write-only,
file based on any output sink.
For testing, and because they fit there, place memory
sink and source types there as well.
(cherry picked from commit 98a6d0f79c)
audit_syslog_storage_helper::syslog_send_helper uses Seastar's
net::datagram_channel to write to the syslog device (usually /dev/log).
However, datagram_channel.send() is not fiber-safe (ref seastar#2690),
so unserialized use of send() results in concurrent packets overwriting
the channel's state. This, in turn, causes corruption of audit logs, as
well as assertion failures.
To workaround the problem, a new semaphore is introduced in
audit_syslog_storage_helper. As storage_helper is a member of sharded
audit service, the semaphore allows for one datagram_channel.send() on
each shard. Each audit_syslog_storage_helper stores its own
datagram_channel, therefore concurrent sends to datagram_channel are
eliminated.
This change:
- Moves syslog_send_helper to audit_syslog_storage_helper
- Coroutinizes audit_syslog_storage_helper
- Introduces a semaphore with count=1 in audit_syslog_storage_helper (see the sketch below).
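A sketch of the per-shard serialization; the channel handling is a
placeholder, and only the semaphore discipline is the point:
```
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/core/semaphore.hh>
#include <seastar/core/sstring.hh>

// Sketch: one unit per shard serializes all sends on this shard's channel.
class syslog_sender_sketch {
    seastar::semaphore _sem{1};
public:
    seastar::future<> send(seastar::sstring msg) {
        auto units = co_await seastar::get_units(_sem, 1);
        co_await send_to_channel(std::move(msg)); // placeholder for datagram_channel.send()
    }
private:
    seastar::future<> send_to_channel(seastar::sstring) { co_return; }
};
```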
See https://github.com/scylladb/scylla-dtest/pull/5749 for the related dtest
Fixes: scylladb/scylladb#22973
Backport to 2025.1 should be considered, as https://github.com/scylladb/scylladb/issues/22973 is known to cause crashes of 2025.1.
- (cherry picked from commit dbd2acd2be)
- (cherry picked from commit 889fd5bc9f)
- (cherry picked from commit c12f976389)
Parent PR: #23464
Closes scylladb/scylladb#23674
* github.com:scylladb/scylladb:
audit: add semaphore to audit_syslog_storage_helper
audit: coroutinize audit_syslog_storage_helper
audit: moved syslog_send_helper to audit_syslog_storage_helper
If the cluster is upgraded to a version containing the rebuild_v2 transition
kind, move to this transition kind instead of rebuild.
(cherry picked from commit acd32b24d3)
Modify the write_both_read_old and streaming stages in the rebuild_v2 transition
kind: write_both_read_old moves to the rebuild_repair stage, and the streaming
stage streams data only from one replica.
(cherry picked from commit eb17af6143)
Currently, in the streaming stage of rebuild tablet transition,
we stream tablet data from all replicas.
This patch series splits the streaming stage into two phases:
- repair phase, where we repair the tablet;
- streaming phase, where we stream tablet data from one replica.
rebuild_repair is a stage that will be used to perform the repair
phase. It executes the tablet repair on tablet_info::replicas.
A primary replica out of migration_streraming_info::read_from is
the repair master. If the repair succeeds, we move to streaming
tablet transition stage, and to cleanup_target - if it fails.
The repair bypasses the tablet repair scheduler and it does not update
the repair_time.
A transition to the rebuild_repair stage will be added in the following
patches.
(cherry picked from commit 4a847df55c)
Currently, in the streaming stage of rebuild tablet transition,
we stream tablet data from all replicas.
This patch series splits the streaming stage into two phases:
- repair phase, where we repair the tablet;
- streaming phase, where we stream tablet data from one replica.
To differentiate the two streaming methods, a new tablet transition
kind - rebuild_v2 - is added.
The transitions and stages for the rebuild_v2 transition kind will be
added in the following patches.
(cherry picked from commit ed7b8bb787)
Resize finalization is executed in a separate topology transition state,
`tablet_resize_finalization`, to ensure it does not overlap with tablet
transitions. The topology transitions into the
`tablet_resize_finalization` state only when no tablet migrations are
scheduled or being executed. If there is a large load-balancing backlog,
split finalization might be delayed indefinitely, leaving the tables
with large tablets.
This PR fixes the issue by updating the load balancer to not schedule
any migrations and to not make any repair plans when a resize
finalization is pending in any table.
Also added a testcase to verify the fix.
Fixes #21762
- (cherry picked from commit 8cabc66f07)
- (cherry picked from commit 5b47d84399)
- (cherry picked from commit dccce670c1)
Parent PR: #22148
Closes scylladb/scylladb#23633
* github.com:scylladb/scylladb:
topology_coordinator: fix indentation in generate_migration_updates
topology_coordinator: do not schedule migrations when there are pending resize finalizations
load_balancer: make repair plans only when there is no pending resize finalization
This adaptor adapts a mutation reader pausable consumer to the frozen
mutation visitor interface. The pausable consumer protocol allows the
consumer to skip the remaining parts of the partition and resume the
consumption with the next one. To do this, the consumer just has to
return stop_iteration::yes from one of the consume() overloads for
clustering elements, then return stop_iteration::no from
consume_end_of_partition(). Due to a bug in the adaptor, this sequence
leads to terminating the consumption completely -- so any remaining
partitions are also skipped.
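A minimal sketch of the skip sequence, with assumed names (clustering_row stands in for any clustering element; stop_iteration mirrors Seastar's bool_class tag):
```
#include <cstddef>
#include <seastar/util/bool_class.hh>

using stop_iteration = seastar::bool_class<struct stop_iteration_tag>;
struct clustering_row {};

struct per_partition_limited_consumer {
    size_t _rows_in_partition = 0;
    size_t _per_partition_limit = 1;

    stop_iteration consume(clustering_row&&) {
        // Once the per-partition limit is hit, ask to skip the rest
        // of *this* partition only.
        return stop_iteration(++_rows_in_partition >= _per_partition_limit);
    }
    stop_iteration consume_end_of_partition() {
        _rows_in_partition = 0;
        // stop_iteration::no: keep consuming the next partitions.
        // The buggy adaptor ignored this and ended the whole stream.
        return stop_iteration::no;
    }
};
```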
This protocol implementation bug has user-visible effects, when the
only user of the adaptor -- read repair -- happens during a query which
has limitations on the amount of content in each partition.
There are two such queries: select distinct ... and select ... with
partition limit. When converting the repaired mutation to the query
result, these queries will trigger the skip sequence in the consumer and
due to the above described bug, will skip the remaining partitions in
the results, omitting these from the final query result.
This patch fixes the protocol bug, the return value of the underlying
consumer's consume_end_of_partition() is now respected.
A unit test is also added which reproduces the problem both with select
distinct ... and select ... per partition limit.
Follow-up work:
* frozen_mutation_consumer_adaptor::on_end_of_partition() calls the
underlying consumer's on_end_of_stream(), so when consuming multiple
frozen mutations, the underlying's on_end_of_stream() is called for
each partition. This is incorrect but benign.
* Improve documentation of mutation_reader::consume_pausable().
Fixes: #20084
Closes scylladb/scylladb#23657
(cherry picked from commit d67202972a)
Closes scylladb/scylladb#23694
Add a new nodetool cluster super-command. Add nodetool
cluster repair command to repair tablet keyspaces.
It uses the new /storage_service/tablets/repair API.
The nodetool cluster repair command allows you to specify
the keyspace and tables to be repaired. A cluster repair of many
tables will request /storage_service/tablets/repair and wait for
the result synchronously for each table.
The nodetool repair command, which was previously used to repair
keyspaces of any type, now repairs only vnode keyspaces.
Fixes: https://github.com/scylladb/scylladb/issues/22409.
Needs a backport to 2025.1 that introduces the new tablet repair API
- (cherry picked from commit cbde835792)
- (cherry picked from commit b81c81c7f4)
- (cherry picked from commit aa3973c850)
- (cherry picked from commit 8bbc5e8923)
- (cherry picked from commit 02fb71da42)
- (cherry picked from commit 9769d7a564)
Parent PR: #22905
Closes scylladb/scylladb#23672
* github.com:scylladb/scylladb:
docs: nodetool: update repair and add tablet-repair docs
test: nodetool: add tests for cluster repair command
nodetool: add cluster repair command
nodetool: repair: extract getting hosts and dcs to functions
nodetool: repair: warn about repairing tablet keyspaces
nodetool: repair: move keyspace_uses_tablets function
When running those operations after a tablet replica is migrated away from
a shard, an assert can fail resulting in a crash.
Status quo (around the assert in truncate procedure):
1) Highest RP seen by table is saved in low_mark, and the current time in
low_mark_at.
2) Then compaction is disabled in order to not mix data written before truncate,
and data written later.
3) Then memtable is flushed in order for the data written before truncate to be
available in sstables and then removed.
4) Now, current time is saved in truncated_at, which is supposedly the time of
truncate to decide which sstables to remove.
Note: truncated_at is likely above low_mark_at due to steps 2 and 3.
The interesting part of the assert is:
(truncated_at <= low_mark_at ? rp <= low_mark : low_mark <= rp)
Note: RP in the assert above is the highest RP among all sstables generated
before truncated_at. RP is retrieved by table::discard_sstables().
If truncated_at > low_mark_at, maybe newer data was written during steps 2 and
3, and the memtable's RP becomes greater than low_mark, resulting in an SSTable
with RP > low_mark.
So assert's 2nd condition is there to defend against the scenario above.
truncated_at and low_mark_at use millisecond granularity, so even if
truncated_at == low_mark_at, data could have been written in steps 2 and 3
(during same MS window), failing the assert. This is fragile.
Reproducer:
To reproduce the problem, truncated_at must be > low_mark_at, which can easily
happen with both drop table and truncate due to steps 2 and 3.
If a shard has 2 or more tablets, the table's highest RP refers to just one
tablet on that shard.
If the tablet with the highest RP is migrated away, then the sstables in that
shard will have a lower RP than the recorded highest RP (it's a table-wide state,
which makes sense since CL is shared among tablets).
So when either drop table or truncate runs, low_mark will be potentially bigger
than highest RP retrieved from sstables.
Proposed solution:
The current assert is hacked to not fail if writes sneak in during steps 2 and
3, but it's still fragile and seems not to serve its real purpose, since it's
allowing for RP > low_mark.
We should be able to say that low_mark >= RP, as a way of asserting we're not
leaving data targeted by truncate behind (or that we're not removing the wrong
data).
But the problem is that we're saving low_mark in step 1, before preparation
steps (2 and 3). When truncated_at is recorded in step 4, it's a way of saying
all data written so far is targeted for removal. But as of today, low_mark
refers to all data written up to step 1. So low_mark is now set only once,
right before issuing the flush, and thus accounts for all potentially flushed data.
Fixes #18059.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#23560
(cherry picked from commit 0f59deffaa)
(cherry picked from commit 7554d4bbe09967f9b7a55575b5dfdde4f6616862)
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#23649
The split monitor wasn't handling the scenario where the table being
split is dropped. The monitor would be unable to find the tablet map
of such a table, and the error would be treated as a retryable one
causing the monitor to fall into an endless retry loop, with sleeps
in between. And that would block further splits, since the monitor
would be busy with the retries. The fix detects that the table
was dropped and skips to the next candidate, if any.
Fixes#21859.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#22933
(cherry picked from commit 4d8a333a7f)
Closes scylladb/scylladb#23480
The row cache can garbage-collect tombstones in two places:
1) When populating the cache - the underlying reader pipeline has a `compacting_reader` in it;
2) During reads - reads now compact data including garbage collection;
In both cases, garbage collection has to do overlap checks against memtables, to avoid collecting tombstones which cover data in the memtables.
This PR includes fixes for (2), which was not handled at all until now.
(1) was already supposed to be fixed, see https://github.com/scylladb/scylladb/issues/20916. But the test added in this PR showed that that fix is incomplete: https://github.com/scylladb/scylladb/issues/23291. A fix for this issue is also included.
Fixes: https://github.com/scylladb/scylladb/issues/23291
Fixes: https://github.com/scylladb/scylladb/issues/23252
The fix will need backport to all live releases.
- (cherry picked from commit c2518cdf1a)
- (cherry picked from commit 6b5b563ef7)
- (cherry picked from commit 7e600a0747)
- (cherry picked from commit d126ea09ba)
- (cherry picked from commit cb76cafb60)
- (cherry picked from commit df09b3f970)
- (cherry picked from commit e5afd9b5fb)
- (cherry picked from commit 34b18d7ef4)
- (cherry picked from commit f7938e3f8b)
- (cherry picked from commit 6c1f6427b3)
- (cherry picked from commit 0d39091df2)
Parent PR: #23255
Closes scylladb/scylladb#23673
* github.com:scylladb/scylladb:
test/boost/row_cache_test: add memtable overlap check tests
replica/table: add error injection to memtable post-flush phase
utils/error_injection: add a way to set parameters from error injection points
test/cluster: add test_data_resurrection_in_memtable.py
test/pylib/utils: wait_for_cql_and_get_hosts(): sort hosts
replica/mutation_dump: don't assume cells are live
replica/database: do_apply() add error injection point
replica: improve memtable overlap checks for the cache
replica/memtable: add is_merging_to_cache()
db/row_cache: add overlap-check for cache tombstone garbage collection
mutation/mutation_compactor: copy key passed-in to consume_new_partition()
sstable features indicate that an sstable has some extension, or that
some bug was fixed. They allow us to know if we can rely on certain
properties of a read sstable.
Currently, sstable features are set early in the read path (when we
read the scylla metadata file) and very late in the write path
(when we write the scylla metadata file just before sealing the sstable).
However, we happen to read features before we set them in the write path -
when we resize the bloom filter for a newly written sstable we instantiate
an index reader, and that depends on some features. As a result,
we read a disengaged optional (for the scylla metadata component) as if
it was engaged. This somehow worked so far, but fails with the libstdc++
hash table implementation.
Fix it by moving storage of the features to the sstable itself, and
setting it early in the write path.
Fixes #23484
Closes scylladb/scylladb#23485
(cherry picked from commit 73e4a3c581)
Closes scylladb/scylladb#23504
Bootstrap or replace can take a long time, but
since feef7d3fa1,
the stop_signal is checked only at checkpoints,
and in particular, abort isn't requested during
join_cluster.
Fixes #23222
* requires backport on top of https://github.com/scylladb/scylladb/pull/23184
- (cherry picked from commit 0fc196991a)
- (cherry picked from commit f269480f53)
- (cherry picked from commit 41f02c521d)
Parent PR: #23306
Closes scylladb/scylladb#23461
* github.com:scylladb/scylladb:
main: allow abort during join_cluster
main: add checkpoint before joining cluster
storage_service: add start_sys_dist_ks
During streaming, the receiving node gets and processes mutation fragments.
If this operation fails, the receiver responds with a -1 status code, unless
it failed due to no_such_column_family, in which case streaming of this
table should be skipped.
However, when the table was dropped, an exception handler on the receiver
side may get not only data_dictionary::no_such_column_family, but also
a seastar::nested_exception wrapping two no_such_column_family exceptions.
Encountered example:
```
ERROR 2025-02-12 15:20:51,508 [shard 0:strm] stream_session - [Stream #f1cd6830-e954-11ef-afd9-b022e40bf72d] Failed to handle STREAM_MUTATION_FRAGMENTS (receive and distribute phase) for ks=ks, cf=cf, peer=756dd3fe-2bf0-4dcd-afbc-cfd5202669a0: seastar::nested_exception: data_dictionary::no_such_column_family (Can't find a column family with UUID ef9b1ee0-e954-11ef-ba4a-faf17acf4e14) (while cleaning up after data_dictionary::no_such_column_family (Can't find a column family with UUID ef9b1ee0-e954-11ef-ba4a-faf17acf4e14))
```
In this case, the exception does not match the try_catch<data_dictionary::no_such_column_family>
clause and gets handled the same as any other exception type.
Replace the try_catch clause with table_sync_and_check, which synchronizes
the schema and checks if the table exists.
Fixes: https://github.com/scylladb/scylladb/issues/22834.
Needs backport to all live versions, as they all contain the bug
- (cherry picked from commit 876cf32e9d)
- (cherry picked from commit faf3aa13db)
- (cherry picked from commit 44748d624d)
- (cherry picked from commit 35bc1fe276)
Parent PR: #22868
Closes scylladb/scylladb#23290
* github.com:scylladb/scylladb:
streaming: fix the way a reason of streaming failure is determined
streaming: save a continuation lambda
streaming: use streaming namespace in table_check.{cc,hh}
repair: streaming: move table_check.{cc,hh} to streaming
Resize finalization is executed in a separate topology transition state,
`tablet_resize_finalization`, to ensure it does not overlap with tablet
transitions. The topology transitions into the
`tablet_resize_finalization` state only when no tablet migrations are
scheduled or being executed. If there is a large load-balancing backlog,
split finalization might be delayed indefinitely, leaving the tables
with large tablets.
To fix this, do not schedule tablet migrations on any tables when there
are pending resize finalizations. This ensures that migrations from the
same table and other unrelated tables do not block resize finalization.
Also added a testcase to verify the fix.
Fixes #21762
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
(cherry picked from commit 5b47d84399)
Do not make repair plans if any table has pending resize finalization.
This is to ensure that the finalization doesn't get delayed by repair
tasks.
Refs #21762
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
(cherry picked from commit 8cabc66f07)
Similar to test/cluster/test_data_resurrection_in_memtable.py but works
on a single node and uses a more low-level mechanism. These tests can also
reproduce more advanced scenarios, like concurrent reads, with some
reading from flushed memtables.
(cherry picked from commit 0d39091df2)
After the memtable was flushed to disk, but before it is merged to
cache. The injection point will only be active for the table specified in
the "table_name" injection parameter.
(cherry picked from commit 6c1f6427b3)
With this, now it is possible to have two-way communication between
the error injection point and its enabler. The test can enable the error
injection point, then wait until it is hit, before proceeding.
(cherry picked from commit f7938e3f8b)
During streaming, the receiving node gets and processes mutation fragments.
If this operation fails, the receiver responds with a -1 status code, unless
it failed due to no_such_column_family, in which case streaming of this
table should be skipped.
However, when the table was dropped, an exception handler on the receiver
side may get not only data_dictionary::no_such_column_family, but also
a seastar::nested_exception wrapping two no_such_column_family exceptions.
Encountered example:
```
ERROR 2025-02-12 15:20:51,508 [shard 0:strm] stream_session - [Stream #f1cd6830-e954-11ef-afd9-b022e40bf72d] Failed to handle STREAM_MUTATION_FRAGMENTS (receive and distribute phase) for ks=ks, cf=cf, peer=756dd3fe-2bf0-4dcd-afbc-cfd5202669a0: seastar::nested_exception: data_dictionary::no_such_column_family (Can't find a column family with UUID ef9b1ee0-e954-11ef-ba4a-faf17acf4e14) (while cleaning up after data_dictionary::no_such_column_family (Can't find a column family with UUID ef9b1ee0-e954-11ef-ba4a-faf17acf4e14))
```
In this case, the exception does not match the try_catch<data_dictionary::no_such_column_family>
clause and gets handled the same as any other exception type.
Replace the try_catch clause with table_sync_and_check, which synchronizes
the schema and checks if the table exists.
Fixes: https://github.com/scylladb/scylladb/issues/22834.
(cherry picked from commit 35bc1fe276)
In the following patches, an additional preemption point will be
added to the coroutine lambda in register_stream_mutation_fragments.
Assign the lambda to a variable to prolong the captures' lifetime.
(cherry picked from commit 44748d624d)
Such that a given index in the returned hosts list refers to the same
underlying Scylla instance as the same index in the passed-in nodes
list. This is what users of this method intuitively expect, but
currently the returned hosts list is unordered (has random order).
(cherry picked from commit e5afd9b5fb)
Currently the dumper unconditionally extracts the value of atomic cells,
assuming they are live. This doesn't always hold of course and
attempting to get the value of a dead cell will lead to marshalling
errors. Fix by checking is_live() before attempting to get the cell
value. Fix for both regular and collection cells.
(cherry picked from commit df09b3f970)
So writes (to user tables) can be failed on a replica, via error
injection. Should simplify tests which want to create differences in
what writes different replicas receive.
(cherry picked from commit cb76cafb60)
The current memtable overlap check that is used by the cache
-- table::get_max_purgeable_fn_for_cache_underlying_reader() -- only
checks the active memtable, so memtables which are either being flushed
or are already flushed and also have active reads against them do not
participate in the overlap check.
This can result in temporary data resurrection, where a cache read can
garbage-collect a tombstone which still covers data in a flushing or
flushed memtable which still has active reads against it.
To prevent this, extend the overlap check to also consider all of the
memtable list. Furthermore, memtable_list::erase() now places the removed
(flushed) memtable in an intrusive list. These entries are alive only as
long as there are readers still keeping an `lw_shared_ptr<memtable>`
alive. This list is now also consulted on overlap checks.
(cherry picked from commit d126ea09ba)
The cache should not garbage-collect tombstones which cover data in the
memtable. Add overlap checks (get_max_purgeable) to garbage collection
to detect tombstones which cover data in the memtable and to prevent
their garbage collection.
(cherry picked from commit 6b5b563ef7)
This doesn't introduce additional work for single-partition queries: the
key is copied anyway on consume_end_of_stream().
Multi-partition reads and compaction are not that sensitive to
the additional copy.
This change fixes a bug in the compacting_reader: currently the reader
passes _last_uncompacted_partition_start.key() to the compactor's
consume_new_partition(). When the compactor emits enough content for this
partition, _last_uncompacted_partition_start is moved from to emit the
partition start; this makes the key reference passed to the compactor
corrupt (it refers to a moved-from value). This in turn means that subsequent
GC checks done by the compactor will be done with a corrupt key and
therefore can result in tombstones being garbage-collected while they
still cover data elsewhere (data resurrection).
The compacting reader is violating the API contract and normally the bug
should be fixed there. We make an exception here because doing the fix
in the mutation compactor better aligns with our future plans:
* The fix simplifies the compactor (gets rid of _last_dk).
* Prepares the way to get rid of the consume API used by the compactor.
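The moved-from hazard described above can be distilled into a few lines; this is an illustration, not the actual Scylla code:
```
#include <string>
#include <utility>

std::string emit_partition_start(std::string&& key) { return std::move(key); }

void example() {
    std::string key = "pk1";
    const std::string& kref = key;        // reference kept by the "compactor"
    emit_partition_start(std::move(key)); // partition start emitted; key moved from
    // BUG: kref now refers to a moved-from (valid but unspecified)
    // value, so any GC check using it compares against a corrupt key.
    // Fix: copy the key in consume_new_partition() instead.
}
```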
(cherry picked from commit c2518cdf1a)
Before, it was equalizing per-node load (tablet count), which is wrong
in heterogeneous clusters. Nodes with fewer shards will end up with
overloaded shards.
Fixes #23378
- (cherry picked from commit d6232a4f5f)
- (cherry picked from commit 6bff596fce)
Parent PR: #23478
Closes scylladb/scylladb#23635
* github.com:scylladb/scylladb:
tablets: Make tablet allocation equalize per-shard load
tablets: load_balancer: Fix reporting of total load per node
GetInt() was observed to fail when the integer JSON value overflows the
int32_t type, which `GetInt()` uses for storage. When this happens,
rapidjson will assign a distinct 64-bit integer type to the value, and
attempting to access it as a 32-bit integer triggers the wrong-type error,
resulting in an assert failure. This was hit in the field, where invoking
nodetool netstats resulted in nodetool crashing when the streamed bytes
amounts were higher than maxint.
To avoid such bugs in the future, replace all usage of GetInt() in
nodetool with GetInt64(), just to be sure.
A reproducer is added for the nodetool netstats crash.
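A minimal illustration of the failure mode and the fix, using rapidjson's documented IsInt64()/GetInt64() accessors:
```
#include <cassert>
#include <cstdint>
#include <rapidjson/document.h>

int64_t read_streamed_bytes(const rapidjson::Value& v) {
    // assert(v.IsInt()); return v.GetInt();   // asserts for e.g. 3'000'000'000
    assert(v.IsInt64()); // holds for any integer that fits in int64_t
    return v.GetInt64();
}
```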
Fixes: scylladb/scylladb#23394
Closes scylladb/scylladb#23395
(cherry picked from commit bd8973a025)
Closes scylladb/scylladb#23476
Normally, when a node is shutting down, `gate_closed_exception` and `rpc::closed_error`
in `send_to_live_endpoints` should be ignored. However, if these exceptions are wrapped
in a `nested_exception`, an error message is printed, causing tests to fail.
This commit adds handling for nested exceptions in this case to prevent unnecessary
error messages.
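The unwrapping idea behind try_catch_nested (named in the commit list below) can be sketched with the standard library's nested-exception machinery; this is an illustration, not the actual implementation:
```
#include <exception>

// std::throw_with_nested() produces an object derived from
// std::nested_exception, so one extra catch clause lets us match
// the wrapped exception type as well.
template <typename T>
bool caused_by(std::exception_ptr ep) {
    try {
        std::rethrow_exception(std::move(ep));
    } catch (const T&) {
        return true;
    } catch (const std::nested_exception& nested) {
        return nested.nested_ptr() && caused_by<T>(nested.nested_ptr());
    } catch (...) {
        return false;
    }
}
```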
Fixes scylladb/scylladb#23325
Fixes scylladb/scylladb#23305
Fixes scylladb/scylladb#21815
Backport: looks like this is quite a frequent issue, therefore backport to 2025.1.
- (cherry picked from commit 6abfed9817)
- (cherry picked from commit b1e89246d4)
- (cherry picked from commit 0d9d0fe60e)
- (cherry picked from commit d448f3de77)
Parent PR: #23336
Closes scylladb/scylladb#23470
* github.com:scylladb/scylladb:
database: Pass schema_ptr as const ref in `wrap_commitlog_add_error`
database: Unify exception handling in `do_apply` and `apply_with_commitlog`
storage_proxy: Ignore wrapped `gate_closed_exception` and `rpc::closed_error` when node shuts down.
exceptions: Add `try_catch_nested` to universally handle nested exceptions of the same type.
In commit 4812a57f, the fmt-based formatter for gossip_digest_syn had
formatting code for cluster_id, partitioner, and group0_id
accidentally commented out, preventing these fields from being included
in the output. This commit restores the formatting by uncommenting the
code, ensuring full visibility of all fields in the gossip_digest_syn
message when logging permits.
This fixes a regression introduced in 4812a57f, which obscured these
fields and reduced debugging insight. Backporting is recommended for
improved observability.
Fixes #23142
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#23155
(cherry picked from commit 2a9966a20e)
Closes scylladb/scylladb#23199
It is possible that the permit handed in to register_inactive_read() is already
aborted (currently only possible if the permit timed out). If the permit also
happens to be waiting for memory, the current code will attempt to call
promise<>::set_exception() on the permit's promise to abort its waiters. But if
the permit was already aborted via timeout, this promise will already have an
exception, and this will trigger an assert. Add a separate case for checking if
the permit is aborted already. If so, treat it as immediate eviction: close the
reader and clean up.
Fixes: scylladb/scylladb#22919
Bug is present in all live versions, backports are required.
- (cherry picked from commit 4d8eb02b8d)
- (cherry picked from commit 7ba29ec46c)
Parent PR: #23044
Closes scylladb/scylladb#23145
* github.com:scylladb/scylladb:
reader_concurrency_semaphore: register_inactive_read(): handle aborted permit
test/boost/reader_concurrency_semaphore_test: move away from db::timeout_clock::now()
The test `test_mv_topology_change` is a regression test for
scylladb/scylladb#19529. The problem was that CL=ANY writes issued when
all replicas were down would be kept in memory until the timeout. In
particular, MV updates are CL=ANY writes and have a 5 minute timeout.
When doing topology operations for vnodes or when migrating tablet
replicas, the cluster goes through stages where the replica sets for
writes undergo changes, and the writes started with the old replica set
need to be drained first.
Because of the aforementioned MV updates, the removenode operation could
be delayed by 5 minutes or more. Therefore, the
`test_mv_topology_change` test uses a short timeout for the removenode
operation, i.e. 30s. Apparently, this is too low for the debug mode and
the test has been observed to time out even though the removenode
operation is progressing fine.
Increase the timeout to 60s. This is the lowest timeout for the
removenode operation that we currently use among the in-repo tests, and
is lower than 5 minutes so the test will still serve its purpose.
Fixes: scylladb/scylladb#22953
Closes scylladb/scylladb#22958
(cherry picked from commit 43ae3ab703)
Closes scylladb/scylladb#23053
The lambda which dumps the diagnostics for each semaphore is static.
Considering that said lambda captures a local (writeln) by reference, this
is wrong on two levels:
* The writeln captured on the shard which happens to initialize this
static will be used on all shards.
* The writeln captured on the first dump will be used on later dumps,
possibly triggering a segfault.
Drop the `static` to make the lambda local and resolve this problem.
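A distilled version of the bug, with illustrative names rather than the actual Scylla code:
```
#include <iostream>
#include <string_view>

void dump_diagnostics(std::ostream& out) {
    auto writeln = [&out](std::string_view s) { out << s << '\n'; };
    // BUG: `static` means this lambda (and the captured reference to
    // the first call's `writeln`) is initialized once and reused on
    // every later call and on every shard; the reference dangles.
    static auto dump = [&writeln](std::string_view s) { writeln(s); };
    dump("semaphore diagnostics");
    // Fix: drop `static` so the lambda re-captures on each call.
}
```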
Fixes: scylladb/scylladb#22756
Closes scylladb/scylladb#22776
(cherry picked from commit 820f196a49)
Closes scylladb/scylladb#22938
Fixes#22688
If we set a dc's rf to zero, the options map will still retain a dc=0 entry.
If this dc is decommissioned, any further alters of the keyspace will fail,
because the union of new/old options will now contain an unknown keyword.
Change alter ks options processing to simply remove any dc with rf=0 on
alter, and treat this as an implicit dc=0 in the nw-topo strategy.
This means we change the reallocate_tablets routine to not rely on
the strategy object's dc mapping, but on the full replica topology info
for which dc:s to consider for reallocation. Since we verify the input
on attribute processing, the amount of rf/tablets moved should still
be legal.
v2:
* Update docs as well.
v3:
* Simplify dc processing
* Reintroduce options empty check, but do early in ks_prop_defs
* Clean up unit test some
Closes scylladb/scylladb#22693
(cherry picked from commit 342df0b1a8)
Closes scylladb/scylladb#22877
Bootstrap or replace can take a long time, but
since feef7d3fa1,
the stop_signal is checked only at checkpoints,
and in particular, abort isn't requested during
join_cluster.
Fixes #23222
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 41f02c521d)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Currently, there's a call to
`supervisor::notify("starting system distributed keyspace")`
which is misleading as it is identical to a similar
message in main() when starting the sharded service.
Change that to a storage_service log message
and be more specific that the sys_dist_ks shards are being started.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 0fc196991a)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
audit_syslog_storage_helper::syslog_send_helper uses Seastar's
net::datagram_channel to write to the syslog device (usually /dev/log).
However, datagram_channel.send() is not fiber-safe (ref seastar#2690),
so unserialized use of send() results in concurrent sends corrupting its internal state.
This, in turn, causes a corruption of audit logs, as well as assertion
failures.
To work around the problem, a new semaphore is introduced in
audit_syslog_storage_helper. As storage_helper is a member of sharded
audit service, the semaphore allows for one datagram_channel.send() on
each shard. Each audit_syslog_storage_helper stores its own
datagram_channel, therefore concurrent sends to datagram_channel are
eliminated.
This change:
- Introduces a semaphore with count=1 in audit_syslog_storage_helper.
- Adds a 1-hour timeout to the semaphore, so semaphore stalls fail
just like all other syslog auditing failures.
Fixes: scylladb#22973
(cherry picked from commit c12f976389)
This change:
- Makes syslog_send_helper() a method of audit_syslog_storage_helper, so
syslog_send_helper() can access private members of
audit_syslog_storage_helper in the next commits.
- Removes unneeded syslog_send_helper() arguments that are now class
members.
(cherry picked from commit dbd2acd2be)
Add a new nodetool cluster repair command that repairs tablet keyspaces.
Users may specify keyspace and tables that they want to repair.
If the keyspace and tables are not specified, all tablet keyspaces
are repaired.
The command calls the new tablet repair API /storage_service/tablets/repair.
(cherry picked from commit 8bbc5e8923)
Warn about an attempt to repair a tablet keyspace with nodetool repair.
A nodetool cluster repair command to repair tablet keyspaces will
be added in the following patches.
(cherry picked from commit b81c81c7f4)
A default timestamp (not to be confused with the timestamp passed via the 'USING TIMESTAMP' query clause) can be set using the 0x20 flag and the <timestamp> field in the binary CQL frame payload of QUERY, EXECUTE and BATCH ops. It also happens to be the default of the Java CQL Driver.
However, we were only setting the corresponding info in the CQL Tracing context of a QUERY operation. For an unknown reason we were not setting this for EXECUTE and BATCH traces (I guess I simply forgot to set it back then).
This patch fixes this.
Fixes #23173
The issue fixed by this PR is not critical but the fix is simple and safe enough so we should backport it to all live releases.
- (cherry picked from commit ca6bddef35)
- (cherry picked from commit f7e1695068)
Parent PR: #23174
Closes scylladb/scylladb#23524
* github.com:scylladb/scylladb:
CQL Tracing: set common query parameters in a single function
transport/server.cc: set default timestamp info in EXECUTE and BATCH tracing
Fix UBSan abort caused by integer overflow when calculating time difference
between read and write operations. The issue occurs when:
1. The queried partition on replicas is not purgeable (has no recorded
modified time)
2. Digests don't match across replicas
3. The system attempts to calculate timespan using missing/negative
last_modified timestamps
This change skips the cross-DC repair optimization when the write timestamp is
negative or missing, as this optimization is only relevant for reads
occurring within write_timeout of a write.
Error details:
```
service/storage_proxy.cc:5532:80: runtime error: signed integer overflow: -9223372036854775808 - 1741940132787203 cannot be represented in type 'int64_t' (aka 'long')
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior service/storage_proxy.cc:5532:80
Aborting on shard 1, in scheduling group sl:default
```
Related to previous fix 39325cf which handled negative read_timestamp cases.
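A sketch of the guard, with assumed names; the point is to bail out before the subtraction that can overflow:
```
#include <cstdint>

// Skip the cross-DC repair optimization when the write timestamp is
// missing (often encoded as INT64_MIN) or negative, instead of
// computing now - last_modified and overflowing int64_t.
bool within_write_timeout(int64_t now_us, int64_t last_modified_us,
                          int64_t write_timeout_us) {
    if (last_modified_us <= 0) {
        // Missing or negative timestamp: the subtraction below could
        // overflow (e.g. INT64_MIN - now_us), so skip the optimization.
        return false;
    }
    return now_us - last_modified_us < write_timeout_us;
}
```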
Fixes #23314
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#23359
(cherry picked from commit ebf9125728)
Closes scylladb/scylladb#23387
To simplify aborting scylla while the services are starting,
add a _ready state to stop_signal. Until main is ready to be
stopped by the abort_source, only register that the signal was
caught, and let a check() method poll that flag, request the
abort, and throw the respective exception only then, at
controlled points in-between starting services, after each
service started successfully and a deferred stop action was
installed.
This patch prevents gate_closed_exception from escaping handling
when start-up is aborted early with the stop signal,
which caused https://github.com/scylladb/scylladb/issues/23153
The regression is apparently due to a25c3eaa1c
Fixes https://github.com/scylladb/scylladb/issues/23153
* Requires backport to 2025.1 due to a25c3eaa1c
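A minimal sketch of the described protocol, with illustrative names; only the abort_source usage follows Seastar's actual API:
```
#include <seastar/core/abort_source.hh>

class stop_signal {
    bool _caught = false;
    bool _ready = false;
    seastar::abort_source _as;
public:
    void on_signal() {
        // Until _ready, only record that the signal was caught.
        _caught = true;
        if (_ready) {
            _as.request_abort();
        }
    }
    void set_ready() { _ready = true; }
    void check() { // polled in-between starting services
        if (_caught) {
            _as.request_abort();
            throw seastar::abort_requested_exception();
        }
    }
    seastar::abort_source& abort_source() { return _as; }
};
```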
- (cherry picked from commit 23433f593c)
- (cherry picked from commit 282ff344db)
- (cherry picked from commit feef7d3fa1)
- (cherry picked from commit b6705ad48b)
Parent PR: #23103
Closes scylladb/scylladb#23184
* github.com:scylladb/scylladb:
main: add checkpoints
main: safely check stop_signal in-between starting services
main: move prometheus start message
main: move per-shard database start message
Alternator Streams' "GetRecords" operation has a "Limit" parameter on
how many records to return. The DynamoDB documentation says that the
upper limit on this Limit parameter is 1000 - but Alternator didn't
enforce this. In this patch we begin enforcing this highest Limit, and
also add a test for verifying this enforcement. As usual, the new test
passes on DynamoDB, and after this patch - also on Alternator.
The reason why it's useful to have *some* upper limit on Limit is that
the existing executor::get_records() implementation does not really have
preemption points in all the necessary places. In particular, we have a
loop on all returned records without preemption points. We also store
the returned records in a RapidJson vector, which requires a contiguous
allocation.
Even before this patch, GetRecords had a hard limit of 1 MB of results.
But still, in some cases 1 MB of results may be a lot of results, and we
can see stalls in the aforementioned places being O(number of results).
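A sketch of the enforcement, with assumed names rather than Alternator's actual code:
```
#include <stdexcept>

// Reject Limit values outside DynamoDB's documented range of 1 to 1000.
int validate_get_records_limit(int limit) {
    constexpr int max_get_records_limit = 1000;
    if (limit < 1 || limit > max_get_records_limit) {
        throw std::invalid_argument("Limit must be between 1 and 1000");
    }
    return limit;
}
```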
Fixes #23534
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#23547
(cherry picked from commit 84fd52315f)
Closes scylladb/scylladb#23643
This series adds a new config option: `tablets_mode_for_new_keyspaces` that replaces the existing
`enable_tablets` option. It can be set to the following values:
disabled: New keyspaces use vnodes by default, unless enabled by the tablets={'enabled':true} option
enabled: New keyspaces use tablets by default, unless disabled by the tablets={'disabled':true} option
enforced: New keyspaces must use tablets. Tablets cannot be disabled using the CREATE KEYSPACE option
`tablets_mode_for_new_keyspaces=disabled` or `tablets_mode_for_new_keyspaces=enabled` control whether
tablets are disabled or enabled by default for new keyspaces, respectively.
In either case, tablets can be opted in or out using the `tablets={'enabled':...}`
keyspace option, when the keyspace is created.
`tablets_mode_for_new_keyspaces=enforced` enables tablets by default for new keyspaces,
like `tablets_mode_for_new_keyspaces=enabled`.
However, it does not allow opting out when creating
new keyspaces by setting `tablets = {'enabled': false}`
Fixes scylladb/scylla-enterprise#4355
[Edit: changed `Refs` above to `Fixes` to appease the backport bot gods]
* Requires backport to 2025.1
- (cherry picked from commit c62865df90)
- (cherry picked from commit 62aeba759b)
- (cherry picked from commit 9fac0045d1)
Parent PR: #22273
Closes scylladb/scylladb#23602
* github.com:scylladb/scylladb:
boost/tablets_test: verify failure to create keyspace with tablets and non network replication strategy
tablets: enforce tablets using tablets_mode_for_new_keyspaces=enforced config option
db/config: add tablets_mode_for_new_keyspaces option
`safe_foreach_sstable` doesn't do its job correctly.
It iterates over an sstable set under the sstable deletion
lock in an attempt to ensure that SSTables aren't deleted during the iteration.
The thing is, it takes the deletion lock after the SSTable set is
already obtained, so SSTables might get unlinked *before* we take the lock.
Remove this function and fix its usages to obtain the set and iterate
over it under the lock.
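A distilled sketch of the ordering fix; the sstable type, the set, and process() are illustrative stand-ins, while the read-lock holder follows Seastar's rwlock API:
```
#include <vector>
#include <seastar/core/coroutine.hh>
#include <seastar/core/rwlock.hh>

struct sstable {};
seastar::future<> process(sstable&); // per-sstable work

seastar::future<> for_each_sstable(seastar::rwlock& deletion_lock,
                                   std::vector<sstable>& live_set) {
    // Broken order (old code): snapshot the set first, *then* take
    // the lock -- sstables may be unlinked in between.
    // Fixed order: take the deletion lock first, then obtain the set
    // and iterate over it while still holding the lock.
    auto holder = co_await deletion_lock.hold_read_lock();
    for (auto& sst : live_set) {
        co_await process(sst);
    }
}
```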
Closes scylladb/scylladb#23397
(cherry picked from commit e23fdc0799)
Closes scylladb/scylladb#23628
The `table::do_apply()` method verifies if the compaction group's async
gate is open to determine if the compaction group is active. Closing
this async gate prevents any new operations but waits for existing
holders to exit, allowing their operations to complete. When holding a
gate, holders will observe the gate as closed when it is being closed,
but this is irrelevant as they are already inside the gate and are
allowed to complete. All the callers of `table::do_apply()` already
enter the gate before calling the method. So, the async gate check
inside `table::do_apply()` will erroneously throw an exception when the
compaction group is closing despite holding the gate. This commit
removes the check to prevent this from happening.
Fixes #23348
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
Closes scylladb/scylladb#23579
(cherry picked from commit 750f4baf44)
Closes scylladb/scylladb#23645
`tablets_mode_for_new_keyspaces=enforced` enables tablets by default for
new keyspaces, like `tablets_mode_for_new_keyspaces=enabled`.
However, it does not allow opting out when creating
new keyspaces by setting `tablets = {'enabled': false}`.
Refs scylladb/scylla-enterprise#4355
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 62aeba759b)
The new option deprecates the existing `enable_tablets` option.
It will be extended in the next patch with a 3rd value: "enforced",
which will enable tablets by default for new keyspaces but
without the possibility to opt out using the `tablets = {'enabled':
false}` keyspace schema option.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit c62865df90)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Before, it was equalizing per-node load (tablet count), which is wrong
in heterogeneous clusters. Nodes with fewer shards will end up with
overloaded shards.
Refs #23378
(cherry picked from commit 6bff596fce)
Load is now utilization, not count, so we should report average
per-shard load, which is equivalent to the node's utilization.
(cherry picked from commit d6232a4f5f)
Currently, repair_writer_impl::create_writer keeps erm to ensure that a sharder is valid. If we repair a tablet, erm blocks the state machine and no operation on any tablet of this table can be performed.
Use auto_refreshing_sharder and topology_guard to ensure that the operation is safe and that tablet operations on the whole table aren't blocked.
Fixes: #23453.
Needs a backport to 2025.1 that introduces the tablet repair scheduler.
- (cherry picked from commit 1dc29ddc86)
- (cherry picked from commit bae6711809)
Parent PR: #23455
Closes scylladb/scylladb#23580
* github.com:scylladb/scylladb:
test: add test to check concurrent migration and repair of two different tablets
repair: release erm in repair_writer_impl::create_writer when possible
The "make-pr-ready-for-review" workflow was failing with an "Input
required and not supplied: token" error. This was due to GitHub Actions
security restrictions preventing access to the token when the workflow
is triggered in a fork:
```
Error: Input required and not supplied: token
```
This commit addresses the issue by:
- Running the workflow in the base repository instead of the fork. This
grants the workflow access to the required token with write permissions.
- Simplifying the workflow by using a job-level `if` condition to
control execution, as recommended in the GitHub Actions documentation
(https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/using-conditions-to-control-job-execution).
This is cleaner than conditional steps.
- Removing the repository checkout step, as the source code is not required for this workflow.
This change resolves the token error and ensures the
"make-pr-ready-for-review" workflow functions correctly.
Fixes scylladb/scylladb#22765
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#22766
(cherry picked from commit ca832dc4fb)
Closes scylladb/scylladb#23561
Before this patch, granting a user MODIFY permissions on ALL KEYSPACES allowed the user to write to system tables, where the user could also set themselves as "superuser", granting them all other permissions. After this patch, MODIFY permissions on ALL KEYSPACES are limited only to non-system keyspaces.
Fixes: scylladb/scylladb#23218
(cherry picked from commit fee50f287c)
Parent PR: #23219
Closes scylladb/scylladb#23594
On our testing infrastructure, tests often run a hundred times (!)
slower than usual, for various reasons that we can't always avoid.
This is why all our test frameworks drastically increase the default
timeouts.
We forgot to increase the timeout in one place - where Alternator tests
use CQL. This is needed for the Alternator role-based access control
(RBAC) tests: RBAC is configured via CQL, and therefore the Alternator
test unusually uses CQL.
So in this patch we increase the timeout of CQL driver used by
Alternator tests to the same high timeouts (60-120 seconds) used by
the regular CQL tests. As the famous saying goes, these timeouts should
be enough for anyone.
Fixes #23569.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#23578
(cherry picked from commit a9a6f9eecc)
Closes scylladb/scylladb#23601
The member in question is unconditionally .stop()-ed in the task's
release_resources() method; however, it may happen that the thing wasn't
.start()-ed in the first place. Start happens in the middle of the
task's .run() method and there can be several reasons why it can be
skipped -- e.g. the task is aborted early, or collecting sstables from
S3 throws.
Fixes: #23231
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes scylladb/scylladb#23483
(cherry picked from commit 832d83ae4b)
Closes scylladb/scylladb#23557
Function modification_statement::add_raw() is never called, which
makes the query string in audit_info of batch queries empty. In the
enterprise branch, add_raw is called in Cql.g and those changes were
never merged to master.
This change:
- Adds the missing call of add_raw() to Cql.g
- Includes other related changes (from PR#3228 in scylla-enterprise)
Fixes scylladb#23311
Closes scylladb/scylladb#23315
(cherry picked from commit b8adbcbc84)
Closes scylladb/scylladb#23495
Currently, repair_writer_impl::create_writer keeps erm to ensure
that a sharder is valid. If we repair a tablet, erm blocks the state
machine and no operation on any tablet of this table can be performed.
Use auto_refreshing_sharder and topology_guard to ensure that the
operation is safe and that tablet operations on the whole table
aren't blocked.
Fixes: #23453.
(cherry picked from commit 1dc29ddc86)
Draining hints may occur in one of two scenarios:
* a node leaves the cluster and the local node drains all of the hints
saved for that node,
* the local node is being decommissioned.
Draining may take some time and the hint manager won't stop until it
finishes. It's not a problem when decommissioning a node, especially
because we want the cluster to retain the data stored in the hints.
However, it may become a problem when the local node started draining
hints saved for another node and now it's being shut down.
There are two reasons for that:
* Generally, in situations like that, we'd like to be able to shut down
nodes as fast as possible. The data stored in the hints won't
disappear from the cluster yet since we can restart the local node.
* Draining hints may introduce flakiness in tests. Replaying hints doesn't
have the highest priority and it's reflected in the scheduling groups we
use as well as the explicitly enforced throughput. If there are a large
number of hints to be replayed, it might affect our tests.
It's already happened, see: scylladb/scylladb#21949.
To solve those problems, we change the semantics of draining. It will behave
as before when the local node is being decommissioned. However, when the
local node is only being stopped, we will immediately cancel all ongoing
draining processes and stop the hint manager. To compensate for that, when we
start a node and it initializes a hint endpoint manager corresponding to
a node that's already left the cluster, we will begin the draining process
of that endpoint manager right away.
That should ensure all data is retained, while possibly speeding up
the shutdown process.
There's a small trade-off to it, though. If we stop a node, we can then
remove it. It won't have a chance to replay hints it might have replayed
before these changes, but that's an edge case. We expect this commit to bring
more benefit than harm.
We also provide tests verifying that the implementation works as intended.
Fixes scylladb/scylladb#21949
Closes scylladb/scylladb#22811
(cherry picked from commit 0a6137218a)
Closes scylladb/scylladb#23370
Before this patch, the load balancer was equalizing tablet count per
shard, so it achieved balance assuming that:
1) tablets have the same size
2) shards have the same capacity
That can cause imbalance of utilization if shards have different
capacity, which can happen in heterogeneous clusters with different
instance types. One of the causes for capacity difference is that
larger instances run with fewer shards due to vCPUs being dedicated to
IRQ handling. This makes those shards have more disk capacity, and
more CPU power.
After this patch, the load balancer equalizes per-shard storage
utilization, so it no longer assumes that shards have the same
capacity. It still assumes that each tablet has equal size. So it's a
middle step towards full size-aware balancing.
One consequence is that to be able to balance, the load balancer needs
to know about every node's capacity, which is collected with the same
RPC which collects load_stats for average tablet size. This is not a
significant setback because migrations cannot proceed anyway if nodes
are down due to barriers. We could make intra-node migration
scheduling work without capacity information, but it's pointless due
to above, so not implemented.
Also, per-shard goal for tablet count is still the same for all nodes in the cluster,
so nodes with less capacity will be below limit and nodes with more capacity will
be slightly above limit. This shouldn't be a significant problem in practice, we could
compensate for this by increasing the limit.
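A toy model of the new balancing target, under the stated assumption that all tablets have equal size; names and structure are illustrative:
```
#include <cstdint>

// Balance per-shard storage utilization rather than raw tablet
// count, so shards with more capacity are allowed to hold
// proportionally more tablets.
struct shard_stats {
    uint64_t tablet_count;
    uint64_t avg_tablet_size_bytes; // from load_stats
    uint64_t capacity_bytes;        // collected via the same RPC
};

// utilization = projected bytes used / shard capacity
double shard_utilization(const shard_stats& s) {
    return double(s.tablet_count) * double(s.avg_tablet_size_bytes)
         / double(s.capacity_bytes);
}
```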
Fixes #23042
* github.com:scylladb/scylladb:
tablets: Make load balancing capacity-aware
topology_coordinator: Fix confusing log message
topology_coordinator: Refresh load stats after adding a new node
topology_coordinator: Allow capacity stats to be refreshed with some nodes down
topology_coordinator: Refactor load status refreshing so that it can be triggered from multiple places
test: boost: tablets_test: Always provide capacity in load_stats
test: perf_load_balancing: Set node capacity
test: perf_load_balancing: Convert to topology_builder
config, disk_space_monitor: Allow overriding capacity via config
storage_service, tablets: Collect per-node capacity in load_stats
test: tablets_test: Add support for auto-split mode
test: cql_test_env: Expose db config
Closes scylladb/scylladb#23443
* github.com:scylladb/scylladb:
Merge 'tablets: Make load balancing capacity-aware' from Tomasz Grabiec
test: tablets_test: Add support for auto-split mode
test: cql_test_env: Expose db config
Each query-type (QUERY, EXECUTE, BATCH) CQL opcode has a number of parameters
in their payload which we always want to record in the Tracing object.
Today it's a Consistency Level, Serial Consistency Level and a Default Timestamp.
Setting each of them individually can lead to a human error when one (or more) of
them would not be set. Let's eliminate such a possibility by defining
a single function that sets them all.
This also allows an easy addition of such parameters to this function in
the future.
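A sketch of the consolidation, with assumed names mirroring the description; the real function and setters may differ:
```
#include <cstdint>

struct trace_state {
    int consistency{};
    int serial_consistency{};
    int64_t default_timestamp{};
};

// One function records all common parameters, so no opcode handler
// (QUERY, EXECUTE or BATCH) can forget one of them.
void set_common_query_parameters(trace_state& ts, int cl, int serial_cl,
                                 int64_t default_ts) {
    ts.consistency = cl;
    ts.serial_consistency = serial_cl;
    ts.default_timestamp = default_ts;
}
```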
(cherry picked from commit f7e1695068)
A default timestamp (not to be confused with the timestamp passed via the 'USING TIMESTAMP' query clause)
can be set using 0x20 flag and the <timestamp> field in the binary CQL frame payload of
QUERY, EXECUTE and BATCH ops. It also happens to be a default of a Java CQL Driver.
However, we were only setting the corresponding info in the CQL Tracing context of a QUERY operation.
For an unknown reason we were not setting this for EXECUTE and BATCH traces (I guess I simply forgot to
set it back then).
This patch fixes this.
Fixes #23173
(cherry picked from commit ca6bddef35)
Refs scylla-enterprise#5185
Fixes#22901
If a TLS socket gets EPIPE, the error is not translated to a specific
gnutls error code, but only to a generic ERROR_PULL/PUSH. Since we treat
EPIPE as ignorable for plain sockets, we need to unwind the nested exception
here to detect that the error was in fact due to EPIPE, so we can suppress
log output for it.
Closes scylladb/scylladb#22888
(cherry picked from commit e49f2046e5)
Closes scylladb/scylladb#23045
The series fixes a regression and demotes a barrier_and_drain logging error to a warning since this particular condition may happen during normal operation.
We want to backport both since one is a bug fix and the other is trivial and reduces CI flakiness.
- (cherry picked from commit 1da7d6bf02)
- (cherry picked from commit fe45ea505b)
Parent PR: #22650
Closes scylladb/scylladb#22923
* https://github.com/scylladb/scylladb:
topology_coordinator: demote barrier_and_drain rpc failure to warning
topology_coordinator: read peers table only once during topology state application
In the current scenario, the problem discovered is that there is a time
gap between group0 creation and the raft_initialize_discovery_leader call.
Because of that, the group0 snapshot/apply entry reads wrong values
from disk (null) and updates the in-memory variables to wrong values.
During this time gap, the in-memory variables hold wrong values and
drive absurd actions.
This PR removes the variable `_manage_topology_change_kind_from_group0`
which was used earlier as a work around for correctly handling
`topology_change_kind` variable, it was brittle and had some bugs
(causing issues like scylladb/scylladb#21114). The reason for this bug is
that _manage_topology_change_kind used to block reading from disk and
was enabled only after group0 initialization and starting the raft server in
the restart case. Similarly, it was hard to manage `topology_change_kind`
using `_manage_topology_change_kind_from_group0` correctly in a bug-free
manner.
Post `_manage_topology_change_kind_from_group0` removal, careful
management of `topology_change_kind` variable was needed for maintaining
correct `topology_change_kind` in all scenarios. So this PR also performs
a refactoring to populate all init data to system tables even before
group0 creation (via the `raft_initialize_discovery_leader` function). Now
because `raft_initialize_discovery_leader` happens before the group 0
creation, we write mutations directly to system tables instead of a
group 0 command. Hence, post group0 creation, the node can read the
correct values from system tables and correct values are maintained
throughout.
Added a new function `initialize_done_topology_upgrade_state` which
takes care of updating the correct upgrade state to system tables before
starting group0 server. This ensures that the node can read the correct
values from system tables and correct values are maintained throughout.
By moving `raft_initialize_discovery_leader` logic to happen before
starting group0 server, and not as group0 command post server start, we
also get rid of the potential problem of init group0 command not being
the 1st command on the server. Hence ensuring full integrity as expected
by the programmer.
This PR fixes a bug. Hence we need to backport it.
Fixes: scylladb/scylladb#21114
- (cherry picked from commit 4748125a48)
- (cherry picked from commit e491950c47)
- (cherry picked from commit 623e01344b)
- (cherry picked from commit d7884cf651)
Parent PR: #22484
Closes scylladb/scylladb#22966
* https://github.com/scylladb/scylladb:
storage_service: Remove the variable _manage_topology_change_kind_from_group0
storage_service: fix indentation after the previous commit
raft topology: Add support for raft topology system tables initialization to happen before group0 initialization
service/raft: Refactor mutation writing helper functions.
This commit removes the outdated information about seed nodes.
We no longer need it in the docs, as a) the documentation is versioned,
and b) the ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1 versions
mentioned in the docs are no longer supported.
In addition, some clarification has been added to the existing sections.
Fixes https://github.com/scylladb/scylladb/issues/22400
Closes scylladb/scylladb#23282
(cherry picked from commit dbbf9e19e4)
Closes scylladb/scylladb#23327
Moving a PR out of draft is only allowed for users with write access;
add a GitHub action to switch a PR to `ready for review` once the
`conflicts` label is removed.
Closes scylladb/scylladb#22446
(cherry picked from commit ed4bfad5c3)
Closes scylladb/scylladb#23023
The test fails sporadically with:
cassandra.ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed for test3.test2 - received 1 responses and 1 failures from 2 CL=QUORUM." info={'consistency': 'QUORUM', 'required_responses': 2, 'received_responses': 1, 'failures': 1}
That's because a server is stopped in the middle of the workload.
The server is stopped ungracefully which will cause some requests to
time out. We should stop it gracefully to allow in-flight requests to
finish.
Fixes #20492
Closes scylladb/scylladb#23451
(cherry picked from commit 8e506c5a8f)
Closes scylladb/scylladb#23469
Move exception wrapping logic from `do_apply` and `apply_with_commitlog`
to `wrap_commitlog_add_error` to ensure consistent error handling.
(cherry picked from commit 0d9d0fe60e)
Normally, when a node is shutting down, `gate_closed_exception` and `rpc::closed_error`
in `send_to_live_endpoints` should be ignored. However, if these exceptions are wrapped
in a `nested_exception`, an error message is printed, causing tests to fail.
This commit adds handling for nested exceptions in this case to prevent unnecessary
error messages.
Fixes scylladb/scylladb#23325
(cherry picked from commit b1e89246d4)
After recent changes, #18640 and #19151 started to reproduce for
stop_after_sending_join_node_request and
stop_after_bootstrapping_initial_raft_configuration error injections too.
The solution is the same: deselect the tests.
Fixes #23302
Closes scylladb/scylladb#23405
(cherry picked from commit 574c81eac6)
Closes scylladb/scylladb#23460
Before this patch, the load balancer was equalizing tablet count per
shard, so it achieved balance assuming that:
1) tablets have the same size
2) shards have the same capacity
That can cause imbalance of utilization if shards have different
capacity, which can happen in heterogeneous clusters with different
instance types. One of the causes for capacity difference is that
larger instances run with fewer shards due to vCPUs being dedicated to
IRQ handling. This makes those shards have more disk capacity, and
more CPU power.
After this patch, the load balancer equalizes per-shard storage
utilization, so it no longer assumes that shards have the same
capacity. It still assumes that each tablet has equal size. So it's a
middle step towards full size-aware balancing.
One consequence is that to be able to balance, the load balancer needs
to know about every node's capacity, which is collected with the same
RPC which collects load_stats for average tablet size. This is not a
significant setback because migrations cannot proceed anyway if nodes
are down due to barriers. We could make intra-node migration
scheduling work without capacity information, but it's pointless due
to above, so not implemented.
Also, per-shard goal for tablet count is still the same for all nodes in the cluster,
so nodes with less capacity will be below limit and nodes with more capacity will
be slightly above limit. This shouldn't be a significant problem in practice, we could
compensate for this by increasing the limit.
Refs #23042
Closes scylladb/scylladb#23079
* github.com:scylladb/scylladb:
tablets: Make load balancing capacity-aware
topology_coordinator: Fix confusing log message
topology_coordinator: Refresh load stats after adding a new node
topology_coordinator: Allow capacity stats to be refreshed with some nodes down
topology_coordinator: Refactor load status refreshing so that it can be triggered from multiple places
test: boost: tablets_test: Always provide capacity in load_stats
test: perf_load_balancing: Set node capacity
test: perf_load_balancing: Convert to topology_builder
config, disk_space_monitor: Allow overriding capacity via config
storage_service, tablets: Collect per-node capacity in load_stats
(cherry picked from commit b1d9f80d85)
rebalance_tablets() was performing migrations and merges automatically
but not splits, because splits need to be acked by replicas via
load_stats. It's inconvenient in tests which want to rebalance to the
equilibrium point. This patch changes rebalance_tablets() to split
automatically by default; this can be disabled for tests which expect
differently.
shared_load_stats was introduced to provide a stable holder of
load_stats which can be reused across rebalance_tablets() calls.
(cherry picked from commit 5e471c6f1b)
Do not hold erm during repair of a tablet that is started with tablet
repair scheduler. This way two different tablets can be repaired
and migrated concurrently. The same tablet won't be migrated while
being repaired, as this is guaranteed by the topology coordinator.
Use topology_guard to maintain safety.
Fixes: https://github.com/scylladb/scylladb/issues/22408.
Needs backport to 2025.1 that introduces the tablet repair scheduler.
Closes scylladb/scylladb#23362
* github.com:scylladb/scylladb:
test: add test to check concurrent tablets migration and repair
repair: do not hold erm for repair scheduled by scheduler
repair: get total rf based on current erm
repair: make shard_repair_task_impl::erm private
repair: do not pass erm to put_row_diff_with_rpc_stream when unnecessary
repair: do not pass erm to flush_rows_in_working_row_buf when unnecessary
repair: pass session_id to repair_writer_impl::create_writer
repair: keep materialized topology guard in shard_repair_task_impl
repair: pass session_id to repair_meta
This PR is an introductory step towards enforcing
RF-rack-valid keyspaces in Scylla.
The scope of changes:
* defining RF-rack-valid keyspaces,
* introducing a configuration option enforcing RF-rack-valid
keyspaces,
* restricting the CREATE and ALTER KEYSPACE statements
so that they never lead to RF-rack invalid keyspaces,
* verifying, during the initialization of a node, that all existing
keyspaces are RF-rack-valid; if not, the initialization fails.
We provide tests verifying that the changes behave as intended.
---
Note that there are a number of things that still need to be implemented.
That includes, for instance, restricting topology operations too.
---
Implementation strategy (going beyond the scope of this PR):
1. Introduce the new configuration option `rf_rack_valid_keyspaces`.
2. Start enforcing RF-rack-validity in keyspaces if the option is enabled.
3. Adjust the tests: in the tree and out of it. Explicitly enable the option in all tests.
4. Once the tests have been adjusted, change the default value of the option to enabled.
5. Stop explicitly enabling the option in tests.
6. Get rid of the option.
---
Fixes scylladb/scylladb#20356
Fixes scylladb/scylladb#23276
Fixes scylladb/scylladb#23300
---
Backport: this is part of the requirements for releasing 2025.1.
- (cherry picked from commit 32879ec0d5)
- (cherry picked from commit 41f862d7ba)
- (cherry picked from commit 0e04a6f3eb)
Parent PR: #23138
Closes scylladb/scylladb#23398
* github.com:scylladb/scylladb:
main: Refuse to start node when RF-rack-invalid keyspace exists
cql3: Ensure that CREATE and ALTER never lead to RF-rack-invalid keyspaces
db/config: Introduce RF-rack-valid keyspaces
When a node is started with the option `rf_rack_valid_keyspaces`
enabled, the initialization will fail if there is an RF-rack-invalid
keyspace. We want to force the user to adjust their existing
keyspaces when upgrading to 2025.* so that the invariant that
every keyspace is RF-rack-valid is always satisfied.
Fixes scylladb/scylladb#23300
(cherry picked from commit 0e04a6f3eb)
In this commit, we refuse to create or alter a keyspace when that operation
would make it RF-rack-invalid if the option `rf_rack_valid_keyspaces` is
enabled.
We provide two tests verifying that the changes work as intended.
Fixes scylladb/scylladb#23276
(cherry picked from commit 41f862d7ba)
We introduce a new term in the glossary: RF-rack-valid keyspace.
We also highlight in our user documentation that all keyspaces
must remain RF-rack-valid throughout their lifetime, and failing
to guarantee that may result in data inconsistencies or other
issues. We base that information on our experience with materialized
views in keyspaces using tablets, even though they remain
an experimental feature.
Along with the new term, we introduce a new configuration option
called `rf_rack_valid_keyspaces`, which, when enabled, will enforce
that all keyspaces remain RF-rack-valid. That functionality will be
implemented in upcoming commits. For now, we materialize the
restriction in the form of a named requirement: a function verifying
that the passed keyspace is RF-rack-valid.
The option is disabled by default. That will change once we adjust
the existing tests to the new semantics. Once that is done, the option
will first be enabled by default, and then it will be removed.
Fixes scylladb/scylladb#20356
(cherry picked from commit 32879ec0d5)
Do not hold erm for tablet repair scheduled by the scheduler. Thanks to
that, one tablet repair won't exclude migration of other tablets.
Concurrent repair and migration of the same tablet isn't possible,
since a tablet can be in only one type of transition at a time.
Hence the change is safe.
Refs: https://github.com/scylladb/scylladb/issues/22408.
(cherry picked from commit 5b792bdc98)
Get total rf based on erm. Currently, it does not change anything
because erm stays the same during the whole repair.
(cherry picked from commit a1375896df)
When small_table_optimization isn't enabled, put_row_diff_with_rpc_stream
does not access erm. Pass small_table_optimization_params containing erm
only when small_table_optimization is enabled.
This is safe as erm is kept by shard_repair_task_impl.
(cherry picked from commit 444c7eab90)
When small_table_optimization isn't enabled, flush_rows_in_working_row_buf
does not access erm. Add small_table_optimization_params containing erm and
pass it only when small_table_optimization is enabled.
This is safe as erm is kept by shard_repair_task_impl.
(cherry picked from commit e56bb5b6e2)
Keep materialized topology guard in shard_repair_task_impl and check
it in check_in_abort_or_shutdown and before each range repair.
(cherry picked from commit 47bb9dcf78)
Pass session_id of tablet repair down the stack from the repair request
to repair_meta.
The session_id will be utilized in the following patches.
(cherry picked from commit 928f92c780)
service: Introduce rack-aware co-location migrations for tablet merge
Merge co-location can emit migrations across racks even when RF=#racks,
reducing availability and affecting consistency of base-view pairing.
Given replica set of sibling tablets T0 and T1 below:
[T0: (rack1,rack3,rack2)]
[T1: (rack2,rack1,rack3)]
Merge will co-locate T1:rack2 into T0:rack1, so T1 will temporarily be
at only a subset of racks, reducing availability.
This is the main problem fixed by this patch.
It also lays the ground for consistent base-view replica pairing,
which is rack-based. For tables on which views can be created we plan
to enforce the constraint that replicas don't move across racks and
that all tablets use the same set of racks (RF=#racks). This patch
avoids moving replicas across racks unless it's necessary, so if the
constraint is satisfied before merge, there will be no co-locating
migrations across racks. This constraint of RF=#racks is not enforced
yet, it requires more extensive changes.
Fixes #22994.
Refs #17265.
This patch is based on Raphael's work done in PR #23081. The main differences are:
1) Instead of sorting replicas by rack, we try to find
replicas in sibling tablets which belong to the same rack.
This is similar to how we match replicas within the same host.
It reduces the number of across-rack migrations even if RF != #racks,
which the original patch didn't handle.
Unlike the original patch, it also avoids rack overload in case
RF != #racks.
2) We emit across-rack co-locating migrations if we have no other choice
in order to finalize the merge
This is ok, since views are not supported with tablets yet. Later,
we will disallow this for tables which have views, and we will
allow creating views in the first place only when no such migrations
can happen (RF=#racks).
3) Added boost unit test which checks that rack overload is avoided during merge
in case RF<#racks
4) Moved logging of across-rack migration to debug level
5) Exposed metric for across-rack co-locating migrations
(cherry picked from commit af949f3b6a)
Also backports dependent patches:
- locator: network_topology_strategy: Fix SIGSEGV when creating a table when there is a rack with no normal nodes
- locator: network_topology_strategy: Ignore leaving nodes when computing capacity for new tables
- Merge 'test: tablets_test: Create proper schema in load balancer tests' from Tomasz Grabiec
Closes scylladb/scylladb#22657
Closes scylladb/scylladb#22652
Closes scylladb/scylladb#23297
* github.com:scylladb/scylladb:
service: Introduce rack-aware co-location migrations for tablet merge
Merge 'test: tablets_test: Create proper schema in load balancer tests' from Tomasz Grabiec
locator: network_topology_strategy: Ignore leaving nodes when computing capacity for new tables
locator: network_topology_strategy: Fix SIGSEGV when creating a table when there is a rack with no normal nodes
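The rack-matching idea described above can be sketched in a few lines of plain C++ (types and names invented for illustration, not Scylla code): index the target tablet's replicas by rack and pair each sibling replica with a same-rack target, falling back to an across-rack migration only when no same-rack candidate exists.
```
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct replica { std::string host; std::string rack; };

int main() {
    // Sibling tablets from the example above.
    std::vector<replica> t0 = {{"h1","rack1"},{"h3","rack3"},{"h2","rack2"}};
    std::vector<replica> t1 = {{"h4","rack2"},{"h5","rack1"},{"h6","rack3"}};

    // Index T0 replicas by rack.
    std::map<std::string, replica> t0_by_rack;
    for (auto& r : t0) t0_by_rack.emplace(r.rack, r);

    for (auto& src : t1) {
        auto it = t0_by_rack.find(src.rack);
        if (it != t0_by_rack.end()) {
            // Same-rack pairing: co-location needs no cross-rack move.
            std::cout << src.host << " (" << src.rack << ") -> same-rack target "
                      << it->second.host << "\n";
        } else {
            // Only without a same-rack candidate do we emit an across-rack
            // migration, which is needed to finalize the merge.
            std::cout << src.host << " -> across-rack migration\n";
        }
    }
}
```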
In commit c24bc3b we decided that creating a new table in Alternator
will by default use vnodes - not tablets - because of all the missing
features in our tablets implementation that are important for
Alternator, namely - LWT, CDC and Alternator TTL.
We never documented this, or the fact that we support a tag
`experimental:initial_tablets` which allows overriding this decision
and creating an Alternator table using tablets. We also never documented
what exactly doesn't work when Alternator uses tablets.
This patch adds the missing documentation in docs/alternator/new-apis.md
(which is a good place for describing the `experimental:initial_tablets`
tag). The patch also adds a new test file, test_tablets.py, which
includes tests for all the statements made in the document regarding
how `experimental:initial_tablets` works and what works or doesn't
work when tablets are enabled.
Two existing tests - for TTL and Streams non-support with tablets -
are moved to the new test file.
When the tablets feature is finally completed, both the document
and the tests will need to be modified (some of the tests should be
outright deleted). But it seems this will not happen for at least
several months, and that is too long to wait without accurate
documentation.
Fixes #21629
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#22462
(cherry picked from commit c0821842de)
Closes scylladb/scylladb#23298
Merge co-location can emit migrations across racks even when RF=#racks,
reducing availability and affecting consistency of base-view pairing.
Given replica set of sibling tablets T0 and T1 below:
[T0: (rack1,rack3,rack2)]
[T1: (rack2,rack1,rack3)]
Merge will co-locate T1:rack2 into T0:rack1, so T1 will temporarily be
at only a subset of racks, reducing availability.
This is the main problem fixed by this patch.
It also lays the ground for consistent base-view replica pairing,
which is rack-based. For tables on which views can be created we plan
to enforce the constraint that replicas don't move across racks and
that all tablets use the same set of racks (RF=#racks). This patch
avoids moving replicas across racks unless it's necessary, so if the
constraint is satisfied before merge, there will be no co-locating
migrations across racks. This constraint of RF=#racks is not enforced
yet, it requires more extensive changes.
Fixes #22994.
Refs #17265.
This patch is based on Raphael's work done in PR #23081. The main differences are:
1) Instead of sorting replicas by rack, we try to find
replicas in sibling tablets which belong to the same rack.
This is similar to how we match replicas within the same host.
It reduces the number of across-rack migrations even if RF != #racks,
which the original patch didn't handle.
Unlike the original patch, it also avoids rack overload in case
RF != #racks.
2) We emit across-rack co-locating migrations if we have no other choice
in order to finalize the merge
This is ok, since views are not supported with tablets yet. Later,
we will disallow this for tables which have views, and we will
allow creating views in the first place only when no such migrations
can happen (RF=#racks).
3) Added boost unit test which checks that rack overload is avoided during merge
in case RF<#racks
4) Moved logging of across-rack migration to debug level
5) Exposed metric for across-rack co-locating migrations
(cherry picked from commit af949f3b6a)
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Signed-off-by: Tomasz Grabiec <tgrabiec@scylladb.com>
This PR converts boost load balancer tests in preparation for load balancer changes
which add per-table tablet hints. After those changes, the load balancer consults
the replication strategy in the database, so we need to create a proper schema in the
database. To do that, we need proper topology for replication
strategies which use RF > 1, otherwise keyspace creation will fail.
Topology is created in tests via group0 commands, which is abstracted by
the new `topology_builder` class.
Tests cannot modify token_metadata only in memory now, as it needs to be
consistent with the schema and on-disk metadata. That's why modifications to
tablet metadata are now made under the group0 guard and saved back to disk.
Closes scylladb/scylladb#22648
* github.com:scylladb/scylladb:
test: tablets: Drop keyspace after do_test_load_balancing_merge_colocation() scenario
tests: tablets: Set initial tablets to 1 to exit growing mode
test: tablets_test: Create proper schema in load balancer tests
test: lib: Introduce topology_builder
test: cql_test_env: Expose topology_state_machine
topology_state_machine: Introduce lock transition
(cherry picked from commit 51a273401c)
For example, nodes which are being decommissioned should not be
considered as available capacity for new tables. We don't allocate
tablets on such nodes, so counting them would result in higher
per-shard load than planned.
Closes scylladb/scylladb#22657
(cherry picked from commit 3bb19e9ac9)
In that case, new_racks will be used, but when we discover no
candidates, we try to pop from existing_racks.
Fixes #22625
Closes scylladb/scylladb#22652
(cherry picked from commit e22e3b21b1)
If hosts and/or dcs filters are specified for tablet repair and
some replicas match these filters, choose the replica that will
be the repair master according to the round-robin principle
(currently it's always the first replica).
If hosts and/or dcs filters are specified for tablet repair and
no replica matches these filters, the repair succeeds and
the repair request is removed (currently an exception is thrown
and the tablet repair scheduler reschedules the repair forever).
Fixes: https://github.com/scylladb/scylladb/issues/23100.
Needs backport to 2025.1 that introduces hosts and dcs filters for tablet repair
- (cherry picked from commit 9bce40d917)
- (cherry picked from commit fe4e99d7b3)
- (cherry picked from commit 2b538d228c)
- (cherry picked from commit c40eaa0577)
- (cherry picked from commit c7c6d820d7)
Parent PR: #23101
Closes scylladb/scylladb#23109
* github.com:scylladb/scylladb:
test: add new cases to tablet_repair tests
test: extract repair check to function
locator: add round-robin selection of filtered replicas
locator: add tablet_task_info::selected_by_filters
service: finish repair successfully if no matching replica found
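A minimal sketch of the selection logic (plain C++, invented names, not the actual locator code): filter the replicas first; if nothing matches, succeed as a no-op; otherwise pick the repair master round-robin rather than always taking the first match.
```
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> replicas = {"h1", "h2", "h3"};
    // Stand-in for the hosts/dcs filters from the repair request.
    auto matches_filters = [](const std::string& h) { return h != "h2"; };

    std::vector<std::string> candidates;
    for (auto& h : replicas) if (matches_filters(h)) candidates.push_back(h);

    if (candidates.empty()) {
        // No matching replica: the repair finishes successfully as a no-op
        // and the request is removed, instead of being rescheduled forever.
        std::cout << "nothing to repair\n";
        return 0;
    }
    static std::size_t rr = 0; // round-robin cursor; the real scheduler keeps it across repairs
    for (int i = 0; i < 4; ++i) {
        std::cout << "repair " << i << ": master = "
                  << candidates[rr++ % candidates.size()] << "\n";
    }
}
```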
Currently, the tablet repair scheduler repairs all replicas of a tablet. It does not support hosts or DCs selection. It should be enough for most cases. However, users might still want to limit the repair to certain hosts or DCs in production. https://github.com/scylladb/scylladb/pull/21985 added the preparation work to add the config options for the selection. This patch adds the hosts or DCs selection support.
Fixes https://github.com/scylladb/scylladb/issues/22417
New feature. No backport is needed.
- (cherry picked from commit 4c75701756)
- (cherry picked from commit 5545289bfa)
- (cherry picked from commit 1c8a41e2dd)
- (cherry picked from commit e499f7c971)
Parent PR: #22621
Closes scylladb/scylladb#23080
* github.com:scylladb/scylladb:
test: add test to check dcs and hosts repair filter
test: add repair dc selection to test_tablet_metadata_persistence
repair: Introduce Host and DC filter support
docs: locator: update the docs and formatter of tablet_task_info
This commit adds documentation for zero-token nodes and an explanation
of how to use them to set up an arbiter DC to prevent a quorum loss
in multi-DC deployments.
The commit adds two documents:
- The one in Architecture describes zero-token nodes.
- The other in Cluster Management explains how to use them.
We need separate documents because zero-token nodes may be used
for other purposes in the future.
In addition, the documents are cross-linked, and the link is added
to the Create a ScyllaDB Cluster - Multi Data Centers (DC) document.
Refs https://github.com/scylladb/scylladb/pull/19684
Fixes https://github.com/scylladb/scylladb/issues/20294
Closes scylladb/scylladb#21348
(cherry picked from commit 9ac0aa7bba)
Closes scylladb/scylladb#23201
Add supervisor::notify calls before starting significant services
that didn't have a corresponding call before them.
Fixes #23153
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit b6705ad48b)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
To simplify aborting scylla while starting the services,
add a _ready state to stop_signal, so that until
main is ready to be stopped by the abort_source,
we just register that the signal was caught, and
let a check() method poll for it and only then request abort
and throw the respective exception, at controlled
points in between starting services,
after each service has started successfully and a deferred
stop action was installed.
Refs #23153
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit feef7d3fa1)
The `prometheus_server` is started only conditionally
but the notification message is sent and logged
unconditionally.
Move it inside the conditional code block.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 282ff344db)
It is now logged out of place, so move it to right before
calling `start` on every database shard.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 23433f593c)
This commit adds a link to the Limitations section on the Tablets page
to the CQL page, under the tablets option.
This is actually the place where the user will need the information:
when creating a keyspace.
In addition, I've reorganized the section for better readability
(otherwise, the section about limitations was easy to miss)
and moved the section up on the page.
Note that I've moved the updated content from the `_common` folder
(which I deleted) to the .rst page - we no longer split OSS and Enterprise,
so there's no need to keep using the `scylladb_include_flag` directive
to include OSS- and Ent-specific content.
Fixes https://github.com/scylladb/scylladb/issues/22892
Fixes https://github.com/scylladb/scylladb/issues/22940
Closes scylladb/scylladb#22939
(cherry picked from commit 0999fad279)
Closes scylladb/scylladb#23091
If hosts and/or dcs filters are specified for tablet repair and
no replica matches these filters, an exception is thrown. The repair
fails and tablet repair scheduler reschedules it forever.
Such a repair should actually succeed (as all specified replicas were
repaired) and the repair request should be removed.
Treat the repair as successful if the filters were specified and
selected no replica.
(cherry picked from commit 9bce40d917)
It is possible that the permit handed in to register_inactive_read() is
already aborted (currently only possible if the permit timed out).
If the permit also happens to be waiting for memory, the current code
will attempt to call promise<>::set_exception() on the permit's promise
to abort its waiters. But if the permit was already aborted via timeout,
this promise will already have an exception and this will trigger an
assert. Add a separate case for checking if the permit is aborted
already. If so, treat it as immediate eviction: close the reader and
clean up.
Fixes: scylladb/scylladb#22919
(cherry picked from commit 7ba29ec46c)
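The failure mode can be demonstrated with std::promise (Scylla uses seastar's promise, but the rule that an exception may be set only once is the same), so an already-aborted permit must be detected before the code tries to abort it again:
```
#include <future>
#include <iostream>
#include <stdexcept>

int main() {
    std::promise<void> p;
    // The permit was already aborted via timeout: the promise holds an exception.
    p.set_exception(std::make_exception_ptr(std::runtime_error("timed out")));
    try {
        // Aborting it a second time is an error.
        p.set_exception(std::make_exception_ptr(std::runtime_error("aborted")));
    } catch (const std::future_error& e) {
        std::cout << "second set_exception fails: " << e.what() << "\n";
    }
}
```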
Unless the test in question actually wants to test timeouts. Timeouts
will have more pronounced consequences soon and thus using
db::timeout_clock::now() becomes a sure way to make tests flaky.
To avoid this, use db::no_timeout in the tests that don't care about
timeouts.
(cherry picked from commit 4d8eb02b8d)
This commit adds the upgrade guides relevant in version 2025.1:
- From 6.2 to 2025.1
- From 2024.x to 2025.1
It also removes the upgrade guides that are not relevant to the 2025.1 source-available release:
- Open Source upgrade guides
- From Open Source to Enterprise upgrade guides
- Links to the Enterprise upgrade guides
Also, as part of this PR, the remaining relevant content has been moved to
the new About Upgrade page.
WHAT NEEDS TO BE REVIEWED
- Review the instructions in the 6.2-to-2025.1 guide
- Review the instructions in the 2024.x-to-2025.1 guide
- Verify that there are no references to Open Source and Enterprise.
The scope of this PR does not have to include metrics - the info can be added
in a follow-up PR.
Fixes https://github.com/scylladb/scylladb/issues/22208
Fixes https://github.com/scylladb/scylladb/issues/22209
Fixes https://github.com/scylladb/scylladb/issues/23072
Fixes https://github.com/scylladb/scylladb/issues/22346
Closes scylladb/scylladb#22352
(cherry picked from commit 850aec58e0)
Closes scylladb/scylladb#23106
Currently, the tablet repair scheduler repairs all replicas of a tablet.
It does not support hosts or DCs selection. It should be enough for most
cases. However, users might still want to limit the repair to certain
hosts or DCs in production. #21985 added the preparation work to add the
config options for the selection. This patch adds the hosts or DCs
selection support.
Fixes #22417
(cherry picked from commit 5545289bfa)
In some cases the paused/unpaused node can remain hung past the 30s timeout.
This makes the test flaky. Change the condition to always check the
coordinator's log if there is a hung node.
Add `stop_after_streaming` to the list of error injections which can
cause a node's hang.
Also add a wait for a new coordinator election in cluster events
which cause such elections.
Closes scylladb/scylladb#22825
(cherry picked from commit 99be9ac8d8)
Closes scylladb/scylladb#23007
This commit removes the variable _manage_topology_change_kind_from_group0,
which was used earlier as a workaround for correctly handling the
topology_change_kind variable; it was brittle and had some bugs. Earlier commits
made the modifications needed to handle the topology_change_kind variable
after the _manage_topology_change_kind_from_group0 removal.
(cherry picked from commit d7884cf651)
Previously, the topology_change_kind variable was handled using the
_manage_topology_change_kind_from_group0 variable. This method was brittle
and had some bugs (e.g. in the restart case, it led to a time gap between group0
server start and topology_change_kind being managed via group0).
Post _manage_topology_change_kind_from_group0 removal, careful management of
the topology_change_kind variable was needed to maintain a correct
topology_change_kind in all scenarios. So this PR also performs a refactoring
to populate all init data to system tables even before group0 creation (via
the raft_initialize_discovery_leader function). Because raft_initialize_discovery_leader
now happens before group 0 creation, we write mutations directly to system
tables instead of using a group 0 command. Hence, post group0 creation, the node
can read the correct values from system tables and correct values are
maintained throughout.
Added a new function initialize_done_topology_upgrade_state which takes
care of updating the correct upgrade state to system tables before starting
group0 server. This ensures that the node can read the correct values from
system tables and correct values are maintained throughout.
By moving the raft_initialize_discovery_leader logic to happen before starting
the group0 server, and not as a group0 command post server start, we also get rid
of the potential problem of the init group0 command not being the first command on
the server, ensuring the full integrity expected by the programmer.
Fixes: scylladb/scylladb#21114
(cherry picked from commit e491950c47)
The failure may happen during normal operation as well (for instance if
leader changes).
Fixes: scylladb/scylladb#22364
(cherry picked from commit fe45ea505b)
During topology state application the peers table may be updated with the
new ip->id mapping. The update is not atomic: it adds the new mapping and
then removes the old one. If we call get_host_id_to_ip_map while this is
happening it may trigger an internal error there. This is a regression
since ef929c5def. Before that commit the
code read the peers table only once before starting the update loop.
This patch restores the behaviour.
Fixes: scylladb/scylladb#22578
(cherry picked from commit 1da7d6bf02)
In this series we implement the UpdateTable operation to add a GSI to an existing table, or remove a GSI from a table. As the individual commit messages explain, this required changing how Alternator stores materialized view keys - instead of insisting that these keys must be real columns (that is **not** the case when adding a GSI to an existing table), the materialized view can now take as its key any Alternator attribute serialized inside the ":attrs" map holding all non-key attributes. Fixes #11567.
We also fix the IndexStatus and Backfilling attributes returned by DescribeTable - as DynamoDB API users use this API to discover when a newly added GSI completed its "backfilling" (what we call "view building") stage. Fixes #11471.
This series should not be backported lightly - it's a new feature and required fairly large and intrusive changes that can introduce bugs to use cases that don't even use Alternator or its UpdateTable operations - every user of CQL materialized views or secondary indexes, as well as Alternator GSI or LSI, will use modified code. **It should be backported to 2025.1**, though - this version was actually branched long after this PR was sent, and it provides a feature that was promised for 2025.1.
Closes scylladb/scylladb#21989
* github.com:scylladb/scylladb:
alternator: fix view build on oversized GSI key attribute
mv: clean up do_delete_old_entry
test/alternator: unflake test for IndexStatus
test/alternator: work around unrelated bug causing test flakiness
docs/alternator: adding a GSI is no longer an unimplemented feature
test/alternator: remove xfail from all tests for issue 11567
alternator: overhaul implementation of GSIs and support UpdateTable
mv: support regular_column_transformation key columns in view
alternator: add new materialized-view computed column for item in map
build: in cmake build, schema needs alternator
build: build tests with Alternator
alternator: add function serialized_value_if_type()
mv: introduce regular_column_transformation, a new type of computed column
alternator: add IndexStatus/Backfilling in DescribeTable
alternator: add "LimitExceededException" error type
docs/alternator: document two more unimplemented Alternator features
(cherry picked from commit 529ff3efa5)
Closes scylladb/scylladb#22826
The test
topology_custom/test_alternator::test_localnodes_broadcast_rpc_address
sets up nodes with a silly "broadcast rpc address" and checks that
Alternator's "/localnodes" request returns it correctly.
The problem is that although we don't use CQL in this test, the test
framework does open a CQL connection when the test starts, and closes
it when it ends. It turns out that when we set a silly "broadcast RPC
address", the driver tends to try to connect to it when shutting down,
I'm not even sure why. But the choice of the silly address was 1.2.3.4
is unfortunate, because this IP address is actually routable - and
the driver hangs until it times out (in practice, in a bit over two
minutes). This trivial patch changes 1.2.3.4 to 127.0.0.0 - and equally
silly address but one to which connections fail immediately.
Before this patch, the test often takes more than 2 minutes to finish
on my laptop, after this patch, it always finishes in 4-5 seconds.
Fixes #22744
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#22746
(cherry picked from commit f89235517d)
Closes scylladb/scylladb#22875
This PR addresses two related issues in our task system:
1. Prepares for asynchronous resource cleanup by converting release_resources() to a coroutine. This refactoring enables future improvements in how we handle resource cleanup.
2. Fixes a cross-shard resource cleanup issue in the SSTable loader where destruction of per-shard progress elements could trigger "shared_ptr accessed on non-owner cpu" errors in multi-shard environments. The fix uses coroutines to ensure resources are released on their owner shards.
Fixes #22759
---
this change addresses a regression introduced by d815d7013c, which is contained in the 2025.1 and master branches, so it should be backported to the 2025.1 branch.
- (cherry picked from commit 4c1f1baab4)
- (cherry picked from commit b448fea260)
Parent PR: #22791
Closes scylladb/scylladb#22871
* github.com:scylladb/scylladb:
sstable_loader: fix cross-shard resource cleanup in download_task_impl
tasks: make release_resources() a coroutine
Set true to wait for the repair to complete. Set false to skip waiting
for the repair to complete. When the option is not provided, it defaults
to false.
It is useful for a management tool that wants the API to be async.
Fixes #22418
Closes scylladb/scylladb#22436
(cherry picked from commit fb318d0c81)
Closes scylladb/scylladb#22851
We are supposed to be loading the most recent RPC compression dictionary on startup, but we forgot to port the relevant piece of logic during the source-available port. This causes a restarted node not to use the dictionary for RPC compression until the next dictionary update.
Fix that.
Fixes#22738
This is more of a bugfix than an improvement, so it should be backported to 2025.1.
* (cherry picked from commit dd82b40186)
* (cherry picked from commit 8fb2ea61ba)
Additionally cherry picked https://github.com/scylladb/scylladb/pull/22836 to fix the timeout.
Parent PR: #22739
Closes scylladb/scylladb#22837
* github.com:scylladb/scylladb:
test_rpc_compression.py: fix an overly-short timeout
test_rpc_compression.py: test the dictionaries are loaded on startup
raft/group0_state_machine: load current RPC compression dict on startup
The code currently assumes that a session has both sender and receiver
streams, but it is possible to have just one or the other.
Change the test to include this scenario and remove this assumption from
the code.
Fixes: #22770
Closes scylladb/scylladb#22771
(cherry picked from commit 87e8e00de6)
Closes scylladb/scylladb#22874
We need to allow replacing nodetool from scylla-enterprise-tools < 2024.2,
just like we did for scylla-tools < 5.5.
This is required to make packages able to upgrade from 2024.1.
Fixes #22820
Closes scylladb/scylladb#22821
(cherry picked from commit b5e306047f)
Closes scylladb/scylladb#22867
Previously, download_task_impl's destructor would destroy per-shard progress
elements on whatever shard the task was destroyed on. In multi-shard
environments, this caused "shared_ptr accessed on non-owner cpu" errors when
attempting to free memory allocated on a different shard.
Fix by:
- Convert progress_per_shard into a sharded service
- Stop the service on owner shards during cleanup using coroutines
- Add operator+= to stream_progress to leverage seastar's built-in adder
instead of a custom adder struct
Alternative approaches considered:
1. Using foreign_ptr: Rejected as it would require interface changes
that complicate stream delegation. foreign_ptr manages the underlying
pointee with another smart pointer but does not expose the smart
pointer instance in its APIs, making it impossible to use
shared_ptr<stream_progress> in the interface.
2. Using vector<stream_progress>: Rejected for similar interface
compatibility reasons.
This solution maintains the existing interfaces while ensuring proper
cross-shard cleanup.
Fixes scylladb/scylladb#22759
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
(cherry picked from commit b448fea260)
Convert tasks::task_manager::task::impl::release_resources() to a coroutine
to prepare for upcoming changes that will implement asynchronous resource
release.
This is a preparatory refactoring that enables future coroutine-based
implementation of resource cleanup logic.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
(cherry picked from commit 4c1f1baab4)
`set_notify_handler()` is called after a querier was inserted into the querier cache. It has two purposes: set a callback for eviction and set a TTL for the cache entry. The latter was not disabling the pre-existing timeout of the permit (if any), and this would lead to premature eviction of the cache entry if the timeout was shorter than the TTL (which is typical).
Disable the timeout before setting the TTL to prevent premature eviction.
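The effect reduces to simple arithmetic (numbers made up): the cached entry used to live until min(permit timeout, TTL), so a typical short read timeout evicted it early.
```
#include <algorithm>
#include <iostream>

int main() {
    const int permit_timeout_ms = 5'000;   // read timeout, still armed before the fix
    const int ttl_ms = 30'000;             // intended cache entry lifetime

    int broken_lifetime = std::min(permit_timeout_ms, ttl_ms);  // timeout wins
    int fixed_lifetime = ttl_ms;           // timeout disabled before arming TTL

    std::cout << "broken: evicted after " << broken_lifetime << " ms\n"
              << "fixed:  evicted after " << fixed_lifetime << " ms\n";
}
```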
Fixes: https://github.com/scylladb/scylladb/issues/22629
Backport required to all active releases, they are all affected.
- (cherry picked from commit a3ae0c7cee)
- (cherry picked from commit 9174f27cc8)
Parent PR: #22701
Closes scylladb/scylladb#22752
* github.com:scylladb/scylladb:
reader_concurrency_semaphore: set_notify_handler(): disable timeout
reader_permit: mark check_abort() as const
As of right now, materialized views (and consequently secondary
indexes), lwt and counters are unsupported or experimental with tablets.
Since tablets are enabled by default, training cases using those
features are currently broken.
The right thing to do here is to disable tablets in those cases.
Fixes https://github.com/scylladb/scylladb/issues/22638
Closes scylladb/scylladb#22661
(cherry picked from commit bea434f417)
Closes scylladb/scylladb#22808
This config item controls how many CPU-bound reads are allowed to run in
parallel. The effective concurrency of a single CPU core is 1, so
allowing more than one CPU-bound read to run concurrently will just
result in time-sharing and both reads having higher latency.
However, restricting concurrency to 1 means that a CPU-bound read that
takes a lot of time to complete can block other quick reads while it is
running. Increase this default setting to 2 as a compromise between not
over-using time-sharing and not allowing such slow reads to block the
queue behind them.
Fixes: #22450
Closes scylladb/scylladb#22679
(cherry picked from commit 3d12451d1f)
Closes scylladb/scylladb#22722
On short-pages, cut short because of a tombstone prefix.
When page-results are filtered and the filter drops some rows, the
last-position is taken from the page visitor, which does the filtering.
This means that last partition and row position will be that of the last
row the filter saw. This will not match the last position of the
replica, when the replica cut the page due to tombstones.
When fetching the next page, this means that the entire tombstone suffix
of the last page will be re-fetched. Worse still: the last position of the
next page will not match that of the saved reader left on the replica, so
the saved reader will be dropped and a new one created from scratch.
This wasted work will show up as elevated tail latencies.
Fix by always taking the last position from raw query results.
Fixes: #22620
Closes scylladb/scylladb#22622
(cherry picked from commit 7ce932ce01)
Closes scylladb/scylladb#22719
When a replica gets a write request it performs get_schema_for_write,
which waits until the schema is synced. However, database::add_column_family
marks a schema as synced before the table is added. Hence, the write may
see the schema as synced, but hit no_such_column_family as the table
hasn't been added yet.
Mark schema as synced after the table is added to database::_tables_metadata.
Fixes: #22347.
Closes scylladb/scylladb#22348
(cherry picked from commit 328818a50f)
Closes scylladb/scylladb#22604
Fixes #22401
In the fix for scylladb/scylla-enterprise#892, the extraction and check for sstable component encryption mask was copied
to a subroutine for description purposes, but a very important 1 << <value> shift was somehow
left on the floor.
Without it, the check for whether we actually contain an encrypted
component can be wholly broken for some components.
Closes scylladb/scylladb#22398
(cherry picked from commit 7db14420b7)
Closes scylladb/scylladb#22599
Currently, when the status of a task is queried and the task is already finished,
it gets unregistered. Getting the status shouldn't be a one-time operation.
Stop removing the task after its status is queried. Adjust tests not to rely
on this behavior. Add task_manager/drain API and nodetool tasks drain
command to remove finished tasks in the module.
Fixes: https://github.com/scylladb/scylladb/issues/21388.
It's a fix to task_manager API, should be backported to all branches
- (cherry picked from commit e37d1bcb98)
- (cherry picked from commit 18cc79176a)
Parent PR: #22310
Closes scylladb/scylladb#22598
* github.com:scylladb/scylladb:
api: task_manager: do not unregister tasks on get_status
api: task_manager: add /task_manager/drain
`tablet_storage_group_manager::all_storage_groups_split()` calls `set_split_mode()` for each of its storage groups to create split ready compaction groups. It does this by iterating through storage groups using `std::ranges::all_of()` which is not guaranteed to iterate through the entire range, and will stop iterating on the first occurrence of the predicate (`set_split_mode()`) returning false. `set_split_mode()` creates the split compaction groups and returns false if the storage group's main compaction group or merging groups are not empty. This means that in cases where the tablet storage group manager has non-empty storage groups, we could have a situation where split compaction groups are not created for all storage groups.
The missing split compaction groups are later created in `tablet_storage_group_manager::split_all_storage_groups()` which also calls `set_split_mode()`, and that is the reason why split completes successfully. The problem is that
`tablet_storage_group_manager::all_storage_groups_split()` runs under a group0 guard, but
`tablet_storage_group_manager::split_all_storage_groups()` does not. This can cause problems with operations which should be mutually exclusive with compaction group creation, i.e. DROP TABLE/DROP KEYSPACE.
Fixes #22431
This is a bugfix and should be backported to versions with tablets: 6.1, 6.2 and 2025.1.
- (cherry picked from commit 24e8d2a55c)
- (cherry picked from commit 8bff7786a8)
Parent PR: #22330
Closes scylladb/scylladb#22560
* github.com:scylladb/scylladb:
test: add reproducer and test for fix to split ready CG creation
table: run set_split_mode() on all storage groups during all_storage_groups_split()
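A small self-contained C++ sketch (not the Scylla code) shows why `std::ranges::all_of()` is the wrong tool when the predicate has side effects: it short-circuits on the first false, so later storage groups would never get their split compaction groups.
```
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // 0 models a storage group whose main/merging compaction groups are
    // non-empty, for which set_split_mode() returns false.
    std::vector<int> groups{1, 0, 1, 1};
    int visited = 0;
    auto set_split_mode = [&](int g) { ++visited; return g != 0; };

    bool all = std::ranges::all_of(groups, set_split_mode);
    std::cout << "all_of: result " << all << ", visited " << visited
              << " of " << groups.size() << "\n";        // visited 2 of 4

    // The fix: visit every storage group, then combine the results.
    visited = 0;
    bool done = true;
    for (int g : groups) {
        done &= set_split_mode(g);   // no short-circuit; side effect always runs
    }
    std::cout << "loop:   result " << done << ", visited " << visited
              << " of " << groups.size() << "\n";        // visited 4 of 4
}
```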
Since mid December, tests started failing with ENOMEM while
submitting I/O requests.
Logs of failed tests show IO uring was used as the backend, but we
never deliberately switched to IO uring. Investigation pointed
to it happening accidentally in commit 1bac6b75dc,
which turned on IO uring to allow the native tool in production,
and picked the linux-aio backend explicitly when initializing Scylla.
But it missed that seastar-based tests would pick the default
backend, which is io_uring once enabled.
There's a reason we never made io_uring the default: it's not
stable enough. It turns out we made the right choice back then,
as it apparently continues to be unstable,
causing flakiness in the tests.
Let's undo that accidental change in tests by explicitly
picking the linux-aio backend for seastar-based tests.
This should hopefully bring back stability.
Refs #21968.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#22695
(cherry picked from commit ce65164315)
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#22800
We are supposed to be loading the most recent RPC compression dictionary
on startup, but we forgot to port the relevant piece of logic during
the source-available port.
(cherry picked from commit dd82b40186)
In a few test cases of test_view_build_status we create a view, wait for
it and then query the view_build_status table and expect it to have all
rows for each node and view.
But it may fail because it could happen that the wait_for_view query and
the following queries are done on different nodes, and some of the nodes
didn't apply all the table updates yet, so they have missing rows.
To fix it, we change the assert to work in the eventual consistency
sense, retrying until the number of rows is as expected.
Fixes scylladb/scylladb#22644
Closes scylladb/scylladb#22654
(cherry picked from commit c098e9a327)
Closes scylladb/scylladb#22780
When upgrading, for example from `2024.1` to `2025.1`, the package name is
not identical, causing the upgrade command to fail:
```
Command: 'sudo DEBIAN_FRONTEND=noninteractive apt-get dist-upgrade scylla -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
Exit code: 100
Stdout:
Selecting previously unselected package scylla.
Preparing to unpack .../6-scylla_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb ...
Unpacking scylla (2025.1.0~dev-0.20250118.1ef2d9d07692-1) ...
Errors were encountered while processing:
/tmp/apt-dpkg-install-JbOMav/0-scylla-conf_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
/tmp/apt-dpkg-install-JbOMav/1-scylla-python3_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
/tmp/apt-dpkg-install-JbOMav/2-scylla-server_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
/tmp/apt-dpkg-install-JbOMav/3-scylla-kernel-conf_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
/tmp/apt-dpkg-install-JbOMav/4-scylla-node-exporter_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
/tmp/apt-dpkg-install-JbOMav/5-scylla-cqlsh_2025.1.0~dev-0.20250118.1ef2d9d07692-1_amd64.deb
Stderr:
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
Add `Obsoletes` (for rpm) and `Replaces` (for deb) to fix this.
Fixes: https://github.com/scylladb/scylladb/issues/22420
Closes scylladb/scylladb#22457
(cherry picked from commit 93f53f4eb8)
Closes scylladb/scylladb#22753
set_notify_handler() is called after a querier was inserted into the
querier cache. It has two purposes: set a callback for eviction and set
a TTL for the cache entry. The latter was not disabling the
pre-existing timeout of the permit (if any) and this would lead to
premature eviction of the cache entry if the timeout was shorter than
the TTL (which is typical).
Disable the timeout before setting the TTL to prevent premature
eviction.
Fixes: scylladb/scylladb#22629
(cherry picked from commit 9174f27cc8)
Currently, the session ID under which the truncate for tablets request is
running is created during the request creation and queuing. This is a problem
because this could overwrite the session ID of any ongoing operation on
system.topology#session
This change moves the creation of the session ID for truncate from the request
creation to the request handling.
Fixes #22613
Closes scylladb/scylladb#22615
(cherry picked from commit a59618e83d)
Closes scylladb/scylladb#22705
with_permit() creates a permit, with a self-reference, to avoid
attaching a continuation to the permit's run function. This
self-reference is used to keep the permit alive, until the execution
loop processes it. This self reference has to be carefully cleared on
error-paths, otherwise the permit will become a zombie, effectively
leaking memory.
Instead of trying to handle all loose ends, get rid of this
self-reference altogether: ask the caller to provide a place to save the
permit, where it will survive until the end of the call. This makes the
call-site a little bit less nice, but it gets rid of a whole class of
possible bugs.
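A plain-C++ sketch of the bug class (invented `permit` type, not the Scylla one): a self-referencing shared_ptr that an error path forgets to clear keeps the object alive forever, while caller-provided storage cannot leak that way.
```
#include <iostream>
#include <memory>

struct permit {
    std::shared_ptr<permit> self; // keeps *this alive until the run loop clears it
    ~permit() { std::cout << "permit destroyed\n"; }
};

int main() {
    {
        auto p = std::make_shared<permit>();
        p->self = p;   // self-reference taken before queuing
        // ... an error path returns here without clearing p->self ...
    }                  // refcount never reaches 0: zombie, destructor never runs

    {
        std::shared_ptr<permit> caller_slot;      // caller-provided storage
        caller_slot = std::make_shared<permit>(); // permit survives via the caller
    }                  // caller_slot goes out of scope: destructor runs once
}
```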
Fixes: #22588
Closes scylladb/scylladb#22624
(cherry picked from commit f2d5819645)
Closes scylladb/scylladb#22704
Currently, when the tablet repair is started, info regarding
the operation is kept in the system.tablets. The new tablet states
are reflected in memory after load_topology_state is called.
Before that, the data in the table and the memory aren't consistent.
To check the supported operations, tablet_virtual_task uses in-memory
tablet_metadata. Hence, it may not see the operation, even though
its info is already kept in the system.tablets table.
Run read barrier in tablet_virtual_task::contains to ensure it will
see the latest data. Add a test to check it.
Fixes: #21975.
Closes scylladb/scylladb#21995
(cherry picked from commit 610a761ca2)
Closes scylladb/scylladb#22694
If start_time/end_time is unspecified for a task, task_manager API
returns epoch. Nodetool prints the value in task status.
Fix nodetool tasks commands to print empty string for start_time/end_time
if it isn't specified.
Modify nodetool tasks status docs to show empty end_time.
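A minimal sketch of the nodetool-side logic (field name and formatting invented for illustration): treat the epoch value reported for an unspecified time as "not set" and print an empty string.
```
#include <iostream>
#include <string>

// API reports milliseconds since epoch; 0 models "unspecified".
std::string format_time(long long millis_since_epoch) {
    if (millis_since_epoch == 0) {
        return ""; // unspecified: don't print the epoch as a date
    }
    return std::to_string(millis_since_epoch); // real date formatting elided
}

int main() {
    std::cout << "start_time: [" << format_time(1700000000000LL) << "]\n";
    std::cout << "end_time:   [" << format_time(0) << "]\n"; // empty, not 1970-01-01
}
```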
Fixes: #22373.
Closes scylladb/scylladb#22370
(cherry picked from commit 477ad98b72)
Closes scylladb/scylladb#22601
- To make Scylla able to run in a FIPS-compliant system, add .hmac files for
crypto libraries on relocatable/rpm/deb packages.
- Currently we just write the hmac value to *.hmac files, but there is a new
.hmac file format, something like this:
```
[global]
format-version = 1
[lib.xxx.so.yy]
path = /lib64/libxxx.so.yy
hmac = <hmac>
```
It seems GnuTLS rejects the FIPS self-test on .libgnutls.so.30.hmac when
the file format is the older one.
Since we need an absolute path in the "path" directive, we need to generate
.libgnutls.so.30.hmac in the older format in create-relocatable-script.py.
Fixes scylladb/scylladb#22573
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Closes scylladb/scylladb#22384
(cherry picked from commit fb4c7dc3d8)
Closes scylladb/scylladb#22587
Materialized views with tablets are not stable yet, but we want
them available as an experimental feature, mainly for testing.
The feature was added in scylladb/scylladb#21833,
but currently it has no effect. All tests have been updated to use the
feature, so we should finally make it work.
This patch prevents users from creating materialized views in keyspaces
using tablets when the VIEWS_WITH_TABLETS feature is not enabled - such
requests will now get rejected.
Fixes scylladb/scylladb#21832
Closes scylladb/scylladb#22217
(cherry picked from commit 677f9962cf)
Closes scylladb/scylladb#22659
The test expects and asserts that after wait_for_view is completed we
read the view_build_status table and get a row for each node and view.
But this is wrong because wait_for_view may have read the table on one
node, and then we query the table on a different node that didn't insert
all the rows yet, so the assert could fail.
To fix it we change the test to retry and check that eventually all
expected rows are found and then eventually removed on the same host.
Fixes scylladb/scylladb#22547
Closes scylladb/scylladb#22585
(cherry picked from commit 44c06ddfbb)
Closes scylladb/scylladb#22608
The view builder builds a view by going over the entire token ring,
consuming the base table partitions, and generating view updates for
each partition.
A view is considered built when we complete a full cycle of the
token ring. Suppose we start to build a view at a token F. We will
consume all partitions with tokens starting at F until the maximum
token, then go back to the minimum token and consume all partitions
until F, and then we detect that we pass F and complete building the
view. This happens in the view builder consumer in
`check_for_built_views`.
The problem is that we check if we pass the first token F with the
condition `_step.current_token() >= it->first_token` whenever we consume
a new partition or the current_token goes back to the minimum token.
But suppose that we don't have any partitions with a token greater than
or equal to the first token (this could happen if the partition with
token F was moved to another node for example), then this condition will never be
satisfied, and we don't detect correctly when we pass F. Instead, we
go back to the minimum token, building the same token ranges again,
in a possibly infinite loop.
To fix this we add another step when reaching the end of the reader's
stream. When this happens it means we don't have any more fragments to
consume until the end of the range, so we advance the current_token to
the end of the range, simulating a partition, and check for built views
in that range.
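A toy model of the completion check (plain C++, invented values, not the view builder's real code) shows both the old dead end and the end-of-stream fix:
```
#include <iostream>
#include <vector>

int main() {
    const long first_token = 90;          // token F, where the build started
    const long range_end = 100;           // end of the scanned token range
    // All partitions with token >= 90 have moved away: only these remain
    // after wrapping back to the minimum token.
    std::vector<long> partitions{10, 40, 70};

    bool built = false;
    for (long t : partitions) {
        if (t >= first_token) built = true;   // old check: never fires here
    }
    std::cout << "after consuming partitions: "
              << (built ? "built" : "not built, would loop forever") << "\n";

    // Fix: at the end of the reader's stream there are no more fragments up
    // to the end of the range, so act as if a partition were seen there.
    long current_token = range_end;
    if (current_token >= first_token) built = true;
    std::cout << "after end-of-stream check:  "
              << (built ? "built" : "not built") << "\n";
}
```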
Fixes scylladb/scylladb#21829
Closes scylladb/scylladb#22493
(cherry picked from commit 6d34125eb7)
Closes scylladb/scylladb#22607
Fixes #22236
If reading a file and not stopping on block bounds returned by `size()`, we could allow reading from (_file_size+<1-15>) (if crossing block boundary) and try to decrypt this buffer (last one).
Simplest example:
Actual data size: 4095
Physical file size: 4095 + key block size (typically 16)
Read from 4096: -> 15 bytes (padding) -> transform returns `_file_size` - `read offset` -> wraparound -> a rather larger number than we expected (not to mention the data in question is junk/zero).
Check on last block in `transform` would wrap around size due to us being >= file size (l).
Just do an early bounds check and return zero if we're past the actual data limit.
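The wraparound is easy to reproduce with unsigned arithmetic (numbers from the example above; the 8192 read size is invented):
```
#include <algorithm>
#include <cstdint>
#include <iostream>

int main() {
    const uint64_t data_size = 4095;       // actual (decrypted) data length
    const uint64_t read_offset = 4096;     // inside the 16-byte cipher padding

    // Old behavior: length computed as data_size - read_offset underflows.
    uint64_t broken = data_size - read_offset;
    std::cout << "broken length: " << broken << "\n";   // 18446744073709551615

    // Fix: early bounds check; reads at or past the data limit return 0.
    uint64_t fixed = read_offset >= data_size
        ? 0
        : std::min<uint64_t>(data_size - read_offset, 8192);
    std::cout << "fixed length:  " << fixed << "\n";    // 0
}
```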
- (cherry picked from commit e96cc52668)
- (cherry picked from commit 2fb95e4e2f)
Parent PR: #22395
Closes scylladb/scylladb#22583
* github.com:scylladb/scylladb:
encrypted_file_test: Test reads beyond decrypted file length
encrypted_file_impl: Check for reads on or past actual file length in transform
Currently, /task_manager/task_status_recursive/{task_id} and
/task_manager/task_status/{task_id} unregister the queried task if it
has already finished.
The status should not disappear after being queried. Do not unregister
finished task when its status or recursive status is queried.
(cherry picked from commit 18cc79176a)
In the following patches, get_status won't be unregistering finished
tasks. However, tests need the ability to drop a task, so that
they can work only with the tasks for operations that were
invoked by these tests.
Add /task_manager/drain/{module} to unregister all finished tasks
from the module. Add respective nodetool command.
(cherry picked from commit e37d1bcb98)
This is a manual backport of #22452
Truncate table for tablets is implemented as a global topology operation. However, it does not have a transition state associated with it, and performs the truncate logic in topology_coordinator::handle_global_request() while topology::tstate remains empty. This creates problems because topology::is_busy() uses transition_state to determine if the topology state machine is busy, and will return false even though a truncate operation is ongoing.
This change introduces a new topology transition topology::transition_state::truncate_table and moves the truncate logic to a new method topology_coordinator::handle_truncate_table(). This method is now called as a handler of the truncate_table transition state instead of a handler of the truncate_table global topology request.
Fixes #22552
Closes scylladb/scylladb#22557
* github.com:scylladb/scylladb:
truncate: trigger truncate logic from transition state instead of global request handler
truncate: add truncate_table transition state
Add a test to reproduce a bug in the read DMA API of
`encrypted_file_impl` (the file implementation for Encryption-at-Rest).
The test creates an encrypted file that contains padding, and then
attempts to read from an offset within the padding area. Although this
offset is invalid on the decrypted file, the `encrypted_file_impl` makes
no checks and proceeds with the decryption of padding data, which
eventually leads to bogus results.
Refs #22236.
Signed-off-by: Nikos Dragazis <nikolaos.dragazis@scylladb.com>
(cherry picked from commit 8f936b2cbc)
(cherry picked from commit 2fb95e4e2f)
Fixes #22236
If reading a file and not stopping on block bounds returned by `size()`, we could
allow reading from (_file_size+1-15) (block boundary) and try to decrypt this
buffer (last one).
Check on last block in `transform` would wrap around size due to us being >=
file size (l).
Simplest example:
Actual data size: 4095
Physical file size: 4095 + key block size (typically 16)
Read from 4096: -> 15 bytes (padding) -> transform returns _file_size - read offset
-> wraparound -> a rather larger number than we expected
(not to mention the data in question is junk/zero).
Just do an early bounds check and return zero if we're past the actual data limit.
v2:
* Moved check to a min expression instead
* Added lengthy comment
* Added unit test
v3:
* Fixed read_dma_bulk handling of short, unaligned read
* Added test for unaligned read
v4:
* Added another unaligned test case
(cherry picked from commit e96cc52668)
Since topology no longer contains IP addresses, there is no need to
re-create topology on an IP address change. Only the peers table has to be
updated. The series factors out the peers table update code from
sync_raft_topology_nodes() and calls it on topology and IP address
updates. As a side effect it fixes #22293, since topology loading now
does not require an IP to be present, so the assert that is triggered in
this bug is removed.
Fixes: scylladb/scylladb#22293
- (cherry picked from commit ef929c5def)
- (cherry picked from commit fbfef6b28a)
Parent PR: #22519
Closes scylladb/scylladb#22543
* github.com:scylladb/scylladb:
topology coordinator: do not update topology on address change
topology coordinator: split out the peer table update functionality from raft state application
Fixes #21993
Removes the configuration_encryptor mention from the docs.
The tool itself (java) is not included in the main branch's
java tools, thus there is nothing to remove there; only the words.
Closes scylladb/scylladb#22427
(cherry picked from commit bae5b44b97)
Closes scylladb/scylladb#22556
During raft upgrade, a node may gossip about a new CDC generation that
was propagated through raft. The node that receives the generation by
gossip may not have applied the raft update yet, and it will not find
the generation in the system tables. We should consider this error
non-fatal and retry to read until it succeeds or becomes obsolete.
Another issue is when we fail with a "fatal" exception and not retrying
to read, the cdc metadata is left in an inconsistent state that causes
further attempts to insert this CDC generation to fail.
What happens is we complete preparing the new generation by calling `prepare`,
we insert an empty entry for the generation's timestamp, and then we fail. The
next time we try to insert the generation, we skip inserting it because we see
that it already has an entry in the metadata and we determine that
there's nothing to do. But this is wrong, because the entry is empty,
and we should continue to insert the generation.
To fix it, we change `prepare` to return `true` when the entry already
exists but it's empty, indicating we should continue to insert the
generation.
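A toy sketch of the fixed `prepare` contract (shapes and names invented, not the real cdc metadata code): return "continue inserting" both when the entry is new and when it exists but is still empty.
```
#include <iostream>
#include <map>
#include <optional>
#include <string>

// ts -> generation data; nullopt models an entry inserted by `prepare`
// that was never filled because the node failed mid-way.
std::map<long, std::optional<std::string>> metadata;

// Returns true when the caller should go on inserting the generation.
bool prepare(long ts) {
    auto [it, inserted] = metadata.try_emplace(ts, std::nullopt);
    // Old behavior: `return inserted;` -- a leftover empty entry from a
    // failed attempt made every retry conclude there was nothing to do.
    return inserted || !it->second.has_value();
}

int main() {
    prepare(42);   // first attempt inserts the empty entry, then the node fails
    std::cout << std::boolalpha
              << "retry should insert: " << prepare(42) << "\n";  // true
}
```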
Fixes scylladb/scylladb#21227
Closes scylladb/scylladb#22093
(cherry picked from commit 4f5550d7f2)
Closes scylladb/scylladb#22546
This commit adds the OS support information for version 2025.1.
In addition, the OS support page is reorganized so that:
- The content is moved from the include page _common/os-support-info.rst
to the regular os-support.rst page. The include page was necessary
to document different support for OSS and Enterprise versions, so
we don't need it anymore.
- I skipped the entries for versions that won't be supported when 2025.1
is released: 6.1 and 2023.1.
- I moved the definition of "supported" to the end of the page for better
readability.
- I've renamed the index entry to "OS Support" to be shorter on the left menu.
Fixes https://github.com/scylladb/scylladb/issues/22474
Closes scylladb/scylladb#22476
(cherry picked from commit 61c822715c)
Closes scylladb/scylladb#22538
Currently, data sync repair handles most no_such_keyspace exceptions,
but it omits the preparation phase, where the exception could be thrown
during make_global_effective_replication_map.
Skip the keyspace repair if no_such_keyspace is thrown during preparations.
Fixes: #22073.
Requires backport to 6.1 and 6.2 as they contain the bug
- (cherry picked from commit bfb1704afa)
- (cherry picked from commit 54e7f2819c)
Parent PR: #22473
Closes scylladb/scylladb#22542
* github.com:scylladb/scylladb:
test: add test to check if repair handles no_such_keyspace
repair: handle keyspace dropped
This adds a reproducer for #22431
In cases where a tablet storage group manager had more than one storage
group, it was possible to create compaction groups outside the group0
guard, which could create problems with operations that should be mutually
exclusive with compaction group creation.
(cherry picked from commit 8bff7786a8)
tablet_storage_group_manager::all_storage_groups_split() calls set_split_mode()
for each of its storage groups to create split ready compaction groups. It does
this by iterating through storage groups using std::ranges::all_of() which is
not guaranteed to iterate through the entire range, and will stop iterating on
the first occurrence of the predicate (set_split_mode()) returning false.
set_split_mode() creates the split compaction groups and returns false if the
storage group's main compaction group or merging groups are not empty. This
means that in cases where the tablet storage group manager has non-empty
storage groups, we could have a situation where split compaction groups are not
created for all storage groups.
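A self-contained snippet (not Scylla code) demonstrating the short-circuit; the value 0 plays the role of a non-empty storage group for which set_split_mode() returns false:

.. code-block:: cpp

   #include <algorithm>
   #include <iostream>
   #include <vector>

   int main() {
       std::vector<int> groups{1, 0, 1, 1}; // 0: the predicate returns false here
       int visited = 0;
       bool all = std::ranges::all_of(groups, [&](int g) {
           ++visited;      // side effect, like set_split_mode() creating groups
           return g != 0;  // the first false stops the iteration
       });
       std::cout << "all=" << all << " visited=" << visited
                 << " of " << groups.size() << '\n'; // visited == 2, not 4
   }

A plain loop (or std::ranges::for_each) visits every element regardless of the predicate's result, which is what a side-effecting call like set_split_mode() needs.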
The missing split compaction groups are later created in
tablet_storage_group_manager::split_all_storage_groups() which also calls
set_split_mode(), and that is the reason why split completes successfully. The
problem is that tablet_storage_group_manager::all_storage_groups_split() runs
under a group0 guard, and tablet_storage_group_manager::split_all_storage_groups()
does not. This can cause problems with operations that should be mutually
exclusive with compaction group creation, e.g. DROP TABLE/DROP KEYSPACE.
(cherry picked from commit 24e8d2a55c)
request handler
Before this change, the logic of truncate for tablets was triggered from
topology_coordinator::handle_global_request(). This was done without
using a topology transition state which remained empty throughout the
truncate handler's execution.
This change moves the truncate logic to a new method
topology_coordinator::handle_truncate_table(). This method is now called
as a handler of the truncate_table topology transition state instead of
a handler of the truncate_table global topology request.
Truncate table for tablets is implemented as a global topology operation.
However, it does not have a transition state associated with it, and
performs the truncate logic in handle_global_request() while
topology::tstate remains empty. This creates problems because
topology::is_busy() uses transition_state to determine if the topology
state machine is busy, and will return false even though a truncate
operation is ongoing.
This change adds a new transition state: truncate_table
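A schematic sketch of why the empty transition state hid the ongoing truncate; the type and member names are simplified stand-ins for the actual topology code:

.. code-block:: cpp

   #include <iostream>
   #include <optional>

   enum class transition_state { truncate_table /* , ... */ };

   struct topology {
       std::optional<transition_state> tstate;
       bool is_busy() const { return tstate.has_value(); }
   };

   int main() {
       topology t;
       // Before the fix: truncate ran in handle_global_request() while tstate
       // stayed empty, so is_busy() reported an idle state machine.
       std::cout << t.is_busy() << '\n'; // 0

       // After the fix: the coordinator sets the truncate_table transition
       // state and runs the logic in handle_truncate_table().
       t.tstate = transition_state::truncate_table;
       std::cout << t.is_busy() << '\n'; // 1
   }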
Since the topology no longer contains IP addresses, there is no need to
recreate it on an IP address change. Only the peers table has to be
updated, so call a function that performs the peers table update only.
(cherry picked from commit fbfef6b28a)
Raft topology state application does two things: it re-creates token metadata
and updates the peers table if needed. The code for both tasks is currently
intermixed. This patch separates it into distinct functions, which will be
needed in the next patch.
(cherry picked from commit ef929c5def)
// FIXME: this is alternator limitation only, because Scylla's materialized views
// we use underneath do not allow more than 1 base regular column to be part of the MV key
elogger.warn("Only 1 regular column from the base table should be used in the GSI key in order to ensure correct liveness management without assumptions");
// FIXME: This warning should go away. See issue #6714
elogger.warn("Only 1 regular column from the base table should be used in the GSI key in order to ensure correct liveness management without assumptions");
throw api_error::validation(fmt::format("AttributeDefinitions redefined {} to {} already a key attribute of type {} in this table", def.name_as_text(), type, def_type));
// FIXME: This warning should go away. See issue #6714
elogger.warn("Only 1 regular column from the base table should be used in the GSI key in order to ensure correct liveness management without assumptions");
"description":"Repair replicas listed in the comma-separated host_id list.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"dcs_filter",
"description":"Repair replicas listed in the comma-separated DC list",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"await_completion",
"description":"Set true to wait for the repair to complete. Set false to skip waiting for the repair to complete. When the option is not provided, it defaults to false.",
throwexceptions::configuration_exception("Cannot disable tablets for keyspace since tablets are enforced using the `tablets_mode_for_new_keyspaces: enforced` config option.");
}
returnstd::nullopt;
}else{
throwexceptions::configuration_exception(sstring("Tablets enabled value must be true or false; found: ")+enabled);
"The directory where the schema commit log is stored. This is a special commitlog instance used for schema and system tables. For optimal write performance, it is recommended the commit log be on a separate disk partition (ideally, a separate physical device) from the data file directories.")
, error_injections_at_startup(this, "error_injections_at_startup", error_injection_value_status, {}, "List of error injections that should be enabled on startup.")
, topology_barrier_stall_detector_threshold_seconds(this, "topology_barrier_stall_detector_threshold_seconds", value_status::Used, 2, "Report sites blocking topology barrier if it takes longer than this.")
, enable_tablets(this, "enable_tablets", value_status::Used, false, "Enable tablets for newly created keyspaces.")
, enable_tablets(this, "enable_tablets", value_status::Used, false, "Enable tablets for newly created keyspaces. (deprecated)")
, tablets_mode_for_new_keyspaces(this, "tablets_mode_for_new_keyspaces", value_status::Used, tablets_mode_t::mode::unset, "Control tablets for new keyspaces. Can be set to the following values:\n"
"\tdisabled: New keyspaces use vnodes by default, unless enabled by the tablets={'enabled':true} option\n"
"\tenabled: New keyspaces use tablets by default, unless disabled by the tablets={'disabled':true} option\n"
"\tenforced: New keyspaces must use tablets. Tablets cannot be disabled using the CREATE KEYSPACE option")
, disk_space_monitor_high_polling_interval_in_seconds(this, "disk_space_monitor_high_polling_interval_in_seconds", value_status::Used, 1, "Disk-space polling interval at or above polling threshold")
, disk_space_monitor_polling_interval_threshold(this, "disk_space_monitor_polling_interval_threshold", value_status::Used, 0.9, "Disk-space polling threshold. Polling interval is increased when disk utilization is greater than or equal to this threshold")
, enable_create_table_with_compact_storage(this, "enable_create_table_with_compact_storage", liveness::LiveUpdate, value_status::Used, false, "Enable the deprecated feature of CREATE TABLE WITH COMPACT STORAGE. This feature will eventually be removed in a future version.")
"Enforce RF-rack-valid keyspaces. Additionally, if there are existing RF-rack-invalid "
"keyspaces, attempting to start a node with this option ON will fail.")
, default_log_level(this, "default_log_level", value_status::Used, seastar::log_level::info, "Default log level for log messages")
, logger_log_level(this, "logger_log_level", value_status::Used, {}, "Map of logger name to log level. Valid log levels are 'error', 'warn', 'info', 'debug' and 'trace'")
, log_to_stdout(this, "log_to_stdout", value_status::Used, true, "Send log output to stdout")
@@ -70,8 +70,6 @@ Set the parameters for :ref:`Leveled Compaction <leveled-compaction-strategy-lcs
Incremental Compaction Strategy (ICS)
=====================================
.. versionadded:: 2019.1.4 Scylla Enterprise
ICS principles of operation are similar to those of STCS, merely replacing the increasingly larger SSTables in each tier with increasingly longer SSTable runs, modeled after LCS runs, but using a larger fragment size of 1 GB by default.
Compaction is triggered when there are two or more runs of roughly the same size. These runs are incrementally compacted with each other, producing a new SSTable run, while incrementally releasing space as soon as each SSTable in the input run is processed and compacted. This method eliminates the high temporary space amplification problem of STCS by limiting the overhead to twice the (constant) fragment size, per shard.
Raft Consensus Algorithm in ScyllaDB </architecture/raft>
Zero-token Nodes </architecture/zero-token-nodes>
* :doc:`Data Distribution with Tablets </architecture/tablets/>` - Tablets in ScyllaDB
@@ -22,5 +23,6 @@ ScyllaDB Architecture
* :doc:`SSTable </architecture/sstable/index/>` - ScyllaDB SSTable 2.0 and 3.0 Format Information
* :doc:`Compaction Strategies </architecture/compaction/compaction-strategies>` - High-level analysis of different compaction strategies
* :doc:`Raft Consensus Algorithm in ScyllaDB </architecture/raft>` - Overview of how Raft is implemented in ScyllaDB.
* :doc:`Zero-token Nodes </architecture/zero-token-nodes>` - Nodes that do not replicate any data.
Learn more about these topics in the `ScyllaDB University: Architecture lesson <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/architecture/>`_.
* In ScyllaDB 6.0 and above, the ``me`` format is mandatory, and ``md`` format is used only when upgrading from an existing cluster using ``md``. The ``sstable_format`` parameter is ignored if it is set to ``md``.
* In ScyllaDB 5.1 and above, the ``me`` format is enabled by default.
* In ScyllaDB 4.3 to 5.0, the ``md`` format is enabled by default.
* In ScyllaDB 3.1 to 4.2, the ``mc`` format is enabled by default.
* In ScyllaDB 3.0, the ``mc`` format is disabled by default. You can enable it by adding the ``enable_sstables_mc_format`` parameter set to ``true`` in the ``scylla.yaml`` file. For example:
.. code-block:: shell
enable_sstables_mc_format: true
.. REMOVE IN FUTURE VERSIONS - Remove the note above in version 5.2.
In ScyllaDB 6.0 and above, the ``me`` format is mandatory, and ``md`` format is used only when upgrading from an existing cluster using ``md``. The ``sstable_format`` parameter is ignored if it is set to ``md``.
Thank you for your interest in making ScyllaDB better!
We appreciate your help and look forward to welcoming you to the ScyllaDB Community.
There are two ways you can contribute:
* Send a patch to the ScyllaDB source code
* Write documentation for ScyllaDB Docs
Contribute to ScyllaDB's Source Code
------------------------------------
ScyllaDB developers use patches and email to share and discuss changes.
Setting up can take a little time, but once you have done it the first time, it’s easy.
The basic steps are:
* Join the ScyllaDB community
* Create a Git branch to work on
* Commit your work with clear commit messages and sign-offs.
* Send a PR or use ``git format-patch`` and ``git send-email`` to send to the list
The entire process is `documented here <https://github.com/scylladb/scylla/blob/master/CONTRIBUTING.md>`_.
Contribute to ScyllaDB Docs
---------------------------
Each ScyllaDB project has accompanying documentation. For information about contributing documentation to a specific ScyllaDB project, refer to the README file for the individual project.
For general information or to contribute to the ScyllaDB Sphinx theme, read the `Contributor's Guide <https://sphinx-theme.scylladb.com/stable/contribute/>`_.
When using ICS, SSTable runs are put in different buckets depending on their size.
When an SSTable run is bucketed, the average size of the runs in the bucket is compared to the new run, as well as the ``bucket_high`` and ``bucket_low`` levels.
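As a rough illustration of that comparison, here is a sketch of a bucketing predicate; the 0.5/1.5 values for ``bucket_low``/``bucket_high`` are assumed defaults carried over from STCS conventions, and this is not the actual ICS implementation:

.. code-block:: cpp

   #include <iostream>

   // A run fits a bucket when its size lies within
   // [bucket_low * avg, bucket_high * avg] of the bucket's average run size.
   bool fits_bucket(double run_size, double bucket_avg,
                    double bucket_low = 0.5, double bucket_high = 1.5) {
       return run_size >= bucket_low * bucket_avg &&
              run_size <= bucket_high * bucket_avg;
   }

   int main() {
       std::cout << fits_bucket(120.0, 100.0) << '\n'; // 1: same tier
       std::cout << fits_bucket(400.0, 100.0) << '\n'; // 0: belongs to another bucket
   }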
Modifying a keyspace with tablets enabled is possible and doesn't require any special CQL syntax. However, there are some limitations:
@@ -295,6 +299,7 @@ Modifying a keyspace with tablets enabled is possible and doesn't require any sp
- If there's any other ongoing global topology operation, executing the ``ALTER`` statement will fail (with an explicit and specific error) and needs to be repeated.
- The ``ALTER`` statement may take longer than the regular query timeout, and even if it times out, it will continue to execute in the background.
- The replication strategy cannot be modified, as keyspaces with tablets only support ``NetworkTopologyStrategy``.
- The ``ALTER`` statement will fail if it would make the keyspace :term:`RF-rack-invalid <RF-rack-valid keyspace>`.
@@ -18,7 +18,7 @@ For example, consider the following two workloads:
- Slow queries
- In essence - Latency agnostic
Using Service Level CQL commands, database administrators (working on Scylla Enterprise) can set different workload prioritization levels (levels of service) for each workload without sacrificing latency or throughput.
Using Service Level CQL commands, database administrators (working on ScyllaDB) can set different workload prioritization levels (levels of service) for each workload without sacrificing latency or throughput.
By assigning each service level to the different roles within your organization, DBAs ensure that each role_ receives the level of service the role requires.
You can `build ScyllaDB from source <https://github.com/scylladb/scylladb#build-prerequisites>`_ on other x86_64 or aarch64 platforms, without any guarantees.
Pick a zone where Haswell CPUs are found. Local SSD performance offers, according to Google, less than 1 ms of latency and up to 680,000 read IOPS and 360,000 write IOPS.
Image with NVMe disk interface is recommended, CentOS 7 for ScyllaDB Enterprise 2020.1 and older, and Ubuntu 20 for 2021.1 and later.
Image with NVMe disk interface is recommended.
(`More info <https://cloud.google.com/compute/docs/disks/local-ssd>`_)
Recommended instance types are `n1-highmem <https://cloud.google.com/compute/docs/general-purpose-machines#n1_machines>`_ and `n2-highmem <https://cloud.google.com/compute/docs/general-purpose-machines#n2_machines>`_.
ScyllaDB Web Installer is a platform-agnostic installation script you can run with ``curl`` to install ScyllaDB on Linux.
See `ScyllaDB Download Center <https://www.scylladb.com/download/#core>`_ for information on manually installing ScyllaDB with platform-specific installation packages.
See :doc:`Install ScyllaDB Linux Packages </getting-started/install-scylla/install-on-linux/>` for information on manually installing ScyllaDB with platform-specific installation packages.
Prerequisites
--------------
@@ -20,44 +20,50 @@ To install ScyllaDB with Web Installer, run:
curl -sSf get.scylladb.com/server | sudo bash
By default, running the script installs the latest official version of ScyllaDB Open Source. You can use the following
options to install a different version or ScyllaDB Enterprise:
.. list-table::
   :widths: 20 25 55
   :header-rows: 1
* - Option
- Acceptable values
- Description
* - ``--scylla-product``
- ``scylla`` | ``scylla-enterprise``
- Specifies the ScyllaDB product to install: Open Source (``scylla``) or Enterprise (``scylla-enterprise``). The default is ``scylla``.
* - ``--scylla-version``
- ``<version number>``
- Specifies the ScyllaDB version to install. You can specify the major release (``x.y``) to install the latest patch for that version or a specific patch release (``x.y.z``). The default is the latest official version.
By default, running the script installs the latest official version of ScyllaDB.
You can run the command with the ``-h`` or ``--help`` flag to print information about the script.
Examples
===========
Installing a Non-default Version
---------------------------------------
Installing ScyllaDB Open Source 6.0.1:
You can install a version other than the default.
Versions 2025.1 and Later
==============================
Run the command with the ``--scylla-version`` option to specify the version.
All releases are available as a Docker container, an EC2 AMI, and GCP and Azure images.
.. _os-support-definition:
By *supported*, it is meant that:
- A binary installation package is available to `download <https://www.scylladb.com/download/>`_.
- The download and install procedures are tested as part of ScyllaDB release process for each version.
- An automated install is included from :doc:`ScyllaDB Web Installer for Linux tool </getting-started/installation-common/scylla-web-installer>` (for latest versions)
- The download and install procedures are tested as part of the ScyllaDB release process for each version.
- An automated install is included from :doc:`ScyllaDB Web Installer for Linux tool </getting-started/installation-common/scylla-web-installer>` (for the latest versions).
You can `build ScyllaDB from source <https://github.com/scylladb/scylladb#build-prerequisites>`_
on other x86_64 or aarch64 platforms, without any guarantees.
**Learn: What a seed node is, and how they should be used in a ScyllaDB Cluster**
**Audience: ScyllaDB Administrators**
What is the Function of a Seed Node in ScyllaDB?
------------------------------------------------
.. note::
The seed node function was changed in ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1; if you are running an older version, see :ref:`Older Version Of ScyllaDB <seeds-older-versions>`.
A ScyllaDB seed node is a node specified with the ``seeds`` configuration parameter in ``scylla.yaml``. It is used as the first contact point by a new node joining the cluster.
It allows nodes to discover the cluster ring topology on startup (when joining the cluster). This means that any node joining the cluster needs to learn:
@@ -22,27 +9,8 @@ It allows nodes to discover the cluster ring topology on startup (when joining t
- Which token ranges are available
- Which nodes will own which tokens when a new node joins the cluster
**Once the nodes have joined the cluster, seed node has no function.**
**Once the nodes have joined the cluster, the seed node has no function.**
The first node in a new cluster needs to be a seed node.
.. _seeds-older-versions:
Older Version Of ScyllaDB
-------------------------
In ScyllaDB releases older than ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1, the seed node has one more function: it assists with :doc:`gossip </kb/gossip>` convergence.
Gossiping with other nodes ensures that any update to the cluster is propagated across the cluster. This includes detecting and alerting whenever a node goes down, comes back, or is removed from the cluster.
This function was removed, as described in `Seedless NoSQL: Getting Rid of Seed Nodes in ScyllaDB <https://www.scylladb.com/2020/09/22/seedless-nosql-getting-rid-of-seed-nodes-in-scylla/>`_.
If you run an older ScyllaDB release, we recommend upgrading to version 4.3 (ScyllaDB Open Source) or 2021.1 (ScyllaDB Enterprise) or later. If you choose to run an older version, it is good practice to follow these guidelines:
* The first node in a new cluster needs to be a seed node.
* Ensure that all nodes in the cluster have the same seed nodes listed in each node's scylla.yaml.
* To maintain resiliency of the cluster, it is recommended to have more than one seed node in the cluster.
* If you have more than one seed in a DC with multiple racks (or availability zones), make sure to put your seeds in different racks.
* You must have at least one node that is not a seed node. You cannot create a cluster where all nodes are seed nodes.
* You should have more than one seed node.
The first node in a new cluster must be a seed node. In typical scenarios,
there's no need to configure more than one seed node.
* configuration_encryptor - :doc:`encrypt at rest </operating-scylla/security/encryption-at-rest>` sensitive ScyllaDB configuration entries using a system key.
* scylla local-file-key-generator - Generate a local file (system) key for :doc:`encryption at rest </operating-scylla/security/encryption-at-rest>` with the provided length, key algorithm, algorithm block mode, and algorithm padding method.
* `scyllatop <https://www.scylladb.com/2016/03/22/scyllatop/>`_ - A terminal-based top-like tool for ScyllaDB collectd/Prometheus metrics.
* :doc:`scylla_dev_mode_setup </getting-started/installation-common/dev-mod>` - Run ScyllaDB in Developer Mode.
The cassandra-stress tool is used for benchmarking and load testing both ScyllaDB and Cassandra clusters. The cassandra-stress tool also supports testing arbitrary CQL tables and queries to allow users to benchmark their data model.
This documentation focuses on user mode as this allows the testing of your actual schema.
Usage
-----
There are several operation types:
* write-only, read-only, and mixed workloads of standard data
* write-only and read-only workloads for counter columns
* user configured workloads, running custom queries on custom schemas
* The syntax is ``cassandra-stress <command> [options]``. For more information on a given command or option, run ``cassandra-stress help``.
Commands:
read: Multiple concurrent reads - the cluster must first be populated by a write test.
write: Multiple concurrent writes against the cluster.
mixed: Interleaving of any basic commands, with configurable ratio and distribution - the cluster must first be populated by a write test.
counter_write: Multiple concurrent updates of counters.
counter_read: Multiple concurrent reads of counters. The cluster must first be populated by a counter_write test.
user: Interleaving of user provided queries, with configurable ratio and distribution.
help: Print help for a command or option.
print: Inspect the output of a distribution definition.
legacy: Legacy support mode.
Primary Options:
-pop: Population distribution and intra-partition visit order.
-insert: Insert specific options relating to various methods for batching and splitting partition updates.
-col: Column details such as size and count distribution, data generator, names, comparator and if super columns should be used.
-rate: Thread count, rate limit or automatic mode (default is auto).
-mode: CQL with options.
-errors: How to handle errors when encountered during stress.
-sample: Specify the number of samples to collect for measuring latency.
-schema: Replication settings, compression, compaction, etc.
-node: Nodes to connect to.
-log: Where to log progress to, and the interval at which to do it.
-transport: Custom transport factories.
-port: The port to connect to cassandra nodes on.
-sendto: Specify a stress server to send this command to.
-graph: Graph recorded metrics.
-tokenrange: Token range settings.
User mode
---------
User mode allows you to stress test your own schemas. This can save time in the long run compared with building an application and then realising your schema doesn’t scale.
Profile
.......
User mode requires a profile defined in YAML. Multiple YAML files may be specified in which case operations in the ops argument are referenced as specname.opname.
An identifier for the profile:
.. code-block:: cql

   specname: staff_activities
The keyspace for the test:
.. code-block:: cql

   keyspace: staff
CQL for the keyspace. Optional if the keyspace already exists:
CQL for the table. Optional if the table already exists:
.. code-block:: cql

   table_definition: |
     CREATE TABLE staff_activities (
       name text,
       when timeuuid,
       what text,
       PRIMARY KEY (name, when, what)
     )
Optional meta information on the generated columns in the above table. The min and max only apply to text and blob types. The distribution field represents the total unique population distribution of that column across rows:
This will create the schema and then run tests for 1 minute with an equal number of inserts, latest_event queries, and events queries. Additionally, the table will be truncated once before the test.
The full example can be found here yaml
Running a user mode test with multiple yaml files:
.. code-block::

   cassandra-stress user profile=./example.yaml,./example2.yaml duration=1m "ops(ex1.insert=1,ex1.latest_event=1,ex2.insert=2)" truncate=once
This will run operations as specified in both the example.yaml and example2.yaml files. example.yaml and example2.yaml can reference the same table
although care must be taken that the table definition is identical (data generation specs can be different).
.. Lightweight transaction support
.. ...............................
.. cassandra-stress supports lightweight transactions: it first reads current data from Cassandra and then uses the read value(s) to fulfill the lightweight transaction condition(s).
.. Lightweight transaction update query:
.. .. code-block:: cql
.. queries:
.. regularupdate:
.. cql: update blogposts set author = ? where domain = ? and published_date = ?
.. fields: samerow
.. updatewithlwt:
.. cql: update blogposts set author = ? where domain = ? and published_date = ? IF body = ? AND url = ?
Cassandra Stress is not part of ScyllaDB and is no longer distributed alongside it. It has its own separate repository and release cycle. More information can be found on `GitHub <https://github.com/scylladb/cassandra-stress>`_ or on `DockerHub <https://hub.docker.com/r/scylladb/cassandra-stress>`_.
@@ -257,8 +257,6 @@ ScyllaDB uses experimental flags to expose non-production-ready features safely.
In recent ScyllaDB versions, these features are controlled by the ``experimental_features`` list in scylla.yaml, allowing one to choose which experimental features to enable.
Use ``scylla --help`` to get the list of experimental features.
ScyllaDB Enterprise and ScyllaDB Cloud do not officially support experimental features.
.. _admin-keyspace-storage-options:
Keyspace storage options
@@ -286,6 +284,24 @@ Before creating keyspaces with object storage, you also need to
:ref:`configure <object-storage-configuration>` the object storage
credentials and endpoint.
.. _admin-views-with-tablets:
Views with tablets
------------------
By default, Materialized Views (MV) and Secondary Indexes (SI)
are disabled in keyspaces that use tablets.
Support for MV and SI with tablets is experimental and must be explicitly
enabled in the ``scylla.yaml`` configuration file by specifying
the ``views-with-tablets`` option:
.. code-block:: yaml

   experimental_features:
     - views-with-tablets
Monitoring
==========
ScyllaDB exposes interfaces for online monitoring, as described below.
**cluster repair** - A process that runs in the background and synchronizes the data between nodes. It only repairs keyspaces with tablets enabled (default).
To repair keyspaces with tablets disabled (vnodes-based), see :doc:`nodetool repair </operating-scylla/nodetool-commands/repair/>`.
Running ``cluster repair`` on a **single node** synchronizes all data on all nodes in the cluster.
To synchronize all data in clusters that have both tablets-based and vnodes-based keyspaces, run :doc:`nodetool repair -pr </operating-scylla/nodetool-commands/repair/>` on **all**
of the nodes in the cluster, and :doc:`nodetool cluster repair </operating-scylla/nodetool-commands/cluster/repair/>` on **any** of the nodes in the cluster.
To check if a keyspace enables tablets, use:
.. code-block:: cql

   DESCRIBE KEYSPACE `keyspace_name`
A ScyllaDB node will ensure the mutual exclusivity of tablet repair and maintenance operations (add/remove/decommission/replace/rebuild).
The ScyllaDB ``nodetool cluster repair`` command supports the following options:
- ``-dc`` or ``--in-dc`` syncs data between all nodes in a list of Data Centers (DCs).
.. warning:: This command leaves part of the data subset (all remaining DCs) out of sync.
For example:
::
nodetool cluster repair -dc US_DC
nodetool cluster repair --in-dc US_DC, EU_DC
- ``-hosts`` or ``--in-hosts`` syncs the data only between a list of nodes, using host IDs.
.. warning:: This command leaves part of the data subset (on nodes that are *not* listed) out of sync.
- ``--tablet-tokens`` selects which tablets to repair. When a listed token belongs to a tablet, the whole tablet that owns the token will be repaired. By default, all tablets are repaired.
.. warning:: This command leaves part of the data subset (tablets that are *not* selected) out of sync.
**Repair** - a process that runs in the background and synchronizes the data between nodes.
**Repair** - A process that runs in the background and synchronizes the data between nodes. It only repairs keyspaces with tablets disabled (vnode-based).
To repair keyspaces with tablets, see :doc:`nodetool cluster repair </operating-scylla/nodetool-commands/cluster/repair/>`.
When running ``nodetool repair`` on a **single node**, it acts as the **repair master**. Only the data contained in the master node and its replicas will be repaired.
Typically, this subset of data is replicated on many nodes in the cluster, often all, and the repair process syncs between all the replicas until the master data subset is in-sync.
To repair **all** of the data in the cluster, you need to run a repair on **all** of the nodes in the cluster, or let `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ do it for you.
To synchronize all data in clusters that have both tablets- and vnodes-based keyspaces, run :doc:`nodetool repair -pr </operating-scylla/nodetool-commands/repair/>` on **all**
of the nodes in the cluster, and :doc:`nodetool cluster repair </operating-scylla/nodetool-commands/cluster/repair/>` on **any** of the nodes in the cluster.
To check if a keyspace enables tablets, use:
.. code-block:: cql

   DESCRIBE KEYSPACE `keyspace_name`
.. note:: Run the :doc:`nodetool repair </operating-scylla/nodetool-commands/repair/>` command regularly. If you delete data frequently, it should be more often than the value of ``gc_grace_seconds`` (by default: 10 days), for example, every week. Use the **nodetool repair -pr** on each node in the cluster, sequentially.
* When a task completes, its status is temporarily stored on the executing node
* Status information is retained for up to :confval:`task_ttl_in_seconds` seconds
* The status information of a completed task is automatically removed after being queried with ``tasks status`` or ``tasks tree``
* ``tasks wait`` returns the status, but it does not remove the task information of the queried task
.. note:: Multiple status queries using ``tasks status`` and ``tasks tree`` for the same completed task will only receive a response for the first query, since the status is removed after being retrieved.
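A toy sketch of this retrieve-once behavior, in plain C++ with hypothetical names (not the actual task manager API):

.. code-block:: cpp

   #include <iostream>
   #include <map>
   #include <optional>
   #include <string>

   std::map<std::string, std::string> completed; // task id -> final status

   // Returns the stored status once; subsequent queries find nothing,
   // mirroring how a completed task's status is removed after retrieval.
   std::optional<std::string> tasks_status(const std::string& id) {
       auto it = completed.find(id);
       if (it == completed.end()) {
           return std::nullopt;
       }
       std::string status = std::move(it->second);
       completed.erase(it);
       return status;
   }

   int main() {
       completed["repair-1"] = "done";
       std::cout << tasks_status("repair-1").value_or("unknown") << '\n'; // done
       std::cout << tasks_status("repair-1").value_or("unknown") << '\n'; // unknown
   }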
Supported tasks suboperations
-----------------------------
* :doc:`abort </operating-scylla/nodetool-commands/tasks/abort>` - Aborts the task.
* :doc:`drain </operating-scylla/nodetool-commands/tasks/drain>` - Unregisters all finished local tasks.
* :doc:`user-ttl </operating-scylla/nodetool-commands/tasks/user-ttl>` - Gets or sets user_task_ttl value.
* :doc:`list </operating-scylla/nodetool-commands/tasks/list>` - Lists tasks in the module.
@@ -85,6 +86,7 @@ Operations that are not listed below are currently not available.
* :doc:`checkandrepaircdcstreams </operating-scylla/nodetool-commands/checkandrepaircdcstreams/>` - Checks and fixes CDC streams.
* :doc:`cleanup </operating-scylla/nodetool-commands/cleanup/>` - Triggers the immediate cleanup of keys no longer belonging to a node.
* :doc:`clearsnapshot </operating-scylla/nodetool-commands/clearsnapshot/>` - This command removes snapshots.
* :doc:`cluster <nodetool-commands/cluster/index>` - Run a cluster operation.
* :doc:`compactionhistory </operating-scylla/nodetool-commands/compactionhistory/>` - Provides the history of compactions.
* :doc:`compactionstats </operating-scylla/nodetool-commands/compactionstats/>` - Print statistics on compactions.
* :doc:`compact </operating-scylla/nodetool-commands/compact/>` - Force a (major) compaction on one or more column families.
@@ -115,7 +117,7 @@ Operations that are not listed below are currently not available.
* :doc:`rebuild </operating-scylla/nodetool-commands/rebuild/>` :code:`[<src-dc-name>]` - Rebuild data by streaming from other nodes
* :doc:`refresh </operating-scylla/nodetool-commands/refresh/>` - Load newly placed SSTables to the system without restart
* :doc:`removenode </operating-scylla/nodetool-commands/removenode/>` - Remove node with the provided ID
* :doc:`repair <nodetool-commands/repair/>` :code:`<keyspace>` :code:`<table>` - Repair one or more tables
* :doc:`repair <nodetool-commands/repair/>` :code:`<keyspace>` :code:`<table>` - Repair one or more vnode tables.
* :doc:`restore </operating-scylla/nodetool-commands/restore/>` - Load SSTables from a designated bucket in object store into a specified keyspace or table
* :doc:`resetlocalschema </operating-scylla/nodetool-commands/resetlocalschema/>` - Reset the node's local schema.
* :doc:`ring <nodetool-commands/ring/>` - The nodetool ring command displays the token ring information.
ScyllaDB Open Source 3.0 and later and ScyllaDB Enterprise 2019.1 and later support :doc:`Materialized View(MV) </features/materialized-views>` and :doc:`Secondary Index(SI) </features/secondary-indexes>`.
ScyllaDB supports :doc:`Materialized View(MV) </features/materialized-views>` and :doc:`Secondary Index(SI) </features/secondary-indexes>`.
When migrating data from Apache Cassandra with MV or SI, you can either:
Make sure to use the same ScyllaDB **patch release** on the new/replaced node, to match the rest of the cluster. It is not recommended to add a new node with a different release to the cluster.
For example, use the following for installing ScyllaDB patch release (use your deployed version)
For example, use the following for installing ScyllaDB patch release (use your deployed version):
#. If you are using ScyllaDB Monitoring, update the `monitoring stack <https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#configure-scylla-nodes-from-files>`_ to monitor it. If you are using ScyllaDB Manager, make sure you install the `Manager Agent <https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html>`_ and Manager can access the new DC.
* :doc:`Adding a New Data Center Into an Existing ScyllaDB Cluster </operating-scylla/procedures/cluster-management/add-dc-to-existing-dc/>`