Since 5e1cf90a51
("build: replace tools/java submodule with packaged cassandra-stress")
we run pre-packaged cassandra-stress. As such, we don't need to look for
a Java runtime (which is missing on the frozen toolchain) and can
rely on the cassandra-stress package finding its own Java runtime.
Fix by just dropping all the Java-finding stuff.
Note: Java 11 is in fact present on the frozen toolchain, just
not in a way that pgo.py can find it.
Fixes #24176.
Closes scylladb/scylladb#24178
(cherry picked from commit 29932a5af1)
Closes scylladb/scylladb#24254
The default and recommended way to use zstd compressors is to let
zstd allocate and free memory for compressors on its own.
That's what we did for zstd compressors used in RPC compression.
But it turns out that it generates allocation patterns we dislike.
We expected zstd not to generate allocations after the context object
is initialized, but it turns out that it tries to downsize the context
sometimes (by reallocation). We don't want that because the allocations
generated by zstd are large (1 MiB with the parameters we use),
so repeating them periodically stresses the reclaimer.
We can avoid this by using the "static context" API of zstd,
in which the memory for context is allocated manually by the user
of the library. In this mode, zstd doesn't allocate anything
on its own.
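As a rough illustration, here is a minimal, self-contained sketch of zstd's static-context API (a plain one-shot compression at a fixed level, not the streaming setup or parameters Scylla actually uses for RPC compression): the caller sizes and owns the workspace, and zstd allocates nothing on its own.

```cpp
// Minimal sketch: compress one buffer using a user-owned workspace.
#define ZSTD_STATIC_LINKING_ONLY   // ZSTD_estimateCCtxSize/ZSTD_initStaticCCtx live here
#include <zstd.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    const int level = 1;
    // Ask zstd how much memory a compression context needs for this level,
    // then hand it a buffer we own instead of letting it malloc/free.
    std::vector<char> workspace(ZSTD_estimateCCtxSize(level));
    ZSTD_CCtx* cctx = ZSTD_initStaticCCtx(workspace.data(), workspace.size());
    if (!cctx) {
        std::fprintf(stderr, "workspace too small or misaligned\n");
        return 1;
    }
    const char src[] = "hello hello hello hello";
    char dst[128];
    size_t n = ZSTD_compressCCtx(cctx, dst, sizeof(dst), src, std::strlen(src), level);
    if (ZSTD_isError(n)) {
        std::fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(n));
        return 1;
    }
    std::printf("compressed %zu -> %zu bytes\n", std::strlen(src), n);
    // No ZSTD_freeCCtx(): the context lives entirely inside `workspace`.
    return 0;
}
```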
The implementation of this patch adds a consideration for
forward compatibility: later versions of Scylla can't use a
window size greater than the one we hardcoded in this patch
when talking to the old version of the decompressor.
(This is not a problem, since those compressors are only used
for RPC compression at the moment, where cross-version communication
can be prevented by bumping COMPRESSOR_NAME. But it's something
that the developer who changes the window size must _remember_ to do).
Fixes #24160
Fixes #24183
Closes scylladb/scylladb#24161
(cherry picked from commit 185a032044)
Closes scylladb/scylladb#24281
Max purgeable has two possible values for each partition: one for
regular tombstones and one for shadowable ones. Yet currently a single
member is used to cache the max-purgeable value for the partition, so
whichever kind of tombstone is checked first, its max-purgeable will
become sticky and apply to the other kind of tombstones too. E.g. if the
first can_gc() check is for a regular tombstone, its max-purgeable will
apply to shadowable tombstones in the partition too, meaning they might
not be purged, even though they are purgeable, as the shadowable
max-purgeable is expected to be more lenient. The other way around is
worse, as it will result in regular tombstones being incorrectly purged,
permitted by the more lenient shadowable tombstone max-purgeable.
Fix this by caching the two possible values in two separate members.
A reproducer unit test is also added.
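As an illustration of the shape of the fix, here is a small, hypothetical sketch (the names are not Scylla's actual compaction code) of caching the max-purgeable value in two separate slots keyed by tombstone kind:

```cpp
// Hypothetical names, not Scylla's actual compaction code.
#include <cstdint>
#include <iostream>
#include <optional>

using api_timestamp = int64_t;

enum class tombstone_kind { regular, shadowable };

class max_purgeable_cache {
    // Two cached values instead of a single shared member, so whichever kind
    // is checked first cannot make its value sticky for the other kind.
    std::optional<api_timestamp> _regular;
    std::optional<api_timestamp> _shadowable;
public:
    template <typename Compute>
    api_timestamp get(tombstone_kind kind, Compute&& compute) {
        auto& slot = (kind == tombstone_kind::shadowable) ? _shadowable : _regular;
        if (!slot) {
            slot = compute(kind);  // computed lazily, cached per kind
        }
        return *slot;
    }
};

int main() {
    max_purgeable_cache cache;
    // Stand-in for the per-partition max-purgeable computation; the shadowable
    // bound is expected to be more lenient (higher) than the regular one.
    auto compute = [](tombstone_kind kind) -> api_timestamp {
        return kind == tombstone_kind::shadowable ? 200 : 100;
    };
    api_timestamp tombstone_ts = 150;
    bool purge_regular = tombstone_ts < cache.get(tombstone_kind::regular, compute);       // false
    bool purge_shadowable = tombstone_ts < cache.get(tombstone_kind::shadowable, compute); // true
    std::cout << purge_regular << " " << purge_shadowable << "\n";  // prints "0 1"
}
```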
Fixes: scylladb/scylladb#23272
Closes scylladb/scylladb#24171
(cherry picked from commit 7db956965e)
Closes scylladb/scylladb#24329
Fixes: #23970
Use the correct string literals:
KMIP_TAG_CRYPTOGRAPHIC_LENGTH_STR --> KMIP_TAGSTR_CRYPTOGRAPHIC_LENGTH
KMIP_TAG_CRYPTOGRAPHIC_USAGE_MASK_STR --> KMIP_TAGSTR_CRYPTOGRAPHIC_USAGE_MASK
From the https://github.com/scylladb/scylladb/issues/23970 description of the
problem (emphasis is mine):
When transparent data encryption at rest is enabled with KMIP as a key
provider, the observation is that before creating a new key, Scylla tries
to locate an existing key with provided specifications (key algorithm &
length), with the intention to re-use existing key, **but the attributes
sent in the request have minor spelling mistakes** which are rejected by
the KMIP server key provider, and hence scylla assumes that a key with
these specifications doesn't exist, and creates a new key in the KMIP
server. The issue here is that for every new table, ScyllaDB will create
a key in the KMIP server, which could clutter the KMS, and make key
lifecycle management difficult for DBAs.
Closes scylladb/scylladb#24057
(cherry picked from commit 37854acc92)
Closes scylladb/scylladb#24303
The test test_multiple_unpublished_cdc_generations reads the CDC
generation timestamps to verify they are published in the correct order.
To do so it issues reads in a loop with a short sleep period and checks
the differences between consecutive reads, assuming they are monotonic.
However, the assumption that the reads are monotonic is not valid,
because the reads are issued with consistency_level=ONE, thus we may read
timestamps {A,B} from some node, then read timestamps {A} from another
node that didn't apply the write of the new timestamp B yet. This will
trigger the assert in the test and fail.
To ensure the reads are monotonic we change the test to use consistency
level ALL for the reads.
Fixes scylladb/scylladb#24262
Closes scylladb/scylladb#24272
(cherry picked from commit 3a1be33143)
Closes scylladb/scylladb#24336
When map_reduce is called on a collection, one shouldn't expect that it
processes the elements of the collection in any specific order.
The current test of map-reduce over boost outcome assumes that if the reduce
function is string concatenation, then it will concatenate the
given vector of strings in the order they are listed. That requirement
should be relaxed, and the result may have the concatenation reversed.
Fixes scylladb/scylladb#24321
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes scylladb/scylladb#24325
(cherry picked from commit a65ffdd0df)
Closes scylladb/scylladb#24337
The get_blob method linearizes data by copying it into a single buffer, which can cause 'oversized allocation' warnings.
In this commit we avoid copying by creating an input stream on top of the original fragmented managed bytes returned by untyped_result_set_row::get_view.
Fixes scylladb/scylladb#23903
backport: no need, not a critical issue.
- (cherry picked from commit 6496ae6573)
- (cherry picked from commit f245b05022)
Parent PR: #24123
Closes scylladb/scylladb#24317
* github.com:scylladb/scylladb:
raft_sys_table_storage: avoid temporary buffer when deserializing log_entry
serializer_impl.hh: add as_input_stream(managed_bytes_view) overload
In test_tablet_mv_replica_pairing_during_replace, after we create
the tables, we want to wait for their tablets to distribute evenly
across nodes and we have a wait_for for that.
But we don't await this wait_for, so it's a no-op. This patch fixes
it by adding the missing await.
Refs scylladb/scylladb#23982
Refs scylladb/scylladb#23997
Closes scylladb/scylladb#24250
(cherry picked from commit 5074daf1b7)
Closes scylladb/scylladb#24311
The get_blob() method linearizes data by copying it into a
single buffer, which can trigger "oversized allocation" warnings.
This commit avoids that extra copy by creating an input stream
directly over the original fragmented managed bytes returned by
untyped_result_set_row::get_view().
Fixes scylladb/scylladb#23903
(cherry picked from commit f245b05022)
Apparently `test_kms_network_error` will succeed under any circumstances: since most of our exceptions derive from `std::exception`, whatever happens in the test, and for whatever reason it throws, the test will be marked as passed.
Start catching the exact exception that we expect to be thrown.
Closes scylladb/scylladb#24065
(cherry picked from commit 2d5c0f0cfd)
Closes scylladb/scylladb#24147
Currently, stream_manager is initialized after storage_service and
so it is stopped before the storage_service is. In its stop method,
storage_service accesses stream_manager, which is already uninitialized
at that time.
Move the stream_manager initialization ahead of the storage_service initialization.
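For intuition, here is a minimal RAII analogue (not Scylla's actual service wiring, which starts and stops services explicitly): whatever a service touches while stopping must be initialized before it, because teardown happens in reverse order of initialization.

```cpp
#include <iostream>

struct stream_manager {
    ~stream_manager() { std::cout << "stream_manager stopped\n"; }
};

struct storage_service {
    stream_manager& sm;
    ~storage_service() {
        // Still needs stream_manager while stopping.
        std::cout << "storage_service stopping; stream_manager still alive at " << &sm << "\n";
    }
};

int main() {
    stream_manager sm;        // initialized first ...
    storage_service ss{sm};   // ... so it is torn down only after storage_service,
                              // making the access in ~storage_service() safe
}
```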
Fixes: #23207.
Closes scylladb/scylladb#24008
(cherry picked from commit 9c03255fd2)
Closes scylladb/scylladb#24190
Each view update is correlated to a write that generates it (aside from view
building, which is throttled separately). These writes are limited by a throttling
mechanism, which effectively works by performing the writes with CL=ALL if
ongoing writes exceed some memory usage limit.
When writes generate view updates, they usually also need to perform a read. This read
goes through a read concurrency semaphore where it can get delayed or killed. The
semaphore allows up to 100 concurrent reads and puts all remaining reads in a queue.
If the number of queued reads exceeds a specific limit, the view update will fail on
the replica, causing inconsistencies.
This limit is not necessary. When a read gets queued on the semaphore, the write that's
causing the view update is paused, so the write takes part in the regular write throttling.
If too many writes get stuck on view update reads, they will get throttled, so their
number is limited and the number of queued reads is also limited to the same amount.
In this patch we remove the specified queue length limit for the view update read concurrency
semaphore. Instead of this limit, the queue will be now limited indirectly, by the base write
throttling mechanism. This may allow the queue to grow longer than with the previous limit, but
it shouldn't ever cause issues - we only perform up to 100 actual reads at once, and the
remaining ones that get queued use a tiny amount of memory, less than the writes that generated
them and which are getting limited directly.
Fixes https://github.com/scylladb/scylladb/issues/23319
Closes scylladb/scylladb#24112
(cherry picked from commit 5920647617)
Closes scylladb/scylladb#24170
Starting with 2025.1, ScyllaDB versions are no longer called "Enterprise",
but the OS support page still uses that label.
This commit fixes that by replacing "Enterprise" with "ScyllaDB".
This update is required since we've removed "Enterprise" from everywhere else,
including the commands, so having it here is confusing.
Fixes https://github.com/scylladb/scylladb/issues/24179
Closes scylladb/scylladb#24181
(cherry picked from commit 2d7db0867c)
Closes scylladb/scylladb#24204
We're reducing the log level in case the provided property file is incomplete.
The rationale behind this change is related to how CCM interacts with Scylla:
* The `GossipingPropertyFileSnitch` reloads the `cassandra-rackdc.properties`
configuration every 60 seconds.
* When a new node is added to the cluster, CCM recreates the
`cassandra-rackdc.properties` file for EVERY node.
If those two processes start happening at about the same time, it may lead
to Scylla trying to read a not-completely-recreated file, and an error will
be produced.
Although we would normally fix this issue and try to avoid the race, that
behavior will no longer be relevant as we're making the rack and DC values
immutable (cf. scylladb/scylladb#23278). What's more, trying to fix the problem
in the older versions of Scylla could bring a more serious regression. Having
that in mind, this commit is a compromise between making CI less flaky and
having minimal impact when backported.
We do the same when the format of the file is invalid: the rationale
is the same.
We also do the same when there is a double declaration. Although it seems
impossible that this can stem from the same scenario the other two errors
can (since if the format of the file is valid, the error is justified;
if the format is invalid, it should be detected sooner than a doubled
declaration), let's stay consistent with the logging level.
Fixes scylladb/scylladb#20092
Closes scylladb/scylladb#23956
(cherry picked from commit 9ebd6df43a)
Closes scylladb/scylladb#24143
In the test test_tablet_mv_replica_pairing_during_replace we stop 2 out of 4 servers while using RF=2.
Even though in the test we use exactly 4 tablets (1 for each replica of a base table and view), initially
the tablets may not be split evenly between all nodes. Because of this, even when we choose a server that
hosts the view and a different server that hosts the base table, we sometimes stopped all replicas of the
base or the view table because the node with the base table replica may also be a view replica.
After some time, the tablets should be distributed across all nodes. When that happens, there will be
no common nodes with a base and view replica, so the test scenario will continue as planned.
In this patch, we add this waiting period after creating the base and view, and continue the test only
when all 4 tablets are on distinct nodes.
Fixes https://github.com/scylladb/scylladb/issues/23982
Fixes https://github.com/scylladb/scylladb/issues/23997
Closes scylladb/scylladb#24111
(cherry picked from commit bceb64fb5a)
Closes scylladb/scylladb#24126
In this PR, we're adjusting most of the cluster tests so that they pass
with the `rf_rack_valid_keyspaces` configuration option enabled. In most
cases, the changes are straightforward and require little to no additional
insight into what the tests are doing or verifying. In some, however, doing
that does require a deeper understanding of the tests we're modifying.
The justification for those changes and their correctness is included in
the commit messages corresponding to them.
Note that this PR does not cover all of the cluster tests. There are a few
remaining ones, but they require a bit more effort, so we delegate that
work to a separate PR.
I tested all of the modified tests locally with `rf_rack_valid_keyspaces`
set to true, and they all passed.
Fixes scylladb/scylladb#23959
Backport: we want to backport these changes to 2025.1 since that's the version where we introduced RF-rack-valid keyspaces. Although the tests are not, by default, run with `rf_rack_valid_keyspaces` enabled yet, that will most likely change in the near future and we'll want to backport those changes too. The reason for this is that we want to verify that Scylla works correctly even with that constraint.
- (cherry picked from commit dbb8835fdf)
- (cherry picked from commit 9281bff0e3)
- (cherry picked from commit 5b83304b38)
- (cherry picked from commit 73b22d4f6b)
- (cherry picked from commit 2882b7e48a)
- (cherry picked from commit 4c46551c6b)
- (cherry picked from commit 92f7d5bf10)
- (cherry picked from commit 5d1bb8ebc5)
- (cherry picked from commit d3c0cd6d9d)
- (cherry picked from commit 04567c28a3)
- (cherry picked from commit c8c28dae92)
- (cherry picked from commit c4b32c38a3)
- (cherry picked from commit ee96f8dcfc)
Parent PR: #23661
Closes scylladb/scylladb#24121
* github.com:scylladb/scylladb:
test/cluster/suite.yaml: Enable rf_rack_valid_keyspaces in suite
test/cluster: Disable rf_rack_valid_keyspaces in problematic tests
test/cluster/test_tablets: Divide rack into two to adjust tests to RF-rack-validity
test/cluster/test_tablets: Adjust test_tablet_rf_change to RF-rack-validity
test/cluster/test_tablet_repair_scheduler.py: Adjust to RF-rack-validity
test/pylib/repair.py: Assign nodes to multiple racks in create_table_insert_data_for_repair
test/cluster/test_zero_token_nodes_topology_ops: Adjust to RF-rack-validity
test/cluster/test_zero_token_nodes_no_replication.py: Adjust to RF-rack-validity
test/cluster/test_zero_token_nodes_multidc.py: Adjust to RF-rack-validity
test/cluster/test_not_enough_token_owners.py: Adjust to RF-rack-validity
test/cluster/test_multidc.py: Adjust to RF-rack-validity
test/cluster/object_store/test_backup.py: Adjust to RF-rack-validity
test/cluster: Adjust simple tests to RF-rack-validity
The test sometimes fails in CI for performance reasons.
There are at least two problems:
1. The initial 500ms (wall time) sleep might be too short. If the reclaimer
doesn't manage to evict enough memory during this time, the test will fail.
2. During the 100ms (thread CPU time) window given by the test to background
reclaim, the `background_reclaim` scheduling group isn't actually
guaranteed to get any CPU, regardless of shares. If the process is
switched out inside the `background_reclaim` group, it might
accumulate so much vruntime that it won't get any more CPU again
for a long time.
We have seen both.
This kind of timing test can't be run reliably on overcommitted machines
without modifying the Seastar scheduler to support that (by e.g. using
thread clock instead of wall time clock in the scheduler), and that would
require an amount of effort disproportionate to the value of the test.
So for now, to unflake the test, this patch removes the performance test
part. (And the tradeoff is a weakening of the test). After the patch,
we only check that the background reclaim happens *eventually*.
Fixes https://github.com/scylladb/scylladb/issues/15677
Backporting this is optional. The test is flaky even in stable branches, but the failure is rare.
- (cherry picked from commit c47f438db3)
- (cherry picked from commit 1c1741cfbc)
Parent PR: #24030
Closes scylladb/scylladb#24094
* github.com:scylladb/scylladb:
logalloc_test: don't test performance in test `background_reclaim`
logalloc: make background_reclaimer::free_memory_threshold publicly visible
A decommissioned node is removed from the raft config after the operation is
marked as completed. This is required since otherwise the decommissioned
node will not see that decommission has completed (the status is
propagated through raft). But right after the decommission is marked as
completed a decommissioned node may terminate, so in the case of a two-node
cluster, the configuration change that removes it from the raft config will fail,
because there will be no quorum.
The solution is to mark the decommissioning node as a non-voter before
reporting the operation as completed.
Fixes: #24026
Backport to 2025.2 because it fixes a potential hang. Don't backport to
branches older than 2025.2 because they don't have
8b186ab0ff, which caused this issue.
Closes scylladb/scylladb#24027
(cherry picked from commit c6e1758457)
Closes scylladb/scylladb#24093
When the schema is changed, the sstable set is updated according to the compaction strategy of the new schema (no changes to the set are actually made; just the underlying set type is updated). The problem is that this happens without a lock, causing a use-after-free when running concurrently with another set update.
Example:
1) A: sstable set is being updated on compaction completion
2) B: schema change updates the set (it's non-deferring, so it happens in one go) and frees the set used by A.
3) when A resumes, the system will likely crash since the set has already been freed.
ASAN screams about it:
SUMMARY: AddressSanitizer: heap-use-after-free sstables/sstable_set.cc ...
The fix defers the update of the set on schema change to compaction, which is triggered after the new schema is set. Only the strategy state and backlog tracker are updated immediately, which is fine since the strategy doesn't depend on any particular implementation of the sstable set.
Fixes #22040.
- (cherry picked from commit 628bec4dbd)
- (cherry picked from commit 434c2c4649)
Parent PR: #23680
Closes scylladb/scylladb#24085
* github.com:scylladb/scylladb:
replica: Fix use-after-free with concurrent schema change and sstable set update
sstables: Implement sstable_set_impl::all_sstable_runs()
Materialized Views and Secondary Indexes are yet more features that
keyspaces with tablets do not support, but they were not listed in the
warning message returned to the user on a CREATE KEYSPACE statement. This
commit adds the two missing features.
Fixes: #24006
Closes scylladb/scylladb#23902
(cherry picked from commit f740f9f0e1)
Closes scylladb/scylladb#24084
When the topology coordinator is shut down while doing a long-running
operation, the current operation might throw a raft::request_aborted
exception. This is not a critical issue and should not be logged with
ERROR verbosity level.
Make sure that all the try..catch blocks in the topology coordinator
which:
- May try to acquire a new group0 guard in the `try` part
- Have a `catch (...)` block that prints an ERROR-level message
...have a pass-through `catch (raft::request_aborted&)` block which does
not log the exception (a minimal sketch of the pattern follows).
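Here is a minimal, standalone sketch of that pass-through pattern; `request_aborted` stands in for raft::request_aborted, and the logging is simplified compared to the real topology coordinator code.

```cpp
// `request_aborted` stands in for raft::request_aborted; logging is simplified.
#include <iostream>
#include <stdexcept>

struct request_aborted : std::runtime_error {
    request_aborted() : std::runtime_error("request aborted") {}
};

void run_topology_operation(bool shutting_down) {
    try {
        // In the real code this part may acquire a new group0 guard.
        if (shutting_down) {
            throw request_aborted{};
        }
        throw std::runtime_error("genuine failure");
    } catch (request_aborted&) {
        // Expected when the coordinator is being shut down: pass through
        // quietly instead of falling into the ERROR-level handler below.
        throw;
    } catch (...) {
        std::cerr << "ERROR: topology operation failed\n";
        throw;
    }
}

int main() {
    try { run_topology_operation(true);  } catch (...) {}  // aborted: no ERROR logged
    try { run_topology_operation(false); } catch (...) {}  // real error: ERROR logged
}
```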
Fixes: scylladb/scylladb#22649
Closes scylladb/scylladb#23962
(cherry picked from commit 156ff8798b)
Closes scylladb/scylladb#24082
test_tablet_repair_hosts_filter checks whether the host filter
specified for tablet repair is correctly persisted. To check this,
we need to ensure that the repair is still ongoing and its data
is kept. The test achieves that by failing the repair on the replica
side, as the failed repair is going to be retried.
However, if the filter does not contain any host (included_host_count = 0),
the repair is started on no replica, so the request succeeds
and its data is deleted. The test fails if it checks the filter
after repair request data is removed.
Fail the repair on the topology coordinator side instead, so the request stays ongoing
regardless of the specified hosts.
Fixes: #23986.
Closes scylladb/scylladb#24003
(cherry picked from commit 2549f5e16b)
Closes scylladb/scylladb#24080
Almost all of the tests have been adjusted so that they can be run with
the `rf_rack_valid_keyspaces` configuration option enabled, while
the rest, a minority, create nodes with it disabled. Thanks to that,
we can enable it by default, so let's do that.
(cherry picked from commit ee96f8dcfc)
Some of the tests in the test suite have proven to be more problematic
in adjusting to RF-rack-validity. Since we'd like to run as many tests
as possible with the `rf_rack_valid_keyspaces` configuration option
enabled, let's disable it in those. In the following commit, we'll enable
it by default.
(cherry picked from commit c4b32c38a3)
compress: fix an internal error when a specific debug log is enabled
While iterating over the recent 69684e16d8
series, I shot myself in the foot by defining `algorithm_to_name(algorithm::none)`
to be an internal error, and later calling that anyway in a debug log.
(Tests didn't catch it because there's no test which simultaneously
enables the debug log and configures some table to have no compression).
This proves that `algorithm_to_name` is too much of a footgun.
Fix it so that calling `algorithm_to_name(algorithm::none)` is legal.
In hindsight, I should have done that immediately.
Fixes #23624
Fix for recently-added code, no backporting needed.
Closes scylladb/scylladb#23625
* github.com:scylladb/scylladb:
test_sstable_compression_dictionaries: reproduce an internal error in debug logging
compress: fix an internal error when a specific debug log is enabled
(cherry picked from commit 746382257c)
compress: distribute compression dictionaries over shards
We don't want each shard to have its own copy of each dictionary.
It would put unnecessary pressure on cache and memory.
Instead, we want to share dictionaries between shards.
Before this commit, all dictionaries live on shard 0.
All other shards borrow foreign shared pointers from shard 0.
There's a problem with this setup: dictionary blobs receive many random
accesses. If shard 0 is on a remote NUMA node, this could pose
a performance problem.
Therefore, for each dictionary, we would like to have one copy per NUMA node,
not one copy per the entire machine. And each shard should use the copy
belonging to its own NUMA node. This is the main goal of this patch.
There is another issue with putting all dicts on shard 0: it eats
an asymmetric amount of memory from shard 0.
This commit spreads the ownership of dicts over all shards within
the NUMA group, to make the situation more symmetric.
(Dict owner is decided based on the hash of dict contents).
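A small, hypothetical sketch of that hash-based owner selection (the function name and the shard-list parameter are illustrative assumptions, not Scylla's actual code):

```cpp
// Picks the same owner shard on every shard of the NUMA group, based only on
// the dictionary contents, so ownership spreads evenly instead of piling up
// on shard 0. `shards_in_numa_group` is an assumed input, not a Scylla API.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

unsigned pick_owner_shard(const std::string& dict_contents,
                          const std::vector<unsigned>& shards_in_numa_group) {
    size_t h = std::hash<std::string>{}(dict_contents);
    return shards_in_numa_group[h % shards_in_numa_group.size()];
}

int main() {
    std::vector<unsigned> shards_in_numa_group = {0, 1, 2, 3};
    unsigned owner = pick_owner_shard("example dictionary contents", shards_in_numa_group);
    std::printf("dict owner shard: %u\n", owner);
}
```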
It should be noted that the last part isn't necessarily a good thing,
though.
While it makes the situation more symmetric within each node,
it makes it less symmetric across the cluster, if different node
sizes are present.
If dicts occupy 1% of memory on each shard of a 100-shard node,
then the same dicts would occupy 100% of memory on a 1-shard node.
So for the sake of cluster-wide symmetry, we might later want to consider
e.g. making the memory limit for dictionaries inversely proportional
to the number of shards.
New functionality, added to a feature which isn't in any stable branch yet. No backporting.
Edit: no backporting to <=2025.1, but it needs backporting to 2025.2, where the feature is introduced.
Fixes #24108
- (cherry picked from commit 0e4d0ded8d)
- (cherry picked from commit 8649adafa8)
- (cherry picked from commit 1bcf77951c)
- (cherry picked from commit 6b831aaf1b)
- (cherry picked from commit e952992560)
- (cherry picked from commit 66a454f61d)
- (cherry picked from commit 518f04f1c4)
- (cherry picked from commit f075674ebe)
Parent PR: #23590
Closes scylladb/scylladb#24109
* github.com:scylladb/scylladb:
test: add test/boost/sstable_compressor_factory_test
compress: add some test-only APIs
compress: rename sstable_compressor_factory_impl to dictionary_holder
compress: fix indentation
compress: remove sstable_compressor_factory_impl::_owner_shard
compress: distribute compression dictionaries over shards
test: switch uses of make_sstable_compressor_factory() to a seastar::thread-dependent version
test: remove sstables::test_env::do_with()
Three tests in the file use a multi-DC cluster. Unfortunately, they put
all of the nodes in a DC in the same rack and because of that, they fail
when run with the `rf_rack_valid_keyspaces` configuration option enabled.
Since the tests revolve mostly around zero-token nodes and how they
affect replication in a keyspace, this change should have zero impact on
them.
(cherry picked from commit c8c28dae92)
We reduce the number of nodes and the RF values used in the test
to make sure that the test can be run with the `rf_rack_valid_keyspaces`
configuration option. The test doesn't seem to be reliant on the
exact number of nodes, so the reduction should not make any difference.
(cherry picked from commit 04567c28a3)
The change boils down to matching the number of created racks to the number
of created nodes in each DC in the auxiliary function `prepare_multi_dc_repair`.
This way, we ensure that the created keyspace will be RF-rack-valid and so
we can run the test file even with the `rf_rack_valid_keyspaces` configuration
option enabled.
The change has no impact on the tests that use the function; the distribution
of nodes across racks does not affect how repair is performed or what the
tests do and verify. Because of that, the change is correct.
(cherry picked from commit d3c0cd6d9d)
We assign the newly created nodes to multiple racks. If RF <= 3,
we create as many racks as the provided RF. We disallow the case
of RF > 3 to avoid trying to create an RF-rack-invalid keyspace;
note that no existing test calls `create_table_insert_data_for_repair`
providing a higher RF. The rationale for doing this is we want to ensure
that the tests calling the function can be run with the
`rf_rack_valid_keyspaces` configuration option enabled.
(cherry picked from commit 5d1bb8ebc5)
We assign the nodes to the same DC, but multiple racks to ensure that
the created keyspace is RF-rack-valid and we can run the test with
the `rf_rack_valid_keyspaces` configuration option enabled. The changes
do not affect what the test does and verifies.
(cherry picked from commit 92f7d5bf10)
We simply assign the nodes used in the test to separate racks to
ensure that the created keyspace is RF-rack-valid and we are able
to run the test with the `rf_rack_valid_keyspaces` configuration
option set to true. The change does not affect what the test
does and verifies -- it only depends on the type of nodes,
whether they are normal token owners or not -- and so the changes
are correct in that sense.
(cherry picked from commit 4c46551c6b)
We parameterize the test so it's run with and without enforced
RF-rack-valid keyspaces. In the test itself, we introduce a branch
to make sure that we won't run into a situation where we're
attempting to create an RF-rack-invalid keyspace.
Since the `rf_rack_valid_keyspaces` option is not commonly used yet
and because its semantics will most likely change in the future, we
decide to parameterize the test rather than try to get rid of some
of the test cases that are problematic with the option enabled.
(cherry picked from commit 2882b7e48a)
We simply assign DC/rack properties to every node used in the test.
We put all of them in the same DC to make sure that the cluster behaves
as closely as possible to how it did before these changes. However, we distribute
them over multiple racks to ensure that the keyspace used in the test
is RF-rack-valid, so we can also run it with the `rf_rack_valid_keyspaces`
configuration option set to true. The distribution of nodes between racks
has no effect on what the test does and verifies, so the changes are
correct in that sense.
(cherry picked from commit 73b22d4f6b)