Waiting for all tasks does not guarantee that the test will not spawn new tasks while we wait.
The manager's broken state prevents all future put requests in case of:
1) a failure during task waiting;
2) a test that continues to create tasks in the test_after stage.
To ensure the atomicity of tests and to recycle clusters without any issues, it is crucial
that all active requests in ScyllaClusterManager are completed before proceeding further.
Topology tests might spawn asynchronous tasks in parallel in ScyllaClusterManager.
A task history is introduced so that all actions against the cluster can be logged
and analyzed in case of failures.
The methods stop, stop_gracefully, and start in ScyllaServer
are not designed for parallel execution.
To circumvent issues arising from concurrent calls,
a start_stop_lock has been introduced.
This lock ensures that these methods are executed sequentially.
As part of the unification process, alternator tests are migrated to the PythonTestSuite instead of using the RunTestSuite. The main idea is to have one suite, so it will be easier to maintain and to introduce new features.
Introduce the prepare_sql option for suite.yaml to add the possibility of running CQL statements as a precondition for the test suite.
Related: https://github.com/scylladb/scylladb/issues/18188
Closes scylladb/scylladb#18442
In b4e66ddf1d (4.0) we added a new batchlog_manager configuration
named delay, but forgot to initialize it in cql_test_env. This somehow
worked, but doesn't with clang 18.
Fix it by initializing to 0 (there isn't a good reason to delay it).
Also provide a default to make it safer.
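A minimal sketch of the idea (the names are illustrative, not the actual ScyllaDB config code): giving the knob a default value means an environment that never sets it, as cql_test_env did, still gets a well-defined zero delay.
```
#include <chrono>

struct batchlog_manager_config_sketch {
    // Defaulted so a test environment that never sets it still gets 0ms.
    std::chrono::milliseconds delay{0};
};

int main() {
    batchlog_manager_config_sketch cfg;     // cql_test_env-style: delay never set
    return cfg.delay.count() == 0 ? 0 : 1;  // still well-defined
}
```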
Closes scylladb/scylladb#18572
The code is based on a similar idea to perf_simple_query. The main differences are:
- it starts a full scylla process
- it communicates with alternator via HTTP (localhost)
- it uses a richer table schema with all DynamoDB types instead of only strings
The testing code runs in the same process as scylla, so we can easily get various perf counters (tps, instructions, allocations, etc.).
Results on my machine (with 1 vCPU):
> ./build/release/scylla perf-alternator-workloads --workdir ~/tmp --smp 1 --developer-mode 1 --alternator-port 8000 --alternator-write-isolation forbid --workload read --duration 10 2> /dev/null
...
median 23402.59616090321
median absolute deviation: 598.77
maximum: 24014.41
minimum: 19990.34
> ./build/release/scylla perf-alternator-workloads --workdir ~/tmp --smp 1 --developer-mode 1 --alternator-port 8000 --alternator-write-isolation forbid --workload write --duration 10 2> /dev/null
...
median 16089.34211320635
median absolute deviation: 552.65
maximum: 16915.95
minimum: 14781.97
The above seem more realistic than the results from perf_simple_query, which are 96k and 49k tps (per core).
Related: https://github.com/scylladb/scylladb/issues/12518
Closes scylladb/scylladb#13121
* github.com:scylladb/scylladb:
test: perf: alternator: add option to skip data pre-population
perf-alternator-workloads: add operations-per-shard option
test: perf: add global secondary indexes write workload for alternator
test: perf: add option to continue after failed request
test: perf: add read modify write workload for alternator (lwt)
test: perf: add scan workload for alternator
test: perf: add end-to-end benchmark for alternator
test: perf: extract result aggregation logic to a separate struct
In 906700d5, we accepted 0 in addition to the existing return code of
"nodetool <command> --help", because we needed to be prepared for
the newer Seastar submodule while staying compatible with older
Seastar versions. Now that 305f1bd3 bumped the Seastar submodule
and picked up the Seastar change to return 0 when handling the
"--help" command-line option, we are able to drop the workaround.
So, in this change, we only use 0 as the expected return code.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#18627
Currently, empty storage_groups are allocated even for tablets that are
not on this shard.
Allocate storage groups dynamically (sketched below), i.e.:
- on table creation, allocate only the storage groups that are on this
shard;
- allocate a storage group for a tablet that is moved to this shard;
- deallocate the storage group of a tablet that is cleaned up.
Stop the compaction group before it is deallocated.
Add a flag to table::cleanup_tablet that decides whether to deallocate
storage groups, and use it in commitlog tests.
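A hedged sketch of this lifecycle (container and names are hypothetical, not the actual replica::table internals):
```
#include <map>
#include <memory>

struct storage_group_sketch {
    void stop() { /* stop the compaction group before deallocation */ }
};

class table_sketch {
    // Only tablets living on this shard get an entry; no empty placeholders.
    std::map<unsigned, std::unique_ptr<storage_group_sketch>> _storage_groups;
public:
    storage_group_sketch& get_or_create(unsigned tablet_id) {
        auto& sg = _storage_groups[tablet_id]; // on table creation or when a tablet moves in
        if (!sg) {
            sg = std::make_unique<storage_group_sketch>();
        }
        return *sg;
    }
    void cleanup_tablet(unsigned tablet_id, bool deallocate_sg) {
        auto it = _storage_groups.find(tablet_id);
        if (it != _storage_groups.end() && deallocate_sg) {
            it->second->stop();          // stop before deallocating
            _storage_groups.erase(it);   // tablet cleaned up / migrated away
        }
    }
};
```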
PR #17771 introduced a threshold for the total memory used by all bloom filters across SSTables. When the total usage surpasses the threshold, the largest bloom filter will be removed from memory, bringing the total usage back under the threshold. This PR adds support for reloading such reclaimed bloom filters back into memory when memory becomes available (i.e., within the 10% of available memory earmarked for the reclaimable components).
The SSTables manager now maintains a list of all SSTables whose bloom filter was removed from memory and attempts to reload them when an SSTable, whose bloom filter is still in memory, gets deleted. The manager reloads from the smallest to the largest bloom filter to maximize the number of filters being reloaded into memory.
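A hedged sketch of the reload policy (types and names are illustrative, not the sstables_manager API): when memory is freed, reload reclaimed filters from smallest to largest while they still fit.
```
#include <cstddef>
#include <map>
#include <vector>

struct reclaimed_filter {
    std::size_t size;
    // ... handle to the owning sstable ...
};

// Reloading smallest-first maximizes the number of filters brought back.
std::vector<reclaimed_filter*> pick_filters_to_reload(
        std::multimap<std::size_t, reclaimed_filter*>& reclaimed_by_size,
        std::size_t available_memory) {
    std::vector<reclaimed_filter*> to_reload;
    auto it = reclaimed_by_size.begin();
    while (it != reclaimed_by_size.end() && it->first <= available_memory) {
        available_memory -= it->first;
        to_reload.push_back(it->second);
        it = reclaimed_by_size.erase(it);
    }
    return to_reload;
}
```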
Closes scylladb/scylladb#18186
* github.com:scylladb/scylladb:
sstable_datafile_test: add testcase to test reclaim during reload
sstable_datafile_test: add test to verify auto reload of reclaimed components
sstables_manager: reload previously reclaimed components when memory is available
sstables_manager: start a fiber to reload components
sstable_directory_test: fix generation in sstable_directory_test_table_scan_incomplete_sstables
sstable_datafile_test: add test to verify reclaimed components reload
sstables: support reloading reclaimed components
sstables_manager: add new intrusive set to track the reclaimed sstables
sstable: add link and comparator class to support new intrusive set
sstable: renamed intrusive list link type
sstable: track memory reclaimed from components per sstable
sstable: rename local variable in sstable::total_reclaimable_memory_size
When a tablet is migrated away, any inactive read that might be reading from said tablet has to be dropped. Otherwise these inactive reads can prevent sstables from being removed, and these sstables can potentially survive until the tablet is migrated back and resurrect data.
This series introduces the fix as well as a reproducer test.
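A hedged sketch of the eviction step (types and names are hypothetical, not the reader_concurrency_semaphore API): on tablet cleanup, drop every inactive read for the table whose stored range overlaps the migrated tablet's token range.
```
#include <list>

struct token_range_sketch { long first; long last; };
struct inactive_read_sketch { int table_id; token_range_sketch range; };

static bool overlaps(const token_range_sketch& a, const token_range_sketch& b) {
    return a.first <= b.last && b.first <= a.last;
}

void evict_inactive_reads_for_tablet(std::list<inactive_read_sketch>& reads,
                                     int table_id,
                                     const token_range_sketch& tablet_range) {
    // Evicted readers can no longer pin the tablet's sstables.
    reads.remove_if([&](const inactive_read_sketch& r) {
        return r.table_id == table_id && overlaps(r.range, tablet_range);
    });
}
```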
Fixes: https://github.com/scylladb/scylladb/issues/18110
Closes scylladb/scylladb#18179
* github.com:scylladb/scylladb:
test: add test for cleaning up cached querier on tablet migration
querier: allow injecting cache entry ttl by error injector
replica/table: cleanup_tablet(): clear inactive reads for the tablet
replica/database: introduce clear_inactive_reads_for_tablet()
replica/database: introduce foreach_reader_concurrency_semaphore
reader_concurrency_semaphore: add range param to evict_inactive_reads_for_table()
reader_concurrency_semaphore: allow storing a range with the inactive reader
reader_concurrency_semaphore: avoid detach() in inactive_read_handle::abandon()
The testcase uses an sstable whose mutation key and the generation are
owned by different shards. Due to this, when process_sstable_dir is
called, the sstable gets loaded into a different shard than the one that
was intended. This also means that the sstable and the sstable manager
end up in different shards.
The following patch will introduce a condition variable in sstables
manager which will be signalled from the sstables. If the sstable and
the sstable manager are in different shards, the signalling will cause
the testcase to fail in debug mode with this error: "Promise task was
set on shard x but made ready on shard y". So, fix it by supplying an
appropriate generation number owned by the same shard that owns the
mutation key.
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
Even if there is no endpoint for the given IP, the state can still belong to an existing endpoint that
was restarted with a different IP, so let's try to locate the endpoint by host id as well. Do it in raft
topology mode only, so there is no impact on gossiper mode.
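A hedged sketch of the lookup order (maps and names are hypothetical, not the gossiper's actual data structures):
```
#include <optional>
#include <string>
#include <unordered_map>

struct endpoint_state_sketch { /* gossip application states ... */ };

std::optional<endpoint_state_sketch> find_endpoint_state(
        const std::unordered_map<std::string, endpoint_state_sketch>& by_ip,
        const std::unordered_map<std::string, endpoint_state_sketch>& by_host_id,
        const std::string& ip,
        const std::string& host_id,
        bool raft_topology_mode) {
    if (auto it = by_ip.find(ip); it != by_ip.end()) {
        return it->second;
    }
    if (raft_topology_mode) {
        // The node may have restarted with a new IP; try to find it by host id.
        if (auto it = by_host_id.find(host_id); it != by_host_id.end()) {
            return it->second;
        }
    }
    return std::nullopt; // gossiper mode keeps the old behavior
}
```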
Also make the test more robust in detecting a wrong number of entries in
the peers table. Today it may miss a wrong entry there
because the map will squash two entries for the same host id into one.
Fixes: scylladb/scylladb#18419
Fixes: scylladb/scylladb#18457
In newer Seastar, 0 is returned as the return code of the application
when handling `--help`. To prepare for this behavior, let's
accept it before updating the Seastar submodule.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#18574
Some time ago #16558 was merged, which moved the view builder drain into the generic drain. After this merge, dtests started to fail from time to time, so the PR was reverted (see #18278). In #18295 the cause of the hang was found: the view builder drain was moved from "before" stopping the messaging service to "after" it, and view update write handlers in the proxy hung for a hard-coded timeout of 5 minutes without being aborted. Tests don't wait for 5 minutes; they kill scylla, then complain about it and fail.
This PR brings back the original PR as well as the necessary fix that cancels view update write handlers on stop.
Closes scylladb/scylladb#18408
* github.com:scylladb/scylladb:
Reapply "Merge 'Drain view_builder in generic drain' from ScyllaDB"
view: Abort pending view updates when draining
The name of the Scylla table backing an Alternator LSI looks like `basename:!lsiname`. Some REST API clients (including Scylla Manager), when they send a "!" character in the REST API request path, may decide to "URL encode" it - convert it to `%21`.
Because of a Seastar bug (https://github.com/scylladb/seastar/issues/725) Scylla's REST API server forgets to do the URL decoding on the path part of the request, which leads to the REST API request failing to address the LSI table.
The first patch in this PR fixes the bug by using a new Seastar API introduced in https://github.com/scylladb/seastar/pull/2125 that does the URL decoding as appropriate. The second patch in the PR is a new test for this bug, which fails without the fix, and passes afterwards.
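A minimal percent-decoding sketch for illustration (this is not the new Seastar helper; it only shows why `%21` must become `!` before the table lookup):
```
#include <cstddef>
#include <string>

std::string percent_decode(const std::string& in) {
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '%' && i + 2 < in.size()) {
            out.push_back(static_cast<char>(std::stoi(in.substr(i + 1, 2), nullptr, 16)));
            i += 2;
        } else {
            out.push_back(in[i]);
        }
    }
    return out;
}

// percent_decode("basename:%21lsiname") == "basename:!lsiname", which is the
// Scylla table name backing the Alternator LSI.
```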
Fixes #5883.
Closes scylladb/scylladb#18286
* github.com:scylladb/scylladb:
test/alternator: test addressing LSI using REST API
REST API: stop using deprecated, buggy, path parameter
Currently, LWT is not supported with tablets.
In particular, the interaction between Paxos and tablet
migration is not handled yet.
Therefore, it is better to outright reject LWT queries
for tablets-enabled tables rather than support them
in a flaky way.
This commit also marks tests that depend on LWT
as expected to fail.
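A hedged sketch of the guard (names are hypothetical, not the actual statement-validation code):
```
#include <stdexcept>
#include <string>

struct table_metadata_sketch {
    std::string name;
    bool uses_tablets;
};

void validate_lwt(const table_metadata_sketch& table, bool has_conditions) {
    if (has_conditions && table.uses_tablets) {
        // Reject outright instead of supporting LWT on tablets in a flaky way.
        throw std::invalid_argument(
                "LWT is not yet supported with tablets: " + table.name);
    }
}
```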
Fixes scylladb/scylladb#18066
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes scylladb/scylladb#18103
This is the second half of the fix for issue #13968. The first half was already merged in PR #18346.
Scylla issues warnings for partitions containing more rows than a configured threshold. The warning is issued by inserting a row into the `system.large_partitions` table. This row contains the information about the partition for which the warning is issued: keyspace, table, sstable, partition key and size, compaction time and the number of rows in the partition. A previous PR #18346 also added range tombstone count to this row.
This change adds a new counter for dead rows to the large_partitions table.
This change also adds cluster feature protection for writing into these new counters. This is needed in case a cluster is in the process of being upgraded to this new version, after which an upgraded node writes data with the new schema into `system.large_partitions`, and finally a node is then rolled back to an old version. This node will then revert the schema to the old version, but the written sstables will still contain data with the new counters, causing any readers of this table to throw errors when they encounter these cells.
This is an enhancement, and backporting is not needed.
Fixes #13968
Closes scylladb/scylladb#18458
* github.com:scylladb/scylladb:
sstable: added test for counting dead rows
sstable: added docs for system.large_partitions.dead_rows
sstable: added cluster feature for dead rows and range tombstones
sstable: write dead_rows count to system.large_partitions
sstable: added counter for dead rows
Instead of performing a rolling restart by calling `restart` in a loop over every node in the cluster, use the dedicated
`manager.rolling_restart` function. This method waits until all other nodes see the currently processed node as up or down before proceeding to the next step. Not doing so may lead to surprising behavior.
In particular, in scylladb/scylladb#18369, a test failed shortly after restarting three nodes. Because nodes were restarted one after another too fast, when the third node was restarted it didn't send a notification to the second node because it still didn't know that the second node was alive. This led the second node to notice that the third node restarted by observing that it incremented its generation in gossip (it restarted too fast to be marked as down by the failure detector). In turn, this caused the second node to send "third node down" and "third node up" notifications to the driver in quick succession, causing it to drop and reestablish all connections to that node. However, this happened _after_ the rolling upgrade finished and _after_ the test logic confirmed that all nodes were alive. When the notifications were sent to the driver, the test was executing some statements necessary for the test to pass - when they broke, the test failed.
Fixes: scylladb/scylladb#18369
Closes scylladb/scylladb#18379
* github.com:scylladb/scylladb:
test: get rid of server-side server_restart
test: util: get rid of the `restart` helper
test: {auth,topology}: use manager.rolling_restart
The direct failure detector design is simplistic. It sends pings
sequentially and, in-between pings, times out listeners that reached the
threshold (i.e. didn't hear from a given endpoint for too long).
Given the sequential nature, the previous ping must finish so the next
ping can start. We time out pings that take too long. The timeout was
hardcoded and set to 300ms. This is too low for wide-area setups --
latencies across the Earth can indeed go up to 300ms. Three consecutive
timed-out pings to a given node were sufficient for the Raft listener to
"mark server as down" (the listener used a threshold of 1s).
Increase the ping timeout to 600ms, which should be enough even for
pinging the opposite side of the Earth, and make it tunable.
Increase the Raft listener threshold from 1s to 2s. Without the
increased threshold, one timed-out ping would be enough to mark the
server as down. Increasing it to 2s requires 3 timed-out pings, which
makes it more robust in the presence of transient network hiccups.
In the future we'll most likely want to decrease the Raft listener
threshold again, if we use Raft for the data path -- so that leader
elections start quickly after leader failures (faster than 2s). To do
that we'll have to improve the design of the direct failure detector.
Ref: scylladb/scylladb#16410
Fixes: scylladb/scylladb#16607
---
I tested the change manually using `tc qdisc ... netem delay`, setting
network delay on local setup to ~300ms with jitter. Without the change,
the result is as observed in scylladb/scylladb#16410: interleaving
```
raft_group_registry - marking Raft server ... as dead for Raft groups
raft_group_registry - marking Raft server ... as alive for Raft groups
```
happening once every few seconds. The "marking as dead" happens whenever
we get 3 consecutive failed pings, which happens with a certain (high)
probability depending on the latency jitter. Then, as soon as we get a
successful ping, we mark the server back as alive.
With the change, the phenomenon no longer appears.
Closes scylladb/scylladb#18443
Until https://github.com/scylladb/scylladb/issues/15356 is fixed, this
will be handled by explicitly closing the connection, so that if scylla fails
to update the gossiper state due to a premature abort on shutdown, we
won't be stuck in an endless reconnection attempt (later through
heartbeats (30s interval)), causing the test to time out.
Manifests in scylla logs like this:
gossip - failure_detector_loop: Got error in the loop, live_nodes={127.147.5.10, 127.147.5.16}: seastar::sleep_aborted (Sleep is aborted)
gossip - failure_detector_loop: Finished main loop
migration_manager - stopping migration service
storage_service - Shutting down native transport server
gossip - Fail to apply application_state: seastar::abort_requested_exception (abort requested)
cql_server_controller - CQL server stopped
...
gossip - My status = NORMAL
gossip - Announcing shutdown
gossip - Fail to apply application_state: seastar::abort_requested_exception (abort requested)
gossip - Sending a GossipShutdown to 127.147.5.10 with generation 1714449924
gossip - Sending a GossipShutdown to 127.147.5.16 with generation 1714449924
gossip - === Gossip round FAIL: seastar::abort_requested_exception (abort requested)
Refs #14746.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes scylladb/scylladb#18484
We won't run:
- the old pre-auth-v1 migration code
- the code creating auth-v1 tables
We will keep running:
- the code creating default rows
- the code creating the auth-v1 keyspace (needed due to a cqlsh legacy hack:
it errors when executing `list roles` or `list users` if
there is no system_auth keyspace, but it does support the case where
the expected tables are missing)
Fixes https://github.com/scylladb/scylladb/issues/17737
Closes scylladb/scylladb#17939
* github.com:scylladb/scylladb:
auth: don't run legacy migrations on auth-v2 startup
auth: fix indent in password_authenticator::start
auth: remove unused service::has_existing_legacy_users func
Restarting a node amounts to just shutting it down and then starting it
again. There is no good reason to have a dedicated endpoint in the
ScyllaClusterManager for restarting when it can be implemented by
calling two endpoints in sequence: stop and start - it's just code
duplication.
Remove the server_restart endpoint in ScyllaClusterManager and
reimplement it as two endpoint calls in the ManagerClient.
We want to clear CDC generations that are no longer needed
(because all writes are already using a new generation) so they
don't take space and are not sent during snapshot transfers
(see e.g. https://github.com/scylladb/scylladb/issues/17545).
The condition used previously was that we clear generations that
were closed (i.e., superseded by a newer generation) more than
24h ago. This is a safe choice, but too conservative: we could
easily end up with a large number of obsolete generations if we
boot multiple nodes within 24h (which is especially easy to do
with tablets).
Change this bound from 24h to `5s + ring_delay`. The choice is
explained in a comment in the code.
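A minimal sketch of the new bound (names are hypothetical; the real reasoning for the choice lives in the code comment mentioned above):
```
#include <chrono>

using time_point = std::chrono::system_clock::time_point;

// A generation may be cleared once it has been superseded for longer than
// 5s + ring_delay, instead of the previous fixed 24h.
bool can_clear_generation(time_point superseded_at,
                          std::chrono::milliseconds ring_delay,
                          time_point now) {
    return now - superseded_at > std::chrono::seconds(5) + ring_delay;
}
```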
Additionally, improve `test_raft_snapshot_request`, which would
become flaky after this change, so that it is no longer sensitive to
such changes.
The raft-based topology was experimental before 6.0, so there is no need
to backport.
Ref: scylladb/scylladb#17545
Closes scylladb/scylladb#18497
* github.com:scylladb/scylladb:
topology_coordinator: clear obsolete generations earlier
test: test_raft_snapshot_request: improve the last assertion
test: test_raft_snapshot_request: find raft leader after restart
test: test_raft_snapshot_request: simplify appended_command
During upgrade to raft topology, information about service levels is copied from the legacy tables in system_distributed to the raft-managed tables of group 0. system_distributed has RF=3, so if the cluster has only one or two nodes we should use lower consistency level than ALL - and the current procedure does exactly that, it selects QUORUM in case of two nodes and ONE in case of only one node. The cluster size is determined based on the call to _gossiper.num_endpoints().
Despite its name, gossiper::num_endpoints() does not necessarily return the number of nodes in the cluster but rather the number of endpoint states in gossiper (this behavior is documented in a comment near the declaration of this function). In some cases, e.g. after gossiper-based nodetool remove, the state might be kept for some time after removal (3 days in this case).
The consequence of this is that gossiper::num_endpoints() might return more than the current number of nodes during upgrade, and that in turn might cause the migration of data from one table to another to fail - causing the upgrade procedure to get stuck if there are only one or two nodes in the cluster.
In order to fix this, use token_metadata::get_all_endpoints() as a measure of the cluster size.
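A hedged sketch of the consistency selection (the helper name is hypothetical): the cluster size now comes from token_metadata::get_all_endpoints() rather than gossiper::num_endpoints().
```
#include <cstddef>
#include <string>

std::string consistency_for_service_level_migration(std::size_t cluster_size) {
    if (cluster_size == 1) {
        return "ONE";
    }
    if (cluster_size == 2) {
        return "QUORUM";
    }
    return "ALL"; // with three or more nodes the RF=3 replica set is complete
}
```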
Fixes: scylladb/scylladb#18198
Closes scylladb/scylladb#18261
* github.com:scylladb/scylladb:
test: topology: test that upgrade succeeds after recent removal
topology_coordinator: compute cluster size correctly during upgrade
With topology over raft, all operations are already serialized by the
coordinator anyway, so there is no need to synchronize removenode using the API lock.
All other operations are still synchronized, since they cannot be executed in
parallel for the same node anyway.
* 'gleb/17681-fix' of github.com:scylladb/scylla-dev:
storage_service: do not take API lock for removenode operation if topology coordinator is enabled
test: return file mark from wait_for that points after the found string
More than three years ago, in issue #7949, we noticed that trying to
set a `map<ascii, int>` from JSON input (i.e., using INSERT JSON or the
fromJson() function) fails - the ascii key is incorrectly parsed.
We fixed that issue in commit 75109e9519
but unfortunately, did not do our due diligence: We did not write enough
tests inspired by this bug, and failed to discover that actually we have
the same bug for many other key types, not just for "ascii". Specifically,
the following key types have exactly the same bug:
* blob
* date
* inet
* time
* timestamp
* timeuuid
* uuid
Other types, like numbers or booleans, worked "by accident" - instead of
parsing them as a normal string, we asked the JSON parser to parse them
again after removing the quotes, and because unquoted numbers and
unquoted true/false happen to work in JSON, this didn't fail.
The fix here is very simple - for all *native* types (i.e., not
collections or tuples), the encoding of the key in JSON is simply a
quoted string - so removing the quotes is all we need to do, and there's
no need to run the JSON parser a second time. Only for more elaborate
types - collections and tuples - do we need to run the JSON parser a
second time on the key string to build the more elaborate object.
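A hedged sketch of the idea (not the actual Scylla JSON code): a native key type's JSON encoding is just a quoted string, so stripping the quotes is enough before handing the text to the type's own parser.
```
#include <stdexcept>
#include <string_view>

std::string_view strip_json_quotes(std::string_view key) {
    if (key.size() < 2 || key.front() != '"' || key.back() != '"') {
        throw std::invalid_argument("map key is not a JSON string");
    }
    return key.substr(1, key.size() - 2);
}

// strip_json_quotes("\"2a\"") yields "2a", which a blob/uuid/date/time parser
// can consume directly; re-running a JSON parser on it would fail.
```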
This patch also includes tests for fromJson() reading a map with all
native key types, confirming that all the aforementioned key types
were broken before this patch, and that all key types (including the numbers
and booleans which worked even before this patch) work with this patch.
Fixes #18477.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#18482
`boost::range::random_shuffle()` uses the deprecated
`std::random_shuffle()` under the hood, so let's use
`std::ranges::shuffle()`, which has been available since C++20.
This change should address warnings like:
```
[312/753] CXX build/debug/test/boost/counter_test.o In file included from test/boost/counter_test.cc:17:
/usr/include/boost/range/algorithm/random_shuffle.hpp:106:13: warning: 'random_shuffle<__gnu_cxx::__normal_iterator<counter_shard *, std::vector<counter_shard>>>' is deprecated: use 'std::shuffle' instead [-Wdepr
ecated-declarations]
106 | detail::random_shuffle(boost::begin(rng), boost::end(rng));
| ^
test/boost/counter_test.cc:507:27: note: in instantiation of function template specialization 'boost::range::random_shuffle<std::vector<counter_shard>>' requested here
507 | boost::range::random_shuffle(shards);
| ^
/usr/lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/bits/stl_algo.h:4489:5: note: 'random_shuffle<__gnu_cxx::__normal_iterator<counter_shard *, std::vector<counter_shard>>>' has been explicitly marked
deprecated here
4489 | _GLIBCXX14_DEPRECATED_SUGGEST("std::shuffle")
| ^
/usr/lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/x86_64-redhat-linux/bits/c++config.h:1957:45: note: expanded from macro '_GLIBCXX14_DEPRECATED_SUGGEST'
1957 | # define _GLIBCXX14_DEPRECATED_SUGGEST(ALT) _GLIBCXX_DEPRECATED_SUGGEST(ALT)
| ^
/usr/lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/x86_64-redhat-linux/bits/c++config.h:1941:19: note: expanded from macro '_GLIBCXX_DEPRECATED_SUGGEST'
1941 | __attribute__ ((__deprecated__ ("use '" ALT "' instead")))
| ^
```
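A minimal sketch of the replacement: unlike the deprecated boost::range::random_shuffle, std::ranges::shuffle requires an explicit random generator.
```
#include <algorithm>
#include <random>
#include <vector>

int main() {
    std::vector<int> shards{1, 2, 3, 4, 5};
    std::mt19937 gen{std::random_device{}()};
    std::ranges::shuffle(shards, gen); // replaces boost::range::random_shuffle(shards)
}
```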
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#18517
{fmt} v10.0.0 introduces a formatter for `std::optional`, so there
is no need to test ours. Furthermore, the behavior of that formatter
is different from our homebrew one, so let's skip this test if
{fmt} v10.0.0 or later is used.
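A hedged sketch of the guard (FMT_VERSION encodes 10.0.0 as 100000; the test body is illustrative):
```
#include <fmt/core.h>

#if FMT_VERSION < 100000
// Only test the homebrew std::optional formatter when {fmt} does not
// ship its own (and differently behaving) one.
//   ... formatter test for std::optional goes here ...
#endif
```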
Refs #18508
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#18509
This series adds facilities to gently convert canonical mutations back to mutations
and to gently make canonical mutations or freeze mutations in a seastar thread.
Those are used in storage_service::merge_topology_snapshot to prevent reactor stalls
due to large mutations, as seen in the test_add_many_nodes_under_load dtest.
Also, migration_manager's migration_request handling was converted to use a seastar thread
and the above facilities to prevent reactor stalls with large schema mutations,
e.g. with a large number of tables, and/or when reading tablet mutations with
a large number of tablets in a table.
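A hedged sketch of the "gently" pattern (seastar::coroutine::maybe_yield() is the real Seastar primitive; the mutation types and conversion callback are placeholders): yield to the reactor between items so converting a large batch does not stall it.
```
#include <seastar/core/coroutine.hh>
#include <seastar/coroutine/maybe_yield.hh>
#include <vector>

template <typename CanonicalMutation, typename Mutation, typename Convert>
seastar::future<std::vector<Mutation>>
to_mutations_gently(std::vector<CanonicalMutation> in, Convert convert) {
    std::vector<Mutation> out;
    out.reserve(in.size());
    for (auto& cm : in) {
        out.push_back(convert(cm));
        co_await seastar::coroutine::maybe_yield(); // let the reactor run other tasks
    }
    co_return out;
}
```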
perf-simple-query --write results:
Before:
```
median 79151.53 tps ( 59.3 allocs/op, 16.0 logallocs/op, 14.3 tasks/op, 53289 insns/op, 0 errors)
```
After:
```
median 79716.73 tps ( 59.3 allocs/op, 16.0 logallocs/op, 14.3 tasks/op, 53314 insns/op, 0 errors)
```
Closes scylladb/scylladb#18290
* github.com:scylladb/scylladb:
storage_proxy: add mutate_locally(vector<frozen_mutation_and_schema>) method
raft: group0_state_machine: write_mutations_to_database: freeze mutations gently
database: apply_in_memory: unfreeze_gently large mutations
storage_service: get_system_mutations: make_canonical_mutation_gently
tablets: read_tablet_mutations: make_canonical_mutation_gently
schema_tables: convert_schema_to_mutations: make_canonical_mutation_gently
schema_tables: redact_columns_for_missing_features: get input mutation using rvalue reference
storage_service: merge_topology_snapshot: freeze_gently
canonical_mutation: add make_canonical_mutation_gently
frozen_mutation: move unfreeze_gently to async_utils
mutation: add freeze_gently
idl-compiler: generate async serialization functions for stub members
raft: group0_state_machine: write_mutations_to_database: use to_mutation_gently
storage_service: merge_topology_snapshot: co_await to_mutation_gently
canonical_mutation: add to_mutation_gently
idl-compiler: emit include directive in generated impl header file
mutation_partition: add apply_gently
collection_mutation: improve collection_mutation_view formatting
mutation_partition: apply_monotonically: do not support schema upgrade
test/perf: report also log_allocations/op