Compare commits


175 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
c222ea8b2f Increase precision of compression ratio from 1 to 2 decimal places
Co-authored-by: mykaul <4655593+mykaul@users.noreply.github.com>
2026-01-03 19:47:04 +00:00
copilot-swe-agent[bot]
9c66f52e46 Initial plan 2026-01-03 19:42:20 +00:00
Nadav Har'El
6c8ddfc018 test/alternator: fix typo in test_returnvalues.py
Different DynamoDB operations have different settings allowed for
their "ReturnValues" argument. In particular, some operations allow
ReturnValues=UPDATED_OLD but the DeleteItem operation *does not*.

We have a test, test_delete_item_returnvalues, aimed to verify this
but it had a typo and didn't actually check "UPDATED_OLD". This patch
fixes this typo.

The test still passes because the code itself (executor.cc,
delete_item_operation's constructor) has the correct check - it was
just the test that was wrong.
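The per-operation check the test exercises can be sketched in plain Python. This is an illustrative model only; the names and the allowed-value sets are assumptions (the real check lives in executor.cc), though the sets shown match DynamoDB's documented behavior.

```python
# Illustrative sketch of per-operation ReturnValues validation; names are
# invented, but the allowed sets follow DynamoDB's documented API.
ALLOWED_RETURN_VALUES = {
    "PutItem": {"NONE", "ALL_OLD"},
    "DeleteItem": {"NONE", "ALL_OLD"},  # UPDATED_OLD is *not* allowed here
    "UpdateItem": {"NONE", "ALL_OLD", "UPDATED_OLD", "ALL_NEW", "UPDATED_NEW"},
}

def validate_return_values(operation: str, return_values: str) -> None:
    """Raise if this operation does not permit the given ReturnValues."""
    if return_values not in ALLOWED_RETURN_VALUES[operation]:
        raise ValueError(
            f"Unexpected ReturnValues={return_values} for {operation} operation")
```

A test asserting that DeleteItem rejects UPDATED_OLD (what test_delete_item_returnvalues was meant to check) would call `validate_return_values("DeleteItem", "UPDATED_OLD")` and expect an error.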

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27918
2026-01-01 19:33:23 +02:00
Israel Fruchter
40ada3f187 Update tools/cqlsh submodule (v6.0.32)
* tools/cqlsh scylladb/scylla-cqlsh@9e5a91d7...scylladb/scylla-cqlsh@5a1d7842 (9):
  > fix wrong reference in copyutil.py
  > Add GitHub Action workflow to create releases on new tags
  > test_copyutil.py: introduce test for ImportTask
  > fix(copyutil.py): avoid situations where a file might be moved within multiple processes
  > Fix Unix socket port display in show_host() method
  > Merge pull request #157 from scylladb/alert-autofix-1
    .github/workflows/build-push.yml: Potential fix for code scanning alert no. 1: Workflow does not contain permissions
  > .github/workflows/dockerhub-description.yml: Potential fix for code scanning alert no. 9: Workflow does not contain permissions
  > test_cqlsh_output: skip some cassandra 5.0 table options
  > tests: template compression cql to use `class` instead of `sstable_comprission`
  > Pin Cassandra version to 5.0 for reproducible builds
  > Remove scylla-enterprise integration test and update Cassandra to latest

Closes scylladb/scylladb#27924
2026-01-01 19:30:34 +02:00
Łukasz Paszkowski
76b84b71d1 storage/test_out_of_space_prevention.py: Fix async/await bugs
- Add missing await keywords for async operations on s2_log.wait_for()
  and coord_log.wait_for()
- Fix incorrect regex: "compaction .* Split {cf}" → "compaction.*Split {cf}"
- The commit https://github.com/scylladb/scylladb/commit/f7324a4 demoted
  compaction start/end log messages to debug level. Hence add
  compaction=debug log messages to the following tests:
    test_split_compaction_not_triggered
    test_node_restart_while_tablet_split
    test_repair_failure_on_split_rejection

Fixes https://github.com/scylladb/scylladb/issues/27931

Closes scylladb/scylladb#27932
2026-01-01 14:24:30 +02:00
Anna Stuchlik
624869de86 doc: remove cassandra-stress from installation instructions
The cassandra-stress tool is no longer part of the default package
and cannot be run in the way described.

This commit removes the instruction to run cassandra-stress.

Fixes https://github.com/scylladb/scylladb/issues/24994

Closes scylladb/scylladb#27726
2026-01-01 14:20:58 +02:00
Jenkins Promoter
69d6e63a58 Update pgo profiles - aarch64 2026-01-01 05:10:51 +02:00
Jenkins Promoter
d6e2d3d34c Update pgo profiles - x86_64 2026-01-01 04:27:14 +02:00
Nadav Har'El
e28df9b3d0 test: fix Python warnings in regular expressions
Like C, Python supports some escape sequences in strings such as the
familiar "\n" that converts to a newline character.
Originally, when a backslash was used before an arbitrary character, for
example, "\.", Python would simply keep the literal characters, backslash
and dot, in the string - and not make a fuss about it.
This made it ok to use a string like "hi\.there" as a regular expression.
We have a few instances of this in our Python tests.

But recent releases of Python started to produce ugly warnings about
these cases. The warning looks like:

    SyntaxWarning: "\." is an invalid escape sequence. Such sequences
    will not work in the future. Did you mean "\\."? A raw string is
    also an option.

Indeed in most cases the easiest solution is to use a "raw string",
a string literal preceded with r. For example, r"hi\.there". In such
strings Python doesn't replace escape sequences like \n in the string,
and also leaves the \. unchanged for the regular expression to see.

So in this patch we use raw strings in all the places in test/ where
Python warns about this problem.
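The fix described above, shown on a made-up pattern (the actual patterns fixed in test/ differ):

```python
import re

# A non-raw "hi\.there" literal triggers SyntaxWarning on recent Pythons,
# because \. is not a valid escape sequence. The raw-string form below is
# byte-for-byte equivalent and warning-free: the backslash reaches the
# regex engine intact, so the dot is matched literally.
pattern = r"hi\.there"
assert re.fullmatch(pattern, "hi.there")
assert not re.fullmatch(pattern, "hiXthere")  # escaped dot is literal, not a wildcard
```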

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27856
2025-12-31 20:44:01 +02:00
Yaniv Michael Kaul
597d300527 main.cc: remove warning: 'metric_help' is deprecated
Signed-off-by: Yaniv Kaul <yaniv.kaul@scylladb.com>

Backport: no, benign issue.

Closes scylladb/scylladb#27680
2025-12-31 18:36:55 +02:00
Avi Kivity
b690ddb9e5 tools: toolchain: dbuild: bind-mount full ~/.cache to container
In afb96b6387, we added support for sccache. As a side effect
it changed the method of invoking ccache from transparent via PATH
(if it contains /usr/lib64/ccache) to explicit, by changing the compiler
command line from 'clang++' (which may or may not resolve to the ccache
binary) to 'ccache /usr/local/bin/clang++', which always invokes ccache.

In the default dbuild configuration, PATH does not contain /usr/lib64/ccache,
so ccache isn't invoked by default. Users can change this via the
SCYLLADB_DBUILD environment variable.

As a result of ccache being suddenly enabled for dbuild builds, ccache
will now attempt to create ~/.cache/ccache. Under docker, this does
not work, because we bind-mount ~/.cache/dbuild. Docker will create the
intermediate ~/.cache, but under the root user, not $USER. The intermediate
directory being root-owned prevents ~/.cache/ccache from being created.

Under podman, this does work, because everything runs under the container's
root user.

The fix is to bind-mount the entire ~/.cache into the container. This
not only lets ccache create the directory, it will also find an existing
~/.cache/ccache directory and use it, enabling reuse across invocations.

Since ccache will now respect configuration changes without access to
its configuration file (notably, the maximum cache size), we also
bind-mount ~/.config.

Since ~/.cache and ~/.config are not automatically created, we create
them explicitly so the bind mounts can work. This is for new nodes enlisted
from the cloud; developer machines will have those directories preexisting.

Note that the ccache directory used to be ~/.ccache, but was later changed.
Had the author known, we would have bind-mounted ~/.cache much earlier.

Fixes #27919.

Closes scylladb/scylladb#27920
2025-12-31 14:08:41 +01:00
Asias He
3abda7d15e topology_coordinator: Ensure repair_update_compaction_ctrl is executed
Consider this:

- n1 is a coordinator and schedules tablet repair
- n1 detects tablet repair failed, so it schedules tablet transition to end_repair state
- n1 loses leadership and n2 becomes the new topology coordinator
- n2 runs end_repair on the tablet with session_id=00000000-0000-0000-0000-000000000000
- when a new tablet repair is scheduled, it hangs, since the lock is still taken because it was not removed in the previous step

To fix, we use the global_tablet_id to index the lock instead of the
session id.

In addition, we retry the repair_update_compaction_ctrl verb in case of
error to ensure the verb is eventually executed. The verb handler is
also updated to check if it is still in end_repair stage.
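The lock-keying change can be modeled with a toy dictionary (all names here are invented for illustration; the real locks live in the topology coordinator):

```python
# Toy model of the bug and the fix: if repair locks are keyed by session id,
# an end_repair that runs with a null session id releases nothing, and the
# next repair of the same tablet hangs. Keying by global_tablet_id makes
# releasing the lock independent of the session id.
NULL_SESSION = "00000000-0000-0000-0000-000000000000"

locks_by_session = {}   # buggy scheme: key = session id
locks_by_tablet = {}    # fixed scheme: key = global tablet id

def start_repair(tablet_id, session_id):
    locks_by_session[session_id] = tablet_id
    locks_by_tablet[tablet_id] = session_id

def end_repair_buggy(session_id):
    locks_by_session.pop(session_id, None)   # null session id: silent no-op

def end_repair_fixed(tablet_id):
    locks_by_tablet.pop(tablet_id, None)     # always finds the lock

start_repair("tablet-1", "session-A")
end_repair_buggy(NULL_SESSION)               # n2 ends repair with a null session
end_repair_fixed("tablet-1")

assert "session-A" in locks_by_session       # buggy scheme: lock leaked
assert "tablet-1" not in locks_by_tablet     # fixed scheme: lock released
```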

Fixes #26346

Closes scylladb/scylladb#27740
2025-12-31 13:17:18 +01:00
Benny Halevy
3e9b071838 Update seastar submodule
* seastar f0298e40...4dcd4df5 (29):
  > file: provide a default implementation for file_impl::statat
  > util: Generalize memory_data_sink
  > defer: Replace static_assert() with concept
  > treewide: drop the support of fmtlib < 9.0.0
  > test: Improve resilience of nested scheduling fairness test
  > Merge 'file: Use query_device_alignment_info in blkdev_alignments ' from Kefu Chai
    file: Put alignment helpers in anonymous namespace
    file: Use query_device_alignment_info in blkdev_alignments
  > Merge 'file: Query physical block size and minimum I/O size' from Kefu Chai
    file: Apply physical_block_size override to filesystem files
    file: Use designated initializers in xfs_alignments
    iotune: Add physical block size detection
    disk_params: Add support for physical_block_size overrides from io_properties.yaml
    block_device: Query alignment requirements separately for memory and I/O
  > Merge 'json: formatter: fix formatting of std::string_view' from Benny Halevy
    json: formatter: fix formatting of std::string_view
    json: formatter: make sure std::string_view conforms to is_string_like

Fixes #27887

  > demos: improve the output of demo_with_io_intent() in file_demo
  > test: Add accept() vs accept_abort() socket test
  > file: Refine posix_file_impl alignments initialization
  > Add file::statat and a corresponding file_stat overload
  > cmake: don't compile memcached app for API < 9
  > Merge 'Revert to ~old lifetime semantics for lvalues passed to then()-alikes' from Travis Downs
    future: adjust lifetime for lvalue continuations
    future: fix value class operator()
  > pollable_fd: Unfriend everything
  > Merge 'file: experimental_list_directory: use buffered generator' from Benny Halevy
    file: experimental_list_directory: use buffered generator
    file: define list_directory_generator_type
  > Merge 'Make datagram API use temporary_buffer<>-s' from Pavel Emelyanov
    net: Deprecate datagram::get_data() returning packet
    memcache: Fix indentation after previous patch
    memcache: Use new datagram::get_buffers() API
    dns: Use new datagram::get_buffers() API
    tests: Use new datagram::get_buffers() API
    demo: Use new datagram::get_buffers() API
    udp: Make datagram implementations return span of temporary_buffer-s
  > Merge 'Remove callback from timer_set::complete()' from Pavel Emelyanov
    reactor: Fix indentation after previous patch
    timers: Remove enabling callback from timer_set::complete()
  > treewide: avoid 'static sstring' in favor of 'constexpr string_view'
  > resource: Hide hwloc from public interface
  > Merge 'Fix handle_exception_type for lvalues' from Travis Downs
    futures_test: compile-time tests
    function_traits: handle reference_wrapper
  > posix_data_sink_impl: Assert to guard put UB
  > treewide: fix build with `SEASTAR_SSTRING` undefined
  > avoid deprecation warnings for json_exception
  > `util/variant_utils`: correct type deduction for `seastar::visit`
  > net/dns: fixed socket concurrent access
  > treewide: add missing headers
  > Merge 'Remove posix file helper file_read_state class' from Pavel Emelyanov
    file: Remove file_read_state
    test: Add a test for posix_file_impl::do_dma_read_bulk()
  > membarrier: simplify locking

Adjust scylla to the following changes in seastar:
- file_stat became polymorphic
  - needs explicit inference in table::snapshot_exists, table::get_snapshot_details
- file::experimental_list_directory now returns list_directory_generator_type

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes scylladb/scylladb#27916
2025-12-30 19:37:13 +03:00
Yaniv Kaul
0264ec3c1d test: test_downgrade_after_partial_upgrade: check that feature is disabled on all nodes after partial upgrade
We should check that the test feature is disabled on all nodes after a partial
upgrade. This hardens the test a bit, although the old code wasn't that bad,
since enabled features are a part of the group 0 state shared by all nodes.

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

Closes scylladb/scylladb#27654
2025-12-30 17:34:56 +01:00
Nadav Har'El
ffcce1ffc8 test/boost: fix flaky test node_view_update_backlog
The boost test view_schema_test.cc::node_view_update_backlog can be
flaky if the test machine has a hiccup of 100ms, and this patch fixes
it:

The test is a unit test for db::view::node_update_backlog, which is
supposed to cache the backlog calculation for a given interval. The
test asks to cache the backlog for 100ms, and then without sleeping
at all tries to fetch a value again and expect the unchanged cached
value to be returned. However, if the test run experiences a context
switch of 100ms, it can fail, and it did once as reported in #27876.

The fix is to change the interval in this test from 100ms to something
much larger, like 10 seconds. We don't sleep this amount - we just need
the second fetch to happen *before* 10 seconds has passed, so there's
no harm in using a very large interval.

However, the second half of this test wants to check that after the
interval is over, we do get a new backlog calculation. So for the
second half of this test we can and should use a shorter interval -
e.g., 10ms. We don't care if the test machine is slow or context switched;
for this half of the test we want to sleep *more* than 10ms, and
that's easy.

The fixed test is faster than the old one (10ms instead of 100ms) and
more reliable on a shared test machine.
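The timing argument above can be modeled with a tiny interval cache (a deliberate simplification; db::view::node_update_backlog's real interface differs):

```python
import time

class IntervalCache:
    """Recompute a value at most once per `interval` seconds (toy model)."""
    def __init__(self, interval, compute):
        self.interval = interval
        self.compute = compute
        self.value = None
        self.stamp = float("-inf")

    def fetch(self):
        now = time.monotonic()
        if now - self.stamp >= self.interval:
            self.value = self.compute()
            self.stamp = now
        return self.value

counter = iter(range(1000))

# Part one: a large interval makes "value unchanged" robust, because the
# second fetch only has to happen *before* 10 seconds pass.
cache = IntervalCache(10.0, lambda: next(counter))
assert cache.fetch() == cache.fetch()

# Part two: a short interval, and we sleep *more* than it - easy to guarantee.
cache2 = IntervalCache(0.01, lambda: next(counter))
first = cache2.fetch()
time.sleep(0.02)
assert cache2.fetch() != first    # recomputed after the interval elapsed
```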

Fixes #27876.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27878
2025-12-30 10:10:42 +01:00
Benny Halevy
c9eab7fbd4 test: test_refresh: add test_refresh_deletes_uploaded_sstables
The refresh api is expected to automatically delete
the sstable files from the uploads/ dir.  Verify that.

The code that does that is currently called by
sstables_loader::load_new_sstables:
```c++
        if (load_and_stream) {
...
                co_await loader.load_and_stream(ks_name, cf_name, table_id, std::move(sstables_on_shards[this_shard_id()]), primary_replica_only(primary), true /* unlink */, scope, {});
```

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes scylladb/scylladb#27586
2025-12-30 10:51:24 +03:00
Nadav Har'El
80e5860a8c docs/alternator: document that Streams needs vnodes
The current state (after PR #26836) is that Alternator tables are
created by default using tablets. But due to issue #23838, Alternator
Streams cannot be enabled on a table that uses tablets... An attempt to
enable Streams on such a table results in a clear error:

    "Streams not yet supported on a table using tablets (issue #23838).
    If you want to use streams, create a table with vnodes by setting
    the tag 'system:initial_tablets' set to 'none'."

But users should be able to learn this fact from the documentation -
not just retroactively from an error message. This is especially important
because a user might create and fill a table using tablets, and only get
this error when attempting to enable Streams on the existing table -
when it is too late to change anything.

So this patch adds a paragraph on this to compatibility.md, where
several other requirements of Alternator Streams are already mentioned.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27000
2025-12-30 10:45:34 +03:00
Avi Kivity
853f3dadda Merge 'treewide: fix some spelling errors' from Piotr Smaron
Irritated by prevailing spellchecker comments attached to every PR, I aim to fix them all.

No need to backport, just cosmetic changes.

Closes scylladb/scylladb#27897

* github.com:scylladb/scylladb:
  treewide: fix some spelling errors
  codespell: ignore `iif` and `tread`
2025-12-29 20:45:31 +02:00
Patryk Jędrzejczak
4e63e74438 messaging: improve the error messages of closed_errors
The default error message of `closed_error` is "connection is closed".
It lacks the host ID and the IP address of the connected node, which
makes debugging harder. Also, it can be more specific when
`closed_error` is thrown because the local node is shutting down.
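The shape of the improved message can be sketched as follows; the exact wording and formatting in messaging_service are assumptions here, not the actual strings:

```python
# Hypothetical sketch of an error message carrying the host ID and IP of
# the peer, plus a more specific variant for local shutdown.
def closed_error_message(host_id: str, ip: str, shutting_down: bool = False) -> str:
    if shutting_down:
        return f"connection to {host_id} ({ip}) is closed: local node is shutting down"
    return f"connection to {host_id} ({ip}) is closed"

msg = closed_error_message("9f3c", "10.0.0.5")
assert "10.0.0.5" in msg and "9f3c" in msg
```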

Fixes #16923

Closes scylladb/scylladb#27699
2025-12-29 18:36:07 +02:00
Avi Kivity
567c28dd0d Merge 'Decouple sstables::storage::snapshot() and ::clone() functionality' from Pavel Emelyanov
The storage::snapshot() is used in two different modes -- one to save an sstable as a snapshot somewhere, and another to create a copy of an sstable. The latter use-case is "optimized" by snapshotting an sstable under a new generation, but that only holds for local storage. Although snapshot is not implemented for S3 storage, _cloning_ an sstable stored on S3 is not necessarily the same as taking a snapshot.

Another sign of snapshot and clone being different is that calling snapshot() for a snapshot proper and for a clone uses two very different sets of arguments -- snapshotting specifies a relative name and omits the new generation, while cloning doesn't need a "name" and instead provides a generation. Recently (#26528) cloning got an extra "leave_unsealed" tag, which makes no sense for snapshotting.

Having said that, this PR introduces the sstables::storage::clone() method and modifies both callers and implementations according to the above features of each. As a result, the code logic in both methods becomes much simpler, and a bunch of bool classes and "_tag" helper structures goes away.

Improving internal APIs, no need to backport

Closes scylladb/scylladb#27871

* github.com:scylladb/scylladb:
  sstables, storage: Drop unused bool classes and tags
  sstables/storage: Drop create_links_common() overloads
  sstable: Simplify storage::snapshot()
  sstables: Introduce storage::clone()
2025-12-29 17:50:54 +02:00
Avi Kivity
9927c6a3d4 Merge 'Reapply "audit: enable some subset of auditing by default"' from Piotr Smaron
This reverts commit a5edbc7d612df237a1dd9d46fd5cecf251ccfd13.

<h3>Why re-enabling table audit</h3>

Audit has been disabled (scylladb/scylla-enterprise/pull/3094) over many concerns raised against the table implementation, e.g. scylladb/scylla-enterprise/issues/2939 / scylladb/scylla-enterprise/issues/2759, plus a whole outstanding backlog of issues. One of the concerns was also a possible loss of availability; since then we migrated the audit keyspace from SimpleStrategy RF=1 to NetworkTopologyStrategy RF=3 (scylladb/scylla-enterprise/pull/3399) and stopped failing queries when auditing fails (scylladb/scylla-enterprise/pull/3118 & scylladb/scylla-enterprise/pull/3117), which improves the situation but doesn't address all the concerns. Eventually we want to use syslog as audit's sink, but it's not fully ready just yet, so we'll restore table audit for now to increase security, and later switch to syslog. BTW, cloud will enable table audit for the AUTH category (scylladb/sre-ops-automation/issues/2970) separately from this effort.

<h3>Performance considerations</h3>

We are assuming that the events for the enabled categories, i.e. DCL, DDL, AUTH & ADMIN, should appear at about the same, low cadence, with AUTH perhaps having the biggest impact of them all under some workloads. The performance penalty of enabling just the AUTH category [has been measured](https://scylladb.atlassian.net/wiki/spaces/RND/pages/148308005/Audit+performance+impact+test): while authentication throughput and read/write throughput remain stable, the queries' P99 latency may increase by a couple of % in the most demanding scenarios.

Fixes: https://github.com/scylladb/scylladb/issues/26020

Gradually re-enabling audit feature, no need to backport.

Closes scylladb/scylladb#27262

* github.com:scylladb/scylladb:
  doc: audit: set audit as enabled by default
  Reapply "audit: enable some subset of auditing by default"
2025-12-29 16:41:04 +02:00
Tomasz Grabiec
bbf9ce18ef Merge 'load_balancer: compute node load based on tablet sizes' from Ferenc Szili
Currently, the tablet load balancer performs capacity based balancing by collecting the gross disk capacity of the nodes, and computes balance assuming that all tablet sizes are the same.

This change introduces size-based load balancing. The load balancer does not assume identical tablet sizes any more, and computes load based on actual tablet sizes.

The size-based load balancer computes the difference between the most and least loaded nodes in the balancing set (nodes in a DC, or nodes in a rack in the case of `rf-rack-valid-keyspaces`) and stops further balancing if this difference is below the config option `size_based_balance_threshold_percentage`.

This config option does not apply to the absolute load, but to the relative difference: the percentage by which the most loaded node is more loaded than the least loaded node:

`delta = (most_loaded - least_loaded) / most_loaded`

If this delta is smaller than the config threshold, the balancer will consider the nodes balanced.
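The threshold check described above amounts to the following (a sketch; the function name is invented, and the real implementation works on load_sketch data rather than a plain list):

```python
def nodes_balanced(loads, threshold_pct):
    """Return True when the most and least loaded nodes differ by less than
    threshold_pct percent of the most loaded node's load."""
    most, least = max(loads), min(loads)
    if most == 0:
        return True                       # nothing stored anywhere
    delta = (most - least) / most         # relative spread, per the formula above
    return delta < threshold_pct / 100.0

assert nodes_balanced([100, 98, 97], threshold_pct=5)   # 3% spread: stop balancing
assert not nodes_balanced([100, 50], threshold_pct=5)   # 50% spread: keep going
```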

This PR is a part of a series of PRs which are based on top of each other.

- First part for tablet size collection via load_stats: #26035
- Second part reconcile load_stats: #26152
- The third part for load_sketch changes: #26153
- The fourth part which performs tablet load balancing based on tablet size: #26254
- The fifth part changes the load balancing simulator: #26438

This is a new feature, backport is not needed.

Fixes #26254

Closes scylladb/scylladb#26254

* github.com:scylladb/scylladb:
  test, load balancing: add test for table balance
  load_balancer: add cluster feature for size based balancing
  load_balancer: implement size-based load balancing
  config: add size based load balancing config params
  load_stats: use trinfo to decide how to reconcile tablet size
  load_sketch: use tablet sizes in load computation
  load_stats: add get_tablet_size_in_transition()
2025-12-29 15:01:38 +01:00
Pavel Emelyanov
d892140655 Merge 'Reduce allocations when traversing compaction_groups' from Benny Halevy
- table, storage_group: add compaction_group_count
  - And use it to reserve vector capacity before adding an item per compaction_group
- table: reduce allocations by using for_each_compaction_group rather than compaction_groups()
  - compaction_groups() may allocate memory, but when called from a synchronous call site, the caller can use for_each_compaction_group instead.

* Improvement, no backport needed

Closes scylladb/scylladb#27479

* github.com:scylladb/scylladb:
  table: reduce allocations by using for_each_compaction_group rather than compaction_groups()
  replica: storage_group: rename compaction_groups to compaction_groups_immediate
2025-12-29 16:26:33 +03:00
Gleb Natapov
4a5292e815 raft topology: Notify that a node was removed only once
Raft topology goes over all nodes in the 'left' state and triggers a
'remove node' notification when the id/ip mapping is available (meaning
the node left recently). The problem is that, since the mapping is not
removed immediately, when multiple nodes are removed in succession a
notification for the same node can be sent several times. Fix that by
sending the notification only if the node still exists in the peers
table. It will be removed by the first notification, so subsequent
notifications will not be sent.
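The dedup logic amounts to a presence check against the peers table before notifying (a toy model; names are invented for illustration):

```python
# Toy model of the fix: only notify for nodes still present in the peers
# table. The first notification removes the peer, so re-scans of the same
# 'left' node become no-ops instead of duplicate notifications.
peers = {"node-a", "node-b"}
notifications = []

def notify_removed(node):
    if node not in peers:        # already notified and removed from peers
        return
    notifications.append(node)
    peers.discard(node)          # the notification removes the mapping

for _ in range(3):               # the topology fiber may re-scan 'left' nodes
    notify_removed("node-a")

assert notifications == ["node-a"]   # exactly one notification was sent
```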

Closes scylladb/scylladb#27743
2025-12-29 14:22:34 +01:00
Piotr Smaron
fb4d89f789 treewide: fix some spelling errors 2025-12-29 13:53:56 +01:00
Piotr Smaron
ba5c70d5ab codespell: ignore iif and tread
These are correct:
- iif is a boost header name
- `tread carefully` is an actual English phrase
2025-12-29 13:53:56 +01:00
Nadav Har'El
8df9cfcde8 Merge 'Add table size bytes to describe table' from Radosław Cybulski
Add table size to DescribeTable's reply in Alternator

Fills DescribeTable's reply with missing field TableSizeBytes.

- add a helper class simple_value_with_expiry, which is like std::optional
  but the stored value has a timeout.
- add ignore_errors to the estimate_total_sstable_volume function - if set
  to true, the function will catch errors during RPC and ignore them,
  substituting 0 for the missing value.
- add a reference to storage_service to executor class (needed to call
  estimate_total_sstable_volume function).
- add fill_table_description and create_table_on_shard0 as non-static
  methods to the executor class
- calculate TableSizeBytes value for a given table and return it as
  part of DescribeTable's return value. The value calculated is cached for
  approximately 6 hours (as per DescribeTable's specification).
  The algorithm is as follows:
  - if the requested value is in cache and is still valid it's returned,
    nothing else happens.
  - otherwise:
    - every shard of every node is requested to calculate the size of its data
    - if an error happens, it is ignored and we assume the given
      shard has a size of 0
    - all such values are summed, producing the total size
    - the produced value is returned to the caller
    - on the node where the size calculation happened, every shard is
      requested to cache the produced value with a 6-hour timeout.
    - if the next call comes for a different shard on the same node that
      doesn't yet have a cached value, the shard will request the value to
      be calculated again. The new value will overwrite the old one on
      every shard on this node.
    - if the next call comes to a different node, the calculation will
      happen from scratch, possibly producing a different value. The value
      will have its own timeout; there's no attempt made to synchronize
      the value between nodes.
- add an alternator_describe_table_info_timeout_in_seconds parameter, which
  controls how long DescribeTable's table information is held
  in the cache. Default is 6 hours.
- update the test to use the parameter
  `alternator_describe_table_info_timeout_in_seconds` - setting it to 0
  and forcing memtables to be flushed to disk allows checking that the
  table size has grown.

Fixes #7551

Closes scylladb/scylladb#24634

* github.com:scylladb/scylladb:
  alternator: fix invalid rebase
  Update tests
  Update documentation
  Add table size to DescribeTable's output
  Promote fill_table_description and create_table_on_shard0 to methods
  Modify estimate_total_sstable_volume to opt ignore errors
  Add alternator_describe_table_info_cache_validity_in_seconds config option
  Add ref to service::storage_service to executor
  Add simple_value_with_expiry util class
2025-12-29 14:47:36 +02:00
Benny Halevy
f60033db63 db: system_keyspace: get_group0_history: unfreeze_gently
Prevent a stall when the group0 history is too long by using
unfreeze_gently rather than the synchronous unfreeze() function.

Fixes #27872

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes scylladb/scylladb#27873
2025-12-29 12:00:54 +02:00
Radosław Cybulski
df20f178aa alternator: fix invalid rebase
Fix an invalid rebase that correctly merged code coming from master,
except that the merged code ignored the refactoring done in this patch.
2025-12-29 08:33:10 +01:00
Radosław Cybulski
a31c8762ca Update tests 2025-12-29 08:33:09 +01:00
Radosław Cybulski
5e1254eef0 Update documentation 2025-12-29 08:33:08 +01:00
Radosław Cybulski
a86b782d3f Add table size to DescribeTable's output
Add a table size to DescribeTable's output.
2025-12-29 08:33:07 +01:00
Radosław Cybulski
1bd855a650 Promote fill_table_description and create_table_on_shard0 to methods
Promote `executor::fill_table_description` and
`executor::create_table_on_shard0` to methods (from static functions).
2025-12-29 08:33:06 +01:00
Radosław Cybulski
6a26381f4f Modify estimate_total_sstable_volume to opt ignore errors
Modify the `storage_service::estimate_total_sstable_volume` function to
optionally ignore errors (substituting 0 instead) when the `ignore_errors`
parameter is set to `yes`.
2025-12-29 08:33:06 +01:00
Radosław Cybulski
a532fc73bc Add alternator_describe_table_info_cache_validity_in_seconds config option
Add an `alternator_describe_table_info_cache_validity_in_seconds`
configuration option with default value of 6 hours.
2025-12-29 08:33:05 +01:00
Radosław Cybulski
e246abec4d Add ref to service::storage_service to executor
Add a reference to `service::storage_service` to executor object.
2025-12-29 08:33:03 +01:00
Radosław Cybulski
dfa600fb8f Add simple_value_with_expiry util class
Add a `simple_value_with_expiry` utility class, which functions like
a `std::optional` with an added timeout. When emplacing a value, the user
needs to provide a timeout, after which the value expires (in which case
the `simple_value_with_expiry` object behaves as if it was never set
at all).
Add boost tests for the new class.
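The described semantics, sketched in Python (the class name mirrors the commit; the interface here is an assumption, not the actual C++ API):

```python
import time

class SimpleValueWithExpiry:
    """Sketch of the described utility: an optional whose stored value
    expires after a caller-provided timeout."""
    def __init__(self):
        self._value = None
        self._expires_at = None

    def emplace(self, value, timeout_seconds):
        self._value = value
        self._expires_at = time.monotonic() + timeout_seconds

    def get(self):
        """Return the value, or None if never set or already expired."""
        if self._expires_at is None or time.monotonic() >= self._expires_at:
            return None                  # behaves as if it was never set
        return self._value

v = SimpleValueWithExpiry()
assert v.get() is None                   # empty, like a default optional
v.emplace(42, 0.05)
assert v.get() == 42                     # still valid
time.sleep(0.06)
assert v.get() is None                   # expired: as if never set at all
```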
2025-12-29 08:32:52 +01:00
Pavel Emelyanov
2e33234e91 util: Remove lister::rmdir()
There's a seastar helper that does the same; no need to carry yet another
implementation.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#27851
2025-12-28 19:46:19 +02:00
Avi Kivity
63e3a22f2e Merge 'group0_state_machine: don't update in-memory state machine until start' from Piotr Dulikowski
Group0 commands consist of one or more mutations and are supposed to be
atomic - i.e. the data structures that reflect the group0 tables state
are not supposed to be updated while only some mutations of a command
are applied, the logic responsible for that is not supposed to observe
an inconsistent state of group0 tables.

It turns out that this assumption can be broken if a node crashes in the
middle of applying a multi-mutation group0 command. Because these
mutations are, in general, applied separately, only some mutations might
survive a crash and a restart, so the group0 tables might be in an
inconsistent state. The current logic of group0_state_machine will
attempt to read the group0 tables' state as it was left after restart,
so it may observe inconsistent state.

This can confuse the node as it may observe a state that it was not
supposed to observe, or the state will just outright break some
invariants and trigger some sanity checks. One of those was observed in
https://github.com/scylladb/scylladb/issues/26945, where a command from the CDC generation
publisher fiber was partially applied. The fiber, in addition to
publishing generations, removes old, expired generations as well.
Removal is done by removing data that describes the generation from
cdc_generations_v3 and by removing the generation's ID from the
committed generation list in the topology table. If only the first
mutation gets through but not the other one, on reload the node will see
a committed CDC generation without data, which will trigger an
on_internal_error check.

Fix this by delaying the moment when the in memory data structures are
first loaded. In 579dcf187a, a mechanism was introduced which persists the
commit index before applying commands that are considered committed.
Starting a raft server waits until commands are replayed up to that
point. The fix is to start the group0_state_machine in a mode which only
applies mutations - the aforementioned mechanism will re-apply the
commands which will, thanks to the mutation idempotency, bring the
group0 to a consistent state. After the group0 is known to be in
consistent state (so, after raft::server_impl::start) the in-memory data
structures of group0 are loaded for the first time.

There is an exception, however: schema tables. Information about schema
is actually loaded into memory earlier than the moment when group0 is
started. Applying changes to schema is done through the migration
manager module which compares the persisted state before and after the
schema mutations are applied and acts on that. Refactoring migration
manager is out of scope of this PR. However, this is not a problem
because the migration manager takes care to apply all of the mutations
given in a command in a single commitlog segment, so the initial schema
loading code should not see an inconsistent state due to the state being
partially applied.

The fix is accompanied by a reproducer of scylladb/scylladb#26945.

Fixes: scylladb/scylladb#26945

This is not a regression, so no need to backport.

Closes scylladb/scylladb#27528

* github.com:scylladb/scylladb:
  test: cluster: test for recovery after partial group0 command
  group0_state_machine: remove obsolete comment about group0 consistency
  group0_state_machine: don't update in-memory state machine until start
  group0_state_machine: move reloading out of std::visit
  service: raft: add state machine ref to raft_server_for_group
2025-12-28 13:59:26 +02:00
Pavel Emelyanov
e963a8d603 checked-file: Implement experimental_list_directory()
The method in question returns a coroutine generator that co_yields
directory_entry objects. If a file implementation does not provide the
method, seastar creates a fallback generator that calls the existing
subscription-based list_directory() and co_yields the entries it
produces. Since the checked file doesn't yet implement the method, the
fallback generator is used, bypassing the lower file's own yielding
lister. Not nice.

This patch implements the generator lister for checked file, thus making
full use of lower file generator lister too.

A side note. It's not enough to implement it like

    return do_io_check([] {
        return lower_file->experimental_list_directory();
    });

the way list_directory() does, since the io-check would then _not_ cover
the directory reading itself, as it should.

This is a pre-existing problem of the check_file::list_directory()
implementation -- it only checks for exceptions when creating the
subscription (which in practice never throws), while reading the
directory itself happens without io checks.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#27850
2025-12-28 13:37:44 +02:00
Yaron Kaikov
1ee89c9682 Revert "scripts: benign fixes flagged by CodeQL/PyLens"
This reverts commit 377c3ac072.

This breaks all artifact tests and cloud image build process

Closes scylladb/scylladb#27881
2025-12-28 09:49:49 +02:00
Ferenc Szili
6d3c720a08 test, load balancing: add test for table balance
This change adds a boost test which validates the resulting table
balance of size based load balancing. The threshold was set to a
conservative 1.5 overcommit to avoid flakiness.
2025-12-27 11:39:08 +01:00
Ferenc Szili
b7ebd73e53 load_balancer: add cluster feature for size based balancing
This patch adds a cluster feature size_based_load_balancing which, until
enabled, will force capacity based balancing. This is needed because
during rolling upgrades some of the nodes will have incomplete data in
load_stats (missing tablet sizes and effective_capacity) which are
needed for size based balancing to make good decisions and issue correct
migrations.
2025-12-27 11:39:08 +01:00
Ferenc Szili
10eb364821 load_balancer: implement size-based load balancing
This change introduces tablet size based load balancing. It is an
extension of capacity based balancing with the addition of actual tablet
sizes.

It computes the difference between the most and least loaded nodes in
the DC and stops further balancing if this difference is below the
config option size_based_balance_threshold_percentage.

This config option does not apply to the absolute load, but instead to
the percentage of how much the most loaded node is more loaded than the
least loaded node:

delta = (most_loaded - least_loaded) / most_loaded

If this delta is smaller than the config threshold, the balancer will
consider the nodes balanced.
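
The threshold check can be sketched directly from the formula above. This is an illustrative Python sketch, not the actual load_balancer code; the function name and the percentage convention for the config option are assumptions:

```python
def nodes_balanced(loads, threshold_pct):
    """Return True if the relative load spread is within the threshold.

    loads: per-node load (e.g. total tablet bytes in the DC);
    threshold_pct: stands in for the
    size_based_balance_threshold_percentage config option.
    Illustrative sketch, not the actual Scylla load_balancer code.
    """
    most, least = max(loads), min(loads)
    if most == 0:
        return True  # nothing stored anywhere: trivially balanced
    delta = (most - least) / most
    return delta < threshold_pct / 100.0
```

For loads of 100, 95 and 90 units with a 15% threshold, delta is (100 - 90) / 100 = 0.10, below 0.15, so no further migrations are issued.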
2025-12-27 11:20:20 +01:00
Ferenc Szili
cc9e125f12 config: add size based load balancing config params
This change adds:

- The config parameter force_capacity_based_balancing which, when
  enabled, performs capacity based balancing instead of size based
  balancing.
- The config parameter size_based_balance_threshold_percentage which
  sets the balance threshold for the size based load balancer.
- The config parameter minimal_tablet_size_for_balancing which sets the
  minimal tablet size for the load balancer.
2025-12-27 10:37:38 +01:00
Ferenc Szili
0c9b93905e load_stats: use trinfo to decide how to reconcile tablet size
This patch corrects the way update_load_stats_on_end_migration() decides
which tablet transition occurred, in order to reconcile tablet sizes in
load_stats. Before, the transition kind was inferred from the value of
leaving and pending replicas. This patch changes this to use the value
of trinfo.transition.

In case of a rebuild, and in case there is only one replica, the new
tablet size will be set to 0.
2025-12-27 10:37:38 +01:00
Ferenc Szili
621cb19045 load_sketch: use tablet sizes in load computation
This commit changes load_sketch so that it computes node and shard load
based on tablet sizes instead of tablet count.
2025-12-27 10:37:23 +01:00
Ferenc Szili
1c9ec9a76d load_stats: add get_tablet_size_in_transition()
This patch adds a method to load_stats which searches for the tablet
size during a tablet transition. In case of tablet migration, the tablet
will be searched for on the leaving replica, and during rebuild we will
return the average tablet size of the pending replicas.
2025-12-27 10:37:23 +01:00
Pavel Emelyanov
bda1709734 Merge 'test: fix infinite loop in python log browsing code triggered from test_orphaned_sstables_on_startup' from Avi Kivity
Recently, test/cluster/test_tablet.py::test_orphaned_sstables_on_startup started
spinning in the log browsing code, part of the test library that looks
into log files for expected or unexpected patterns. This reproduced
somewhat in continuous integration, and very reliably for me locally.

The test was introduced in fa10b0b390, a year ago.

There are two bugs involved: first, that we're looking for crashes in this test,
since in fact it is expected to crash. The node expectedly fails with an
on_internal_error. Second, the log browsing code contains an infinite loop
if the crash backtrace happens to be the last thing in the log. The series
fixes both bugs.

Fixes #27860.

While the bad code exists in release branches, it hasn't triggered there
so far, so it's best to backport only if it starts manifesting there.

Closes scylladb/scylladb#27879

* github.com:scylladb/scylladb:
  test: pylib: log_browsing: fix infinite loop in find_backtraces()
  test: pylib/log_browsing, cluster/test_tablets: don't look for expected crashes
2025-12-26 10:45:56 +03:00
Pavel Emelyanov
712cc8b8f1 sstables, storage: Drop unused bool classes and tags
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-26 09:47:27 +03:00
Pavel Emelyanov
9e189da23a sstables/storage: Drop create_links_common() overloads
There's a bunch of tagged create_links_common() overloads that call the
most generic one with properly crafted arguments and the link_mode.
Callers of those one-liners can craft the args themselves.

As a result, there's only one create_links_common() overload and callers
explicitly specifying what they want from it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-26 09:47:27 +03:00
Pavel Emelyanov
32cf358f44 sstable: Simplify storage::snapshot()
Now there are only two callers left -- sstable::snapshot() and
sstable::seal(), which wants to auto-backup the sealed sstable.

The snapshot arguments are:
- relative path, use _base_dir
- no new generation provided
- no leave-unsealed tag

With that, the implementation of filesystem_storage::snapshot() is as
simple as
- prepare full path relative to _base_dir
- touch new directory
- call create_links_common()

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-26 09:47:27 +03:00
Pavel Emelyanov
8e496a2f2f sstables: Introduce storage::clone()
And call it from sstable::clone() instead of storage::snapshot().

The snapshot arguments are:
- target directory is storage::prefix(), that's _dir itself
- new generation is always provided, no need for optional
- leave_unsealed bool flag

With that, the implementation of filesystem_storage::clone() is as
simple as calling create_links_common(), forwarding the args and _dir to
it. The unification of the leave_unsealed branches will come a bit
later, making this code even shorter.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-26 09:47:27 +03:00
Nadav Har'El
9c50d29a00 test/boost: fix flaky test_inject_future_disabled
The test boost/error_injection_test.cc::test_inject_future_disabled
checks what happens when a sleep injection is *disabled*: The test
has a 10-millisecond sleep injection and measures how long it takes.
The test expects it to take less than 10 milliseconds - in fact it
should take almost zero. But this is not guaranteed - on a slow debug
build and an overcommitted server this do-nothing injection can take
some time, and in one run (#27798) it took 14 milliseconds - and the
test failed.

The solution is easy - make the sleep-that-doesn't-happen much longer -
e.g., 10 whole seconds. Since this sleep still doesn't happen, we
expect the injection to return in less - much less - than 10 seconds.
This bound is so ridiculously high that we don't expect the do-nothing
injection to take 10 seconds, not even on a ridiculously busy test
machine.

Fixes #27798

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27874
2025-12-25 20:46:31 +02:00
Avi Kivity
92996ce9fa test: pylib: log_browsing: fix infinite loop in find_backtraces()
The find_backtraces() function uses a very convoluted loop to
read the log file. The loop fails to terminate if the last thing in the
log file is the backtrace, since the end-of-file check (`not line`)
continues to be true and the loop keeps re-reading forever.

It's not clear why this did not reliably hit before, but it now
reliably reproduces for me on both x86 and aarch64. Perhaps timing
changed, or perhaps previously we had more text in the log.
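
The bug class can be reconstructed in a few lines. This is a hedged Python reconstruction, not the actual pylib code: a scanner whose only way out of a backtrace block is "more input arrives" spins forever once the backtrace is the last thing in the file. Terminating explicitly at end of input fixes it:

```python
# Hedged reconstruction of the bug class, not the actual test/pylib
# code: terminate explicitly at end of input instead of waiting for a
# line that will never come after the final backtrace.

def find_backtraces(lines):
    """Collect backtrace address blocks from an iterable of log lines."""
    backtraces, current = [], None
    it = iter(lines)
    while True:
        line = next(it, None)       # None signals EOF, unlike ''
        if line is None:            # the fix: stop at end of input
            break
        if line.startswith("Backtrace:"):
            current = []
            backtraces.append(current)
        elif current is not None and line.strip().startswith("0x"):
            current.append(line.strip())
        else:
            current = None          # block ended by unrelated line
    return backtraces
```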
2025-12-25 20:22:17 +02:00
Avi Kivity
50a3460441 test: pylib/log_browsing, cluster/test_tablets: don't look for expected crashes
test_tablets.test_orphaned_sstables_on_startup verifies that an
on_internal_error("Unable to load SSTable...") is generated when
an sstable outside a tablet boundary is found on startup.

The test indeed finds the error, but then proceeds to hang in
find_backtraces(), or fail if find_backtraces() is fixed, since
it finds an unexpected (for it) crash.

Fix this by not looking for crashes if a new option expected_crash
is set. Set it for this test.
2025-12-25 20:22:17 +02:00
Avi Kivity
55c7bc746e Revert "vector_search_validator: move high availability tests from vector-store.git"
This reverts commit caa0cbe328. It is
either extremely slow or broken. I was never able to get it to
run on an r8gd.8xlarge (on the NVMe disk). Even when it passes,
it is very slow.

Test script:

```

git submodule update --recursive || exit 125

rm -rf build

d() { ./tools/toolchain/dbuild -it -- "$@"; }

d ./configure.py --mode release || exit 125
d ninja release-build || exit 125
d ./test.py --mode release
```

Ref #27858
Ref #27859
Ref #27860
2025-12-25 12:30:22 +00:00
Botond Dénes
ebb101f8ae scylla-gdb.py: scylla small-objects: make freelist traversal more robust
Traversing the span's freelist is known to generate "Cannot access
memory at address ..." errors, which is especially annoying when it
results in failed CI. Make this loop more robust: catch gdb.error coming
from it and just log a warning that some listed objects in the span may
be free ones.

Fixes: #27681

Closes scylladb/scylladb#27805
2025-12-25 13:26:09 +03:00
Alex
f769e52877 test: boost: Fix flaky test_large_file_upload_s3 by creating individual files for testing
During CI runs, multiple instances of the same test may execute
concurrently. Although the test uses a temporary directory, the
downloaded bucket artifacts were written using an identical filename
across all instances. This caused concurrent writers to operate on the
same file, leading to file corruption. In some cases, this manifested as
test failures and intermittent std::bad_alloc exceptions.

This change ensures that each test instance uses a unique filename for
downloaded bucket files. By isolating file writes per test execution,
concurrent runs no longer interfere with each other.

Fixes: #27824

backport not required

Closes scylladb/scylladb#27843
2025-12-25 09:40:13 +02:00
Benny Halevy
51433b838a table: reduce allocations by using for_each_compaction_group rather than compaction_groups()
compaction_groups_immediate() may allocate memory, but when called from a
synchronous call site, the caller can use for_each_compaction_group
instead to iterate over the compaction groups with no extra allocations.

Calling compaction_groups_immediate() is still required from an async
context when we want to "sample" the compaction groups
so we can safely iterate over them and yield in the inner loop.

Also, some performance-insensitive call sites using
compaction_groups_immediate have been left as they are to keep them
simple.
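
The contrast between the two iteration styles can be sketched in Python (illustrative only; the class and method names mirror the commit but this is not the Scylla C++ code):

```python
# Illustrative contrast, not the actual replica/table code: a
# callback-style for_each iterates in place with no extra allocation,
# while the "immediate" variant materializes a copy that an async
# caller can safely iterate while yielding (the live set may change).

class Table:
    def __init__(self, groups):
        self._groups = groups

    def for_each_compaction_group(self, fn):
        for g in self._groups:        # no copy: iterate in place
            fn(g)

    def compaction_groups_immediate(self):
        return list(self._groups)     # snapshot: allocates a copy
```

Synchronous callers use the first form; an async caller takes the snapshot so that concurrent modifications during a yield don't invalidate the loop.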

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-12-24 21:19:28 +02:00
Benny Halevy
0e27ee67d2 replica: storage_group: rename compaction_groups to compaction_groups_immediate
To better reflect that it returns a materialized vector
of compaction_group ptrs.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-12-24 21:19:26 +02:00
Nadav Har'El
186c91233b Merge 'scylla-gdb.py: improve scylla fiber and scylla read-stats' from Botond Dénes
Improve scylla fiber's ability to traverse through coroutines.
Add --direction command-line parameter to scylla-fiber.
Fix out-of-date permit collection in scylla read-stats and improve the printout.

scylla-gdb.py improvements, no backport needed

Closes scylladb/scylladb#27766

* github.com:scylladb/scylladb:
  scylla-gdb.py: scylla read-stats: include all permit lists
  scylla-gdb.py: scylla fiber: add --direction command-line param
  scylla-gdb.py: scylla fiber: add support for traversing through coroutines backward
2025-12-24 17:49:58 +02:00
Botond Dénes
27bf65e77a db/batchlog_manager: add missing <seastar/coroutine/parallel_for_each.hh> include
Build only fails if `--disable-precompiled-header` is passed to
`configure.py`. Not sure why.

Closes scylladb/scylladb#27721
2025-12-24 16:32:12 +02:00
Botond Dénes
c66275e05c cql3/statements/batch_statement: make size error message more verbose
Mention the type of batch: Logged or Unlogged. The size (warn/fail on
too large size) error has different significance depending on the type.

Refs: #27605

Closes scylladb/scylladb#27664
2025-12-24 15:27:01 +02:00
Piotr Szymaniak
9c5b4e74c3 doc: Correct reference in dev/audit.md
Closes scylladb/scylladb#27832
2025-12-24 15:25:15 +02:00
Botond Dénes
ccc03d0026 test/pylib/runner.py: pytest_configure(): coerce repeat to int
Coerce the return value of config.getoption("--repeat") to int to avoid:

    Traceback (most recent call last):
      File "/usr/bin/pytest", line 8, in <module>
        sys.exit(console_main())
                 ~~~~~~~~~~~~^^
      File "/usr/lib/python3.14/site-packages/_pytest/config/__init__.py", line 201, in console_main
        code = main()
      File "/usr/lib/python3.14/site-packages/_pytest/config/__init__.py", line 175, in main
        ret: ExitCode | int = config.hook.pytest_cmdline_main(config=config)
                              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
      File "/usr/lib/python3.14/site-packages/pluggy/_hooks.py", line 512, in __call__
        return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
               ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.14/site-packages/pluggy/_manager.py", line 120, in _hookexec
        return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
               ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.14/site-packages/pluggy/_callers.py", line 167, in _multicall
        raise exception
      File "/usr/lib/python3.14/site-packages/pluggy/_callers.py", line 121, in _multicall
        res = hook_impl.function(*args)
      File "/usr/lib/python3.14/site-packages/_pytest/helpconfig.py", line 154, in pytest_cmdline_main
        config._do_configure()
        ~~~~~~~~~~~~~~~~~~~~^^
      File "/usr/lib/python3.14/site-packages/_pytest/config/__init__.py", line 1118, in _do_configure
        self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.14/site-packages/pluggy/_hooks.py", line 534, in call_historic
        res = self._hookexec(self.name, self._hookimpls.copy(), kwargs, False)
      File "/usr/lib/python3.14/site-packages/pluggy/_manager.py", line 120, in _hookexec
          return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
               ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.14/site-packages/pluggy/_callers.py", line 167, in _multicall
        raise exception
      File "/usr/lib/python3.14/site-packages/pluggy/_callers.py", line 121, in _multicall
        res = hook_impl.function(*args)
      File "/home/bdenes/ScyllaDB/scylladb/scylladb/test/pylib/runner.py", line 206, in pytest_configure
        config.run_ids = tuple(range(1, config.getoption("--repeat") + 1))
                                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
    TypeError: can only concatenate str (not "int") to str
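
The fix pattern is simply to coerce the option value before doing arithmetic on it. A sketch mirroring the names in the traceback (not the full runner.py):

```python
# Sketch of the fix: pytest option values registered without type=int
# come back as strings, so coerce before arithmetic. configure_run_ids
# is a hypothetical stand-in mirroring the traceback above.

def configure_run_ids(get_option):
    repeat = int(get_option("--repeat"))   # the fix: coerce to int
    return tuple(range(1, repeat + 1))
```

An alternative fix would be to register the option with `type=int` in `addoption()` so the coercion happens at parse time.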

Closes scylladb/scylladb#27649
2025-12-24 15:13:02 +02:00
Nadav Har'El
8df5189f9c Merge 'docs: scylla-sstable.rst: extract script API to separate document' from Botond Dénes
The script API is 500+ lines long in an already too long and hard to navigate document. Extract it to a separate document, making both documents shorter and easier to navigate.

Documentation refactoring, no backport needed.

Closes scylladb/scylladb#27609

* github.com:scylladb/scylladb:
  docs: scylla-sstable-script-api.rst: add introduction and title
  docs: scylla-sstable.rst: extract script API to separate document
  docs: scylla-sstable: prepare for script API extract
2025-12-24 15:02:57 +02:00
Botond Dénes
b036a461b7 tools/scylla-sstable: dump-schema: include UDT description in dump
If the table uses UDTs, include the description of these (CREATE TYPE
statement) in the schema dump. Without these the schema is not useful.

Closes scylladb/scylladb#27559
2025-12-24 14:46:52 +02:00
Botond Dénes
3071ccd54a Merge 'Storage-agnostic table::snapshot_on_all_shards()' from Pavel Emelyanov
The method in question knows that it writes the snapshot to the local filesystem and actively uses this knowledge. This PR relaxes this assumption and splits the logic into two parts -- one that orchestrates the sstables snapshot and collects the necessary metadata, and one that writes the metadata itself.

Closes scylladb/scylladb#27762

* github.com:scylladb/scylladb:
  table: Move snapshot_file_set to table.cc
  table: Rename and move snapshot_on_all_shards() method
  table: Ditch jsondir variable
  table, sstables: Pass snapshot name to sstable::snapshot()
  table: Use snapshot_writer in write_manifest()
  table: Use snapshot_writer in write_schema_as_cql()
  table: Add snapshot_writer::sync()
  table: Add snapshot_writer::init()
  table: Introduce snapshot_writer
  table: Move final sync and rename seal_snapshot()
  table: Hide write_schema_as_cql()
  table: Hide table::seal_snapshot()
  table: Open-code finalize_snapshot()
  table: Fix indentation after previous patch
  table: Use smp::invoke_on_all() to populate the vector with filenames
  table: Don't touch dir once more on seal_snapshot()
  table: Open-code table::take_snapshot() into caller lambda
  table: Move parts of table::take_snapshot to sstables_manager
  table: Introduce table::take_snapshot()
  table: Store the result of smp::submit_to in local variable
2025-12-24 13:46:47 +02:00
Nadav Har'El
4ae45eb367 test/alternator: remove unused imports
Remove many unused "import" statements or parts of import statements.
All of them were detected by Copilot, but I verified each one manually
and prepared this patch.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27676
2025-12-24 13:44:28 +02:00
Nadav Har'El
da00401b7d test/alternator: rename test with duplicate name
The file test/alternator/test_transact.py accidentally had two tests
with the same name, test_transact_get_items_projection_expression.
This means the first of the two tests was ignored and never run.

This patch renames the second of the two to a more appropriate
(and unique...) name.

I verified that after this change the number of tests in this file
grows by one, and that still all tests pass on DynamoDB and fail
(as expected by xfail) on Alternator.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27702
2025-12-24 13:43:43 +02:00
Botond Dénes
95d4c73eb1 Merge 'Make object storage config truly updateable' from Pavel Emelyanov
The db::config::object_storage_endpoints parameter is live-updateable, but when the update really happens, the new endpoints may fail to propagate to non-zero shards because of the way db::config sharding is implemented.

Refs: #7316
Fixes: #26509

Backport to 2025.3 and 2025.4, AFAIK there are set ups with object storage configs for native backup

Closes scylladb/scylladb#27689

* github.com:scylladb/scylladb:
  sstables/storage_manager: Fix configured endpoints observer
  test/object_store: Add test to validate how endpoint config update works
2025-12-24 13:42:44 +02:00
Botond Dénes
12dcf79c60 Merge 'build: support (and prefer) sccache as the compiler cache' from Avi Kivity
Currently, we support ccache as the compiler cache. Since it is transparent, nothing
much is needed to support it.

This series adds support for sccache[1] and prefers it over ccache when it is installed.

sccache brings the following benefits over ccache:
1. Integrated distributed build support similar to distcc, but with automatic toolchain packaging and a scheduler
2. Rust support
3. C++20 modules (upcoming[2])

It is the C++20 modules support that motivates the series. C++20 modules have the potential to reduce
build times, but without a compiler cache and distributed build support, they come with too large
a penalty. This removes the penalty.

The series detects that sccache is installed, selects it if so (and if not overridden
by a new option), enables it for C++ and Rust, and disables ccache transparent
caching if sccache is selected.

Note: this series doesn't add sccache to the frozen toolchain or add dbuild support. That
is left for later.

[1] https://github.com/mozilla/sccache
[2] https://github.com/mozilla/sccache/pull/2516

Toolchain improvement, won't be backported.

Closes scylladb/scylladb#27834

* github.com:scylladb/scylladb:
  build: apply sccache to rust builds too
  build: prevent double caching by compiler cache
  build: allow selecting compiler cache, including sccache
2025-12-24 13:40:02 +02:00
Nadav Har'El
74a57d2872 test/cqlpy: remove unused imports
Remove many unused "import" statements or parts of import statements.
All of them were detected by Copilot, but I verified each one manually
and prepared this patch.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27675
2025-12-24 13:31:41 +02:00
Andrzej Jackowski
632ff66897 doc: audit: mention double audit sink in Enabling Audit section
Configuration of both table and syslog audit is possible since
scylladb/scylladb#26613 was implemented. However, the "Enabling Audit"
section of the documentation wasn't updated, which can be misleading.

Ref: scylladb/scylladb#26613

Closes scylladb/scylladb#27790
2025-12-24 13:20:03 +02:00
Gleb Natapov
04976875cc topology coordinator: set session id for streaming at the correct time
Commit d3efb3ab6f added a streaming session for rebuild, but it set the
session at request submission time. The session should be set when the
request starts executing, so this patch moves it to the correct place.

Closes scylladb/scylladb#27757
2025-12-24 13:17:53 +02:00
Yaniv Michael Kaul
377c3ac072 scripts: benign fixes flagged by CodeQL/PyLens
Unused imports, unused variables and such.
No functional changes, just to get rid of some standard CodeQL warnings.

Benign - no need to backport.
Signed-off-by: Yaniv Kaul <yaniv.kaul@scylladb.com>

Closes scylladb/scylladb#27801
2025-12-24 13:08:24 +02:00
Avi Kivity
d6edad4117 test: pylib: resource_gather: don't take ownership of /sys/fs/cgroup under podman
Under podman, we already own /sys/fs/cgroup. Run the chown command only
under docker where the container does not map the host user to the
container root user.

The chown process is sometimes observed to fail with EPERM (see issue).
But it's not needed, so avoid it.

Fixes #27837.

Closes scylladb/scylladb#27842
2025-12-24 10:56:24 +02:00
Marcin Maliszkiewicz
3c1e1f867d raft: auth: add semaphore to auth_cache::load_all
Auth cache loading at startup races between the auth
service and the raft code, and the loading doesn't
support concurrency, causing it to crash.

We can't easily remove either of the call sites: during
raft recovery the snapshot is not loaded, and we rely
on loading the cache via the auth service. Therefore we
add a semaphore.
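
The pattern can be illustrated with an asyncio analogue (illustrative only; AuthCache and its fields are hypothetical, not the C++ auth_cache):

```python
import asyncio

# Illustrative asyncio analogue of serializing a non-concurrency-safe
# cache load behind a semaphore; not the actual Scylla auth_cache code.

class AuthCache:
    def __init__(self):
        self._sem = asyncio.Semaphore(1)  # serialize load_all() callers
        self.loads = 0
        self.data = None

    async def load_all(self):
        async with self._sem:
            # Without the semaphore, two concurrent callers (auth
            # service startup and raft recovery) would interleave here.
            self.loads += 1
            await asyncio.sleep(0)        # simulate the async read
            self.data = {"roles": ["cassandra"]}
```

Both call sites remain, but their critical sections now run one at a time.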

Fixes https://github.com/scylladb/scylladb/issues/27540

Closes scylladb/scylladb#27573
2025-12-24 10:56:24 +02:00
Nadav Har'El
f3a4af199f test/cqlpy/test_materialized_view.py: Fix for Commented-out code
This patch was suggested and prepared by copilot; I am writing the commit
message because the original one was worthless.

In commit cf138da, for an unexplained reason, a loop waiting until the
expected value appears in a materialized view was replaced by a call to
wait_for_view_built(). The old loop code was left behind in a comment,
and this commented-out code is now bothering our AI. So let's delete the
commented-out code.

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

Closes scylladb/scylladb#27646
2025-12-24 10:56:23 +02:00
Botond Dénes
1bb897c7ca Merge 'Unify configuration of object storage endpoints' from Pavel Emelyanov
To configure S3 storage, one needs to do

```
object_storage_endpoints:
  - name: s3.us-east-1.amazonaws.com
    port: 443
    https: true
    aws_region: us-east-1
```

and for GCS it's

```
object_storage_endpoints:
  - name: https://storage.googleapis.com:433
    type: gs
    credentials_file: <gcp account credentials json file>
```

This PR updates the S3 part to look like

```
object_storage_endpoints:
  - name: https://s3.us-east-1.amazonaws.com:443
    aws_region: us-east-1
```

fixes: #26570

Not-yet released feature, no need to backport. Old configs are not accepted any longer. If it's needed, then this decision needs to be revised.

Closes scylladb/scylladb#27360

* github.com:scylladb/scylladb:
  object_storage: Temporarily handle pure endpoint addresses as endpoints
  code: Remove dangling mentions of s3::endpoint_config
  docs: Update docs according to new endpoints config option format
  object_storage: Create s3 client with "extended" endpoint name
  test: Add named constants for test_get_object_store_endpoints endpoint names
  s3/storage: Tune config updating
  sstable: Shuffle args for s3_client_wrapper
2025-12-24 06:59:02 +02:00
Botond Dénes
954f2cbd2f Merge 'config, transport: add listeners for native protocol fronted by proxy protocol v2' from Avi Kivity
For deployments fronted by a reverse proxy (haproxy or privatelink), we want to
use proxy protocol v2 so that client information in system.clients is correct and so
that the shard-aware selection protocol, which depends on the source port, works
correctly. Add proxy-protocol enabled variants of each of the existing native transport
listeners.

Tests are added to verify this works. I also manually tested with haproxy.

New feature, no backport.

Closes scylladb/scylladb#27522

* github.com:scylladb/scylladb:
  test: add proxy protocol tests
  config, transport: support proxy protocol v2 enhanced connections
2025-12-24 06:58:00 +02:00
Nadav Har'El
e75c75f8cd test/cqlpy: fix two tests that couldn't fail because of typo
As noticed by copilot, two tests in test_guardrail_compact_storage.py
could never fail, because they used `pytest.fail` instead of the
correct `pytest.fail()` to fail. Unfortunately, Python has a footgun:
if it sees a bare function name without parentheses, instead of
complaining it evaluates the expression to the function object, then
discards it, and absolutely nothing happens.

So let's add the missing `()`. The test still passes, but now it at
least has a chance of failing if we have a regression.
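
The footgun is easy to demonstrate without pytest; `fail` below is a minimal stand-in for `pytest.fail`:

```python
# A bare function name evaluates to the function object and is
# discarded, so the "failure" never happens. Minimal stand-in for
# pytest.fail, for illustration only.

def fail(msg="failed"):
    raise AssertionError(msg)

def broken_check():
    fail          # no-op: evaluates the function object, never calls it

def fixed_check():
    fail()        # actually raises
```

Linters flag this as "statement has no effect", which is how such bugs are usually caught.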

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#27658
2025-12-24 06:49:54 +02:00
Yaron Kaikov
d671ca9f53 fix: remove return from finally block in s3_proxy.py
During any Jenkins job that triggers `test.py` we get:
```
/jenkins/workspace/releng-testing/byo/byo_build_tests_dtest/scylla/test/pylib/s3_proxy.py:152: SyntaxWarning: 'return' in a 'finally' block
```

The 'return' statement in the finally block was causing a SyntaxWarning.
Moving the return outside the finally block ensures proper exception
handling while maintaining the intended behavior.
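
The warning exists because a `return` inside `finally` silently swallows any exception raised in the `try` body. A minimal illustration (hypothetical functions, not the s3_proxy.py code):

```python
# Why the SyntaxWarning matters: a return inside finally swallows any
# in-flight exception from the try body. Hypothetical illustration,
# not the actual s3_proxy.py code.

def swallowing(x):
    try:
        return 100 // x        # ZeroDivisionError when x == 0 ...
    finally:
        return -1              # ... is silently discarded here

def fixed(x):
    try:
        result = 100 // x
    finally:
        pass                   # cleanup only; no return here
    return result              # moved outside: exceptions propagate
```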

Closes scylladb/scylladb#27823
2025-12-24 06:48:03 +02:00
Avi Kivity
fc81983d42 test: sstable_validation_test: actually test ms version
sstable_validation_test tests the `scylla sstable validate` command
by passing it intentionally corrupted sstables. It uses an sstable
cache to avoid re-creating the same sstables. However, the cache
does not consider the sstable version, so if called twice with the
same inputs for different versions, it will return an sstable with
the original version for both calls. As a result, `ms` sstables
were not tested. Fix this bug by adding the sstable version (and
the schema for good measure) to the cache key.
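
The bug class is a memoization key missing a distinguishing input. A miniature Python illustration (not the actual test code; names are hypothetical):

```python
# Miniature of the cache-key bug: if the key omits the sstable version,
# the second caller gets whatever version was created first.
# Illustrative only, not the actual sstable_validation_test code.

_cache = {}

def make_sstable(schema, version):
    key = (schema, version)    # the fix: version (and schema) in the key
    if key not in _cache:
        _cache[key] = {"schema": schema, "version": version}
    return _cache[key]
```

With only `schema` in the key, a request for an `ms` sstable after an `me` one would silently return the `me` sstable, which is exactly how the `ms` path went untested.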

An additional bug, hidden by the first, was that we corrupted the
sstable by overwriting its Index.db component. But `ms` sstables
don't have an Index.db component, they have a Partitions.db component.
Adjust the corrupting code to take that into account.

With these two fixes, test_scylla_sstable_validate_mismatching_partition_large
fails on `ms` sstables. Disable it for that version. Since it was
previously practically untested, we're not losing any coverage.

Fixing this test unblocks further work on making pytest take charge
of running the tests. pytest exposed this problem, likely by running
it on different runners (and thus reducing the effectiveness of the
cache).

Fixes #27822.

Closes scylladb/scylladb#27825
2025-12-24 06:47:31 +02:00
Botond Dénes
cf70250a5c Update seastar submodule
* seastar 7ec14e83...f0298e40 (8):
  > Merge 'coroutine/try_future: call set_current_task() when resuming the coroutine' from Botond Dénes
    coroutine/try_future: call set_current_task() when resuming the coroutine
    core: move set_current_task() out-of-line
  > stop_signal: stop including reactor.hh
  > cmake: Mark hwloc headers as system includes to suppress warnings
  > build: explicitly enable vptr sanitizer
  > httpd: Add API to set tcp keepalive params
  > Merge 'Make datagram_channel::send() use temporary_buffer-s' from Pavel Emelyanov
    net: Remove no longer used to_iovec() helpers
    net,code: Update callers to use new datagram_channel::send()
    net: Introduce datagram_channel::send(span<temporary_buffer>) method
    posix-stack: Make UDP socket implementation use wrapped_iovec
    posix-stack: Introduce wrapped_iovec
  > code: Move pollable_fd_state::write_all(const char*) from API level 9
  > thread: Remove unused sched_group() helper

configure.py: added -lubsan to DEBUG sanitizer flags

Closes scylladb/scylladb#27511
2025-12-24 06:46:36 +02:00
Nadav Har'El
54f3e69fdc Fix for Statement has no effect
This problem and its fix were suggested by copilot; I'm just writing the
cover letter.
test/nodetool/test_status.py has the silly statement `tokens == "?"`,
which has no effect. Looking around the code suggested to me (and also
to Copilot, nice) that the correct intent was `assert tokens == "?"` and
not, say, `tokens = "?"`.
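
Like the bare-`pytest.fail` footgun elsewhere in this log, a bare comparison is computed and thrown away; only `assert` (or using the result) makes it a check. Minimal illustration with hypothetical functions:

```python
# A bare comparison is evaluated and discarded: it can never fail.
# Hypothetical illustration, not the actual test_status.py code.

def no_effect(tokens):
    tokens == "?"          # computed, then thrown away: never fails

def checked(tokens):
    assert tokens == "?"   # raises AssertionError on mismatch
```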

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

Closes scylladb/scylladb#27659
2025-12-24 06:43:26 +02:00
Piotr Dulikowski
9ed820cbf5 test: cluster: test for recovery after partial group0 command
Add a reproducer for scylladb/scylladb#26945. By using error injections,
the test triggers a situation where a command that removes an obsolete
CDC generation is partially applied, then the node is killed and
brought back. Thanks to the fix, restarting the node succeeds and does
not trigger any consistency checks in the group0 reload logic.
2025-12-23 20:50:43 +01:00
Piotr Dulikowski
71bc1886ee group0_state_machine: remove obsolete comment about group0 consistency
The comment is outdated. It is concerned with group0 consistency after
a crash, and with the fact that re-applying committed commands may require
a raft quorum. First, 579dcf1 was introduced (long ago), which gets rid of
the need for quorum as the node persists the commit index before applying
the commands - so it knows up to which command it should re-apply on
restart. Second, the preceding commits in this PR make use of this
mechanism for group0.

Remove the comment as the concern was fully addressed. Additionally,
remove a mention of the comment in raft_group0_client.cc - although it
claims that the comment is placed in `group0_state_machine::apply`, it
has been moved to `merge_and_apply` in 96c6e0d (both comments were
originally introduced in 6a00e79).
2025-12-23 20:44:17 +01:00
Piotr Dulikowski
b24001b5e7 group0_state_machine: don't update in-memory state machine until start
Group0 commands consist of one or more mutations and are supposed to be
atomic - i.e. the data structures that reflect the group0 tables' state
must not be updated while only some mutations of a command are applied,
and the logic responsible for updating them must not observe an
inconsistent state of the group0 tables.

It turns out that this assumption can be broken if a node crashes in the
middle of applying a multi-mutation group0 command. Because these
mutations are, in general, applied separately, only some mutations might
survive a crash and a restart, so the group0 tables might be in an
inconsistent state. The current logic of group0_state_machine will
attempt to read the group0 tables' state as it was left after restart,
so it may observe inconsistent state.

This can confuse the node as it may observe a state that it was not
supposed to observe, or the state will just outright break some
invariants and trigger some sanity checks. One of those was observed in
scylladb/scylladb#26945, where a command from the CDC generation
publisher fiber was partially applied. The fiber, in addition to
publishing generations, removes old, expired generations as well.
Removal is done by removing data that describes the generation from
cdc_generations_v3 and by removing the generation's ID from the
committed generation list in the topology table. If only the first
mutation gets through but not the other one, on reload the node will see
a committed CDC generation without data, which will trigger an
on_internal_error check.

Fix this by delaying the moment when the in-memory data structures are
first loaded. In 579dcf1, a mechanism was introduced which persists the
commit index before applying commands that are considered committed.
Starting a raft server waits until commands are replayed up to that
point. The fix is to start the group0_state_machine in a mode which only
applies mutations - the aforementioned mechanism will re-apply the
commands and, thanks to mutation idempotency, bring group0 to a
consistent state. After group0 is known to be in a consistent state
(so, after raft::server_impl::start), the in-memory data structures of
group0 are loaded for the first time.
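
The replay scheme can be illustrated with a toy Python model (all names and mutations below are hypothetical stand-ins; real group0 commands are raft-replicated mutations):

```python
# Toy model: a group0 command is a list of idempotent key/value
# upserts. The commit index is persisted *before* the command is
# applied, so after a crash the node knows it must re-apply it.

def apply_command(state, mutations, crash_after=None):
    """Apply mutations one by one; optionally crash midway."""
    for i, (key, value) in enumerate(mutations):
        if crash_after is not None and i >= crash_after:
            raise RuntimeError("simulated crash")
        state[key] = value  # an upsert is idempotent

state = {}
# Hypothetical command: drop a generation's data *and* its id.
command = [("cdc_generations_v3/g1", "removed"),
           ("topology/committed_ids", "removed")]

try:
    apply_command(state, command, crash_after=1)  # partial apply
except RuntimeError:
    pass
assert "topology/committed_ids" not in state  # inconsistent on disk

# On restart, before loading any in-memory structures, re-apply the
# command up to the persisted commit index; idempotency makes the
# second application of the first mutation harmless.
apply_command(state, command)
assert state == {"cdc_generations_v3/g1": "removed",
                 "topology/committed_ids": "removed"}
```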

There is an exception, however: schema tables. Information about schema
is actually loaded into memory earlier than the moment when group0 is
started. Applying changes to schema is done through the migration
manager module which compares the persisted state before and after the
schema mutations are applied and acts on that. Refactoring migration
manager is out of scope of this PR. However, this is not a problem
because the migration manager takes care to apply all of the mutations
given in a command in a single commitlog segment, so the initial schema
loading code should not see an inconsistent state due to the state being
partially applied.

Fixes: scylladb/scylladb#26945
2025-12-23 20:44:16 +01:00
Piotr Dulikowski
f4efdf18a5 group0_state_machine: move reloading out of std::visit
In the next commit, we will adjust the logic so that it reloads the
in-memory state only when a flag is set. By moving the reload logic to one
place in `merge_and_apply`, the next commit will be able to reach its
goal by only adding a single `if`.
2025-12-23 20:44:16 +01:00
Piotr Dulikowski
6bdbd91cf7 service: raft: add state machine ref to raft_server_for_group
This reference will be used by the code that starts group0. It will
manually enable the in-memory state machine only after the group0 server
is fully started, which entails replaying the group0 commands that are,
locally, seen as committed - in order to repair any inconsistencies that
might have arisen due to some commands being applied only partially
(e.g. due to a crash).
2025-12-23 20:44:16 +01:00
Michał Hudobski
ce3320a3ff auth: add system table permissions to VECTOR_SEARCH_INDEXING
Due to the recent changes in the vector store service,
the service needs to read two of the system tables
to function correctly. This was not accounted for
when the new permission was added. This patch fixes that
by allowing these tables (group0_history and versions)
to be read with the VECTOR_SEARCH_INDEXING permission.

We also add a test that validates this behavior.

Fixes: SCYLLADB-73

Closes scylladb/scylladb#27546
2025-12-23 15:53:07 +02:00
Pawel Pery
caa0cbe328 vector_search_validator: move high availability tests from vector-store.git
Initially, tests for high availability were implemented in the vector-store.git
repository. High availability is currently implemented in the scylladb.git
repository, so this repository is the better place to store them. This
commit copies these tests into scylladb.git.

The commit copies validator-vector-store/src/high_availability.rs (tests logic)
and validator-tests/src/common.rs (utils for tests) into the local crate
validator-scylla. common.rs is copied so that reviewers can see the common
test code; this code is likely to change frequently, and it would be hard
to maintain one shared version between the two repositories.

The commit also updates the README for vector_search_validator; it briefly
describes the validator modules.

The commit updates reference to the latest vector-store.git master.

As a next step, high_availability.rs will be removed from vector-store.git,
and common.rs moved from validator-tests into validator-vector-store.

References: VECTOR-394

Closes scylladb/scylladb#27499
2025-12-23 15:53:07 +02:00
Yaron Kaikov
bad2fe72b6 .github/workflows: Add email validator workflow
This workflow validates that all commits in a pull request use email
addresses ending in @scylladb.com. For each commit with an author or
committer email that doesn't match this pattern, the workflow automatically
adds a comment to the pull request with a warning.

This serves two purposes:

1. Alert maintainers when external contributors submit code (which is
   acceptable, but good to be aware of)

2. Help ScyllaDB developers catch cases where they haven't configured
   their git email correctly

When a non-@scylladb.com email is detected, the workflow posts this
comment on the pull request:
```
    ⚠️ Non-@scylladb.com Email Addresses Detected

    Found commit(s) with author or committer emails that don't end with
    @scylladb.com.

    This indicates either:

    - An external contributor (acceptable, but maintainer should be aware)
    - A developer who hasn't configured their git email correctly

    For ScyllaDB developers:

    If you're a ScyllaDB employee, please configure your git email globally:

        git config --global user.email "your.name@scylladb.com"

    If only your most recent commit is invalid, you can amend it:

        git commit --amend --reset-author --no-edit
        git push --force

    If you have multiple invalid commits, you need to rewrite them all:

        git rebase -i <base-commit>
        # Mark each invalid commit as 'edit', then for each:
        git commit --amend --reset-author --no-edit
        git rebase --continue
        # Repeat for each invalid commit
        git push --force
```
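
The core check the workflow performs can be sketched as follows (a hypothetical Python re-implementation; function and data shapes are illustrative only, not the actual workflow script):

```python
import re

# Flag any commit whose author or committer email does not end
# in @scylladb.com.
SCYLLA_EMAIL = re.compile(r"@scylladb\.com$")

def invalid_commits(commits):
    """commits: iterable of (sha, author_email, committer_email)."""
    return [sha for sha, author, committer in commits
            if not (SCYLLA_EMAIL.search(author)
                    and SCYLLA_EMAIL.search(committer))]

commits = [
    ("c0ffee1", "dev@scylladb.com", "dev@scylladb.com"),
    ("dec0de2", "someone@example.org", "dev@scylladb.com"),
]
assert invalid_commits(commits) == ["dec0de2"]
```
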
Fixes: https://scylladb.atlassian.net/browse/RELENG-35

Closes scylladb/scylladb#27796
2025-12-23 15:53:06 +02:00
Pavel Emelyanov
ec15a1b602 table: Do not move foreign string when writing snapshot
The table::seal_snapshot() method accepts a vector of sstable filenames and
writes them into the manifest file. For that, it iterates over the vector
and moves all filenames from it into the streamer object.

The problem is that the vector contains foreign pointers to sets of
sstrings. Not only are the sets foreign, the sstrings in them are
foreign too. It's not correct to std::move() them to the local CPU.

The fix is to make the streamer object work on string_view-s and populate it
with non-owning references to the sstrings from the aforementioned sets.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#27755
2025-12-23 15:53:06 +02:00
Pavel Emelyanov
ecef158345 api: Use ranges library to process views in get_built_indexes()
No functional changes, just make the loop shorter and more
self-contained.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#27742
2025-12-23 15:53:06 +02:00
Israel Fruchter
53abf93bd8 Update tools/cqlsh submodule
* tools/cqlsh 22401228...9e5a91d7 (7):
  > Add pip retry configuration to handle network timeouts
  > Clean up unwanted build artifacts and update .gitignore
  > test_legacy_auth: update to pytest format
  > Add support for disabling compression via CLI and cqlshrc
  > Update scylla-driver to 3.29.7 (#144)
  > Update scylla-driver version to 3.29.6
  > Revert "Migrate workflows to Blacksmith"

Closes scylladb/scylladb#27567

[avi: build optimized clang 21.1.7
      regenerate frozen toolchain with optimized clang from

      https://devpkg.scylladb.com/clang/clang-21.1.7-Fedora-43-aarch64.tar.gz
      https://devpkg.scylladb.com/clang/clang-21.1.7-Fedora-43-x86_64.tar.gz
]

Closes scylladb/scylladb#27734
2025-12-23 15:53:06 +02:00
Aleksandra Martyniuk
bbe64e0e2a test: rename duplicate tests
There are two tests named test_repair_options_hosts_tablets in
test/nodetool/test_cluster_repair.py and two named test_repair_keyspace
in test/nodetool/test_repair.py. Because of that, one of each pair is ignored.

Rename the tests so that they are unique.
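
The reason one of each pair was silently ignored is ordinary Python name binding: a module is executed top to bottom, so a later `def` rebinds the name and the test collector only ever sees the last definition (test bodies below are hypothetical):

```python
# Two same-named functions in one module: the second silently
# shadows the first, so only the second can ever be collected.

def test_repair_keyspace():
    return "first variant"

def test_repair_keyspace():  # rebinds the name, hiding the first
    return "second variant"

assert test_repair_keyspace() == "second variant"
```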

Fixes: https://github.com/scylladb/scylladb/issues/27701.

Closes scylladb/scylladb#27720
2025-12-23 15:53:06 +02:00
Anna Stuchlik
7198191aa9 doc: fix the license information on DockerHub
This commit removes the OSS-related information from DockerHub.
It adds the link to the Source Available license.

Fixes https://github.com/scylladb/scylladb/issues/22440

Closes scylladb/scylladb#27706
2025-12-23 15:53:06 +02:00
Calle Wilund
d5f72cd5fc test::pylib::encryption_provider: Push up setting system_key_directory to all providers
Fixes #27694

Unless set by config, the location will default to /etc/scylla, which is not a good
place to write things for tests. Push the config properly, and the directory (but
_not_ its creation), up to the provider base type for all providers.

Closes scylladb/scylladb#27696
2025-12-23 15:53:06 +02:00
Dawid Mędrek
afde5f668a test: Implement describing Boost tests in JSON format
The Boost.Test framework offers a way to describe tests written in it
by running them with the option `--list_content`. It can be
parametrized by either HRF (Human Readable Format) or DOT (the Graphviz
graph format) [1]. Thanks to that, we can learn the test tree structure
and collect additional information about the tests (e.g. labels [2]).

We currently employ that feature of the framework to collect and run
Boost tests in Scylla. Unfortunately, both formats have their
shortcomings:

* HRF: the format is simple to parse, but it doesn't contain all
       relevant information, e.g. labels.
* DOT: the format is designed for creating graphical visualizations,
       and it's relatively difficult to parse.

To address those problems, we implement a custom extension of the feature.
It produces output in the JSON format and contains more than the most
basic information about the tests; at the same time, it's easy to browse
and parse.

To obtain that output, the user needs to call a Boost.Test executable
with the option `--list_json_content`. For example:

```
$ ./path/to/test/exec -- --list_json_content
```

Note that the argument should be prepended with a `--` to indicate that
it targets user code, not Boost.Test itself.

---

The structure of the new format looks like this (top-level downwards):

- File name
- Test suite(s) & free test cases
- Test cases wrapped in test suites

Note that it's different from the output the default Boost.Test formats
produce: they organize information within test suites, which can
potentially span multiple files [3]. The JSON format makes test files
the primary object of interest and test suites from different files
are always considered distinct.

Example of the output (after applying some formatting):

```
$ ./build/dev/test/boost/canonical_mutation_test -- --list_json_content
[{"file":"test/boost/canonical_mutation_test.cc", "content": {
  "suites": [],
  "tests": [
    {"name": "test_conversion_back_and_forth", "labels": ""},
    {"name": "test_reading_with_different_schemas", "labels": ""}
  ]
}}]
```
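
For illustration, the output above can be consumed with a few lines of Python (a sketch based solely on the example output in this commit message; field names as shown there):

```python
import json

# The listing copied from the example output above.
listing = '''[{"file": "test/boost/canonical_mutation_test.cc", "content": {
  "suites": [],
  "tests": [
    {"name": "test_conversion_back_and_forth", "labels": ""},
    {"name": "test_reading_with_different_schemas", "labels": ""}
  ]
}}]'''

def collect_tests(raw):
    """Return (file, test_name) pairs for the free test cases."""
    return [(entry["file"], test["name"])
            for entry in json.loads(raw)
            for test in entry["content"]["tests"]]

assert collect_tests(listing) == [
    ("test/boost/canonical_mutation_test.cc",
     "test_conversion_back_and_forth"),
    ("test/boost/canonical_mutation_test.cc",
     "test_reading_with_different_schemas"),
]
```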

---

The implementation may be seen as a bit ugly, and it's effectively
a hack. It's based on registering a global fixture [4] and linking
that code to every Boost.Test executable.

Unfortunately, there doesn't seem to be any better way. That would
require more extensive changes in the test files (e.g. enforcing
going through the same entry point in all of them).

This implementation is a compromise between simplicity and
effectiveness. The changes are kept minimal, while the developers
writing new tests shouldn't need to remember to do anything special.
Everything should work out of the box (at least as long as there's
no non-trivial linking involved).

Fixes scylladb/scylladb#25415

---

References:
[1] https://www.boost.org/doc/libs/1_89_0/libs/test/doc/html/boost_test/utf_reference/rt_param_reference/list_content.html
[2] https://www.boost.org/doc/libs/1_89_0/libs/test/doc/html/boost_test/tests_organization/tests_grouping.html
[3] https://www.boost.org/doc/libs/1_89_0/libs/test/doc/html/boost_test/tests_organization/test_tree/test_suite.html
[4] https://www.boost.org/doc/libs/1_89_0/libs/test/doc/html/boost_test/tests_organization/fixtures/global.html

Closes scylladb/scylladb#27527
2025-12-23 15:53:06 +02:00
Asias He
140858fc22 tablet-mon.py: Add repair support
Add repair support in the tablet monitor.

Fixes #24824

Closes scylladb/scylladb#27400
2025-12-23 15:53:06 +02:00
Pavel Emelyanov
132aa753da sstables/storage_manager: Fix configured endpoints observer
On start the manager creates an observer for the object_storage_endpoints
config parameter. The goal is to refresh the maintained set of endpoint
parameters and the client upon config change. The observer is created on
shard 0 only, and when kicked it calls manager.invoke_on_all to update
the manager on all shards.

However, there's a race here. The thing is that db::config values are
implicitly "sharded" under the hood with the help of plain array. When
any code tries to read a value from db::config::something, the reading
code secretly gets the value from this inner array indexed by the
current shard id.

Next, when the config is updated, it first assigns new values to the [0]
element of the hidden array, then calls the broadcast_to_all_shards() helper
that copies the values from the zeroth slot to all the others. But the
manager's observer is triggered when the new value is assigned at index
zero, and if the invoke-on-all lambda (mentioned above) happens to be
faster than broadcast_to_all_shards, the non-zero shards will read old
values from db::config's inner array.

The fix is to instantiate the observer on all shards and have each one
update only the local shard whenever the update is triggered.
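
The race can be modeled in a few lines of Python (a toy single-threaded model with hypothetical names, not the actual seastar code):

```python
# db::config keeps one value slot per shard; an update writes slot 0,
# triggers observers, and only then broadcasts to the other slots.

N_SHARDS = 4
config_slots = ["old"] * N_SHARDS

observed = []

def shard0_observer():
    # Models the invoke-on-all lambda racing with the broadcast:
    # every shard reads its own slot *right now*.
    observed.extend(config_slots)

def update_config(new_value):
    config_slots[0] = new_value   # observer fires on this assignment...
    shard0_observer()
    for shard in range(1, N_SHARDS):  # ...broadcast happens afterwards
        config_slots[shard] = new_value

update_config("new")
# Non-zero shards read the stale value inside the observer:
assert observed == ["new", "old", "old", "old"]
# The fix: an observer per shard that reads only its local slot,
# so each one fires after its own slot has been updated.
```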

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:43:11 +03:00
Pavel Emelyanov
f902eb1632 test/object_store: Add test to validate how endpoint config update works
There's a test for backup with non-existing endpoint/bucket/snapshot. It
checks that the API call to backup sstables properly fails in that case.
This patch adds a similar test for "unconfigured endpoint", but it adds
the endpoint configuration on-the-fly and expects that backup will
proceed after config update.

Currently the test fails, as the config update only affects the config
itself; the storage_manager, which is in charge of maintaining endpoint
clients, is not really updated. The next patch will fix it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:41:38 +03:00
Pavel Emelyanov
e0cddc8c99 table: Move snapshot_file_set to table.cc
It's not used anywhere else now.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:14:36 +03:00
Pavel Emelyanov
e31b72c61f table: Rename and move snapshot_on_all_shards() method
Now it's database::snapshot_table_on_all_shards(). This is symmetric to
database::truncate_table_on_all_shards().

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:14:36 +03:00
Pavel Emelyanov
48b1ceefaf table: Ditch jsondir variable
Now the table::snapshot_on_all_shards() is storage-independent and can
stop maintaining the local path variable.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:14:36 +03:00
Pavel Emelyanov
a21aa5bdf6 table, sstables: Pass snapshot name to sstable::snapshot()
Currently sstable::snapshot() is called with the directory name where to
put snapshots. This patch changes it to accept a snapshot name instead.
This makes the table-sstable API unaware of the snapshot destination
storage type.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:14:36 +03:00
Pavel Emelyanov
d0812c951e table: Use snapshot_writer in write_manifest()
The manifest writing code is self-contained in the sense that it needs a
list of sstable files and an output_stream to write it to. The
snapshot_writer can provide an output_stream for a specific component, so
it can be re-used by the manifest writing code, thus making the latter
independent of the local filesystem.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:13:47 +03:00
Pavel Emelyanov
fe8923bdc7 table: Use snapshot_writer in write_schema_as_cql()
The schema writing code is self-contained in the sense that it needs a
schema description and an output_stream to write it to. Teach the
snapshot_writer to provide the output_stream and make write_schema_as_cql()
independent of the local filesystem.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:06:17 +03:00
Pavel Emelyanov
9fee06d3bc table: Add snapshot_writer::sync()
And move calls to sync_directory() into it too.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:06:16 +03:00
Pavel Emelyanov
7a298788c0 table: Add snapshot_writer::init()
And move the call to recursive_touch_directory into it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:06:05 +03:00
Pavel Emelyanov
db6a5aa20b table: Introduce snapshot_writer
It's an abstract class that defines how to write the data and metadata of
a table snapshot. Currently it just replaces the storage_options checks
done by table::snapshot_on_all_shards(), but it will soon evolve.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 12:00:26 +03:00
Pavel Emelyanov
8e247b06a2 table: Move final sync and rename seal_snapshot()
The seal_snapshot() syncs the directory at the end. Now that the method is
table.cc-local, it doesn't need to be that careful. It looks nicer if it's
renamed to write_manifest() and it's the caller that syncs the directory
after calling it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:26 +03:00
Pavel Emelyanov
37746ba814 table: Hide write_schema_as_cql()
The method only needs the schema description from the table. The caller can
pre-get it and pass it as an argument. This makes it symmetric with
seal_snapshot() (that will be renamed soon) and reduces the class table
API size.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:26 +03:00
Pavel Emelyanov
976bcef5d0 table: Hide table::seal_snapshot()
The method is static and has nothing to do with table. The
snapshot_file_set needs to become public, but it will be moved to
table.cc soon.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:26 +03:00
Pavel Emelyanov
8a4daf3ef1 table: Open-code finalize_snapshot()
It makes it easier to modify this code further.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:26 +03:00
Pavel Emelyanov
6975234d1c table: Fix indentation after previous patch
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:26 +03:00
Pavel Emelyanov
4a88f15465 table: Use smp::invoke_on_all() to populate the vector with filenames
There's a vector of foreign pointers to sets of sstable filenames
that's populated on all shards. The code does the invoke-on-all by hand
to grow the vector with push_back-s. However, if the vector is resized in
advance, shards will be able to just populate their own slots.
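
The idea can be illustrated with a toy Python analogue (threads standing in for shards; all names below are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Instead of growing a shared vector with push_back under
# coordination, size it to the shard count up front so each
# shard writes only its own slot.

N_SHARDS = 4
filename_sets = [None] * N_SHARDS  # resized in advance

def collect_filenames(shard):
    # Each "shard" populates exactly its own slot; no resizing races.
    filename_sets[shard] = {f"sstable-{shard}-{i}.db" for i in range(2)}

with ThreadPoolExecutor(max_workers=N_SHARDS) as pool:
    list(pool.map(collect_filenames, range(N_SHARDS)))

assert all(s is not None for s in filename_sets)
assert filename_sets[1] == {"sstable-1-0.db", "sstable-1-1.db"}
```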

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Pavel Emelyanov
41ed11cdbe table: Don't touch dir once more on seal_snapshot()
The directory had been created way before that, in the caller.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Pavel Emelyanov
07708acebd table: Open-code table::take_snapshot() into caller lambda
Now when the logic of take_snapshot() is split between two components
(table and sstables_manager) it's no longer useful

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Pavel Emelyanov
ef3b651e1b table: Move parts of table::take_snapshot to sstables_manager
Move the loop over vector of sstables that calls sstable->snapshot()
into sstables manager.

This makes it symmetric with sstables_manager::delete_atomically() and
allows for future changes.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Pavel Emelyanov
0c45a7df00 table: Introduce table::take_snapshot()
The method returns the vector of all sstables together with a guard that
prevents the list from being modified. Currently this is part of the other,
existing table::take_snapshot() method, but the newer, smaller one is more
atomic and self-contained; the next patches will benefit from it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Pavel Emelyanov
2e2cd2aa39 table: Store the result of smp::submit_to in local variable
Tossing bits around so they don't end up in the next, larger patch.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-23 11:57:25 +03:00
Botond Dénes
58b5d43538 Merge 'test: multi LWT and counters test during tablets resize and migration' from Yauheni Khatsianevich
This PR extends BaseLWTTester with optional counter-table configuration and
verification, enabling randomized LWT tests over tablets with counters.

And introduces a new LWT with counters test during tablets resize and migration:
- Workload: N workers perform CAS updates
- Update counter table each time CAS was successful
- Enable balancing and increase min_tablet_count to force split,
  and lower min_tablet_count to merge
- Run tablets migrations loop
- Stop workload and verify data consistency

Refs: https://github.com/scylladb/qa-tasks/issues/1918
Refs: https://github.com/scylladb/qa-tasks/issues/1988
Refs https://github.com/scylladb/scylladb/issues/18068

Closes scylladb/scylladb#27170

* github.com:scylladb/scylladb:
  test: new LWT with counters test during tablets migration/resize
  test/lwt: add counter-table support to BaseLWTTester
2025-12-23 07:29:35 +02:00
Botond Dénes
bfdd4f7776 Merge 'Synchronize incremental repair and tablet split' from Raphael Raph Carvalho
Split prepare can run concurrently with repair.

Consider this:

1) split prepare starts
2) incremental repair starts
3) split prepare finishes
4) incremental repair produces unsplit sstable
5) split is not happening on sstable produced by repair
        5.1) that sstable is not marked as repaired yet
        5.2) might belong to repairing set (has compaction disabled)
6) split executes
7) repairing or repaired set has unsplit sstable

If split was acked to the coordinator (meaning the prepare phase finished),
repair must make sure that all sstables produced by it are split.
This is not happening today with incremental repair because it disables
split on sstables belonging to the repairing group. And there's a window
where sstables produced by repair belong to that group.

To solve the problem, we want the invariant where all sealed sstables
will be split.
To achieve this, streaming consumers are patched to produce unsealed
sstables, and the new variant add_new_sstable_and_update_cache() will
take care of splitting the sstable while it's unsealed.
If no split is needed, the new sstable will be sealed and attached.

This solution was also needed to interact nicely with out-of-space
prevention. If disk usage is critical, split must not happen on
restart, and the aforementioned invariant allows for it, since any
unsplit sstable left unsealed will be discarded on restart.
The streaming consumer will fail if disk usage is critical too.

The reason the interposer consumer doesn't fully solve the problem is
that incremental repair can start before split, and the sstable
being produced when the split decision was emitted must be split before
being attached. So we need a solution which covers both scenarios.

Fixes #26041.
Fixes #27414.

Should be backported to 2025.4, which contains incremental repair.

Closes scylladb/scylladb#26528

* github.com:scylladb/scylladb:
  test: Add reproducer for split vs intra-node migration race
  test: Verify split failure on behalf of repair during critical disk utilization
  test: boost: Add failure_when_adding_new_sstable_test
  test: Add reproducer for split vs incremental repair race condition
  compaction: Fail split of new sstable if manager is disabled
  replica: Don't split in do_add_sstable_and_update_cache()
  streaming: Leave sstables unsealed until attached to the table
  replica: Wire add_new_sstables_and_update_cache() into intra-node streaming
  replica: Wire add_new_sstable_and_update_cache() into file streaming consumer
  replica: Wire add_new_sstable_and_update_cache() into streaming consumer
  replica: Document old add_sstable_and_update_cache() variants
  replica: Introduce add_new_sstables_and_update_cache()
  replica: Introduce add_new_sstable_and_update_cache()
  replica: Account for sstables being added before ACKing split
  replica: Remove repair read lock from maybe_split_new_sstable()
  compaction: Preserve state of input sstable in maybe_split_new_sstable()
  Rename maybe_split_sstable() to maybe_split_new_sstable()
  sstables: Allow storage::snapshot() to leave destination sstable unsealed
  sstables: Add option to leave sstable unsealed in the stream sink
  test: Verify unsealed sstable can be compacted
  sstables: Allow unsealed sstable to be loaded
  sstables: Restore sstable_writer_config::leave_unsealed
2025-12-23 07:28:56 +02:00
Botond Dénes
bf9640457e Merge 'test: add crash detection during tests' from Cezar Moise
After tests end, an extra check is performed, looking into node logs for crashes, aborts and similar issues.
The test directory is also scanned for coredumps.
If any of the above are found, the test will fail with an error.

The following checks are made:
- Any log line matching `Assertion.*failed` or containing `AddressSanitizer` is marked as a critical error
- Lines matching `Aborting on shard` will only be marked as a critical error if the patterns in `manager.ignore_cores_log_patterns` are not found in that log
- If any critical error is found, the log is also scanned for backtraces
- Any backtraces found are decoded and saved
- If the test is marked with `@pytest.mark.check_nodes_for_errors`, the logs are checked for any `ERROR` lines
- Any pattern in `manager.ignore_log_patterns` and `manager.ignore_cores_log_patterns` will cause the above check to ignore that line
- The `expected_error` value that many methods, like `manager.decommission_node`, have will be automatically appended to `manager.ignore_log_patterns`
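
A rough Python sketch of the scan described above (the patterns are taken from this commit message; the helper and its simplified per-line ignore check are hypothetical, not the actual implementation):

```python
import re

# Critical patterns per the commit message; aborts are only
# critical when no ignore pattern matches.
CRITICAL = [re.compile(r"Assertion.*failed"),
            re.compile(r"AddressSanitizer")]
ABORT = re.compile(r"Aborting on shard")

def find_critical_lines(log_lines, ignore_cores_log_patterns=()):
    hits = []
    for line in log_lines:
        if any(p.search(line) for p in CRITICAL):
            hits.append(line)
        elif ABORT.search(line) and not any(
                re.search(pat, line) for pat in ignore_cores_log_patterns):
            hits.append(line)
    return hits

log = ["INFO 2025-12-15 all good",
       "Aborting on shard 0, in scheduling group gossip.",
       "Assertion `x > 0' failed."]
assert len(find_critical_lines(log)) == 2
# An ignore pattern suppresses the abort, but not the assertion:
assert find_critical_lines(log, [r"scheduling group gossip"]) == \
       ["Assertion `x > 0' failed."]
```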

refs: https://github.com/scylladb/qa-tasks/issues/1804

---

[Examples](https://jenkins.scylladb.com/job/scylla-staging/job/cezar/job/byo_build_tests_dtest/46/testReport/):
Following examples are run on a separate branch where changes have been made to enable these failures.

`test_unfinished_writes_during_shutdown`
- Errors are found in logs and are not ignored
```
failed on teardown with "Failed:
Server 2096: found 1 error(s) (log: scylla-2096.log)
  ERROR 2025-12-15 14:20:06,563 [shard 0: gms] raft_topology - raft_topology_cmd barrier_and_drain failed with: std::runtime_error (raft topology: command::barrier_and_drain, the version has changed, version 11, current_version 12, the topology change coordinator  had probably migrated to another node)
Server 2101: found 4 error(s) (log: scylla-2101.log)
  ERROR 2025-12-15 14:20:04,674 [shard 0:strm] repair - repair[c434c0c0-68da-472c-ba3e-ed80960ce0d5]: Repair 1 out of 4 ranges, keyspace=system_distributed, table=view_build_status, range=(minimum token,maximum token), peers=[27c027a6-603d-49d0-8766-1b085d8c7d29, b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e], live_peers=[b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e], status=failed: mandatory neighbor=27c027a6-603d-49d0-8766-1b085d8c7d29 is not alive
  ERROR 2025-12-15 14:20:04,674 [shard 1:strm] repair - repair[c434c0c0-68da-472c-ba3e-ed80960ce0d5]: Repair 1 out of 4 ranges, keyspace=system_distributed, table=view_build_status, range=(minimum token,maximum token), peers=[27c027a6-603d-49d0-8766-1b085d8c7d29, b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e], live_peers=[b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e], status=failed: mandatory neighbor=27c027a6-603d-49d0-8766-1b085d8c7d29 is not alive
  ERROR 2025-12-15 14:20:04,675 [shard 0: gms] raft_topology - raft_topology_cmd stream_ranges failed with: std::runtime_error (["shard 0: std::runtime_error (repair[c434c0c0-68da-472c-ba3e-ed80960ce0d5]: 1 out of 4 ranges failed, keyspace=system_distributed, tables=[\"view_build_status\", \"cdc_generation_timestamps\", \"service_levels\", \"cdc_streams_descriptions_v2\"], repair_reason=bootstrap, nodes_down_during_repair={27c027a6-603d-49d0-8766-1b085d8c7d29}, aborted_by_user=false, failed_because=std::runtime_error (Repair mandatory neighbor=27c027a6-603d-49d0-8766-1b085d8c7d29 is not alive, keyspace=system_distributed, mandatory_neighbors=[27c027a6-603d-49d0-8766-1b085d8c7d29, b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e]))", "shard 1: std::runtime_error (repair[c434c0c0-68da-472c-ba3e-ed80960ce0d5]: 1 out of 4 ranges failed, keyspace=system_distributed, tables=[\"view_build_status\", \"cdc_generation_timestamps\", \"service_levels\", \"cdc_streams_descriptions_v2\"], repair_reason=bootstrap, nodes_down_during_repair={27c027a6-603d-49d0-8766-1b085d8c7d29}, aborted_by_user=false, failed_because=std::runtime_error (Repair mandatory neighbor=27c027a6-603d-49d0-8766-1b085d8c7d29 is not alive, keyspace=system_distributed, mandatory_neighbors=[27c027a6-603d-49d0-8766-1b085d8c7d29, b549cb36-fae8-490b-a19e-86d42e7aa07a, f7049967-81ff-4296-9be7-9d6a4d33a29e]))"])
  ERROR 2025-12-15 14:20:06,812 [shard 0:main] init - Startup failed: std::runtime_error (Bootstrap failed. See earlier errors (Rolled back: Failed stream ranges: std::runtime_error (failed status returned from 9dd942aa-acec-4105-9719-9bda403e8e94)))
Server 2094: found 1 error(s) (log: scylla-2094.log)
  ERROR 2025-12-15 14:20:04,675 [shard 0: gms] raft_topology - send_raft_topology_cmd(stream_ranges) failed with exception (node state is bootstrapping): std::runtime_error (failed status returned from 9dd942aa-acec-4105-9719-9bda403e8e94)"
```

`test_kill_coordinator_during_op`
- aborts caused by injection
- `ignore_cores_log_patterns` is not set
- while there are errors in logs and `ignore_log_patterns` is not set, they are ignored automatically due to the `expected_error` parameter, such as in `await manager.decommission_node(server_id=other_nodes[-1].server_id, expected_error="Decommission failed. See earlier errors")`
```
failed on teardown with "Failed:
Server 1105: found 1 critical error(s), 1 backtrace(s) (log: scylla-1105.log)
  Aborting on shard 0, in scheduling group gossip.
  1 backtrace(s) saved in scylla-1105-backtraces.txt
Server 1106: found 1 critical error(s), 1 backtrace(s) (log: scylla-1106.log)
  Aborting on shard 0, in scheduling group gossip.
  1 backtrace(s) saved in scylla-1106-backtraces.txt
Server 1113: found 1 critical error(s), 1 backtrace(s) (log: scylla-1113.log)
  Aborting on shard 0, in scheduling group gossip.
  1 backtrace(s) saved in scylla-1113-backtraces.txt
Server 1148: found 1 critical error(s), 1 backtrace(s) (log: scylla-1148.log)
  Aborting on shard 0, in scheduling group gossip.
  1 backtrace(s) saved in scylla-1148-backtraces.txt"
```
Decoded backtrace can be found in [failed_test_logs](https://jenkins.scylladb.com/job/scylla-staging/job/cezar/job/byo_build_tests_dtest/46/artifact/testlog/x86_64/dev/failed_test/test_kill_coordinator_during_op.dev.1)

Closes scylladb/scylladb#26177

* github.com:scylladb/scylladb:
  test: add logging to crash_coordinator_before_stream injection
  test: add crash detection during tests
  test.py: add pid to ServerInfo
2025-12-23 07:27:58 +02:00
Avi Kivity
8e462d06be build: apply sccache to rust builds too
sccache works for rust as well as for C++; use it for rust builds
as well.
2025-12-22 15:36:15 +02:00
Avi Kivity
9ac82d63e9 build: prevent double caching by compiler cache
To avoid both sccache and ccache being used simultaneously,
detect if ccache is in $PATH and use whatever compiler would
have been called instead.
2025-12-22 15:36:14 +02:00
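The detection described in the commit above amounts to walking `$PATH` and skipping ccache shims. A minimal sketch of the idea in Python (illustrative only — the real change lives in the build configuration, and `unwrap_ccache` is a hypothetical name):

```python
import os


def unwrap_ccache(compiler: str) -> str:
    """Return the compiler that would run if ccache were not in $PATH.

    ccache typically intercepts compilers via symlinks (e.g. ~/bin/gcc ->
    ccache) placed early in $PATH. Skip those entries and return the first
    non-ccache match, so sccache is not layered on top of ccache.
    """
    for d in os.environ.get("PATH", "").split(os.pathsep):
        cand = os.path.join(d, compiler)
        if os.access(cand, os.X_OK) and not os.path.isdir(cand):
            # Resolve symlink chains; a ccache shim ultimately points at
            # the ccache binary itself.
            if os.path.basename(os.path.realpath(cand)) != "ccache":
                return cand  # first real (non-ccache) compiler wins
    return compiler  # nothing found; fall back to the name as given
```

With this, the build can hand sccache the unwrapped compiler path instead of the ccache shim, avoiding double caching.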
Avi Kivity
afb96b6387 build: allow selecting compiler cache, including sccache
Add a new configuration option for selecting the compiler
cache. Prefer sccache if found, since it supports rust as
well as C++, has better support for distributed compilation,
and is slated to receive module support soon.

cmake is also supported.
2025-12-22 15:27:15 +02:00
Botond Dénes
c899c117c7 scylla-gdb.py: scylla read-stats: include all permit lists
The current code which collects permit stats is out-of-date (by a few
years), as it only iterates through _permit_list. There are 4 additional
lists that permits can be part of now (all intrusive). Include all of
these in the stat collection.
As a bonus, also print the semaphore pointer in the printout, so the
user can hand-examine it, should they wish to.
2025-12-18 18:26:07 +02:00
Botond Dénes
db9f9e1b7e scylla-gdb.py: scylla fiber: add --direction command-line param
Can be "forward", "backward" or "both" (default).
Allows traversing the fiber in just one direction. Useful when scylla
fiber fails to traverse through a task and the user has to locate the
next one in the chain manually. When resuming from this next item, the
user might want to skip the already seen part of the fiber, to save time
on the invocation.
2025-12-18 18:26:07 +02:00
Botond Dénes
4c6e508811 scylla-gdb.py: scylla fiber: add support for traversing through coroutines backward
Traversing through coroutines forward (finding task waiting on this
coroutine) is already supported. This patch adds support for traversing
through coroutines backwards (finding task waited on by coroutine).
Coroutines need special handling: the future<> object is most likely
allocated on the coroutine frame, so we have to search through that to
find it. When doing so the first two pointers on the frame have to be
skipped: these are pointers to .resume and .destroy respectively and
will halt the search algorithm if seen.
2025-12-18 18:26:06 +02:00
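The special handling described above boils down to scanning the coroutine frame while skipping its first two slots. A simplified, self-contained model of that search (plain Python over a list of words, not the actual scylla-gdb.py code):

```python
def find_in_coroutine_frame(frame_words, looks_like_future):
    """Scan a coroutine frame (modeled as a sequence of pointer-sized
    words) for an embedded future.

    The first two words of a coroutine frame are the .resume and .destroy
    function pointers of the coroutine ABI; they must be skipped, because
    a search that halts on the first code pointer it sees would otherwise
    stop immediately. Returns the index of the match, or None.
    """
    for idx, word in enumerate(frame_words):
        if idx < 2:
            continue  # skip .resume and .destroy
        if looks_like_future(word):
            return idx
    return None
```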
Cezar Moise
0ef8ca4c57 test: add logging to crash_coordinator_before_stream injection
In order to have the test ignore crashes caused by the injection,
it needs to log its occurrence.
2025-12-18 16:28:13 +02:00
Cezar Moise
95d0782f89 test: add crash detection during tests
After tests end, an extra check is performed, looking into the node logs.
By default, it only searches for critical errors and scans for coredumps.
If the test has the fixture `check_nodes_for_errors`, it will search for all errors.
Both checks can be ignored by setting `ignore_cores_log_patterns` and `ignore_log_patterns`.
If any of the above are found, the test will fail with an error.
2025-12-18 16:28:13 +02:00
Avi Kivity
cfd91545f9 test: add proxy protocol tests
Test that the new configuration options work and that we can
connect to them. Use direct connections with an inline implementation
of the proxy protocol and the CQL native protocol, since we want
to maintain direct control over the source port number (for shard-aware
ports). Also test we land on the expected shard.
2025-12-17 14:18:04 +02:00
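The "inline implementation of the proxy protocol" mentioned above is small because a PROXY protocol v2 preamble is just a fixed binary header. A minimal sketch for TCP over IPv4, following the published HAProxy spec (these helper names are illustrative, not ScyllaDB's test code):

```python
import socket
import struct

# 12-byte proxy protocol v2 signature, per the HAProxy specification.
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"


def make_pp2_header(src_ip, src_port, dst_ip, dst_port):
    """Build a minimal PROXY protocol v2 header for TCP over IPv4.

    0x21 = version 2 + PROXY command, 0x11 = AF_INET + STREAM, followed by
    a 2-byte length and a 12-byte address block: src/dst IPv4, src/dst port.
    """
    addr = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack("!HH", src_port, dst_port))
    return PP2_SIGNATURE + bytes([0x21, 0x11]) + struct.pack("!H", len(addr)) + addr


def parse_pp2_source(header):
    """Recover the original client address and port from a v2 header."""
    assert header[:12] == PP2_SIGNATURE, "not a proxy protocol v2 header"
    src_ip = socket.inet_ntoa(header[16:20])
    src_port = struct.unpack("!H", header[24:26])[0]
    return src_ip, src_port
```

A proxy-aware server reads this header before the CQL native protocol bytes, which is how the correct source address and port end up in system.clients, and why the test can control the source port for shard-aware routing.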
Avi Kivity
1382b47d45 config, transport: support proxy protocol v2 enhanced connections
We have four native transport ports: two for plain/TLS, and two
more for shard-aware (plain/TLS as well). Add four more that expect
the proxy protocol v2 header. This allows nodes behind a reverse
proxy to record the correct source address and port in system.clients,
and the shard-aware port to see the correct source port selection
made by the client.
2025-12-17 14:18:04 +02:00
Yauheni Khatsianevich
07867a9a0d test: new LWT with counters test during tablets migration/resize
- Workload: N workers perform CAS updates
- Update counter table each time CAS was successful
- Enable balancing and increase min_tablet_count to force split,
  and lower min_tablet_count to merge.
- Run tablets migrations loop
- Stop workload and verify data consistency
2025-12-15 14:32:30 +01:00
Raphael S. Carvalho
a0a7941eb1 test: Add reproducer for split vs intra-node migration race
This is a problem caught after removing split from
add_sstable_and_update_cache(), which was used by
intra node migration when loading new sstables
into the destination shard.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 17:01:18 -03:00
Raphael S. Carvalho
e3b9abdb30 test: Verify split failure on behalf of repair during critical disk utilization
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 17:01:18 -03:00
Raphael S. Carvalho
bc772b791d test: boost: Add failure_when_adding_new_sstable_test
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 17:01:18 -03:00
Raphael S. Carvalho
77a4f95eb8 test: Add reproducer for split vs incremental repair race condition
Refs #26041.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 17:01:16 -03:00
Raphael S. Carvalho
992bfb9f63 compaction: Fail split of new sstable if manager is disabled
If manager has been disabled due to out of space prevention, it's
important to throw an exception rather than silently not
splitting the new sstable.

Not splitting an sstable when needed can cause correctness issues
when finalizing the split later.
It's better to fail the writer (e.g. repair one) which will be
retried than making caller think everything succeeded.

The new replica::table::add_new_sstable_and_update_cache() will
now unlink the new sstable on failure, so the table dir will
not be left with sstables not loaded.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
ee3a743dc4 replica: Don't split in do_add_sstable_and_update_cache()
Now, only sstable loader on boot and refresh from upload uses this
procedure. The idea is that maybe_split_new_sstable() will throw
when compaction cannot run due to e.g. out of space prevention.
It could fail repair writer, but we don't want it to fail boot.

As for refresh from upload, it's not supposed to work when the tablet
map at the time of backup differs from the one at restore time.
Even before this, refresh would fail if a split had already executed;
a split would only happen if it was still ongoing. We need
token range stability for local restore. The safe variant will
always be load and stream.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
48d243f32f streaming: Leave sstables unsealed until attached to the table
We want the invariant that after ACK, all sealed sstables will be split.

This guarantees that on restart, no unsplit sstables will be found
sealed.

The paths that generate unsplit sstables are streaming and file
streaming consumers. It includes intra-node streaming, which
is local but can clone an unsplit sstable into destination.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
d9d58780e2 replica: Wire add_new_sstables_and_update_cache() into intra-node streaming
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
ddb27488fa replica: Wire add_new_sstable_and_update_cache() into file streaming consumer
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
10225ee434 replica: Wire add_new_sstable_and_update_cache() into streaming consumer
After the wiring, failure to attach the new sstable in the streaming
consumer will unlink the sstable automatically.

Fixes #27414.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
a72025bbf6 replica: Document old add_sstable_and_update_cache() variants
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:51 -03:00
Raphael S. Carvalho
3f8363300a replica: Introduce add_new_sstables_and_update_cache()
Piggyback on new add_new_sstable_and_update_cache(), replacing
the previous add_sstables_and_update_cache().

Will be used by intra-node migration since we want it to be
safe when loading the cloned sstables. An unsplit sstable
can be cloned into destination which already ACKed split,
so we need this variant which splits sstable if needed,
while it's unsealed.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
63d1d6c39b replica: Introduce add_new_sstable_and_update_cache()
Failure to load sstable in streaming can leave sealed sstables
on disk since they're not unlinked on failure.

This can result in several problems:
1) Data resurrection: since the sstable may contain deleted data
2) Split issue: since the finalization requires all sstables to be split
3) Disk usage issue: since the sstables hold space and streaming retries
can keep accumulating these files.

This new procedure will be later wired into streaming consumers, in
order to fix those problems.

Another benefit of the interface is that if there's split when adding
the new sstable, the output sstables will be returned to the caller,
allowing them to register the actual loaded sstables into e.g.
the view builder.

Refs #27414.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
27d460758f replica: Account for sstables being added before ACKing split
We want the invariant that after ACK, all sealed sstables will be split.
If check-and-attach is not atomic, this sequence is possible:

1) no split decision set.
2) Unsplit sstable is checked, no need to split, sealed.
3) split decision is set and ACKed
4) unsplit sstable is attached

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
794e03856a replica: Remove repair read lock from maybe_split_new_sstable()
The lock is intended to serialize some maintenance compactions,
such as major, with repair. But maybe_split_new_sstable() is
restricted solely to new sstables that aren't part of the
sstable set.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
2dae0a7380 compaction: Preserve state of input sstable in maybe_split_new_sstable()
This is crucial with MVs, since the splitting must preserve the state of
the original sstable. We want the sstable to be in staging dir, so it's
excluded when calculating the diff for performing pushes to view
replicas.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
1fdc410e24 Rename maybe_split_sstable() to maybe_split_new_sstable()
Since the function must only be used on new sstables, it should
be renamed to something indicating that its usage is restricted.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
1a077a80f1 sstables: Allow storage::snapshot() to leave destination sstable unsealed
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
c5e840e460 sstables: Add option to leave sstable unsealed in the stream sink
That will be needed for file streaming to leave output sstable unsealed.

We want the invariant that all sealed sstables are split after the
split was ACKed.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
c10486a5e9 test: Verify unsealed sstable can be compacted
This is crucial for splitting before sealing the sstable produced by
repair. This way, unsplit sstables won't be left on disk sealed.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
ab82428228 sstables: Allow unsealed sstable to be loaded
File streaming will have to load an unsealed sstable, so we need
to be able to parse components from temporary TOC instead.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Raphael S. Carvalho
b1be4ba2fc sstables: Restore sstable_writer_config::leave_unsealed
This option was retired in commit 0959739216, but
it will be needed again in order to implement split before sealing.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2025-12-12 16:59:50 -03:00
Cezar Moise
7c8ab3d3d3 test.py: add pid to ServerInfo
Adding pid info to servers allows matching coredumps with servers.

Other improvements:
- When replacing just some fields of ServerInfo, use `_replace` instead of
building a new object. This way it is agnostic to changes to the object.
- When building ServerInfo from a list, the types defined for its fields are
not enforced, so ServerInfo(*list) works fine and does not need to be changed if
fields are added or removed.
2025-12-12 15:11:03 +02:00
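The `_replace` pattern described in the commit above can be sketched with a hypothetical cut-down `ServerInfo` (the real class in test.py has more fields):

```python
from typing import NamedTuple, Optional


class ServerInfo(NamedTuple):
    server_id: int
    ip_addr: str
    pid: Optional[int] = None  # newly added field


# Updating one field via _replace keeps the call site agnostic to the
# tuple's shape: adding or removing fields does not break this code.
srv = ServerInfo(server_id=1105, ip_addr="127.0.0.1")
srv = srv._replace(pid=4242)

# Building from a list also works positionally, so ServerInfo(*row) keeps
# working when server info is reconstructed from serialized state.
row = [1106, "127.0.0.2", 4243]
srv2 = ServerInfo(*row)
```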
Botond Dénes
cb7f2e4953 docs: scylla-sstable-script-api.rst: add introduction and title 2025-12-12 13:50:12 +02:00
Botond Dénes
dd5b6770c8 docs: scylla-sstable.rst: extract script API to separate document
The script API is 500+ lines long in an already too long and hard to
navigate document. Extract it to a separate document, making both
documents shorter and easier to navigate.
2025-12-12 13:44:32 +02:00
Botond Dénes
3d73a9781e docs: scylla-sstable: prepare for script API extract
We are about to extract the script API to a separate document. In
preparation convert soon-to-be cross-document references, so they keep
working after the extraction.
2025-12-12 13:15:48 +02:00
Piotr Smaron
982339e73f doc: audit: set audit as enabled by default 2025-12-12 09:18:54 +01:00
Piotr Smaron
d1a04b3913 Reapply "audit: enable some subset of auditing by default"
This reverts commit a5edbc7d612df237a1dd9d46fd5cecf251ccfd13.

Fixes: https://github.com/scylladb/scylladb/issues/26020
2025-12-12 09:18:54 +01:00
Pavel Emelyanov
a63508a6f7 object_storage: Temporarily handle pure endpoint addresses as endpoints
To keep backward compatibility, support

- old configs -- where endpoint is just an address and port is separate.
  When it happens, format the "new" endpoint name

- lookup by address-only. If it happens, scan all endpoints and see if
  any one matches the provided address

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:47 +03:00
Pavel Emelyanov
ca4564e41c code: Remove dangling mentions of s3::endpoint_config
Collect some code that's unused after previous changes.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:47 +03:00
Pavel Emelyanov
f93cafac51 docs: Update docs according to new endpoints config option format
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:47 +03:00
Pavel Emelyanov
a3ca4fccef object_storage: Create s3 client with "extended" endpoint name
For this, add the s3::client::make(endpoint, ...) overload that accepts
endpoint in proto://host:port format. Then it parses the provided url
and calls the legacy one, which accepts a raw host string and a config with
port, https bit, etc.

The generic object_storage_endpoint_param no longer needs to carry the
internal s3::endpoint_config, the config option parsing changes
respectively.

Tests that generate the config files, and the docs, are updated.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:47 +03:00
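The parsing step described above (splitting a `proto://host:port` endpoint into the pieces the legacy factory expects) can be sketched like this — a guess at the shape only, since the actual s3::client::make is C++ and these names are illustrative:

```python
from urllib.parse import urlsplit


def parse_endpoint(endpoint: str):
    """Split a proto://host:port endpoint into the raw host, port, and an
    https flag. Defaults mirror the usual scheme ports when the port is
    omitted."""
    parts = urlsplit(endpoint)
    use_https = parts.scheme == "https"
    port = parts.port or (443 if use_https else 80)
    return parts.hostname, port, use_https
```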
Pavel Emelyanov
5953a89822 test: Add named constants for test_get_object_store_endpoints endpoint names

Instead of hard-coded 'a' and 'b' here and there

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:46 +03:00
Pavel Emelyanov
932b008107 s3/storage: Tune config updating
Don't prepare s3::endpoint_config from generic code, just pass the region
and iam_role_arn (those that can potentially change) to the callback.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:46 +03:00
Pavel Emelyanov
e47f0c6284 sstable: Shuffle args for s3_client_wrapper
Make it construct like gs_client_wrapper -- with generic endpoint param
reference and make the storage-specific casts/gets/whatever internally.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2025-12-10 15:33:46 +03:00
Yauheni Khatsianevich
f12adfc292 test/lwt: add counter-table support to BaseLWTTester
Extend BaseLWTTester with optional counter-table configuration and
verification, enabling randomized LWT tests over tablets with counters.
2025-12-08 15:57:33 +01:00
202 changed files with 6306 additions and 1623 deletions

View File

@@ -0,0 +1,13 @@
name: validate_pr_author_email
on:
pull_request_target:
types:
- opened
- synchronize
- reopened
jobs:
validate_pr_author_email:
uses: scylladb/github-automation/.github/workflows/validate_pr_author_email.yml@main

View File

@@ -13,5 +13,5 @@ jobs:
- uses: codespell-project/actions-codespell@master
with:
only_warn: 1
-ignore_words_list: "ans,datas,fo,ser,ue,crate,nd,reenable,strat,stap,te,raison"
+ignore_words_list: "ans,datas,fo,ser,ue,crate,nd,reenable,strat,stap,te,raison,iif,tread"
skip: "./.git,./build,./tools,*.js,*.lock,./test,./licenses,./redis/lolwut.cc,*.svg"

View File

@@ -28,6 +28,7 @@ static logging::logger logger("alternator_controller");
controller::controller(
sharded<gms::gossiper>& gossiper,
sharded<service::storage_proxy>& proxy,
+sharded<service::storage_service>& ss,
sharded<service::migration_manager>& mm,
sharded<db::system_distributed_keyspace>& sys_dist_ks,
sharded<cdc::generation_service>& cdc_gen_svc,
@@ -39,6 +40,7 @@ controller::controller(
: protocol_server(sg)
, _gossiper(gossiper)
, _proxy(proxy)
+, _ss(ss)
, _mm(mm)
, _sys_dist_ks(sys_dist_ks)
, _cdc_gen_svc(cdc_gen_svc)
@@ -89,7 +91,7 @@ future<> controller::start_server() {
auto get_timeout_in_ms = [] (const db::config& cfg) -> utils::updateable_value<uint32_t> {
return cfg.alternator_timeout_in_ms;
};
-_executor.start(std::ref(_gossiper), std::ref(_proxy), std::ref(_mm), std::ref(_sys_dist_ks),
+_executor.start(std::ref(_gossiper), std::ref(_proxy), std::ref(_ss), std::ref(_mm), std::ref(_sys_dist_ks),
sharded_parameter(get_cdc_metadata, std::ref(_cdc_gen_svc)), _ssg.value(),
sharded_parameter(get_timeout_in_ms, std::ref(_config))).get();
_server.start(std::ref(_executor), std::ref(_proxy), std::ref(_gossiper), std::ref(_auth_service), std::ref(_sl_controller)).get();

View File

@@ -15,6 +15,7 @@
namespace service {
class storage_proxy;
+class storage_service;
class migration_manager;
class memory_limiter;
}
@@ -57,6 +58,7 @@ class server;
class controller : public protocol_server {
sharded<gms::gossiper>& _gossiper;
sharded<service::storage_proxy>& _proxy;
+sharded<service::storage_service>& _ss;
sharded<service::migration_manager>& _mm;
sharded<db::system_distributed_keyspace>& _sys_dist_ks;
sharded<cdc::generation_service>& _cdc_gen_svc;
@@ -74,6 +76,7 @@ public:
controller(
sharded<gms::gossiper>& gossiper,
sharded<service::storage_proxy>& proxy,
+sharded<service::storage_service>& ss,
sharded<service::migration_manager>& mm,
sharded<db::system_distributed_keyspace>& sys_dist_ks,
sharded<cdc::generation_service>& cdc_gen_svc,

View File

@@ -67,6 +67,14 @@ using namespace std::chrono_literals;
logging::logger elogger("alternator-executor");
namespace std {
template <> struct hash<std::pair<sstring, sstring>> {
size_t operator () (const std::pair<sstring, sstring>& p) const {
return std::hash<sstring>()(p.first) * 1009 + std::hash<sstring>()(p.second) * 3;
}
};
}
namespace alternator {
// Alternator-specific table properties stored as hidden table tags:
@@ -248,14 +256,66 @@ static const rjson::value::Member& get_single_member(const rjson::value& v, cons
return *(v.MemberBegin());
}
class executor::describe_table_info_manager : public service::migration_listener::empty_listener {
executor &_executor;
struct table_info {
utils::simple_value_with_expiry<std::uint64_t> size_in_bytes;
};
std::unordered_map<std::pair<sstring, sstring>, table_info> info_for_tables;
bool active = false;
public:
describe_table_info_manager(executor& executor) : _executor(executor) {
_executor._proxy.data_dictionary().real_database_ptr()->get_notifier().register_listener(this);
active = true;
}
describe_table_info_manager(const describe_table_info_manager &) = delete;
describe_table_info_manager(describe_table_info_manager&&) = delete;
~describe_table_info_manager() {
if (active) {
on_fatal_internal_error(elogger, "describe_table_info_manager was not stopped before destruction");
}
}
describe_table_info_manager &operator = (const describe_table_info_manager &) = delete;
describe_table_info_manager &operator = (describe_table_info_manager&&) = delete;
static std::chrono::high_resolution_clock::time_point now() {
return std::chrono::high_resolution_clock::now();
}
std::optional<std::uint64_t> get_cached_size_in_bytes(const sstring &ks_name, const sstring &cf_name) const {
auto it = info_for_tables.find({ks_name, cf_name});
if (it != info_for_tables.end()) {
return it->second.size_in_bytes.get();
}
return std::nullopt;
}
void cache_size_in_bytes(sstring ks_name, sstring cf_name, std::uint64_t size_in_bytes, std::chrono::high_resolution_clock::time_point expiry) {
info_for_tables[{std::move(ks_name), std::move(cf_name)}].size_in_bytes.set_if_longer_expiry(size_in_bytes, expiry);
}
future<> stop() {
co_await _executor._proxy.data_dictionary().real_database_ptr()->get_notifier().unregister_listener(this);
active = false;
co_return;
}
void on_drop_column_family(const sstring& ks_name, const sstring& cf_name) override {
if (!ks_name.starts_with(executor::KEYSPACE_NAME_PREFIX)) return;
info_for_tables.erase({ks_name, cf_name});
}
};
executor::executor(gms::gossiper& gossiper,
service::storage_proxy& proxy,
service::storage_service& ss,
service::migration_manager& mm,
db::system_distributed_keyspace& sdks,
cdc::metadata& cdc_metadata,
smp_service_group ssg,
utils::updateable_value<uint32_t> default_timeout_in_ms)
: _gossiper(gossiper),
_ss(ss),
_proxy(proxy),
_mm(mm),
_sdks(sdks),
@@ -268,6 +328,7 @@ executor::executor(gms::gossiper& gossiper,
_stats))
{
s_default_timeout_in_ms = std::move(default_timeout_in_ms);
_describe_table_info_manager = std::make_unique<describe_table_info_manager>(*this);
register_metrics(_metrics, _stats);
}
@@ -752,12 +813,44 @@ static future<bool> is_view_built(
}
static future<rjson::value> fill_table_description(schema_ptr schema, table_status tbl_status, service::storage_proxy& proxy, service::client_state& client_state, tracing::trace_state_ptr trace_state, service_permit permit)
future<> executor::cache_newly_calculated_size_on_all_shards(schema_ptr schema, std::uint64_t size_in_bytes, std::chrono::nanoseconds ttl) {
auto expiry = describe_table_info_manager::now() + ttl;
return container().invoke_on_all(
[schema, size_in_bytes, expiry] (executor& exec) {
exec._describe_table_info_manager->cache_size_in_bytes(schema->ks_name(), schema->cf_name(), size_in_bytes, expiry);
});
}
future<> executor::fill_table_size(rjson::value &table_description, schema_ptr schema, bool deleting) {
auto cached_size = _describe_table_info_manager->get_cached_size_in_bytes(schema->ks_name(), schema->cf_name());
std::uint64_t total_size = 0;
if (cached_size) {
total_size = *cached_size;
} else {
// there's no point in trying to estimate value of table that is being deleted, as other nodes more often than not might
// move forward with deletion faster than we calculate the size
if (!deleting) {
total_size = co_await _ss.estimate_total_sstable_volume(schema->id(), service::storage_service::ignore_errors::yes);
const auto expiry = std::chrono::seconds{ _proxy.data_dictionary().get_config().alternator_describe_table_info_cache_validity_in_seconds() };
// Note: we don't care when the notification of other shards will finish, as long as it will be done
// it's possible to get into race condition (next DescribeTable comes to other shard, that new shard doesn't have
// the size yet, so it will calculate it again) - this is not a problem, because it will call cache_newly_calculated_size_on_all_shards
// with expiry, which is extremely unlikely to be exactly the same as the previous one, all shards will keep the size coming with expiry that is further into the future.
// In case of the same expiry, some shards will have different size, which means DescribeTable will return different values depending on the shard
// which is also fine, as the specification doesn't give precision guarantees of any kind.
co_await cache_newly_calculated_size_on_all_shards(schema, total_size, expiry);
}
}
rjson::add(table_description, "TableSizeBytes", total_size);
}
future<rjson::value> executor::fill_table_description(schema_ptr schema, table_status tbl_status, service::client_state& client_state, tracing::trace_state_ptr trace_state, service_permit permit)
{
rjson::value table_description = rjson::empty_object();
auto tags_ptr = db::get_tags_of_table(schema);
rjson::add(table_description, "TableName", rjson::from_string(schema->cf_name()));
co_await fill_table_size(table_description, schema, tbl_status == table_status::deleting);
auto creation_timestamp = get_table_creation_time(*schema);
@@ -801,9 +894,7 @@ static future<rjson::value> fill_table_description(schema_ptr schema, table_stat
rjson::add(table_description["ProvisionedThroughput"], "WriteCapacityUnits", wcu);
rjson::add(table_description["ProvisionedThroughput"], "NumberOfDecreasesToday", 0);
-data_dictionary::table t = proxy.data_dictionary().find_column_family(schema);
+data_dictionary::table t = _proxy.data_dictionary().find_column_family(schema);
if (tbl_status != table_status::deleting) {
rjson::add(table_description, "CreationDateTime", rjson::value(creation_timestamp));
@@ -840,7 +931,7 @@ static future<rjson::value> fill_table_description(schema_ptr schema, table_stat
// (for a built view) or CREATING+Backfilling (if view building
// is in progress).
if (!is_lsi) {
-if (co_await is_view_built(vptr, proxy, client_state, trace_state, permit)) {
+if (co_await is_view_built(vptr, _proxy, client_state, trace_state, permit)) {
rjson::add(view_entry, "IndexStatus", "ACTIVE");
} else {
rjson::add(view_entry, "IndexStatus", "CREATING");
@@ -868,9 +959,8 @@ static future<rjson::value> fill_table_description(schema_ptr schema, table_stat
}
rjson::add(table_description, "AttributeDefinitions", std::move(attribute_definitions));
}
-executor::supplement_table_stream_info(table_description, *schema, proxy);
+executor::supplement_table_stream_info(table_description, *schema, _proxy);
// FIXME: still missing some response fields (issue #5026)
co_return table_description;
}
@@ -890,7 +980,7 @@ future<executor::request_return_type> executor::describe_table(client_state& cli
get_stats_from_schema(_proxy, *schema)->api_operations.describe_table++;
tracing::add_alternator_table_name(trace_state, schema->cf_name());
-rjson::value table_description = co_await fill_table_description(schema, table_status::active, _proxy, client_state, trace_state, permit);
+rjson::value table_description = co_await fill_table_description(schema, table_status::active, client_state, trace_state, permit);
rjson::value response = rjson::empty_object();
rjson::add(response, "Table", std::move(table_description));
elogger.trace("returning {}", response);
@@ -993,7 +1083,7 @@ future<executor::request_return_type> executor::delete_table(client_state& clien
auto& p = _proxy.container();
schema_ptr schema = get_table(_proxy, request);
-rjson::value table_description = co_await fill_table_description(schema, table_status::deleting, _proxy, client_state, trace_state, permit);
+rjson::value table_description = co_await fill_table_description(schema, table_status::deleting, client_state, trace_state, permit);
co_await verify_permission(_enforce_authorization, _warn_authorization, client_state, schema, auth::permission::DROP, _stats);
co_await _mm.container().invoke_on(0, [&, cs = client_state.move_to_other_shard()] (service::migration_manager& mm) -> future<> {
size_t retries = mm.get_concurrent_ddl_retries();
@@ -1557,8 +1647,7 @@ static future<> mark_view_schemas_as_built(utils::chunked_vector<mutation>& out,
}
}
-static future<executor::request_return_type> create_table_on_shard0(service::client_state&& client_state, tracing::trace_state_ptr trace_state, rjson::value request,
-service::storage_proxy& sp, service::migration_manager& mm, gms::gossiper& gossiper, bool enforce_authorization, bool warn_authorization, stats& stats, const db::tablets_mode_t::mode tablets_mode) {
+future<executor::request_return_type> executor::create_table_on_shard0(service::client_state&& client_state, tracing::trace_state_ptr trace_state, rjson::value request, bool enforce_authorization, bool warn_authorization, const db::tablets_mode_t::mode tablets_mode) {
SCYLLA_ASSERT(this_shard_id() == 0);
// We begin by parsing and validating the content of the CreateTable
@@ -1745,7 +1834,7 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
rjson::value* stream_specification = rjson::find(request, "StreamSpecification");
if (stream_specification && stream_specification->IsObject()) {
-if (executor::add_stream_options(*stream_specification, builder, sp)) {
+if (executor::add_stream_options(*stream_specification, builder, _proxy)) {
validate_cdc_log_name_length(builder.cf_name());
}
}
@@ -1764,7 +1853,7 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
set_table_creation_time(tags_map, db_clock::now());
builder.add_extension(db::tags_extension::NAME, ::make_shared<db::tags_extension>(tags_map));
-co_await verify_create_permission(enforce_authorization, warn_authorization, client_state, stats);
+co_await verify_create_permission(enforce_authorization, warn_authorization, client_state, _stats);
schema_ptr schema = builder.build();
for (auto& view_builder : view_builders) {
@@ -1780,18 +1869,18 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
view_builder.with_view_info(schema, include_all_columns, ""/*where clause*/);
}
size_t retries = mm.get_concurrent_ddl_retries();
size_t retries = _mm.get_concurrent_ddl_retries();
for (;;) {
auto group0_guard = co_await mm.start_group0_operation();
auto group0_guard = co_await _mm.start_group0_operation();
auto ts = group0_guard.write_timestamp();
utils::chunked_vector<mutation> schema_mutations;
auto ksm = create_keyspace_metadata(keyspace_name, sp, gossiper, ts, tags_map, sp.features(), tablets_mode);
auto ksm = create_keyspace_metadata(keyspace_name, _proxy, _gossiper, ts, tags_map, _proxy.features(), tablets_mode);
// Alternator Streams doesn't yet work when the table uses tablets (#23838)
if (stream_specification && stream_specification->IsObject()) {
auto stream_enabled = rjson::find(*stream_specification, "StreamEnabled");
if (stream_enabled && stream_enabled->IsBool() && stream_enabled->GetBool()) {
locator::replication_strategy_params params(ksm->strategy_options(), ksm->initial_tablets(), ksm->consistency_option());
const auto& topo = sp.local_db().get_token_metadata().get_topology();
const auto& topo = _proxy.local_db().get_token_metadata().get_topology();
auto rs = locator::abstract_replication_strategy::create_replication_strategy(ksm->strategy_name(), params, topo);
if (rs->uses_tablets()) {
co_return api_error::validation("Streams not yet supported on a table using tablets (issue #23838). "
@@ -1801,17 +1890,17 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
}
// Creating an index in tablets mode requires the rf_rack_valid_keyspaces option to be enabled.
// GSI and LSI indexes are based on materialized views which require this option to avoid consistency issues.
if (!view_builders.empty() && ksm->uses_tablets() && !sp.data_dictionary().get_config().rf_rack_valid_keyspaces()) {
if (!view_builders.empty() && ksm->uses_tablets() && !_proxy.data_dictionary().get_config().rf_rack_valid_keyspaces()) {
co_return api_error::validation("GlobalSecondaryIndexes and LocalSecondaryIndexes with tablets require the rf_rack_valid_keyspaces option to be enabled.");
}
try {
schema_mutations = service::prepare_new_keyspace_announcement(sp.local_db(), ksm, ts);
schema_mutations = service::prepare_new_keyspace_announcement(_proxy.local_db(), ksm, ts);
} catch (exceptions::already_exists_exception&) {
if (sp.data_dictionary().has_schema(keyspace_name, table_name)) {
if (_proxy.data_dictionary().has_schema(keyspace_name, table_name)) {
co_return api_error::resource_in_use(fmt::format("Table {} already exists", table_name));
}
}
if (sp.data_dictionary().try_find_table(schema->id())) {
if (_proxy.data_dictionary().try_find_table(schema->id())) {
// This should never happen, the ID is supposed to be unique
co_return api_error::internal(format("Table with ID {} already exists", schema->id()));
}
@@ -1820,9 +1909,9 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
for (schema_builder& view_builder : view_builders) {
schemas.push_back(view_builder.build());
}
co_await service::prepare_new_column_families_announcement(schema_mutations, sp, *ksm, schemas, ts);
co_await service::prepare_new_column_families_announcement(schema_mutations, _proxy, *ksm, schemas, ts);
if (ksm->uses_tablets()) {
co_await mark_view_schemas_as_built(schema_mutations, schemas, ts, sp);
co_await mark_view_schemas_as_built(schema_mutations, schemas, ts, _proxy);
}
// If a role is allowed to create a table, we must give it permissions to
@@ -1847,7 +1936,7 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
}
std::tie(schema_mutations, group0_guard) = co_await std::move(mc).extract();
try {
co_await mm.announce(std::move(schema_mutations), std::move(group0_guard), fmt::format("alternator-executor: create {} table", table_name));
co_await _mm.announce(std::move(schema_mutations), std::move(group0_guard), fmt::format("alternator-executor: create {} table", table_name));
break;
} catch (const service::group0_concurrent_modification& ex) {
elogger.info("Failed to execute CreateTable {} due to concurrent schema modifications. {}.",
@@ -1859,9 +1948,9 @@ static future<executor::request_return_type> create_table_on_shard0(service::cli
}
}
co_await mm.wait_for_schema_agreement(sp.local_db(), db::timeout_clock::now() + 10s, nullptr);
co_await _mm.wait_for_schema_agreement(_proxy.local_db(), db::timeout_clock::now() + 10s, nullptr);
rjson::value status = rjson::empty_object();
executor::supplement_table_info(request, *schema, sp);
executor::supplement_table_info(request, *schema, _proxy);
rjson::add(status, "TableDescription", std::move(request));
co_return rjson::print(std::move(status));
}
@@ -1870,10 +1959,11 @@ future<executor::request_return_type> executor::create_table(client_state& clien
_stats.api_operations.create_table++;
elogger.trace("Creating table {}", request);
co_return co_await _mm.container().invoke_on(0, [&, tr = tracing::global_trace_state_ptr(trace_state), request = std::move(request), &sp = _proxy.container(), &g = _gossiper.container(), &e = this->container(), client_state_other_shard = client_state.move_to_other_shard(), enforce_authorization = bool(_enforce_authorization), warn_authorization = bool(_warn_authorization)]
co_return co_await _mm.container().invoke_on(0, [&, tr = tracing::global_trace_state_ptr(trace_state), request = std::move(request), &e = this->container(), client_state_other_shard = client_state.move_to_other_shard(), enforce_authorization = bool(_enforce_authorization), warn_authorization = bool(_warn_authorization)]
(service::migration_manager& mm) mutable -> future<executor::request_return_type> {
const db::tablets_mode_t::mode tablets_mode = _proxy.data_dictionary().get_config().tablets_mode_for_new_keyspaces(); // type cast
co_return co_await create_table_on_shard0(client_state_other_shard.get(), tr, std::move(request), sp.local(), mm, g.local(), enforce_authorization, warn_authorization, e.local()._stats, std::move(tablets_mode));
// `invoke_on` hopped us to shard 0, but `this` still points to the `executor` from the 'old' shard, so we need to hop it too.
co_return co_await e.local().create_table_on_shard0(client_state_other_shard.get(), tr, std::move(request), enforce_authorization, warn_authorization, std::move(tablets_mode));
});
}
@@ -6087,9 +6177,10 @@ future<> executor::start() {
}
future<> executor::stop() {
co_await _describe_table_info_manager->stop();
// disconnect from the value source, but keep the value unchanged.
s_default_timeout_in_ms = utils::updateable_value<uint32_t>{s_default_timeout_in_ms()};
return _parsed_expression_cache->stop();
co_await _parsed_expression_cache->stop();
}
} // namespace alternator
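The `invoke_on(0, ...)` + `e.local()` pattern above can be illustrated outside Seastar with a toy sharded container (all names below are hypothetical; this is a sketch of the pattern, not the Seastar API):

```python
# Toy stand-in for seastar's sharded<> container: one instance per shard.
# The key point mirrored from the diff: an operation that must run on shard 0
# re-resolves the per-shard instance ("e.local()") instead of reusing the
# captured `self` from the calling shard.

class Executor:
    def __init__(self, shard, container):
        self.shard = shard
        self._container = container

    def create_table(self, request):
        # Wrong: capturing `self` would keep using shard-local state from the
        # old shard. Right: re-resolve the executor that lives on shard 0.
        return self._container.invoke_on(
            0, lambda shard0_exec: shard0_exec.create_table_on_shard0(request))

    def create_table_on_shard0(self, request):
        assert self.shard == 0  # mirrors SCYLLA_ASSERT(this_shard_id() == 0)
        return f"created {request} on shard {self.shard}"

class Container:
    def __init__(self, n_shards):
        self.instances = [Executor(shard, self) for shard in range(n_shards)]

    def invoke_on(self, shard, fn):
        # In Seastar this submits fn to another reactor; here we just call it.
        return fn(self.instances[shard])

c = Container(4)
print(c.instances[3].create_table("tbl"))  # runs on the shard-0 instance
```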

View File

@@ -17,11 +17,13 @@
#include "service/client_state.hh"
#include "service_permit.hh"
#include "db/timeout_clock.hh"
#include "db/config.hh"
#include "alternator/error.hh"
#include "stats.hh"
#include "utils/rjson.hh"
#include "utils/updateable_value.hh"
#include "utils/simple_value_with_expiry.hh"
#include "tracing/trace_state.hh"
@@ -41,6 +43,7 @@ namespace cql3::selection {
namespace service {
class storage_proxy;
class cas_shard;
class storage_service;
}
namespace cdc {
@@ -57,6 +60,7 @@ class schema_builder;
namespace alternator {
enum class table_status;
class rmw_operation;
class put_or_delete_item;
@@ -136,6 +140,7 @@ class expression_cache;
class executor : public peering_sharded_service<executor> {
gms::gossiper& _gossiper;
service::storage_service& _ss;
service::storage_proxy& _proxy;
service::migration_manager& _mm;
db::system_distributed_keyspace& _sdks;
@@ -148,6 +153,11 @@ class executor : public peering_sharded_service<executor> {
std::unique_ptr<parsed::expression_cache> _parsed_expression_cache;
struct describe_table_info_manager;
std::unique_ptr<describe_table_info_manager> _describe_table_info_manager;
future<> cache_newly_calculated_size_on_all_shards(schema_ptr schema, std::uint64_t size_in_bytes, std::chrono::nanoseconds ttl);
future<> fill_table_size(rjson::value &table_description, schema_ptr schema, bool deleting);
public:
using client_state = service::client_state;
// request_return_type is the return type of the executor methods, which
@@ -173,6 +183,7 @@ public:
executor(gms::gossiper& gossiper,
service::storage_proxy& proxy,
service::storage_service& ss,
service::migration_manager& mm,
db::system_distributed_keyspace& sdks,
cdc::metadata& cdc_metadata,
@@ -220,6 +231,8 @@ private:
friend class rmw_operation;
static void describe_key_schema(rjson::value& parent, const schema&, std::unordered_map<std::string,std::string> * = nullptr, const std::map<sstring, sstring> *tags = nullptr);
future<rjson::value> fill_table_description(schema_ptr schema, table_status tbl_status, service::client_state& client_state, tracing::trace_state_ptr trace_state, service_permit permit);
future<executor::request_return_type> create_table_on_shard0(service::client_state&& client_state, tracing::trace_state_ptr trace_state, rjson::value request, bool enforce_authorization, bool warn_authorization, const db::tablets_mode_t::mode tablets_mode);
future<> do_batch_write(
std::vector<std::pair<schema_ptr, put_or_delete_item>> mutation_builders,

View File

@@ -547,17 +547,13 @@ void set_view_builder(http_context& ctx, routes& r, sharded<db::view::view_build
vp.insert(b.second);
}
}
std::vector<sstring> res;
replica::database& db = vb.local().get_db();
auto uuid = validate_table(db, ks, cf_name);
replica::column_family& cf = db.find_column_family(uuid);
res.reserve(cf.get_index_manager().list_indexes().size());
for (auto&& i : cf.get_index_manager().list_indexes()) {
if (vp.contains(secondary_index::index_table_name(i.metadata().name()))) {
res.emplace_back(i.metadata().name());
}
}
co_return res;
co_return cf.get_index_manager().list_indexes()
| std::views::transform([] (const auto& i) { return i.metadata().name(); })
| std::views::filter([&vp] (const auto& n) { return vp.contains(secondary_index::index_table_name(n)); })
| std::ranges::to<std::vector>();
});
}
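The rewrite above replaces a reserve-and-push loop with a transform/filter/collect pipeline. A Python sketch of the same shape (names hypothetical):

```python
# Equivalent of:
#   list_indexes() | transform(name) | filter(built contains table name) | to<vector>
def list_built_indexes(indexes, built_view_names, index_table_name):
    return [
        name
        for name in (i["name"] for i in indexes)          # transform to names
        if index_table_name(name) in built_view_names     # keep built ones only
    ]

indexes = [{"name": "by_email"}, {"name": "by_phone"}]
built = {"by_email_index"}
print(list_built_indexes(indexes, built, lambda n: n + "_index"))  # ['by_email']
```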

View File

@@ -15,6 +15,7 @@
#include "db/system_keyspace.hh"
#include "schema/schema.hh"
#include <iterator>
#include <seastar/core/abort_source.hh>
#include <seastar/coroutine/maybe_yield.hh>
#include <seastar/core/format.hh>
@@ -22,9 +23,11 @@ namespace auth {
logging::logger logger("auth-cache");
cache::cache(cql3::query_processor& qp) noexcept
cache::cache(cql3::query_processor& qp, abort_source& as) noexcept
: _current_version(0)
, _qp(qp) {
, _qp(qp)
, _loading_sem(1)
, _as(as) {
}
lw_shared_ptr<const cache::role_record> cache::get(const role_name_t& role) const noexcept {
@@ -116,6 +119,8 @@ future<> cache::load_all() {
co_return;
}
SCYLLA_ASSERT(this_shard_id() == 0);
auto units = co_await get_units(_loading_sem, 1, _as);
++_current_version;
logger.info("Loading all roles");
@@ -146,6 +151,9 @@ future<> cache::load_roles(std::unordered_set<role_name_t> roles) {
if (legacy_mode(_qp)) {
co_return;
}
SCYLLA_ASSERT(this_shard_id() == 0);
auto units = co_await get_units(_loading_sem, 1, _as);
for (const auto& name : roles) {
logger.info("Loading role {}", name);
auto role = co_await fetch_role(name);
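The new `_loading_sem` / `abort_source` pair serializes concurrent role loads. A minimal asyncio sketch of the same pattern (illustrative names; task cancellation plays the role Seastar's `abort_source` plays in `get_units(_loading_sem, 1, _as)`):

```python
import asyncio

class RoleCache:
    def __init__(self):
        # One unit: concurrent load_all()/load_roles() calls run one at a time,
        # mirroring _loading_sem(1) in the diff above.
        self._loading_sem = asyncio.Semaphore(1)
        self.load_order = []

    async def load_all(self, tag, delay):
        async with self._loading_sem:  # ~ co_await get_units(_loading_sem, 1, _as)
            self.load_order.append(f"start:{tag}")
            await asyncio.sleep(delay)  # simulate fetching roles
            self.load_order.append(f"end:{tag}")

async def main():
    cache = RoleCache()
    # Both loads start "concurrently", but the semaphore prevents interleaving.
    await asyncio.gather(cache.load_all("a", 0.01), cache.load_all("b", 0.0))
    return cache.load_order

print(asyncio.run(main()))
```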

View File

@@ -8,6 +8,7 @@
#pragma once
#include <seastar/core/abort_source.hh>
#include <unordered_set>
#include <unordered_map>
@@ -15,6 +16,7 @@
#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>
#include <seastar/core/shared_ptr.hh>
#include <seastar/core/semaphore.hh>
#include <absl/container/flat_hash_map.h>
@@ -41,7 +43,7 @@ public:
version_tag_t version; // used for seamless cache reloads
};
explicit cache(cql3::query_processor& qp) noexcept;
explicit cache(cql3::query_processor& qp, abort_source& as) noexcept;
lw_shared_ptr<const role_record> get(const role_name_t& role) const noexcept;
future<> load_all();
future<> load_roles(std::unordered_set<role_name_t> roles);
@@ -52,6 +54,8 @@ private:
roles_map _roles;
version_tag_t _current_version;
cql3::query_processor& _qp;
semaphore _loading_sem;
abort_source& _as;
future<lw_shared_ptr<role_record>> fetch_role(const role_name_t& role) const;
future<> prune_all() noexcept;

View File

@@ -12,6 +12,7 @@
#include <seastar/core/condition-variable.hh>
#include "schema/schema_fwd.hh"
#include "sstables/open_info.hh"
#include "compaction_descriptor.hh"
class reader_permit;
@@ -44,7 +45,7 @@ public:
virtual compaction_strategy_state& get_compaction_strategy_state() noexcept = 0;
virtual reader_permit make_compaction_reader_permit() const = 0;
virtual sstables::sstables_manager& get_sstables_manager() noexcept = 0;
virtual sstables::shared_sstable make_sstable() const = 0;
virtual sstables::shared_sstable make_sstable(sstables::sstable_state) const = 0;
virtual sstables::sstable_writer_config configure_writer(sstring origin) const = 0;
virtual api::timestamp_type min_memtable_timestamp() const = 0;
virtual api::timestamp_type min_memtable_live_timestamp() const = 0;

View File

@@ -416,7 +416,9 @@ future<compaction_result> compaction_task_executor::compact_sstables(compaction_
descriptor.enable_garbage_collection(co_await sstable_set_for_tombstone_gc(t));
}
descriptor.creator = [&t] (shard_id) {
return t.make_sstable();
// All compaction types going through this path will work on normal input sstables only.
// Off-strategy, for example, waits until the sstables move out of staging state.
return t.make_sstable(sstables::sstable_state::normal);
};
descriptor.replacer = [this, &t, &on_replace, offstrategy] (compaction_completion_desc desc) {
t.get_compaction_strategy().notify_completion(t, desc.old_sstables, desc.new_sstables);
@@ -1847,6 +1849,10 @@ protected:
throw make_compaction_stopped_exception();
}
}, false);
if (utils::get_local_injector().is_enabled("split_sstable_force_stop_exception")) {
throw make_compaction_stopped_exception();
}
co_return co_await do_rewrite_sstable(std::move(sst));
}
};
@@ -2284,12 +2290,16 @@ future<compaction_manager::compaction_stats_opt> compaction_manager::perform_spl
}
future<std::vector<sstables::shared_sstable>>
compaction_manager::maybe_split_sstable(sstables::shared_sstable sst, compaction_group_view& t, compaction_type_options::split opt) {
compaction_manager::maybe_split_new_sstable(sstables::shared_sstable sst, compaction_group_view& t, compaction_type_options::split opt) {
if (!split_compaction_task_executor::sstable_needs_split(sst, opt)) {
co_return std::vector<sstables::shared_sstable>{sst};
}
if (!can_proceed(&t)) {
co_return std::vector<sstables::shared_sstable>{sst};
// Throw an error if split cannot be performed due to e.g. out-of-space prevention.
// We don't want to block the split just because compaction is temporarily disabled on a view
// for synchronization; that synchronization is unneeded for new sstables that aren't part of
// any set yet, so never use can_proceed(&t) here.
if (is_disabled()) {
co_return coroutine::exception(std::make_exception_ptr(std::runtime_error(format("Cannot split {} because manager has compaction disabled, " \
"reason might be out of space prevention", sst->get_filename()))));
}
std::vector<sstables::shared_sstable> ret;
@@ -2297,8 +2307,11 @@ compaction_manager::maybe_split_sstable(sstables::shared_sstable sst, compaction
compaction_progress_monitor monitor;
compaction_data info = create_compaction_data();
compaction_descriptor desc = split_compaction_task_executor::make_descriptor(sst, opt);
desc.creator = [&t] (shard_id _) {
return t.make_sstable();
desc.creator = [&t, sst] (shard_id _) {
// NOTE: preserves the sstable state, since we want the output to be in the same state as the original.
// For example, if base table has views, it's important that sstable produced by repair will be
// in the staging state.
return t.make_sstable(sst->state());
};
desc.replacer = [&] (compaction_completion_desc d) {
std::move(d.new_sstables.begin(), d.new_sstables.end(), std::back_inserter(ret));
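The creator change above makes split outputs inherit the input sstable's state. A toy sketch of that invariant (all names hypothetical):

```python
# If the sstable doesn't need a split, it is returned unchanged. If it does,
# every output sstable is created in the same state as the input, e.g. a
# "staging" sstable produced by repair splits into "staging" outputs.
def split_sstable(sst, needs_split, make_sstable):
    if not needs_split(sst):
        return [sst]
    # ~ desc.creator = [&t, sst] (...) { return t.make_sstable(sst->state()); }
    return [make_sstable(sst["state"]) for _ in range(2)]

sst = {"name": "sst1", "state": "staging"}
out = split_sstable(sst, lambda s: True,
                    lambda state: {"name": "new", "state": state})
print([o["state"] for o in out])  # ['staging', 'staging']
```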

View File

@@ -376,7 +376,8 @@ public:
// Splits a single SSTable by segregating all its data according to the classifier.
// If SSTable doesn't need split, the same input SSTable is returned as output.
// If SSTable needs split, then output SSTables are returned and the input SSTable is deleted.
future<std::vector<sstables::shared_sstable>> maybe_split_sstable(sstables::shared_sstable sst, compaction_group_view& t, compaction_type_options::split opt);
// An exception is thrown if the input sstable cannot be split due to e.g. out-of-space prevention.
future<std::vector<sstables::shared_sstable>> maybe_split_new_sstable(sstables::shared_sstable sst, compaction_group_view& t, compaction_type_options::split opt);
// Run a custom job for a given table, defined by a function
// it completes when future returned by job is ready or returns immediately

View File

@@ -571,10 +571,10 @@ commitlog_total_space_in_mb: -1
# - "none": auditing is disabled (default)
# - "table": save audited events in audit.audit_log column family
# - "syslog": send audited events via syslog (depends on OS, but usually to /dev/log)
# audit: "none"
audit: "table"
#
# List of statement categories that should be audited.
# audit_categories: "DCL,DDL,AUTH"
audit_categories: "DCL,DDL,AUTH,ADMIN"
#
# List of tables that should be audited.
# audit_tables: "<keyspace_name>.<table_name>,<keyspace_name>.<table_name>"

View File

@@ -368,6 +368,87 @@ def find_ninja():
sys.exit(1)
def find_compiler(name):
"""
Find a compiler by name, skipping ccache wrapper directories.
This is useful when using sccache to avoid double-caching through ccache.
Args:
name: The compiler name (e.g., 'clang++', 'clang', 'gcc')
Returns:
Path to the compiler, skipping ccache directories, or None if not found.
"""
ccache_dirs = {'/usr/lib/ccache', '/usr/lib64/ccache'}
for path_dir in os.environ.get('PATH', '').split(os.pathsep):
# Skip ccache wrapper directories
if os.path.realpath(path_dir) in ccache_dirs or path_dir in ccache_dirs:
continue
candidate = os.path.join(path_dir, name)
if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
return candidate
return None
def resolve_compilers_for_sccache(args, compiler_cache):
"""
When using sccache, resolve compiler paths to avoid ccache directories.
This prevents double-caching when ccache symlinks are in PATH.
Args:
args: The argument namespace with cc and cxx attributes.
compiler_cache: Path to the compiler cache binary, or None.
"""
if not compiler_cache or 'sccache' not in compiler_cache:
return
if not os.path.isabs(args.cxx):
real_cxx = find_compiler(args.cxx)
if real_cxx:
args.cxx = real_cxx
if not os.path.isabs(args.cc):
real_cc = find_compiler(args.cc)
if real_cc:
args.cc = real_cc
def find_compiler_cache(preference):
"""
Find a compiler cache based on the preference.
Args:
preference: One of 'auto', 'sccache', 'ccache', 'none', or a path to a binary.
Returns:
Path to the compiler cache binary, or None if not found/disabled.
"""
if preference == 'none':
return None
if preference == 'auto':
# Prefer sccache over ccache
for cache in ['sccache', 'ccache']:
path = which(cache)
if path:
return path
return None
if preference in ('sccache', 'ccache'):
path = which(preference)
if path:
return path
print(f"Warning: {preference} not found on PATH, disabling compiler cache")
return None
# Assume it's a path to a binary
if os.path.isfile(preference) and os.access(preference, os.X_OK):
return preference
print(f"Warning: compiler cache '{preference}' not found or not executable, disabling compiler cache")
return None
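The selection rules in `find_compiler_cache()` can be exercised in isolation; below is a self-contained re-sketch with an injectable `which`, so the behavior is testable without sccache installed (the `pick_cache` name is ours):

```python
import os
from shutil import which

def pick_cache(preference, which=which):
    # 'none' disables the compiler cache outright.
    if preference == 'none':
        return None
    # 'auto' prefers sccache over ccache, as in find_compiler_cache() above.
    if preference == 'auto':
        for cache in ['sccache', 'ccache']:
            path = which(cache)
            if path:
                return path
        return None
    # A named cache that isn't on PATH leaves caching disabled.
    if preference in ('sccache', 'ccache'):
        return which(preference)
    # Otherwise: treat preference as an explicit path to a binary.
    if os.path.isfile(preference) and os.access(preference, os.X_OK):
        return preference
    return None

fake_path = {'sccache': '/opt/bin/sccache', 'ccache': '/usr/bin/ccache'}
print(pick_cache('auto', which=fake_path.get))    # /opt/bin/sccache
print(pick_cache('ccache', which=fake_path.get))  # /usr/bin/ccache
print(pick_cache('none', which=fake_path.get))    # None
```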
modes = {
'debug': {
'cxxflags': '-DDEBUG -DSANITIZE -DDEBUG_LSA_SANITIZER -DSCYLLA_ENABLE_ERROR_INJECTION',
@@ -732,6 +813,8 @@ arg_parser.add_argument('--compiler', action='store', dest='cxx', default='clang
help='C++ compiler path')
arg_parser.add_argument('--c-compiler', action='store', dest='cc', default='clang',
help='C compiler path')
arg_parser.add_argument('--compiler-cache', action='store', dest='compiler_cache', default='auto',
help='Compiler cache to use: auto (default, prefers sccache), sccache, ccache, none, or a path to a binary')
add_tristate(arg_parser, name='dpdk', dest='dpdk', default=False,
help='Use dpdk (from seastar dpdk sources)')
arg_parser.add_argument('--dpdk-target', action='store', dest='dpdk_target', default='',
@@ -1615,6 +1698,7 @@ deps['test/boost/combined_tests'] += [
'test/boost/schema_registry_test.cc',
'test/boost/secondary_index_test.cc',
'test/boost/sessions_test.cc',
'test/boost/simple_value_with_expiry_test.cc',
'test/boost/sstable_compaction_test.cc',
'test/boost/sstable_compressor_factory_test.cc',
'test/boost/sstable_compression_config_test.cc',
@@ -1698,6 +1782,18 @@ deps['test/vector_search/vector_store_client_test'] = ['test/vector_search/vect
deps['test/vector_search/load_balancer_test'] = ['test/vector_search/load_balancer_test.cc'] + scylla_tests_dependencies
deps['test/vector_search/client_test'] = ['test/vector_search/client_test.cc'] + scylla_tests_dependencies
boost_tests_prefixes = ["test/boost/", "test/vector_search/", "test/raft/", "test/manual/", "test/ldap/"]
# We need to link these files to all Boost tests to make sure that
# we can execute `--list_json_content` on them. That will produce
# a result similar to calling `--list_content={HRF,DOT}`.
# Unfortunately, to be able to do that, we're forced to link the
# relevant code by hand.
for key in deps.keys():
for prefix in boost_tests_prefixes:
if key.startswith(prefix):
deps[key] += ["test/lib/boost_tree_lister_injector.cc", "test/lib/boost_test_tree_lister.cc"]
wasm_deps = {}
wasm_deps['wasm/return_input.wat'] = 'test/resource/wasm/rust/return_input.rs'
@@ -2002,7 +2098,7 @@ def semicolon_separated(*flags):
def real_relpath(path, start):
return os.path.relpath(os.path.realpath(path), os.path.realpath(start))
def configure_seastar(build_dir, mode, mode_config):
def configure_seastar(build_dir, mode, mode_config, compiler_cache=None):
seastar_cxx_ld_flags = mode_config['cxx_ld_flags']
# We want to "undo" coverage for seastar if we have it enabled.
if args.coverage:
@@ -2049,6 +2145,10 @@ def configure_seastar(build_dir, mode, mode_config):
'-DSeastar_IO_URING=ON',
]
if compiler_cache:
seastar_cmake_args += [f'-DCMAKE_CXX_COMPILER_LAUNCHER={compiler_cache}',
f'-DCMAKE_C_COMPILER_LAUNCHER={compiler_cache}']
if args.stack_guards is not None:
stack_guards = 'ON' if args.stack_guards else 'OFF'
seastar_cmake_args += ['-DSeastar_STACK_GUARDS={}'.format(stack_guards)]
@@ -2080,7 +2180,7 @@ def configure_seastar(build_dir, mode, mode_config):
subprocess.check_call(seastar_cmd, shell=False, cwd=cmake_dir)
def configure_abseil(build_dir, mode, mode_config):
def configure_abseil(build_dir, mode, mode_config, compiler_cache=None):
abseil_cflags = mode_config['lib_cflags']
cxx_flags = mode_config['cxxflags']
if '-DSANITIZE' in cxx_flags:
@@ -2106,6 +2206,10 @@ def configure_abseil(build_dir, mode, mode_config):
'-DABSL_PROPAGATE_CXX_STD=ON',
]
if compiler_cache:
abseil_cmake_args += [f'-DCMAKE_CXX_COMPILER_LAUNCHER={compiler_cache}',
f'-DCMAKE_C_COMPILER_LAUNCHER={compiler_cache}']
cmake_args = abseil_cmake_args[:]
abseil_build_dir = os.path.join(build_dir, mode, 'abseil')
abseil_cmd = ['cmake', '-G', 'Ninja', real_relpath('abseil', abseil_build_dir)] + cmake_args
@@ -2278,10 +2382,15 @@ def write_build_file(f,
scylla_product,
scylla_version,
scylla_release,
compiler_cache,
args):
use_precompiled_header = not args.disable_precompiled_header
warnings = get_warning_options(args.cxx)
rustc_target = pick_rustc_target('wasm32-wasi', 'wasm32-wasip1')
# If compiler cache is available, prefix the compiler with it
cxx_with_cache = f'{compiler_cache} {args.cxx}' if compiler_cache else args.cxx
# For Rust, sccache is used via RUSTC_WRAPPER environment variable
rustc_wrapper = f'RUSTC_WRAPPER={compiler_cache} ' if compiler_cache and 'sccache' in compiler_cache else ''
f.write(textwrap.dedent('''\
configure_args = {configure_args}
builddir = {outdir}
@@ -2344,7 +2453,7 @@ def write_build_file(f,
command = clang --target=wasm32 --no-standard-libraries -Wl,--export-all -Wl,--no-entry $in -o $out
description = C2WASM $out
rule rust2wasm
command = cargo build --target={rustc_target} --example=$example --locked --manifest-path=test/resource/wasm/rust/Cargo.toml --target-dir=$builddir/wasm/ $
command = {rustc_wrapper}cargo build --target={rustc_target} --example=$example --locked --manifest-path=test/resource/wasm/rust/Cargo.toml --target-dir=$builddir/wasm/ $
&& wasm-opt -Oz $builddir/wasm/{rustc_target}/debug/examples/$example.wasm -o $builddir/wasm/$example.wasm $
&& wasm-strip $builddir/wasm/$example.wasm
description = RUST2WASM $out
@@ -2360,7 +2469,7 @@ def write_build_file(f,
command = llvm-profdata merge $in -output=$out
''').format(configure_args=configure_args,
outdir=outdir,
cxx=args.cxx,
cxx=cxx_with_cache,
user_cflags=user_cflags,
warnings=warnings,
defines=defines,
@@ -2368,6 +2477,7 @@ def write_build_file(f,
user_ldflags=user_ldflags,
libs=libs,
rustc_target=rustc_target,
rustc_wrapper=rustc_wrapper,
link_pool_depth=link_pool_depth,
seastar_path=args.seastar_path,
ninja=ninja,
@@ -2452,10 +2562,10 @@ def write_build_file(f,
description = TEST {mode}
# This rule is unused for PGO stages. They use the rust lib from the parent mode.
rule rust_lib.{mode}
command = CARGO_BUILD_DEP_INFO_BASEDIR='.' cargo build --locked --manifest-path=rust/Cargo.toml --target-dir=$builddir/{mode} --profile=rust-{mode} $
command = CARGO_BUILD_DEP_INFO_BASEDIR='.' {rustc_wrapper}cargo build --locked --manifest-path=rust/Cargo.toml --target-dir=$builddir/{mode} --profile=rust-{mode} $
&& touch $out
description = RUST_LIB $out
''').format(mode=mode, antlr3_exec=args.antlr3_exec, fmt_lib=fmt_lib, test_repeat=args.test_repeat, test_timeout=args.test_timeout, **modeval))
''').format(mode=mode, antlr3_exec=args.antlr3_exec, fmt_lib=fmt_lib, test_repeat=args.test_repeat, test_timeout=args.test_timeout, rustc_wrapper=rustc_wrapper, **modeval))
f.write(
'build {mode}-build: phony {artifacts} {wasms} {vector_search_validator_bins}\n'.format(
mode=mode,
@@ -2519,7 +2629,7 @@ def write_build_file(f,
# In debug/sanitize modes, we compile with fsanitizers,
# so must use the same options during the link:
if '-DSANITIZE' in modes[mode]['cxxflags']:
f.write(' libs = -fsanitize=address -fsanitize=undefined\n')
f.write(' libs = -fsanitize=address -fsanitize=undefined -lubsan\n')
else:
f.write(' libs =\n')
f.write(f'build $builddir/{mode}/{binary}.stripped: strip $builddir/{mode}/{binary}\n')
@@ -2915,6 +3025,9 @@ def create_build_system(args):
os.makedirs(outdir, exist_ok=True)
compiler_cache = find_compiler_cache(args.compiler_cache)
resolve_compilers_for_sccache(args, compiler_cache)
scylla_product, scylla_version, scylla_release = generate_version(args.date_stamp)
for mode, mode_config in build_modes.items():
@@ -2931,8 +3044,8 @@ def create_build_system(args):
# {outdir}/{mode}/seastar/build.ninja, and
# {outdir}/{mode}/seastar/seastar.pc is queried for building flags
for mode, mode_config in build_modes.items():
configure_seastar(outdir, mode, mode_config)
configure_abseil(outdir, mode, mode_config)
configure_seastar(outdir, mode, mode_config, compiler_cache)
configure_abseil(outdir, mode, mode_config, compiler_cache)
user_cflags += ' -isystem abseil'
for mode, mode_config in build_modes.items():
@@ -2955,6 +3068,7 @@ def create_build_system(args):
scylla_product,
scylla_version,
scylla_release,
compiler_cache,
args)
generate_compdb('compile_commands.json', ninja, args.buildfile, selected_modes)
@@ -2997,6 +3111,10 @@ def configure_using_cmake(args):
selected_modes = args.selected_modes or default_modes
selected_configs = ';'.join(build_modes[mode].cmake_build_type for mode
in selected_modes)
compiler_cache = find_compiler_cache(args.compiler_cache)
resolve_compilers_for_sccache(args, compiler_cache)
settings = {
'CMAKE_CONFIGURATION_TYPES': selected_configs,
'CMAKE_CROSS_CONFIGS': selected_configs,
@@ -3014,6 +3132,14 @@ def configure_using_cmake(args):
'Scylla_WITH_DEBUG_INFO' : 'ON' if args.debuginfo else 'OFF',
'Scylla_USE_PRECOMPILED_HEADER': 'OFF' if args.disable_precompiled_header else 'ON',
}
if compiler_cache:
settings['CMAKE_CXX_COMPILER_LAUNCHER'] = compiler_cache
settings['CMAKE_C_COMPILER_LAUNCHER'] = compiler_cache
# For Rust, sccache is used via RUSTC_WRAPPER
if 'sccache' in compiler_cache:
settings['Scylla_RUSTC_WRAPPER'] = compiler_cache
if args.date_stamp:
settings['Scylla_DATE_STAMP'] = args.date_stamp
if args.staticboost:
@@ -3045,7 +3171,7 @@ def configure_using_cmake(args):
if not args.dist_only:
for mode in selected_modes:
configure_seastar(build_dir, build_modes[mode].cmake_build_type, modes[mode])
configure_seastar(build_dir, build_modes[mode].cmake_build_type, modes[mode], compiler_cache)
cmake_command = ['cmake']
cmake_command += [f'-D{var}={value}' for var, value in settings.items()]

View File

@@ -190,7 +190,7 @@ future<utils::chunked_vector<mutation>> batch_statement::get_mutations(query_pro
co_return vresult;
}
void batch_statement::verify_batch_size(query_processor& qp, const utils::chunked_vector<mutation>& mutations) {
void batch_statement::verify_batch_size(query_processor& qp, const utils::chunked_vector<mutation>& mutations) const {
if (mutations.size() <= 1) {
return; // We only warn for batch spanning multiple mutations
}
@@ -209,8 +209,9 @@ void batch_statement::verify_batch_size(query_processor& qp, const utils::chunke
for (auto&& m : mutations) {
ks_cf_pairs.insert(m.schema()->ks_name() + "." + m.schema()->cf_name());
}
return seastar::format("Batch modifying {:d} partitions in {} is of size {:d} bytes, exceeding specified {} threshold of {:d} by {:d}.",
mutations.size(), fmt::join(ks_cf_pairs, ", "), size, type, threshold, size - threshold);
const auto batch_type = _type == type::LOGGED ? "Logged" : "Unlogged";
return seastar::format("{} batch modifying {:d} partitions in {} is of size {:d} bytes, exceeding specified {} threshold of {:d} by {:d}.",
batch_type, mutations.size(), fmt::join(ks_cf_pairs, ", "), size, type, threshold, size - threshold);
};
if (size > fail_threshold) {
_logger.error("{}", error("FAIL", fail_threshold).c_str());
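The message change above derives a "Logged"/"Unlogged" prefix from the statement type. A sketch of the resulting formatting (helper name hypothetical):

```python
# Builds the threshold-exceeded message, now led by the batch type.
def batch_size_message(batch_type, n_partitions, tables, size, kind, threshold):
    prefix = "Logged" if batch_type == "LOGGED" else "Unlogged"
    return (f"{prefix} batch modifying {n_partitions} partitions in "
            f"{', '.join(tables)} is of size {size} bytes, exceeding specified "
            f"{kind} threshold of {threshold} by {size - threshold}.")

print(batch_size_message("LOGGED", 3, ["ks.t1", "ks.t2"], 7000, "WARN", 5000))
```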

View File

@@ -116,7 +116,7 @@ public:
* Checks batch size to ensure threshold is met. If not, a warning is logged.
* @param cfs ColumnFamilies that will store the batch's mutations.
*/
static void verify_batch_size(query_processor& qp, const utils::chunked_vector<mutation>& mutations);
void verify_batch_size(query_processor& qp, const utils::chunked_vector<mutation>& mutations) const;
virtual future<shared_ptr<cql_transport::messages::result_message>> execute(
query_processor& qp, service::query_state& state, const query_options& options, std::optional<service::group0_guard> guard) const override;

View File

@@ -17,6 +17,7 @@
#include <seastar/core/metrics.hh>
#include <seastar/core/coroutine.hh>
#include <seastar/core/sleep.hh>
#include <seastar/coroutine/parallel_for_each.hh>
#include "batchlog_manager.hh"
#include "batchlog.hh"
@@ -318,8 +319,8 @@ future<db::all_batches_replayed> db::batchlog_manager::replay_all_failed_batches
std::unordered_map<int32_t, replay_stats> replay_stats_per_shard;
// Use a stable `now` accross all batches, so skip/replay decisions are the
// same accross a while prefix of written_at (accross all ids).
// Use a stable `now` across all batches, so skip/replay decisions are the
// same across a whole prefix of written_at (across all ids).
const auto now = db_clock::now();
auto batch = [this, cleanup, limiter, schema, &all_replayed, &replay_stats_per_shard, now] (const cql3::untyped_result_set::row& row) -> future<stop_iteration> {

View File

@@ -1105,6 +1105,14 @@ db::config::config(std::shared_ptr<db::extensions> exts)
"Like native_transport_port, but clients-side port number (modulo smp) is used to route the connection to the specific shard.")
, native_shard_aware_transport_port_ssl(this, "native_shard_aware_transport_port_ssl", value_status::Used, 19142,
"Like native_transport_port_ssl, but clients-side port number (modulo smp) is used to route the connection to the specific shard.")
, native_transport_port_proxy_protocol(this, "native_transport_port_proxy_protocol", value_status::Used, 0,
"Port on which the CQL native transport listens for clients using proxy protocol v2. Disabled (0) by default.")
, native_transport_port_ssl_proxy_protocol(this, "native_transport_port_ssl_proxy_protocol", value_status::Used, 0,
"Port on which the CQL TLS native transport listens for clients using proxy protocol v2. Disabled (0) by default.")
, native_shard_aware_transport_port_proxy_protocol(this, "native_shard_aware_transport_port_proxy_protocol", value_status::Used, 0,
"Like native_transport_port_proxy_protocol, but client-side port number (modulo smp) is used to route the connection to the specific shard.")
, native_shard_aware_transport_port_ssl_proxy_protocol(this, "native_shard_aware_transport_port_ssl_proxy_protocol", value_status::Used, 0,
"Like native_transport_port_ssl_proxy_protocol, but client-side port number (modulo smp) is used to route the connection to the specific shard.")
, native_transport_max_threads(this, "native_transport_max_threads", value_status::Invalid, 128,
"The maximum number of threads handling requests. The meaning is the same as rpc_max_threads.\n"
"Default is different (128 versus unlimited).\n"
@@ -1470,6 +1478,8 @@ db::config::config(std::shared_ptr<db::extensions> exts)
, alternator_max_expression_cache_entries_per_shard(this, "alternator_max_expression_cache_entries_per_shard", liveness::LiveUpdate, value_status::Used, 2000, "Maximum number of cached parsed request expressions, per shard.")
, alternator_max_users_query_size_in_trace_output(this, "alternator_max_users_query_size_in_trace_output", liveness::LiveUpdate, value_status::Used, uint64_t(4096),
"Maximum size of user's command in trace output (`alternator_op` entry). Larger traces will be truncated and have `<truncated>` message appended - which doesn't count to the maximum limit.")
, alternator_describe_table_info_cache_validity_in_seconds(this, "alternator_describe_table_info_cache_validity_in_seconds", liveness::LiveUpdate, value_status::Used, 60 * 60 * 6,
"Validity of the cached DescribeTable information (table size in bytes): how long the calculated value is reused before it is recalculated.")
, abort_on_ebadf(this, "abort_on_ebadf", value_status::Used, true, "Abort the server on incorrect file descriptor access. Throws exception when disabled.")
, sanitizer_report_backtrace(this, "sanitizer_report_backtrace", value_status::Used, false,
"In debug mode, report log-structured allocator sanitizer violations with a backtrace. Slow.")
@@ -1566,6 +1576,12 @@ db::config::config(std::shared_ptr<db::extensions> exts)
// Bigger tables will take longer to be resized. Similar-sized tables can be batched into the same iteration.
, tablet_load_stats_refresh_interval_in_seconds(this, "tablet_load_stats_refresh_interval_in_seconds", liveness::LiveUpdate, value_status::Used, 60,
"Tablet load stats refresh rate in seconds.")
, force_capacity_based_balancing(this, "force_capacity_based_balancing", liveness::LiveUpdate, value_status::Used, false,
"Forces the load balancer to perform capacity based balancing, instead of size based balancing.")
, size_based_balance_threshold_percentage(this, "size_based_balance_threshold_percentage", liveness::LiveUpdate, value_status::Used, 1.0,
"Sets the maximum difference in percentages between the most loaded and least loaded nodes, below which the load balancer considers nodes balanced.")
, minimal_tablet_size_for_balancing(this, "minimal_tablet_size_for_balancing", liveness::LiveUpdate, value_status::Used, service::default_target_tablet_size / 100,
"Sets the minimal tablet size for the load balancer. For any tablet smaller than this, the balancer will use this size instead of the actual tablet size.")
, default_log_level(this, "default_log_level", value_status::Used, seastar::log_level::info, "Default log level for log messages")
, logger_log_level(this, "logger_log_level", value_status::Used, {}, "Map of logger name to log level. Valid log levels are 'error', 'warn', 'info', 'debug' and 'trace'")
, log_to_stdout(this, "log_to_stdout", value_status::Used, true, "Send log output to stdout")

View File

@@ -324,6 +324,10 @@ public:
named_value<uint16_t> native_transport_port_ssl;
named_value<uint16_t> native_shard_aware_transport_port;
named_value<uint16_t> native_shard_aware_transport_port_ssl;
named_value<uint16_t> native_transport_port_proxy_protocol;
named_value<uint16_t> native_transport_port_ssl_proxy_protocol;
named_value<uint16_t> native_shard_aware_transport_port_proxy_protocol;
named_value<uint16_t> native_shard_aware_transport_port_ssl_proxy_protocol;
named_value<uint32_t> native_transport_max_threads;
named_value<uint32_t> native_transport_max_frame_size_in_mb;
named_value<sstring> broadcast_rpc_address;
@@ -473,6 +477,7 @@ public:
named_value<bool> alternator_allow_system_table_write;
named_value<uint32_t> alternator_max_expression_cache_entries_per_shard;
named_value<uint64_t> alternator_max_users_query_size_in_trace_output;
named_value<uint32_t> alternator_describe_table_info_cache_validity_in_seconds;
named_value<bool> abort_on_ebadf;
@@ -590,6 +595,9 @@ public:
named_value<bool> rf_rack_valid_keyspaces;
named_value<uint32_t> tablet_load_stats_refresh_interval_in_seconds;
named_value<bool> force_capacity_based_balancing;
named_value<float> size_based_balance_threshold_percentage;
named_value<uint64_t> minimal_tablet_size_for_balancing;
static const sstring default_tls_priority;
private:

View File

@@ -26,6 +26,7 @@
#include <seastar/core/smp.hh>
#include <seastar/coroutine/exception.hh>
#include <seastar/coroutine/parallel_for_each.hh>
#include <seastar/util/file.hh>
// Boost features.
@@ -899,7 +900,7 @@ future<> manager::migrate_ip_directories() {
co_await coroutine::parallel_for_each(dirs_to_remove, [] (auto& directory) -> future<> {
try {
manager_logger.warn("Removing hint directory {}", directory.native());
co_await lister::rmdir(directory);
co_await seastar::recursive_remove_directory(directory);
} catch (...) {
on_internal_error(manager_logger,
seastar::format("Removing a hint directory has failed. Reason: {}", std::current_exception()));

View File

@@ -12,8 +12,6 @@
#include <yaml-cpp/yaml.h>
#include <boost/lexical_cast.hpp>
#include "utils/s3/creds.hh"
#include "object_storage_endpoint_param.hh"
using namespace std::string_literals;
@@ -21,9 +19,6 @@ using namespace std::string_literals;
db::object_storage_endpoint_param::object_storage_endpoint_param(s3_storage s)
: _data(std::move(s))
{}
db::object_storage_endpoint_param::object_storage_endpoint_param(std::string endpoint, s3::endpoint_config config)
: object_storage_endpoint_param(s3_storage{std::move(endpoint), std::move(config)})
{}
db::object_storage_endpoint_param::object_storage_endpoint_param(gs_storage s)
: _data(std::move(s))
{}
@@ -32,8 +27,8 @@ db::object_storage_endpoint_param::object_storage_endpoint_param() = default;
db::object_storage_endpoint_param::object_storage_endpoint_param(const object_storage_endpoint_param&) = default;
std::string db::object_storage_endpoint_param::s3_storage::to_json_string() const {
return fmt::format("{{ \"port\": {}, \"use_https\": {}, \"aws_region\": \"{}\", \"iam_role_arn\": \"{}\" }}",
config.port, config.use_https, config.region, config.role_arn
return fmt::format("{{ \"type\": \"s3\", \"aws_region\": \"{}\", \"iam_role_arn\": \"{}\" }}",
region, iam_role_arn
);
}
@@ -99,8 +94,6 @@ const std::string& db::object_storage_endpoint_param::type() const {
db::object_storage_endpoint_param db::object_storage_endpoint_param::decode(const YAML::Node& node) {
auto name = node["name"];
auto aws_region = node["aws_region"];
auto iam_role_arn = node["iam_role_arn"];
auto type = node["type"];
auto get_opt = [](auto& node, const std::string& key, auto def) {
@@ -108,13 +101,19 @@ db::object_storage_endpoint_param db::object_storage_endpoint_param::decode(cons
return tmp ? tmp.template as<std::decay_t<decltype(def)>>() : def;
};
// aws s3 endpoint.
if (!type || type.as<std::string>() == s3_type || aws_region || iam_role_arn) {
if (!type || type.as<std::string>() == s3_type) {
s3_storage ep;
ep.endpoint = name.as<std::string>();
ep.config.port = node["port"].as<unsigned>();
ep.config.use_https = node["https"].as<bool>(false);
ep.config.region = aws_region ? aws_region.as<std::string>() : std::getenv("AWS_DEFAULT_REGION");
ep.config.role_arn = iam_role_arn ? iam_role_arn.as<std::string>() : "";
auto aws_region = node["aws_region"];
ep.region = aws_region ? aws_region.as<std::string>() : std::getenv("AWS_DEFAULT_REGION");
ep.iam_role_arn = get_opt(node, "iam_role_arn", ""s);
if (maybe_legacy_endpoint_name(ep.endpoint)) {
// Support legacy config for a while
auto port = node["port"].as<unsigned>();
auto use_https = node["https"].as<bool>(false);
ep.endpoint = fmt::format("http{}://{}:{}", use_https ? "s" : "", ep.endpoint, port);
}
return object_storage_endpoint_param{std::move(ep)};
}

View File

@@ -13,7 +13,6 @@
#include <variant>
#include <compare>
#include <fmt/core.h>
#include "utils/s3/creds.hh"
namespace YAML {
class Node;
@@ -25,7 +24,8 @@ class object_storage_endpoint_param {
public:
struct s3_storage {
std::string endpoint;
s3::endpoint_config config;
std::string region;
std::string iam_role_arn;
std::strong_ordering operator<=>(const s3_storage&) const = default;
std::string to_json_string() const;
@@ -43,7 +43,6 @@ public:
object_storage_endpoint_param();
object_storage_endpoint_param(const object_storage_endpoint_param&);
object_storage_endpoint_param(s3_storage);
object_storage_endpoint_param(std::string endpoint, s3::endpoint_config config);
object_storage_endpoint_param(gs_storage);
std::strong_ordering operator<=>(const object_storage_endpoint_param&) const;
@@ -77,3 +76,7 @@ template <>
struct fmt::formatter<db::object_storage_endpoint_param> : fmt::formatter<std::string_view> {
auto format(const db::object_storage_endpoint_param&, fmt::format_context& ctx) const -> decltype(ctx.out());
};
inline bool maybe_legacy_endpoint_name(std::string_view ep) noexcept {
return !(ep.starts_with("http://") || ep.starts_with("https://"));
}

View File

@@ -55,6 +55,7 @@
#include "message/shared_dict.hh"
#include "replica/database.hh"
#include "db/compaction_history_entry.hh"
#include "mutation/async_utils.hh"
#include <unordered_map>
@@ -2999,7 +3000,9 @@ future<mutation> system_keyspace::get_group0_history(sharded<replica::database>&
SCYLLA_ASSERT(rs);
auto& ps = rs->partitions();
for (auto& p: ps) {
auto mut = p.mut().unfreeze(s);
// Note: we could decorate the frozen_mutation's key to check if it's the expected one
// but since this is a single partition table, we can just check after unfreezing the whole mutation.
auto mut = co_await unfreeze_gently(p.mut(), s);
auto partition_key = value_cast<sstring>(utf8_type->deserialize(mut.key().get_component(*s, 0)));
if (partition_key == GROUP0_HISTORY_KEY) {
co_return mut;

View File

@@ -200,6 +200,7 @@ public:
static constexpr auto DICTS = "dicts";
static constexpr auto VIEW_BUILDING_TASKS = "view_building_tasks";
static constexpr auto CLIENT_ROUTES = "client_routes";
static constexpr auto VERSIONS = "versions";
// auth
static constexpr auto ROLES = "roles";

View File

@@ -605,8 +605,8 @@ public:
}
static schema_ptr build_schema() {
auto id = generate_legacy_id(system_keyspace::NAME, "versions");
return schema_builder(system_keyspace::NAME, "versions", std::make_optional(id))
auto id = generate_legacy_id(system_keyspace::NAME, system_keyspace::VERSIONS);
return schema_builder(system_keyspace::NAME, system_keyspace::VERSIONS, std::make_optional(id))
.with_column("key", utf8_type, column_kind::partition_key)
.with_column("version", utf8_type)
.with_column("build_mode", utf8_type)
@@ -1120,9 +1120,10 @@ public:
}
auto tm = _db.local().get_token_metadata_ptr();
auto target_tablet_size = _db.local().get_config().target_tablet_size_in_bytes();
locator::load_sketch load(tm);
const uint64_t default_tablet_size = _db.local().get_config().target_tablet_size_in_bytes();
locator::load_sketch load(tm, stats, default_tablet_size);
co_await load.populate();
tm->get_topology().for_each_node([&] (const auto& node) {
@@ -1136,18 +1137,23 @@ public:
if (auto ip = _gossiper.local().get_address_map().find(host)) {
set_cell(r.cells(), "ip", data_value(inet_address(*ip)));
}
set_cell(r.cells(), "tablets_allocated", load.get_load(host));
set_cell(r.cells(), "tablets_allocated_per_shard", data_value(double(load.get_real_avg_shard_load(host))));
set_cell(r.cells(), "storage_allocated_load", data_value(int64_t(load.get_load(host) * target_tablet_size)));
set_cell(r.cells(), "tablets_allocated", int64_t(load.get_tablet_count(host)));
set_cell(r.cells(), "tablets_allocated_per_shard", data_value(double(load.get_real_avg_tablet_count(host))));
set_cell(r.cells(), "storage_allocated_load", data_value(int64_t(load.get_tablet_count(host) * default_tablet_size)));
if (stats && stats->capacity.contains(host)) {
auto capacity = stats->capacity.at(host);
set_cell(r.cells(), "storage_capacity", data_value(int64_t(capacity)));
auto utilization = load.get_allocated_utilization(host, *stats, target_tablet_size);
if (utilization) {
if (auto utilization = load.get_allocated_utilization(host)) {
set_cell(r.cells(), "storage_allocated_utilization", data_value(double(*utilization)));
}
if (load.has_complete_data(host)) {
if (auto utilization = load.get_storage_utilization(host)) {
set_cell(r.cells(), "storage_utilization", data_value(double(*utilization)));
}
set_cell(r.cells(), "storage_load", data_value(int64_t(load.get_disk_used(host))));
}
}
mutation_sink(m);
});
@@ -1167,6 +1173,8 @@ private:
.with_column("storage_capacity", long_type)
.with_column("storage_allocated_load", long_type)
.with_column("storage_allocated_utilization", double_type)
.with_column("storage_load", long_type)
.with_column("storage_utilization", double_type)
.with_sharder(1, 0) // shard0-only
.with_hash_version()
.build();

View File

@@ -271,6 +271,12 @@ is different, or can be configured in Alternator:
So for example, if you create a table whose name is 192 characters, you
can't create a GSI whose name is longer than 29 characters.
* DynamoDB's DescribeTable will return information about the table. According to
AWS documentation, fields TableSizeBytes, IndexSizeBytes and ItemCount can
lag behind by up to 6 hours.
The `alternator_describe_table_info_cache_validity_in_seconds` parameter allows
users to change this validity period - the default is 21600 seconds (6 hours).
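For example, shortening the cache validity to one hour is a matter of setting,
in `scylla.yaml` (a hypothetical snippet; only the parameter name comes from the
documentation above):
```yaml
alternator_describe_table_info_cache_validity_in_seconds: 3600
```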
## Experimental API features
Some DynamoDB API features are supported by Alternator, but considered
@@ -290,6 +296,14 @@ experimental:
considered experimental so needs to be enabled explicitly with the
`--experimental-features=alternator-streams` configuration option.
In this version, Alternator Streams is only supported if the base table
uses vnodes instead of tablets. However, by default new tables use tablets
so to create a table that can be used with Streams, you must set the tag
`system:initial_tablets` to `none` during CreateTable - so that the
new table will use vnodes. Streams cannot be enabled on an already-existing
table that uses tablets.
See <https://github.com/scylladb/scylla/issues/23838>.
Alternator streams also differ in some respects from DynamoDB Streams:
* The number of separate "shards" in Alternator's streams is significantly
larger than is typical on DynamoDB.
@@ -375,11 +389,11 @@ they should be easy to detect. Here is a list of these unimplemented features:
another cache in front of it. We wrote more about this here:
<https://www.scylladb.com/2017/07/31/database-caches-not-good/>
* The DescribeTable is missing information about size estimates, and
also part of the information about indexes enabled on the table.
* The DescribeTable is missing some information about size estimates
(IndexSizeBytes and ItemCount - TableSizeBytes is available), and also
part of the information about indexes enabled on the table.
<https://github.com/scylladb/scylla/issues/5320>
<https://github.com/scylladb/scylla/issues/7550>
<https://github.com/scylladb/scylla/issues/7551>
* The PartiQL syntax (SQL-like SELECT/UPDATE/INSERT/DELETE expressions)
and the operations ExecuteStatement, BatchExecuteStatement and

View File

@@ -1,6 +1,6 @@
# Introduction
Similar to the approach described in CASSANDRA-14471, we add the
Similar to the approach described in CASSANDRA-12151, we add the
concept of an audit specification. An audit has a target (syslog or a
table) and a set of events/actions that it wants recorded. We
introduce new CQL syntax for Scylla users to describe and manipulate

View File

@@ -2,8 +2,11 @@
## What is ScyllaDB?
ScyllaDB is a high-performance NoSQL database system, fully compatible with Apache Cassandra.
ScyllaDB is released under the GNU Affero General Public License version 3 and the Apache License, ScyllaDB is free and open-source software.
ScyllaDB is a high-performance NoSQL database optimized for speed and scalability.
It is designed to efficiently handle large volumes of data with minimal latency,
making it ideal for data-intensive applications.
ScyllaDB is distributed under the [ScyllaDB Source Available License](https://github.com/scylladb/scylladb/blob/master/LICENSE-ScyllaDB-Source-Available.md).
> [ScyllaDB](http://www.scylladb.com/)

View File

@@ -20,9 +20,7 @@ command line option when launching scylla.
You can define endpoint details in the `scylla.yaml` file. For example:
```yaml
object_storage_endpoints:
- name: s3.us-east-1.amazonaws.com
port: 443
https: true
- name: https://s3.us-east-1.amazonaws.com:443
aws_region: us-east-1
```
@@ -78,9 +76,7 @@ The examples above are intended for development or local environments. You shoul
For the EC2 Instance Metadata Service to function correctly, no additional configuration is required. However, STS requires the IAM Role ARN to be defined in the `scylla.yaml` file, as shown below:
```yaml
object_storage_endpoints:
- name: s3.us-east-1.amazonaws.com
port: 443
https: true
- name: https://s3.us-east-1.amazonaws.com:443
aws_region: us-east-1
iam_role_arn: arn:aws:iam::123456789012:instance-profile/my-instance-instance-profile
```
@@ -100,9 +96,7 @@ in `scylla.yaml`:
```yaml
object_storage_endpoints:
- name: s3.us-east-2.amazonaws.com
port: 443
https: true
- name: https://s3.us-east-2.amazonaws.com:443
aws_region: us-east-2
```

View File

@@ -372,6 +372,8 @@ Columns:
* `storage_allocated_load` - Disk space allocated for tablets, assuming each tablet has a fixed size (target_tablet_size).
* `storage_allocated_utilization` - Fraction of node's disk capacity taken for `storage_allocated_load`, where 1.0 means full utilization.
* `storage_capacity` - Total disk capacity in bytes. Used to compute `storage_allocated_utilization`. By default equal to file system's capacity.
* `storage_load` - Disk space allocated for tablets, computed with actual tablet sizes. Can be null if some of the tablet sizes are not known.
* `storage_utilization` - Fraction of node's disk capacity taken for `storage_load` (with actual tablet sizes), where 1.0 means full utilization. Can be null if some of the tablet sizes are not known.
* `tablets_allocated` - Number of tablet replicas on the node. Migrating tablets are accounted as if migration already finished.
* `tablets_allocated_per_shard` - `tablets_allocated` divided by shard count on the node.

View File

@@ -45,10 +45,3 @@ Run cqlsh:
cqlsh
Run cassandra-stress:
.. code-block:: console
cassandra-stress write -mode cql3 native

View File

@@ -10,6 +10,7 @@ Admin Tools
Admin REST API </operating-scylla/rest>
Tracing </using-scylla/tracing>
ScyllaDB SStable </operating-scylla/admin-tools/scylla-sstable/>
ScyllaDB SStable Script API </operating-scylla/admin-tools/scylla-sstable-script-api/>
ScyllaDB Types </operating-scylla/admin-tools/scylla-types/>
sstableloader
cassandra-stress </operating-scylla/admin-tools/cassandra-stress/>

View File

@@ -0,0 +1,530 @@
ScyllaDB SStable Script API
---------------------------
The script API consists of two parts:
* `ScyllaDB Consume API <scylla-consume-api_>`_ - Hook methods implemented by the script to consume a :ref:`mutation fragment stream <scylla-sstable-sstable-content>`;
* `ScyllaDB Lua API <scylla-script-lua-api_>`_ - types and methods exposed to the script to work with ScyllaDB types and values.
.. _scylla-consume-api:
ScyllaDB Consume API
~~~~~~~~~~~~~~~~~~~~~~
These methods represent the glue code between scylla-sstable's C++ code and the Lua script.
Conceptually a script is an implementation of a consumer interface. The script has to implement only the methods it is interested in. Each method has a default implementation in the interface, which simply drops the respective :ref:`mutation fragment <scylla-sstable-sstable-content>`.
For example, a script only interested in partitions can define only :ref:`consume_partition_start() <scylla-consume-partition-start-method>` and nothing else.
Therefore a completely empty script is also valid, although not very useful.
Below you will find the listing of the API methods.
These methods (if provided by the script) will be called by the scylla-sstable runtime for the appropriate events and fragment types.
.. _scylla-consume-stream-start-method:
consume_stream_start(args)
""""""""""""""""""""""""""
* Part of the Consume API. Called on the very start of the stream.
* Parameter is a Lua table containing command line arguments for the script, passed via ``--script-arg``.
* Can be used to initialize global state.
.. _scylla-consume-sstable-start-method:
consume_sstable_start(sst)
""""""""""""""""""""""""""
* Part of the Consume API.
* Called on the start of each SStable.
* The parameter is of type `ScyllaDB.sstable <scylla-sstable-type_>`_.
* When SStables are merged (``--merge``), the parameter is ``nil``.
* Returns whether to stop. If ``true``, `consume_sstable_end() <scylla-consume-sstable-end-method_>`_ is called, skipping the content of the SStable (or that of the entire stream if ``--merge`` is used). If ``false``, consumption follows with the content of the SStable.
.. _scylla-consume-partition-start-method:
consume_partition_start(ps)
"""""""""""""""""""""""""""
* Part of the Consume API. Called on the start of each partition.
* The parameter is of type `ScyllaDB.partition_start <scylla-partition-start-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, skipping the content of the partition. If ``false``, consumption follows with the content of the partition.
consume_static_row(sr)
""""""""""""""""""""""
* Part of the Consume API.
* Called if the partition has a static row.
* The parameter is of type `ScyllaDB.static_row <scylla-static-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, and the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
consume_clustering_row(cr)
""""""""""""""""""""""""""
* Part of the Consume API.
* Called for each clustering row.
* The parameter is of type `ScyllaDB.clustering_row <scylla-clustering-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, and the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
consume_range_tombstone_change(crt)
"""""""""""""""""""""""""""""""""""
* Part of the Consume API.
* Called for each range tombstone change.
* The parameter is of type `ScyllaDB.range_tombstone_change <scylla-range-tombstone-change-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, and the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
.. _scylla-consume-partition-end-method:
consume_partition_end()
"""""""""""""""""""""""
* Part of the Consume API.
* Called at the end of the partition.
* Returns whether to stop. If ``true``, `consume_sstable_end() <scylla-consume-sstable-end-method_>`_ is called, and the remaining content of the SStable is skipped. If ``false``, consumption follows with the remaining content of the SStable.
.. _scylla-consume-sstable-end-method:
consume_sstable_end()
"""""""""""""""""""""
* Part of the Consume API.
* Called at the end of the SStable.
* Returns whether to stop. If ``true``, `consume_stream_end() <scylla-consume-stream-end-method_>`_ is called, and the remaining content of the stream is skipped. If ``false``, consumption follows with the remaining content of the stream.
.. _scylla-consume-stream-end-method:
consume_stream_end()
""""""""""""""""""""
* Part of the Consume API.
* Called at the very end of the stream.
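Taken together, the Consume API can be exercised with a short script. The following sketch, based only on the hooks listed above, counts partitions and clustering rows across the stream (a non-authoritative example, relying on each hook's documented return-``false``-to-continue behaviour):

.. code-block:: lua

   local partitions = 0
   local rows = 0

   function consume_partition_start(ps)
       -- A new partition begins; count it and descend into its content.
       partitions = partitions + 1
       return false
   end

   function consume_clustering_row(cr)
       -- Count every clustering row; keep consuming the partition.
       rows = rows + 1
       return false
   end

   function consume_stream_end()
       -- Report the totals once the whole stream has been consumed.
       print(string.format("partitions: %d, rows: %d", partitions, rows))
   end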
.. _scylla-script-lua-api:
ScyllaDB Lua API
~~~~~~~~~~~~~~~~
In addition to the `ScyllaDB Consume API <scylla-consume-api_>`_, the Lua bindings expose various types and methods that allow you to work with ScyllaDB types and values.
The listing uses the following terminology:
* Attribute - a simple attribute accessible via ``obj.attribute_name``;
* Method - a method operating on an instance of said type, invocable as ``obj:method()``;
* Magic method - magic methods defined in the metatable which define behaviour of these objects w.r.t. `Lua operators and more <http://www.lua.org/manual/5.4/manual.html#2.4>`_;
The format of an attribute description is the following:
.. code-block:: none
:class: hide-copy-button
attribute_name (type) - description
and that of a method:
.. code-block:: none
:class: hide-copy-button
method_name(arg1_type, arg2_type...) (return_type) - description
Magic methods have their signatures defined by Lua, so they are not described here (these methods are not used directly anyway).
.. _scylla-atomic-cell-type:
ScyllaDB.atomic_cell
""""""""""""""""""""
Attributes:
* timestamp (integer)
* is_live (boolean) - is the cell live?
* type (string) - one of: ``regular``, ``counter-update``, ``counter-shards``, ``frozen-collection`` or ``collection``.
* has_ttl (boolean) - is the cell expiring?
* ttl (integer) - time to live in seconds, ``nil`` if cell is not expiring.
* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell expires, ``nil`` if cell is not expiring.
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell was deleted, ``nil`` unless cell is dead or expiring.
* value:
- ``nil`` if cell is dead.
- appropriate Lua native type if type == ``regular``.
- integer if type == ``counter-update``.
- `ScyllaDB.counter_shards_value <scylla-counter-shards-value-type_>`_ if type == ``counter-shards``.
A counter-shard table has the following keys:
* id (string)
* value (integer)
* clock (integer)
.. _scylla-clustering-key-type:
ScyllaDB.clustering_key
"""""""""""""""""""""""
Attributes:
* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite clustering key.
Methods:
* to_hex - convert the key to its serialized format, encoded in hex.
Magic methods:
* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.
.. _scylla-clustering-row-type:
ScyllaDB.clustering_row
"""""""""""""""""""""""
Attributes:
* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - row tombstone, ``nil`` if no tombstone.
* shadowable_tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - shadowable tombstone of the row tombstone, ``nil`` if no tombstone.
* marker (`ScyllaDB.row_marker <scylla-row-marker-type_>`_) - the row marker, ``nil`` if row doesn't have one.
* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.
See also:
* `ScyllaDB.unserialize_clustering_key() <scylla-unserialize-clustering-key-method_>`_.
.. _scylla-collection-type:
ScyllaDB.collection
"""""""""""""""""""
Attributes:
* type (string) - always ``collection`` for collections.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - ``nil`` if no tombstone.
* cells (table) - the collection cells; each collection cell is a table with a ``key`` and a ``value`` attribute. The key entry is the key of the collection cell for actual collections (list, set and map) and is of type `ScyllaDB.data_value <scylla-data-value-type_>`_. For tuples and UDTs it is just an empty string. The value entry is the value of the collection cell and is of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_.
.. _scylla-collection-cell-value-type:
ScyllaDB.collection_cell_value
""""""""""""""""""""""""""""""
Attributes:
* key (string) - collection cell key in human-readable form.
* value (`ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_) - collection cell value.
.. _scylla-column-definition-type:
ScyllaDB.column_definition
""""""""""""""""""""""""""
Attributes:
* id (integer) - the id of the column.
* name (string) - the name of the column.
* kind (string) - the kind of the column, one of ``partition_key``, ``clustering_key``, ``static_column`` or ``regular_column``.
.. _scylla-counter-shards-value-type:
ScyllaDB.counter_shards_value
"""""""""""""""""""""""""""""
Attributes:
* value (integer) - the total value of the counter (the sum of all the shards).
* shards (table) - the shards making up this counter: a Lua list of tables representing shards, with the following key/values:
- id (string) - the shard's id (UUID).
- value (integer) - the shard's value.
- clock (integer) - the shard's logical clock.
Magic methods:
* __tostring - can be converted to string with tostring().
.. _scylla-data-value-type:
ScyllaDB.data_value
"""""""""""""""""""
Attributes:
* value - the value represented as the appropriate Lua type
Magic methods:
* __tostring - can be converted to string with tostring().
.. _scylla-gc-clock-time-point-type:
ScyllaDB.gc_clock_time_point
""""""""""""""""""""""""""""
A time point belonging to the gc_clock, in UTC.
Attributes:
* year (integer) - [1900, +inf).
* month (integer) - [1, 12].
* day (integer) - [1, 31].
* hour (integer) - [0, 23].
* min (integer) - [0, 59].
* sec (integer) - [0, 59].
Magic methods:
* __eq - can be equal compared.
* __lt - can be less compared.
* __le - can be less-or-equal compared.
* __tostring - can be converted to string with tostring().
See also:
* `ScyllaDB.now() <scylla-now-method_>`_.
* `ScyllaDB.time_point_from_string() <scylla-time-point-from-string-method_>`_.
.. _scylla-json-writer-type:
ScyllaDB.json_writer
""""""""""""""""""""
A JSON writer object, with both low-level and high-level APIs.
The low-level API allows you to write custom JSON and it loosely follows the API of `rapidjson::Writer <https://rapidjson.org/classrapidjson_1_1_writer.html>`_ (upon which it is implemented).
The high-level API is for writing :ref:`mutation fragments <scylla-sstable-sstable-content>` as JSON directly, using the built-in JSON conversion logic that is used by :ref:`dump-data <scylla-sstable-dump-data-operation>` operation.
Low level API Methods:
* null() - write a null json value.
* bool(boolean) - write a bool json value.
* int(integer) - write an integer json value.
* double(number) - write a double json value.
* string(string) - write a string json value.
* start_object() - start a json object.
* key(string) - write the key of a json object.
* end_object() - write the end of a json object.
* start_array() - write the start of a json array.
* end_array() - write the end of a json array.
High level API Methods:
* start_stream() - start the stream, call at the very beginning.
* start_sstable() - start an sstable.
* start_partition() - start a partition.
* static_row() - write a static row to the stream.
* clustering_row() - write a clustering row to the stream.
* range_tombstone_change() - write a range tombstone change to the stream.
* end_partition() - end the current partition.
* end_sstable() - end the current sstable.
* end_stream() - end the stream, call at the very end.
.. _scylla-new-json-writer-method:
ScyllaDB.new_json_writer()
""""""""""""""""""""""""""
Create a `ScyllaDB.json_writer <scylla-json-writer-type_>`_ instance.
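As a sketch, a consume script could use the low-level API to emit one JSON object per partition. This only runs inside scylla-sstable's embedded Lua interpreter (the ``ScyllaDB`` table is provided by the tool), so it illustrates the documented calls rather than a standalone program:

.. code-block:: lua

    -- Hypothetical script for `scylla-sstable script --script-file ...`:
    -- emits a JSON array with one object per partition.
    local writer

    function consume_stream_start(args)
        writer = ScyllaDB.new_json_writer()
        writer:start_array()             -- open the top-level array
    end

    function consume_partition_start(ps)
        writer:start_object()
        writer:key("key")
        writer:string(tostring(ps.key))  -- readable form via __tostring
        writer:key("token")
        writer:int(ps.token)
        writer:end_object()
    end

    function consume_stream_end()
        writer:end_array()               -- close the array at end of stream
    end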
.. _scylla-new-position-in-partition-method:
ScyllaDB.new_position_in_partition()
""""""""""""""""""""""""""""""""""""
Creates a `ScyllaDB.position_in_partition <scylla-position-in-partition-type_>`_ instance.
Arguments:
* weight (integer) - the weight of the key.
* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, optional.
.. _scylla-new-ring-position-method:
ScyllaDB.new_ring_position()
""""""""""""""""""""""""""""
Creates a `ScyllaDB.ring_position <scylla-ring-position-type_>`_ instance.
Has several overloads:
* ``ScyllaDB.new_ring_position(weight, key)``.
* ``ScyllaDB.new_ring_position(weight, token)``.
* ``ScyllaDB.new_ring_position(weight, key, token)``.
Where:
* weight (integer) - the weight of the key.
* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key.
* token (integer) - the token (of the key if a key is provided).
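For example, a sketch of the ``(weight, token)`` overload (assumes the scylla-sstable runtime; the token value is arbitrary):

.. code-block:: lua

    -- Ring positions just before and just after an arbitrary token.
    local token = 12345
    local before = ScyllaDB.new_ring_position(-1, token)
    local after = ScyllaDB.new_ring_position(1, token)
    -- Per the ring_position documentation, before:tri_cmp(after)
    -- returns -1, i.e. the "before" position sorts first.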
.. _scylla-now-method:
ScyllaDB.now()
""""""""""""""
Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance, representing the current time.
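A sketch combining ``now()`` with ``time_point_from_string()`` and the comparison magic methods (only meaningful inside the scylla-sstable runtime; the cutoff date is an arbitrary example):

.. code-block:: lua

    -- E.g. in a filtering script: check data against a cutoff time.
    local cutoff = ScyllaDB.time_point_from_string("2023-01-01 00:00:00+0000")
    local now = ScyllaDB.now()
    if cutoff <= now then
        print("cutoff " .. tostring(cutoff) .. " is in the past")
    end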
.. _scylla-partition-key-type:
ScyllaDB.partition_key
""""""""""""""""""""""
Attributes:
* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite partition key.
Methods:
* to_hex - convert the key to its serialized format, encoded in hex.
Magic methods:
* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.
See also:
* :ref:`ScyllaDB.unserialize_partition_key() <scylla-unserialize-partition-key-method>`.
* :ref:`ScyllaDB.token_of() <scylla-token-of-method>`.
.. _scylla-partition-start-type:
ScyllaDB.partition_start
""""""""""""""""""""""""
Attributes:
* key - the partition key's value as the appropriate Lua native type.
* token (integer) - the partition key's token.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - the partition tombstone, ``nil`` if no tombstone.
.. _scylla-position-in-partition-type:
ScyllaDB.position_in_partition
""""""""""""""""""""""""""""""
Currently used only for clustering positions.
Attributes:
* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, ``nil`` if the position in partition represents the min or max clustering positions.
* weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key). If key attribute is ``nil``, the weight is never 0.
Methods:
* tri_cmp - compare this position in partition to another position in partition, returns -1 (``<``), 0 (``==``) or 1 (``>``).
See also:
* `ScyllaDB.new_position_in_partition() <scylla-new-position-in-partition-method_>`_.
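A sketch of creating and ordering positions (``ck_hex`` is a placeholder for a real serialized clustering key; assumes the scylla-sstable runtime):

.. code-block:: lua

    local ck = ScyllaDB.unserialize_clustering_key(ck_hex)
    local before_ck = ScyllaDB.new_position_in_partition(-1, ck) -- before the key
    local at_ck = ScyllaDB.new_position_in_partition(0, ck)      -- at the key
    -- Per the documentation above, before_ck:tri_cmp(at_ck) returns -1.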
.. _scylla-range-tombstone-change-type:
ScyllaDB.range_tombstone_change
"""""""""""""""""""""""""""""""
Attributes:
* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
* key_weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key).
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - tombstone, ``nil`` if no tombstone.
.. _scylla-ring-position-type:
ScyllaDB.ring_position
""""""""""""""""""""""
Attributes:
* token (integer) - the token, ``nil`` if the ring position represents the min or max ring positions.
* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key, ``nil`` if the ring position represents a position before/after a token.
* weight (integer) - weight of the position, either -1 (before key/token), 0 (at key) or 1 (after key/token). If key attribute is ``nil``, the weight is never 0.
Methods:
* tri_cmp - compare this ring position to another ring position, returns -1 (``<``), 0 (``==``) or 1 (``>``).
See also:
* `ScyllaDB.new_ring_position() <scylla-new-ring-position-method_>`_.
.. _scylla-row-marker-type:
ScyllaDB.row_marker
"""""""""""""""""""
Attributes:
* timestamp (integer).
* is_live (boolean) - is the marker live?
* has_ttl (boolean) - is the marker expiring?
* ttl (integer) - time to live in seconds, ``nil`` if marker is not expiring.
* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker expires, ``nil`` if marker is not expiring.
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker was deleted, ``nil`` unless marker is dead or expiring.
.. _scylla-schema-type:
ScyllaDB.schema
"""""""""""""""
Attributes:
* partition_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the partition key.
* clustering_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the clustering key.
* static_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the static columns.
* regular_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the regular columns.
* all_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of all columns.
.. _scylla-sstable-type:
ScyllaDB.sstable
""""""""""""""""
Attributes:
* filename (string) - the full path of the sstable Data component file.
.. _scylla-static-row-type:
ScyllaDB.static_row
"""""""""""""""""""
Attributes:
* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.
.. _scylla-time-point-from-string-method:
ScyllaDB.time_point_from_string()
"""""""""""""""""""""""""""""""""
Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance from the passed-in string.
The argument is a string using the same format as the CQL timestamp type; see https://en.wikipedia.org/wiki/ISO_8601.
.. _scylla-token-of-method:
ScyllaDB.token_of()
"""""""""""""""""""
Compute and return the token (integer) for a `ScyllaDB.partition_key <scylla-partition-key-type_>`_.
.. _scylla-tombstone-type:
ScyllaDB.tombstone
""""""""""""""""""
Attributes:
* timestamp (integer)
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - the point in time at which the tombstone was deleted.
.. _scylla-unserialize-clustering-key-method:
ScyllaDB.unserialize_clustering_key()
"""""""""""""""""""""""""""""""""""""
Create a `ScyllaDB.clustering_key <scylla-clustering-key-type_>`_ instance.
The argument is a string containing the serialized clustering key in hex format.
.. _scylla-unserialize-partition-key-method:
ScyllaDB.unserialize_partition_key()
""""""""""""""""""""""""""""""""""""
Create a `ScyllaDB.partition_key <scylla-partition-key-type_>`_ instance.
The argument is a string containing the serialized partition key in hex format.
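Putting a few of the above together, a sketch that recovers a partition key from its hex serialization and computes its token (``pk_hex`` stands in for real dump output; assumes the scylla-sstable runtime):

.. code-block:: lua

    local pk = ScyllaDB.unserialize_partition_key(pk_hex)
    print(tostring(pk) .. " -> token " .. ScyllaDB.token_of(pk))
    -- pk:to_hex() should round-trip back to the original serialization.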


@@ -667,7 +667,7 @@ write
Writes an SStable based on a description of the content.
The description can be provided in two formats: ``CQL`` and ``JSON``.
The input format can be selected with the ``--input-format`` flag. Default is ``cql``.
In both cases the input is expected to be provided via the file whoose path is passed to ``--input-file``.
In both cases the input is expected to be provided via the file whose path is passed to ``--input-file``.
CQL input format
~~~~~~~~~~~~~~~~
@@ -858,527 +858,9 @@ Alternatively, you can provide each key-value pair via a separate ``--script-arg
--script-arg $key1=$value1 --script-arg $key2=$value2
Command line arguments will be received by the `consume_stream_start() <scylla-consume-stream-start-method_>`_ API method.
Command line arguments will be received by the :ref:`consume_stream_start() <scylla-consume-stream-start-method>` API method.
.. _scylla-consume-api:
ScyllaDB Consume API
~~~~~~~~~~~~~~~~~~~~~~
These methods represent the glue code between scylla-sstable's C++ code and the Lua script.
Conceptually a script is an implementation of a consumer interface. The script has to implement only the methods it is interested in. Each method has a default implementation in the interface, which simply drops the respective `mutation fragment <scylla-sstable-sstable-content_>`_.
For example, a script only interested in partitions can define only `consume_partition_start() <scylla-consume-partition-start-method_>`_ and nothing else.
Therefore a completely empty script is also valid, although not very useful.
Below you will find the listing of the API methods.
These methods (if provided by the script) will be called by the scylla-sstable runtime for the appropriate events and fragment types.
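For instance, a minimal partition-counting script needs only two of the methods below (a sketch; it runs only under ``scylla-sstable script``):

.. code-block:: lua

    local partitions = 0

    function consume_partition_start(ps)
        partitions = partitions + 1
        return false  -- do not stop; consume the partition's content as usual
    end

    function consume_stream_end()
        print("partitions seen: " .. partitions)
    end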
.. _scylla-consume-stream-start-method:
consume_stream_start(args)
""""""""""""""""""""""""""
* Part of the Consume API. Called on the very start of the stream.
* Parameter is a Lua table containing command line arguments for the script, passed via ``--script-arg``.
* Can be used to initialize global state.
.. _scylla-consume-sstable-start-method:
consume_sstable_start(sst)
""""""""""""""""""""""""""
* Part of the Consume API.
* Called on the start of each sstable.
* The parameter is of type `ScyllaDB.sstable <scylla-sstable-type_>`_.
* When SStables are merged (``--merge``), the parameter is ``nil``.
* Returns whether to stop. If ``true``, `consume_sstable_end() <scylla-consume-sstable-end-method_>`_ is called, skipping the content of the sstable (or that of the entire stream if ``--merge`` is used). If ``false``, consumption follows with the content of the sstable.
.. _scylla-consume-partition-start-method:
consume_partition_start(ps)
"""""""""""""""""""""""""""
* Part of the Consume API. Called on the start of each partition.
* The parameter is of type `ScyllaDB.partition_start <scylla-partition-start-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, skipping the content of the partition. If ``false``, consumption follows with the content of the partition.
consume_static_row(sr)
""""""""""""""""""""""
* Part of the Consume API.
* Called if the partition has a static row.
* The parameter is of type `ScyllaDB.static_row <scylla-static-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, and the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
consume_clustering_row(cr)
""""""""""""""""""""""""""
* Part of the Consume API.
* Called for each clustering row.
* The parameter is of type `ScyllaDB.clustering_row <scylla-clustering-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
consume_range_tombstone_change(crt)
"""""""""""""""""""""""""""""""""""
* Part of the Consume API.
* Called for each range tombstone change.
* The parameter is of type `ScyllaDB.range_tombstone_change <scylla-range-tombstone-change-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.
.. _scylla-consume-partition-end-method:
consume_partition_end()
"""""""""""""""""""""""
* Part of the Consume API.
* Called at the end of the partition.
* Returns whether to stop. If ``true``, `consume_sstable_end() <scylla-consume-sstable-end-method_>`_ is called, the remaining content of the SStable is skipped. If ``false``, consumption follows with the remaining content of the SStable.
.. _scylla-consume-sstable-end-method:
consume_sstable_end()
"""""""""""""""""""""
* Part of the Consume API.
* Called at the end of the SStable.
* Returns whether to stop. If ``true``, `consume_stream_end() <scylla-consume-stream-end-method_>`_ is called, the remaining content of the stream is skipped. If ``false``, consumption follows with the remaining content of the stream.
.. _scylla-consume-stream-end-method:
consume_stream_end()
""""""""""""""""""""
* Part of the Consume API.
* Called at the very end of the stream.
ScyllaDB Lua API
~~~~~~~~~~~~~~~~
In addition to the `ScyllaDB Consume API <scylla-consume-api_>`_, the Lua bindings expose various types and methods that allow you to work with ScyllaDB types and values.
The listing uses the following terminology:
* Attribute - a simple attribute accessible via ``obj.attribute_name``;
* Method - a method operating on an instance of said type, invocable as ``obj:method()``;
* Magic method - magic methods defined in the metatable which define behaviour of these objects w.r.t. `Lua operators and more <http://www.lua.org/manual/5.4/manual.html#2.4>`_;
The format of an attribute description is the following:
.. code-block:: none
:class: hide-copy-button
attribute_name (type) - description
and that of a method:
.. code-block:: none
:class: hide-copy-button
method_name(arg1_type, arg2_type...) (return_type) - description
Magic methods have their signature defined by Lua and so that is not described here (these methods are not used directly anyway).
.. _scylla-atomic-cell-type:
ScyllaDB.atomic_cell
""""""""""""""""""""
Attributes:
* timestamp (integer)
* is_live (boolean) - is the cell live?
* type (string) - one of: ``regular``, ``counter-update``, ``counter-shards``, ``frozen-collection`` or ``collection``.
* has_ttl (boolean) - is the cell expiring?
* ttl (integer) - time to live in seconds, ``nil`` if cell is not expiring.
* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell expires, ``nil`` if cell is not expiring.
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell was deleted, ``nil`` unless cell is dead or expiring.
* value:
- ``nil`` if cell is dead.
- appropriate Lua native type if type == ``regular``.
- integer if type == ``counter-update``.
- `ScyllaDB.counter_shards_value <scylla-counter-shards-value-type_>`_ if type == ``counter-shards``.
A counter-shard table has the following keys:
* id (string)
* value (integer)
* clock (integer)
.. _scylla-clustering-key-type:
ScyllaDB.clustering_key
"""""""""""""""""""""""
Attributes:
* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite clustering key.
Methods:
* to_hex - convert the key to its serialized format, encoded in hex.
Magic methods:
* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.
.. _scylla-clustering-row-type:
ScyllaDB.clustering_row
"""""""""""""""""""""""
Attributes:
* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - row tombstone, ``nil`` if no tombstone.
* shadowable_tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - shadowable tombstone of the row tombstone, ``nil`` if no tombstone.
* marker (`ScyllaDB.row_marker <scylla-row-marker-type_>`_) - the row marker, ``nil`` if row doesn't have one.
* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.
See also:
* `ScyllaDB.unserialize_clustering_key() <scylla-unserialize-clustering-key-method_>`_.
.. _scylla-collection-type:
ScyllaDB.collection
"""""""""""""""""""
Attributes:
* type (string) - always ``collection`` for collection.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - ``nil`` if no tombstone.
* cells (table) - the collection cells, each collection cell is a table, with a ``key`` and ``value`` attribute. The key entry is the key of the collection cell for actual collections (list, set and map) and is of type `ScyllaDB.data-value <scylla-data-value-type_>`_. For tuples and UDT this is just an empty string. The value entry is the value of the collection cell and is of type `ScyllaDB.atomic-cell <scylla-atomic-cell-type_>`_.
.. _scylla-collection-cell-value-type:
ScyllaDB.collection_cell_value
""""""""""""""""""""""""""""""
Attributes:
* key (string) - collection cell key in human readable form.
* value (`ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_) - collection cell value.
.. _scylla-column-definition-type:
ScyllaDB.column_definition
""""""""""""""""""""""""""
Attributes:
* id (integer) - the id of the column.
* name (string) - the name of the column.
* kind (string) - the kind of the column, one of ``partition_key``, ``clustering_key``, ``static_column`` or ``regular_column``.
.. _scylla-counter-shards-value-type:
ScyllaDB.counter_shards_value
"""""""""""""""""""""""""""""
Attributes:
* value (integer) - the total value of the counter (the sum of all the shards).
* shards (table) - the shards making up this counter, a lua list containing tables, representing shards, with the following key/values:
- id (string) - the shard's id (UUID).
- value (integer) - the shard's value.
- clock (integer) - the shard's logical clock.
Magic methods:
* __tostring - can be converted to string with tostring().
.. _scylla-data-value-type:
ScyllaDB.data_value
"""""""""""""""""""
Attributes:
* value - the value represented as the appropriate Lua type
Magic methods:
* __tostring - can be converted to string with tostring().
.. _scylla-gc-clock-time-point-type:
ScyllaDB.gc_clock_time_point
""""""""""""""""""""""""""""
A time point belonging to the gc_clock, in UTC.
Attributes:
* year (integer) - [1900, +inf).
* month (integer) - [1, 12].
* day (integer) - [1, 31].
* hour (integer) - [0, 23].
* min (integer) - [0, 59].
* sec (integer) - [0, 59].
Magic methods:
* __eq - can be equal compared.
* __lt - can be less compared.
* __le - can be less-or-equal compared.
* __tostring - can be converted to string with tostring().
See also:
* `ScyllaDB.now() <scylla-now-method_>`_.
* `ScyllaDB.time_point_from_string() <scylla-time-point-from-string-method_>`_.
.. _scylla-json-writer-type:
ScyllaDB.json_writer
""""""""""""""""""""
A JSON writer object, with both low-level and high-level APIs.
The low-level API allows you to write custom JSON and it loosely follows the API of `rapidjson::Writer <https://rapidjson.org/classrapidjson_1_1_writer.html>`_ (upon which it is implemented).
The high-level API is for writing `mutation fragments <scylla-sstable-sstable-content_>`_ as JSON directly, using the built-in JSON conversion logic that is used by `dump-data <dump-data_>`_ operation.
Low level API Methods:
* null() - write a null json value.
* bool(boolean) - write a bool json value.
* int(integer) - write an integer json value.
* double(number) - write a double json value.
* string(string) - write a string json value.
* start_object() - start a json object.
* key(string) - write the key of a json object.
* end_object() - write the end of a json object.
* start_array() - write the start of a json array.
* end_array() - write the end of a json array.
High level API Methods:
* start_stream() - start the stream, call at the very beginning.
* start_sstable() - start an sstable.
* start_partition() - start a partition.
* static_row() - write a static row to the stream.
* clustering_row() - write a clustering row to the stream.
* range_tombstone_change() - write a range tombstone change to the stream.
* end_partition() - end the current partition.
* end_sstable() - end the current sstable.
* end_stream() - end the stream, call at the very end.
.. _scylla-new-json-writer-method:
ScyllaDB.new_json_writer()
""""""""""""""""""""""""""
Create a `ScyllaDB.json_writer <scylla-json-writer-type_>`_ instance.
.. _scylla-new-position-in-partition-method:
ScyllaDB.new_position_in_partition()
""""""""""""""""""""""""""""""""""""
Creates a `ScyllaDB.position_in_partition <scylla-position-in-partition-type_>`_ instance.
Arguments:
* weight (integer) - the weight of the key.
* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, optional.
.. _scylla-new-ring-position-method:
ScyllaDB.new_ring_position()
""""""""""""""""""""""""""""
Creates a `ScyllaDB.ring_position <scylla-ring-position-type_>`_ instance.
Has several overloads:
* ``ScyllaDB.new_ring_position(weight, key)``.
* ``ScyllaDB.new_ring_position(weight, token)``.
* ``ScyllaDB.new_ring_position(weight, key, token)``.
Where:
* weight (integer) - the weight of the key.
* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key.
* token (integer) - the token (of the key if a key is provided).
.. _scylla-now-method:
ScyllaDB.now()
""""""""""""""
Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance, representing the current time.
.. _scylla-partition-key-type:
ScyllaDB.partition_key
""""""""""""""""""""""
Attributes:
* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite partition key.
Methods:
* to_hex - convert the key to its serialized format, encoded in hex.
Magic methods:
* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.
See also:
* :ref:`ScyllaDB.unserialize_partition_key() <scylla-unserialize-partition-key-method>`.
* :ref:`ScyllaDB.token_of() <scylla-token-of-method>`.
.. _scylla-partition-start-type:
ScyllaDB.partition_start
""""""""""""""""""""""""
Attributes:
* key - the partition key's value as the appropriate Lua native type.
* token (integer) - the partition key's token.
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - the partition tombstone, ``nil`` if no tombstone.
.. _scylla-position-in-partition-type:
ScyllaDB.position_in_partition
""""""""""""""""""""""""""""""
Currently used only for clustering positions.
Attributes:
* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, ``nil`` if the position in partition represents the min or max clustering positions.
* weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key). If key attribute is ``nil``, the weight is never 0.
Methods:
* tri_cmp - compare this position in partition to another position in partition, returns -1 (``<``), 0 (``==``) or 1 (``>``).
See also:
* `ScyllaDB.new_position_in_partition() <scylla-new-position-in-partition-method_>`_.
.. _scylla-range-tombstone-change-type:
ScyllaDB.range_tombstone_change
"""""""""""""""""""""""""""""""
Attributes:
* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
* key_weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key).
* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - tombstone, ``nil`` if no tombstone.
.. _scylla-ring-position-type:
ScyllaDB.ring_position
""""""""""""""""""""""
Attributes:
* token (integer) - the token, ``nil`` if the ring position represents the min or max ring positions.
* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key, ``nil`` if the ring position represents a position before/after a token.
* weight (integer) - weight of the position, either -1 (before key/token), 0 (at key) or 1 (after key/token). If key attribute is ``nil``, the weight is never 0.
Methods:
* tri_cmp - compare this ring position to another ring position, returns -1 (``<``), 0 (``==``) or 1 (``>``).
See also:
* `ScyllaDB.new_ring_position() <scylla-new-ring-position-method_>`_.
.. _scylla-row-marker-type:
ScyllaDB.row_marker
"""""""""""""""""""
Attributes:
* timestamp (integer).
* is_live (boolean) - is the marker live?
* has_ttl (boolean) - is the marker expiring?
* ttl (integer) - time to live in seconds, ``nil`` if marker is not expiring.
* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker expires, ``nil`` if marker is not expiring.
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker was deleted, ``nil`` unless marker is dead or expiring.
.. _scylla-schema-type:
ScyllaDB.schema
"""""""""""""""
Attributes:
* partition_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the partition key.
* clustering_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the clustering key.
* static_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the static columns.
* regular_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the regular columns.
* all_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of all columns.
.. _scylla-sstable-type:
ScyllaDB.sstable
""""""""""""""""
Attributes:
* filename (string) - the full path of the sstable Data component file;
.. _scylla-static-row-type:
ScyllaDB.static_row
"""""""""""""""""""
Attributes:
* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.
.. _scylla-time-point-from-string-method:
ScyllaDB.time_point_from_string()
"""""""""""""""""""""""""""""""""
Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance from the passed in string.
Argument is string, using the same format as the CQL timestamp type, see https://en.wikipedia.org/wiki/ISO_8601.
.. _scylla-token-of-method:
ScyllaDB.token_of()
"""""""""""""""""""
Compute and return the token (integer) for a `ScyllaDB.partition_key <scylla-partition-key-type_>`_.
.. _scylla-tombstone-type:
ScyllaDB.tombstone
""""""""""""""""""
Attributes:
* timestamp (integer)
* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - the point in time at which the tombstone was deleted.
.. _scylla-unserialize-clustering-key-method:
ScyllaDB.unserialize_clustering_key()
"""""""""""""""""""""""""""""""""""""
Create a `ScyllaDB.clustering_key <scylla-clustering-key-type_>`_ instance.
Argument is a string representing serialized clustering key in hex format.
.. _scylla-unserialize-partition-key-method:
ScyllaDB.unserialize_partition_key()
""""""""""""""""""""""""""""""""""""
Create a `ScyllaDB.partition_key <scylla-partition-key-type_>`_ instance.
Argument is a string representing serialized partition key in hex format.
See the `scripting API </operating-scylla/admin-tools/scylla-sstable-script-api/>`_ for more details.
Examples
~~~~~~~~
@@ -1397,7 +879,7 @@ SSTables which are already on the designated version are skipped. To force rewri
Output SSTables are written to the path provided by the ``--output-dir`` flag, or to the current directory if not specified.
This directory is expected to exist and be empty. If not empty the tool will refuse to run. This can be overridden with the ``--unsafe-accept-nonempty-output-dir`` flag.
It is strongly recommended to use the system schema tables as the schema source for this command, see the `schema options <scylla-sstable-schema_>`_ for more details.
It is strongly recommended to use the system schema tables as the schema source for this command, see the :ref:`schema options <scylla-sstable-schema>` for more details.
A schema which is good enough to read the SSTable and dump its content, may not be good enough to write its content back verbatim.
An incomplete or incorrect schema can lead to the tool crashing or even data loss.


@@ -111,9 +111,7 @@ should follow this format:
.. code-block:: yaml
object_storage_endpoints:
- name: <endpoint_address_or_domain_name>
port: <port_number>
https: <true_or_false> # optional
- name: https://<endpoint_address_or_domain_name>[:<port_number>]
aws_region: <region_name> # optional, e.g. us-east-1
iam_role_arn: <iam_role> # optional
@@ -123,9 +121,7 @@ Example:
.. code:: yaml
object_storage_endpoints:
- name: s3.us-east-1.amazonaws.com
port: 443
https: true
- name: https://s3.us-east-1.amazonaws.com
aws_region: us-east-1
iam_role_arn: arn:aws:iam::123456789012:instance-profile/my-instance-instance-profile


@@ -14,12 +14,13 @@ Enable ScyllaDB :doc:`Authentication </operating-scylla/security/authentication>
Enabling Audit
---------------
By default, auditing is **disabled**. Enabling auditing is controlled by the ``audit:`` parameter in the ``scylla.yaml`` file.
By default, auditing is **enabled**. Enabling auditing is controlled by the ``audit:`` parameter in the ``scylla.yaml`` file.
You can set the following options:
* ``none`` - Audit is disabled (default).
* ``table`` - Audit is enabled, and messages are stored in a Scylla table.
* ``syslog`` - Audit is enabled, and messages are sent to Syslog.
* ``syslog,table`` - Audit is enabled, and messages are stored in a Scylla table and sent to Syslog.
Configuring any other value results in an error at Scylla startup.


@@ -178,6 +178,7 @@ public:
gms::feature strongly_consistent_tables { *this, "STRONGLY_CONSISTENT_TABLES"sv };
gms::feature client_routes { *this, "CLIENT_ROUTES"sv };
gms::feature removenode_with_left_token_ring { *this, "REMOVENODE_WITH_LEFT_TOKEN_RING"sv };
gms::feature size_based_load_balancing { *this, "SIZE_BASED_LOAD_BALANCING"sv };
public:
const std::unordered_map<sstring, std::reference_wrapper<feature>>& registered_features() const;


@@ -14,7 +14,6 @@
#include <seastar/core/sharded.hh>
#include <seastar/core/abort_source.hh>
#include "utils/log.hh"
#include "utils/s3/creds.hh"
#include "seastarx.hh"
#include <boost/program_options.hpp>
#include <yaml-cpp/yaml.h>


@@ -8,76 +8,112 @@
#pragma once
#include "service/tablet_allocator_fwd.hh"
#include "locator/topology.hh"
#include "locator/token_metadata.hh"
#include "locator/tablets.hh"
#include "utils/stall_free.hh"
#include "utils/extremum_tracking.hh"
#include "utils/div_ceil.hh"
#include "utils/pretty_printers.hh"
#include <absl/container/btree_set.h>
#include <seastar/util/defer.hh>
#include <optional>
#include <vector>
namespace locator {
struct disk_usage {
using load_type = double; // Disk usage factor (0.0 to 1.0)
uint64_t capacity = 0;
uint64_t used = 0;
load_type get_load() const {
if (capacity == 0) {
return 0;
}
return load_type(used) / capacity;
}
};
/// A data structure which keeps track of load associated with data ownership
/// on shards of the whole cluster.
class load_sketch {
using shard_id = seastar::shard_id;
using load_type = ssize_t; // In tablets.
using load_type = disk_usage::load_type;
struct shard_load {
shard_id id;
load_type load;
disk_usage du;
size_t tablet_count = 0;
// Returns storage utilization for the shard
load_type get_load() const {
return du.get_load();
}
};
// Less-comparator which orders by load first (ascending), and then by shard id (ascending).
struct shard_load_cmp {
bool operator()(const shard_load& a, const shard_load& b) const {
return a.load == b.load ? a.id < b.id : a.load < b.load;
bool operator()(const shard_load& shard_a, const shard_load& shard_b) const {
auto load_a = shard_a.get_load();
auto load_b = shard_b.get_load();
return load_a == load_b ? shard_a.id < shard_b.id : load_a < load_b;
}
};
struct node_load {
std::vector<shard_load> _shards;
absl::btree_set<shard_load, shard_load_cmp> _shards_by_load;
std::vector<load_type> _shards;
load_type _load = 0;
disk_usage _du;
size_t _tablet_count = 0;
node_load(size_t shard_count) : _shards(shard_count) {
// These can be false only when _load_stats != nullptr
bool _has_valid_disk_capacity = true;
bool _has_all_tablet_sizes = true;
node_load(size_t shard_count, uint64_t capacity)
: _shards(shard_count) {
_du.capacity = capacity;
uint64_t shard_capacity = capacity / shard_count;
for (shard_id i = 0; i < shard_count; ++i) {
_shards[i] = 0;
_shards[i].id = i;
_shards[i].du.capacity = shard_capacity;
}
}
void update_shard_load(shard_id shard, load_type load_delta) {
_load += load_delta;
auto old_load = _shards[shard];
auto new_load = old_load + load_delta;
_shards_by_load.erase(shard_load{shard, old_load});
_shards[shard] = new_load;
_shards_by_load.insert(shard_load{shard, new_load});
void update_shard_load(shard_id shard, ssize_t tablet_count_delta, int64_t tablet_size_delta) {
_shards_by_load.erase(_shards[shard]);
_shards[shard].tablet_count += tablet_count_delta;
_shards[shard].du.used += tablet_size_delta;
_shards_by_load.insert(_shards[shard]);
_du.used += tablet_size_delta;
_tablet_count += tablet_count_delta;
}
void populate_shards_by_load() {
_shards_by_load.clear();
for (shard_id i = 0; i < _shards.size(); ++i) {
_shards_by_load.insert(shard_load{i, _shards[i]});
}
_shards_by_load.insert(_shards.begin(), _shards.end());
}
load_type& load() noexcept {
return _load;
}
const load_type& load() const noexcept {
return _load;
// Returns storage utilization for the node
load_type get_load() const noexcept {
return _du.get_load();
}
};
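Note how `update_shard_load()` in the hunk above keeps `_shards_by_load` consistent: it erases the entry, mutates it, and re-inserts it, since mutating the ordering key of an element already inside an ordered set would corrupt the set's invariants. A small sketch of that erase-mutate-insert pattern with `std::set` (illustrative types, not the Scylla ones):

```cpp
#include <cassert>
#include <set>

struct shard_entry {
    unsigned id;
    double load;
};

// Order by load first (ascending), then by shard id, as in shard_load_cmp above.
struct by_load {
    bool operator()(const shard_entry& a, const shard_entry& b) const {
        return a.load == b.load ? a.id < b.id : a.load < b.load;
    }
};

// Apply a load delta to one shard: erase the stale copy, update the
// source-of-truth entry, then re-insert so the set stays correctly ordered.
inline void update_load(std::set<shard_entry, by_load>& s, shard_entry& e, double delta) {
    s.erase(e);
    e.load += delta;
    s.insert(e);
}
```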
std::unordered_map<host_id, node_load> _nodes;
token_metadata_ptr _tm;
load_stats_ptr _load_stats;
uint64_t _default_tablet_size = service::default_target_tablet_size;
uint64_t _minimal_tablet_size = 0;
// When set to true, it will use gross disk capacity instead of effective_capacity and
// treat all tablets as having the same size: _default_tablet_size
bool _force_capacity_based_load = false;
private:
tablet_replica_set get_replicas_for_tablet_load(const tablet_info& ti, const tablet_transition_info* trinfo) const {
// We reflect migrations in the load as if they already happened,
@@ -85,10 +121,34 @@ private:
return trinfo ? trinfo->next : ti.replicas;
}
future<> populate_table(const tablet_map& tmap, std::optional<host_id> host, std::optional<sstring> only_dc) {
std::optional<uint64_t> get_disk_capacity_for_node(host_id node) {
if (_load_stats) {
if (_load_stats->tablet_stats.contains(node) && !_force_capacity_based_load) {
return _load_stats->tablet_stats.at(node).effective_capacity;
} else if (_load_stats->capacity.contains(node)) {
return _load_stats->capacity.at(node);
}
}
return std::nullopt;
}
std::optional<uint64_t> get_tablet_size(host_id host, const range_based_tablet_id& rb_tid, const tablet_info& ti, const tablet_transition_info* trinfo) const {
if (_force_capacity_based_load) {
return _default_tablet_size;
}
std::optional<uint64_t> tablet_size_opt;
if (_load_stats) {
tablet_size_opt = _load_stats->get_tablet_size_in_transition(host, rb_tid, ti, trinfo);
}
return tablet_size_opt;
}
future<> populate_table(table_id table, const tablet_map& tmap, std::optional<host_id> host, std::optional<sstring> only_dc) {
const topology& topo = _tm->get_topology();
co_await tmap.for_each_tablet([&] (tablet_id tid, const tablet_info& ti) -> future<> {
for (auto&& replica : get_replicas_for_tablet_load(ti, tmap.get_tablet_transition_info(tid))) {
auto trinfo = tmap.get_tablet_transition_info(tid);
for (auto&& replica : get_replicas_for_tablet_load(ti, trinfo)) {
if (host && *host != replica.host) {
continue;
}
@@ -97,21 +157,41 @@ private:
if (only_dc && node->dc_rack().dc != *only_dc) {
continue;
}
_nodes.emplace(replica.host, node_load{node->get_shard_count()});
auto disk_capacity_opt = get_disk_capacity_for_node(replica.host);
auto [i, _] = _nodes.emplace(replica.host, node_load{node->get_shard_count(), disk_capacity_opt.value_or(_default_tablet_size)});
if (!disk_capacity_opt && _load_stats) {
i->second._has_valid_disk_capacity = false;
}
}
node_load& n = _nodes.at(replica.host);
if (replica.shard < n._shards.size()) {
n.load() += 1;
n._shards[replica.shard] += 1;
const range_based_tablet_id rb_tid {table, tmap.get_token_range(tid)};
auto tablet_size_opt = get_tablet_size(replica.host, rb_tid, ti, trinfo);
if (!tablet_size_opt && _load_stats) {
n._has_all_tablet_sizes = false;
}
const uint64_t tablet_size = std::max(tablet_size_opt.value_or(_default_tablet_size), _minimal_tablet_size);
n._du.used += tablet_size;
n._tablet_count++;
n._shards[replica.shard].du.used += tablet_size;
n._shards[replica.shard].tablet_count++;
// Note: as an optimization, _shards_by_load is populated later in populate_shards_by_load()
}
}
return make_ready_future<>();
});
}
void throw_on_incomplete_data(host_id host, bool only_check_disk_capacity = false) const {
if (!has_complete_data(host, only_check_disk_capacity)) {
throw std::runtime_error(format("Can't provide accurate load computation with incomplete load_stats for host: {}", host));
}
}
public:
load_sketch(token_metadata_ptr tm)
: _tm(std::move(tm)) {
load_sketch(token_metadata_ptr tm, load_stats_ptr load_stats = {}, uint64_t default_tablet_size = service::default_target_tablet_size)
: _tm(std::move(tm))
, _load_stats(std::move(load_stats))
, _default_tablet_size(default_tablet_size) {
}
future<> populate(std::optional<host_id> host = std::nullopt,
@@ -132,11 +212,11 @@ public:
if (only_table) {
if (_tm->tablets().has_tablet_map(*only_table)) {
auto& tmap = _tm->tablets().get_tablet_map(*only_table);
co_await populate_table(tmap, host, only_dc);
co_await populate_table(*only_table, tmap, host, only_dc);
}
} else {
for (const auto& [table, tmap] : _tm->tablets().all_tables_ungrouped()) {
co_await populate_table(*tmap, host, only_dc);
co_await populate_table(table, *tmap, host, only_dc);
}
}
@@ -149,12 +229,37 @@ public:
return populate(std::nullopt, std::nullopt, dc);
}
shard_id next_shard(host_id node) {
shard_id next_shard(host_id node, size_t tablet_count, uint64_t tablet_size_sum) {
auto shard = get_least_loaded_shard(node);
pick(node, shard);
pick(node, shard, tablet_count, tablet_size_sum);
return shard;
}
bool has_complete_data(host_id node, bool only_check_disk_capacity = false) const {
if (!_nodes.contains(node)) {
return false;
}
auto& n = _nodes.at(node);
return n._has_valid_disk_capacity && (only_check_disk_capacity || n._has_all_tablet_sizes);
}
void ignore_incomplete_data(host_id node) {
if (!_nodes.contains(node)) {
return;
}
auto& n = _nodes.at(node);
n._has_valid_disk_capacity = true;
n._has_all_tablet_sizes = true;
}
void set_minimal_tablet_size(uint64_t min_ts) {
_minimal_tablet_size = min_ts;
}
void set_force_capacity_based_load(bool force_capacity_based_load) {
_force_capacity_based_load = force_capacity_based_load;
}
node_load& ensure_node(host_id node) {
if (!_nodes.contains(node)) {
const topology& topo = _tm->get_topology();
@@ -162,63 +267,85 @@ public:
if (shard_count == 0) {
throw std::runtime_error(format("Shard count not known for node {}", node));
}
auto [i, _] = _nodes.emplace(node, node_load{shard_count});
auto disk_capacity_opt = get_disk_capacity_for_node(node);
auto [i, _] = _nodes.emplace(node, node_load{shard_count, disk_capacity_opt.value_or(_default_tablet_size)});
i->second.populate_shards_by_load();
if (!disk_capacity_opt && _load_stats) {
i->second._has_valid_disk_capacity = false;
}
}
return _nodes.at(node);
}
shard_id get_least_loaded_shard(host_id node) {
auto& n = ensure_node(node);
const shard_load& s = *n._shards_by_load.begin();
return s.id;
throw_on_incomplete_data(node);
return n._shards_by_load.begin()->id;
}
shard_id get_most_loaded_shard(host_id node) {
auto& n = ensure_node(node);
const shard_load& s = *std::prev(n._shards_by_load.end());
return s.id;
throw_on_incomplete_data(node);
return std::prev(n._shards_by_load.end())->id;
}
void unload(host_id node, shard_id shard) {
void unload(host_id node, shard_id shard, size_t tablet_count_delta, uint64_t tablet_sizes_delta) {
throw_on_incomplete_data(node);
auto& n = _nodes.at(node);
n.update_shard_load(shard, -1);
n.update_shard_load(shard, -ssize_t(tablet_count_delta), -int64_t(tablet_sizes_delta));
}
void pick(host_id node, shard_id shard) {
void pick(host_id node, shard_id shard, size_t tablet_count_delta, uint64_t tablet_sizes_delta) {
throw_on_incomplete_data(node);
auto& n = _nodes.at(node);
n.update_shard_load(shard, 1);
n.update_shard_load(shard, tablet_count_delta, tablet_sizes_delta);
}
load_type get_load(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
return _nodes.at(node).load();
throw_on_incomplete_data(node);
return _nodes.at(node).get_load();
}
load_type total_load() const {
load_type total = 0;
for (auto&& n : _nodes) {
total += n.second.load();
uint64_t get_tablet_count(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
return total;
return _nodes.at(node)._tablet_count;
}
load_type get_avg_shard_load(host_id node) const {
uint64_t get_avg_tablet_count(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
auto& n = _nodes.at(node);
return div_ceil(n.load(), n._shards.size());
return div_ceil(n._tablet_count, n._shards.size());
}
double get_real_avg_shard_load(host_id node) const {
double get_real_avg_tablet_count(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
auto& n = _nodes.at(node);
return double(n.load()) / n._shards.size();
return double(n._tablet_count) / n._shards.size();
}
uint64_t get_disk_used(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
throw_on_incomplete_data(node);
return _nodes.at(node)._du.used;
}
uint64_t get_capacity(host_id node) const {
if (!_nodes.contains(node)) {
return 0;
}
throw_on_incomplete_data(node, true);
return _nodes.at(node)._du.capacity;
}
shard_id get_shard_count(host_id node) const {
@@ -231,17 +358,18 @@ public:
// Returns the difference in tablet count between highest-loaded shard and lowest-loaded shard.
// Returns 0 when shards are perfectly balanced.
// Returns 1 when shards are imbalanced, but it's not possible to balance them.
load_type get_shard_imbalance(host_id node) const {
auto minmax = get_shard_minmax(node);
return minmax.max() - minmax.max();
size_t get_shard_tablet_count_imbalance(host_id node) const {
auto minmax = get_shard_minmax_tablet_count(node);
return minmax.max() - minmax.min();
}
min_max_tracker<load_type> get_shard_minmax(host_id node) const {
min_max_tracker<load_type> minmax;
if (_nodes.contains(node)) {
throw_on_incomplete_data(node);
auto& n = _nodes.at(node);
for (auto&& load: n._shards) {
minmax.update(load);
for (auto&& shard: n._shards) {
minmax.update(shard.get_load());
}
} else {
minmax.update(0);
@@ -249,18 +377,44 @@ public:
return minmax;
}
// Returns nullopt if capacity is not known.
std::optional<double> get_allocated_utilization(host_id node, const locator::load_stats& stats, uint64_t target_tablet_size) const {
if (!_nodes.contains(node)) {
min_max_tracker<size_t> get_shard_minmax_tablet_count(host_id node) const {
min_max_tracker<size_t> minmax;
if (_nodes.contains(node)) {
auto& n = _nodes.at(node);
for (auto&& shard: n._shards) {
minmax.update(shard.tablet_count);
}
} else {
minmax.update(0);
}
return minmax;
}
// Returns nullopt if node is not known, or we don't have valid disk capacity.
std::optional<load_type> get_allocated_utilization(host_id node) const {
if (!_nodes.contains(node) || !has_complete_data(node, true)) {
return std::nullopt;
}
auto& n = _nodes.at(node);
if (!stats.capacity.contains(node)) {
const node_load& n = _nodes.at(node);
return load_type(n._tablet_count * _default_tablet_size) / n._du.capacity;
}
// Returns nullopt if node is not known, or we don't have tablet sizes or valid disk capacity.
std::optional<load_type> get_storage_utilization(host_id node) const {
if (!_nodes.contains(node) || !has_complete_data(node)) {
return std::nullopt;
}
auto capacity = stats.capacity.at(node);
return capacity > 0 ? double(n.load() * target_tablet_size) / capacity : 0;
return _nodes.at(node).get_load();
}
};
} // namespace locator
template<>
struct fmt::formatter<locator::disk_usage> : fmt::formatter<string_view> {
template <typename FormatContext>
auto format(const locator::disk_usage& du, FormatContext& ctx) const {
return fmt::format_to(ctx.out(), "cap: {:i} used: {:i} load: {}",
utils::pretty_printed_data_size(du.capacity), utils::pretty_printed_data_size(du.used), du.get_load());
}
};
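The two accessors near the end of the class above draw a useful distinction: allocated utilization assumes every tablet occupies the default target size (so it only needs a valid capacity), while storage utilization divides actual bytes used by capacity (so it also needs per-tablet sizes). A toy illustration of the difference, with hypothetical free functions and numbers:

```cpp
#include <cassert>
#include <cstdint>

// Allocated utilization: pretend every tablet occupies the default target size.
inline double allocated_utilization(uint64_t tablet_count, uint64_t default_tablet_size,
                                    uint64_t capacity) {
    return double(tablet_count * default_tablet_size) / capacity;
}

// Storage utilization: actual bytes used divided by capacity.
inline double storage_utilization(uint64_t used, uint64_t capacity) {
    return double(used) / capacity;
}
```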


@@ -403,7 +403,7 @@ tablet_replica_set network_topology_strategy::drop_tablets_in_racks(schema_ptr s
if (node.dc_rack().dc == dc && is_rack_to_drop(node.dc_rack().rack)) {
tablet_logger.debug("drop_tablets_in_rack {}.{} tablet_id={} dc={} rack={} removing replica: {}",
s->ks_name(), s->cf_name(), tb, node.dc_rack().dc, node.dc_rack().rack, tr);
load.unload(tr.host, tr.shard);
load.unload(tr.host, tr.shard, 1, service::default_target_tablet_size);
} else {
filtered.emplace_back(tr);
}
@@ -433,7 +433,7 @@ tablet_replica_set network_topology_strategy::add_tablets_in_racks(schema_ptr s,
// Assume that if there was a diff to add a rack, we don't already have a replica
// in the target rack so all nodes in the rack are eligible.
// FIXME: pick based on storage utilization: https://github.com/scylladb/scylladb/issues/26366
auto node_load = load.get_real_avg_shard_load(node.get().host_id());
auto node_load = load.get_real_avg_tablet_count(node.get().host_id());
if (node_load < min_load) {
min_load = node_load;
min_node = node.get().host_id();
@@ -445,7 +445,7 @@ tablet_replica_set network_topology_strategy::add_tablets_in_racks(schema_ptr s,
fmt::format("No candidate node in rack {}.{} to allocate tablet replica", dc, rack));
}
auto new_replica = tablet_replica{min_node, load.next_shard(min_node)};
auto new_replica = tablet_replica{min_node, load.next_shard(min_node, 1, service::default_target_tablet_size)};
new_replicas.push_back(new_replica);
tablet_logger.trace("add_tablet_in_rack {}.{} tablet_id={} dc={} rack={} load={} new_replica={}",
@@ -502,7 +502,7 @@ future<tablet_replica_set> network_topology_strategy::add_tablets_in_dc(schema_p
const auto& host_id = node.get().host_id();
if (!existing.contains(host_id)) {
// FIXME: https://github.com/scylladb/scylladb/issues/26366
candidate.nodes.emplace_back(host_id, load.get_avg_shard_load(host_id));
candidate.nodes.emplace_back(host_id, load.get_avg_tablet_count(host_id));
}
}
if (candidate.nodes.empty()) {
@@ -552,7 +552,7 @@ future<tablet_replica_set> network_topology_strategy::add_tablets_in_dc(schema_p
s->ks_name(), s->cf_name(), tb.id, rack, dc, dc_node_count, dc_rf));
}
auto host_id = nodes.back().host;
auto replica = tablet_replica{host_id, load.next_shard(host_id)};
auto replica = tablet_replica{host_id, load.next_shard(host_id, 1, service::default_target_tablet_size)};
const auto& node = tm->get_topology().get_node(host_id);
auto inserted = replicas_per_rack[node.dc_rack().rack].insert(host_id).second;
// Sanity check that a node is not used more than once
@@ -614,7 +614,7 @@ tablet_replica_set network_topology_strategy::drop_tablets_in_dc(schema_ptr s, c
if (topo.get_node(tr.host).dc_rack().dc != dc || ++nodes_in_dc <= dc_rf) {
filtered.emplace_back(tr);
} else {
load.unload(tr.host, tr.shard);
load.unload(tr.host, tr.shard, 1, service::default_target_tablet_size);
}
}
return filtered;


@@ -927,6 +927,56 @@ std::optional<uint64_t> load_stats::get_tablet_size(host_id host, const range_ba
return std::nullopt;
}
std::optional<uint64_t> load_stats::get_tablet_size_in_transition(host_id host, const range_based_tablet_id& rb_tid, const tablet_info& ti, const tablet_transition_info* trinfo) const {
std::optional<uint64_t> tablet_size_opt;
tablet_size_opt = get_tablet_size(host, rb_tid);
if (tablet_size_opt) {
return tablet_size_opt;
}
// If the tablet is in transition, try to find its size on the leaving
// replica (in case of tablet migration), or take the average tablet size
// across the replicas (in case of a rebuild).
if (trinfo) {
switch (trinfo->transition) {
case tablet_transition_kind::migration:
// Search for the tablet size on leaving replica
if (trinfo->pending_replica && trinfo->pending_replica->host == host) {
if (auto leaving_replica = get_leaving_replica(ti, *trinfo)) {
tablet_size_opt = get_tablet_size(leaving_replica->host, rb_tid);
} else {
on_internal_error_noexcept(tablet_logger, ::format("No leaving replica for tablet migration in table {}. ti.replicas: {} trinfo->next: {}",
rb_tid.table, ti.replicas, trinfo->next));
}
}
break;
case tablet_transition_kind::rebuild:
[[fallthrough]];
case tablet_transition_kind::rebuild_v2: {
// Get the avg tablet size from the available replicas
size_t replica_count = 0;
uint64_t tablet_size_sum = 0;
for (auto& replica : ti.replicas) {
auto new_tablet_size_opt = get_tablet_size(replica.host, rb_tid);
if (new_tablet_size_opt) {
tablet_size_sum += *new_tablet_size_opt;
replica_count++;
}
}
if (replica_count) {
tablet_size_opt = tablet_size_sum / replica_count;
}
break;
}
case tablet_transition_kind::intranode_migration:
[[fallthrough]];
case tablet_transition_kind::repair:
break;
}
}
return tablet_size_opt;
}
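For the rebuild case handled above, the size of the pending tablet is estimated as the mean of the sizes known on the existing replicas, skipping replicas whose size is unknown, and falling back to `nullopt` (and ultimately the default tablet size) when no replica reported one. A self-contained sketch of that averaging step (hypothetical helper, integer division as in the original):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Average the known sizes; nullopt if no replica reported a size.
inline std::optional<uint64_t>
average_known_sizes(const std::vector<std::optional<uint64_t>>& sizes) {
    uint64_t sum = 0;
    size_t count = 0;
    for (const auto& s : sizes) {
        if (s) {
            sum += *s;
            ++count;
        }
    }
    if (count == 0) {
        return std::nullopt; // no data: caller falls back to the default tablet size
    }
    return sum / count; // integer division, matching the original code
}
```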
lw_shared_ptr<load_stats> load_stats::reconcile_tablets_resize(const std::unordered_set<table_id>& tables, const token_metadata& old_tm, const token_metadata& new_tm) const {
lw_shared_ptr<load_stats> reconciled_stats { make_lw_shared<load_stats>(*this) };
load_stats& new_stats = *reconciled_stats;


@@ -489,6 +489,12 @@ struct load_stats {
std::optional<uint64_t> get_tablet_size(host_id host, const range_based_tablet_id& rb_tid) const;
// Returns the tablet size on the given host. If the tablet size is not found on the host, we will search for it on
// other hosts based on the tablet transition info:
// - if the tablet is in migration, and the given host is pending, the tablet size will be searched on the leaving replica
// - if the tablet is being rebuilt, we will return the average tablet size of all the replicas
std::optional<uint64_t> get_tablet_size_in_transition(host_id host, const range_based_tablet_id& rb_tid, const tablet_info& ti, const tablet_transition_info* trinfo) const;
// Modifies the tablet sizes in load_stats for the given table after a split or merge. The old_tm argument has
// to contain the token_metadata pre-resize. The function returns load_stats with tablet token ranges
// corresponding to the post-resize tablet_map.


@@ -959,7 +959,6 @@ To start the scylla server proper, simply invoke as: scylla server (or just scyl
auto ip = utils::resolve(cfg->prometheus_address || cfg->listen_address, family, preferred).get();
prometheus::config pctx;
pctx.metric_help = "Scylla server statistics";
pctx.prefix = cfg->prometheus_prefix();
pctx.allow_protobuf = cfg->prometheus_allow_protobuf();
prometheus::start(prometheus_server, pctx).get();
@@ -1791,7 +1790,7 @@ To start the scylla server proper, simply invoke as: scylla server (or just scyl
});
checkpoint(stop_signal, "starting auth cache");
auth_cache.start(std::ref(qp)).get();
auth_cache.start(std::ref(qp), std::ref(stop_signal.as_sharded_abort_source())).get();
auto stop_auth_cache = defer_verbose_shutdown("auth cache", [&] {
auth_cache.stop().get();
});
@@ -2527,7 +2526,7 @@ To start the scylla server proper, simply invoke as: scylla server (or just scyl
api::set_server_service_levels(ctx, cql_server_ctl, qp).get();
alternator::controller alternator_ctl(gossiper, proxy, mm, sys_dist_ks, cdc_generation_service, service_memory_limiter, auth_service, sl_controller, *cfg, dbcfg.statement_scheduling_group);
alternator::controller alternator_ctl(gossiper, proxy, ss, mm, sys_dist_ks, cdc_generation_service, service_memory_limiter, auth_service, sl_controller, *cfg, dbcfg.statement_scheduling_group);
// Register at_exit last, so that storage_service::drain_on_shutdown will be called first
auto do_drain = defer_verbose_shutdown("local storage", [&ss] {


@@ -1292,7 +1292,7 @@ future<std::tuple<rpc::sink<frozen_mutation_fragment, streaming::stream_mutation
messaging_service::make_sink_and_source_for_stream_mutation_fragments(table_schema_version schema_id, streaming::plan_id plan_id, table_id cf_id, uint64_t estimated_partitions, streaming::stream_reason reason, service::session_id session, locator::host_id id) {
using value_type = std::tuple<rpc::sink<frozen_mutation_fragment, streaming::stream_mutation_fragments_cmd>, rpc::source<int32_t>>;
if (is_shutting_down()) {
return make_exception_future<value_type>(rpc::closed_error());
return make_exception_future<value_type>(rpc::closed_error("local node is shutting down"));
}
auto rpc_client = get_rpc_client(messaging_verb::STREAM_MUTATION_FRAGMENTS, addr_for_host_id(id), id);
return rpc_client->make_stream_sink<netw::serializer, frozen_mutation_fragment, streaming::stream_mutation_fragments_cmd>().then([this, session, plan_id, schema_id, cf_id, estimated_partitions, reason, rpc_client] (rpc::sink<frozen_mutation_fragment, streaming::stream_mutation_fragments_cmd> sink) mutable {
@@ -1321,7 +1321,7 @@ rpc::sink<streaming::stream_blob_cmd_data> messaging_service::make_sink_for_stre
future<std::tuple<rpc::sink<streaming::stream_blob_cmd_data>, rpc::source<streaming::stream_blob_cmd_data>>>
messaging_service::make_sink_and_source_for_stream_blob(streaming::stream_blob_meta meta, locator::host_id id) {
if (is_shutting_down()) {
co_await coroutine::return_exception(rpc::closed_error());
co_await coroutine::return_exception(rpc::closed_error("local node is shutting down"));
}
auto rpc_client = get_rpc_client(messaging_verb::STREAM_BLOB, addr_for_host_id(id), id);
auto sink = co_await rpc_client->make_stream_sink<netw::serializer, streaming::stream_blob_cmd_data>();
@@ -1370,7 +1370,7 @@ future<std::tuple<rpc::sink<repair_hash_with_cmd>, rpc::source<repair_row_on_wir
messaging_service::make_sink_and_source_for_repair_get_row_diff_with_rpc_stream(uint32_t repair_meta_id, shard_id dst_cpu_id, locator::host_id id) {
auto verb = messaging_verb::REPAIR_GET_ROW_DIFF_WITH_RPC_STREAM;
if (is_shutting_down()) {
return make_exception_future<std::tuple<rpc::sink<repair_hash_with_cmd>, rpc::source<repair_row_on_wire_with_cmd>>>(rpc::closed_error());
return make_exception_future<std::tuple<rpc::sink<repair_hash_with_cmd>, rpc::source<repair_row_on_wire_with_cmd>>>(rpc::closed_error("local node is shutting down"));
}
auto rpc_client = get_rpc_client(verb, addr_for_host_id(id), id);
return do_make_sink_source<repair_hash_with_cmd, repair_row_on_wire_with_cmd>(verb, repair_meta_id, dst_cpu_id, std::move(rpc_client), rpc());
@@ -1392,7 +1392,7 @@ future<std::tuple<rpc::sink<repair_row_on_wire_with_cmd>, rpc::source<repair_str
messaging_service::make_sink_and_source_for_repair_put_row_diff_with_rpc_stream(uint32_t repair_meta_id, shard_id dst_cpu_id, locator::host_id id) {
auto verb = messaging_verb::REPAIR_PUT_ROW_DIFF_WITH_RPC_STREAM;
if (is_shutting_down()) {
return make_exception_future<std::tuple<rpc::sink<repair_row_on_wire_with_cmd>, rpc::source<repair_stream_cmd>>>(rpc::closed_error());
return make_exception_future<std::tuple<rpc::sink<repair_row_on_wire_with_cmd>, rpc::source<repair_stream_cmd>>>(rpc::closed_error("local node is shutting down"));
}
auto rpc_client = get_rpc_client(verb, addr_for_host_id(id), id);
return do_make_sink_source<repair_row_on_wire_with_cmd, repair_stream_cmd>(verb, repair_meta_id, dst_cpu_id, std::move(rpc_client), rpc());
@@ -1414,7 +1414,7 @@ future<std::tuple<rpc::sink<repair_stream_cmd>, rpc::source<repair_hash_with_cmd
messaging_service::make_sink_and_source_for_repair_get_full_row_hashes_with_rpc_stream(uint32_t repair_meta_id, shard_id dst_cpu_id, locator::host_id id) {
auto verb = messaging_verb::REPAIR_GET_FULL_ROW_HASHES_WITH_RPC_STREAM;
if (is_shutting_down()) {
return make_exception_future<std::tuple<rpc::sink<repair_stream_cmd>, rpc::source<repair_hash_with_cmd>>>(rpc::closed_error());
return make_exception_future<std::tuple<rpc::sink<repair_stream_cmd>, rpc::source<repair_hash_with_cmd>>>(rpc::closed_error("local node is shutting down"));
}
auto rpc_client = get_rpc_client(verb, addr_for_host_id(id), id);
return do_make_sink_source<repair_stream_cmd, repair_hash_with_cmd>(verb, repair_meta_id, dst_cpu_id, std::move(rpc_client), rpc());


@@ -127,20 +127,21 @@ auto send_message(messaging_service* ms, messaging_verb verb, std::optional<loca
auto rpc_handler = ms->rpc()->make_client<MsgIn(MsgOut...)>(verb);
using futurator = futurize<std::invoke_result_t<decltype(rpc_handler), rpc_protocol::client&, MsgOut...>>;
if (ms->is_shutting_down()) {
return futurator::make_exception_future(rpc::closed_error());
return futurator::make_exception_future(rpc::closed_error("local node is shutting down"));
}
auto rpc_client_ptr = ms->get_rpc_client(verb, id, host_id);
auto& rpc_client = *rpc_client_ptr;
return rpc_handler(rpc_client, std::forward<MsgOut>(msg)...).handle_exception([ms = ms->shared_from_this(), id, host_id, verb, rpc_client_ptr = std::move(rpc_client_ptr)] (std::exception_ptr&& eptr) {
ms->increment_dropped_messages(verb);
if (try_catch<rpc::closed_error>(eptr)) {
if (const auto* exp = try_catch<rpc::closed_error>(eptr)) {
// This is a transport error
if (host_id) {
ms->remove_error_rpc_client(verb, *host_id);
} else {
ms->remove_error_rpc_client(verb, id);
}
return futurator::make_exception_future(std::move(eptr));
return futurator::make_exception_future(rpc::closed_error(fmt::format("got error from node {}/{}: {}",
host_id.value_or(locator::host_id{}), id.addr, exp->what())));
} else {
// This is expected to be a rpc server error, e.g., the rpc handler throws a std::runtime_error.
return futurator::make_exception_future(std::move(eptr));
@@ -165,20 +166,21 @@ auto send_message_timeout(messaging_service* ms, messaging_verb verb, std::optio
auto rpc_handler = ms->rpc()->make_client<MsgIn(MsgOut...)>(verb);
using futurator = futurize<std::invoke_result_t<decltype(rpc_handler), rpc_protocol::client&, MsgOut...>>;
if (ms->is_shutting_down()) {
return futurator::make_exception_future(rpc::closed_error());
return futurator::make_exception_future(rpc::closed_error("local node is shutting down"));
}
auto rpc_client_ptr = ms->get_rpc_client(verb, id, host_id);
auto& rpc_client = *rpc_client_ptr;
return rpc_handler(rpc_client, timeout, std::forward<MsgOut>(msg)...).handle_exception([ms = ms->shared_from_this(), id, host_id, verb, rpc_client_ptr = std::move(rpc_client_ptr)] (std::exception_ptr&& eptr) {
ms->increment_dropped_messages(verb);
if (try_catch<rpc::closed_error>(eptr)) {
if (const auto* exp = try_catch<rpc::closed_error>(eptr)) {
// This is a transport error
if (host_id) {
ms->remove_error_rpc_client(verb, *host_id);
} else {
ms->remove_error_rpc_client(verb, id);
}
return futurator::make_exception_future(std::move(eptr));
return futurator::make_exception_future(rpc::closed_error(fmt::format("got error from node {}/{}: {}",
host_id.value_or(locator::host_id{}), id.addr, exp->what())));
} else {
// This is expected to be a rpc server error, e.g., the rpc handler throws a std::runtime_error.
return futurator::make_exception_future(std::move(eptr));
@@ -206,7 +208,7 @@ auto send_message_cancellable(messaging_service* ms, messaging_verb verb, std::o
auto rpc_handler = ms->rpc()->make_client<MsgIn(MsgOut...)>(verb);
using futurator = futurize<std::invoke_result_t<decltype(rpc_handler), rpc_protocol::client&, MsgOut...>>;
if (ms->is_shutting_down()) {
return futurator::make_exception_future(rpc::closed_error());
return futurator::make_exception_future(rpc::closed_error("local node is shutting down"));
}
auto rpc_client_ptr = ms->get_rpc_client(verb, id, host_id);
auto& rpc_client = *rpc_client_ptr;
@@ -222,14 +224,15 @@ auto send_message_cancellable(messaging_service* ms, messaging_verb verb, std::o
return rpc_handler(rpc_client, c_ref, std::forward<MsgOut>(msg)...).handle_exception([ms = ms->shared_from_this(), id, host_id, verb, rpc_client_ptr = std::move(rpc_client_ptr), sub = std::move(sub)] (std::exception_ptr&& eptr) {
ms->increment_dropped_messages(verb);
if (try_catch<rpc::closed_error>(eptr)) {
if (const auto* exp = try_catch<rpc::closed_error>(eptr)) {
// This is a transport error
if (host_id) {
ms->remove_error_rpc_client(verb, *host_id);
} else {
ms->remove_error_rpc_client(verb, id);
}
return futurator::make_exception_future(std::move(eptr));
return futurator::make_exception_future(rpc::closed_error(fmt::format("got error from node {}/{}: {}",
host_id.value_or(locator::host_id{}), id.addr, exp->what())));
} else if (try_catch<rpc::canceled_error>(eptr)) {
// Translate low-level canceled_error into high-level abort_requested_exception.
return futurator::make_exception_future(abort_requested_exception{});
@@ -255,9 +258,10 @@ auto send_message_timeout_cancellable(messaging_service* ms, messaging_verb verb
auto rpc_handler = ms->rpc()->make_client<MsgIn(MsgOut...)>(verb);
using futurator = futurize<std::invoke_result_t<decltype(rpc_handler), rpc_protocol::client&, MsgOut...>>;
if (ms->is_shutting_down()) {
return futurator::make_exception_future(rpc::closed_error());
return futurator::make_exception_future(rpc::closed_error("local node is shutting down"));
}
auto rpc_client_ptr = ms->get_rpc_client(verb, ms->addr_for_host_id(host_id), host_id);
auto address = ms->addr_for_host_id(host_id);
auto rpc_client_ptr = ms->get_rpc_client(verb, address, host_id);
auto& rpc_client = *rpc_client_ptr;
auto c = std::make_unique<seastar::rpc::cancellable>();
@@ -269,12 +273,13 @@ auto send_message_timeout_cancellable(messaging_service* ms, messaging_verb verb
return futurator::make_exception_future(abort_requested_exception{});
}
return rpc_handler(rpc_client, timeout, c_ref, std::forward<MsgOut>(msg)...).handle_exception([ms = ms->shared_from_this(), host_id, verb, rpc_client_ptr = std::move(rpc_client_ptr), sub = std::move(sub)] (std::exception_ptr&& eptr) {
return rpc_handler(rpc_client, timeout, c_ref, std::forward<MsgOut>(msg)...).handle_exception([ms = ms->shared_from_this(), host_id, address, verb, rpc_client_ptr = std::move(rpc_client_ptr), sub = std::move(sub)] (std::exception_ptr&& eptr) {
ms->increment_dropped_messages(verb);
if (try_catch<rpc::closed_error>(eptr)) {
if (const auto* exp = try_catch<rpc::closed_error>(eptr)) {
// This is a transport error
ms->remove_error_rpc_client(verb, host_id);
return futurator::make_exception_future(std::move(eptr));
return futurator::make_exception_future(rpc::closed_error(fmt::format("got error from node {}/{}: {}",
host_id, address.addr, exp->what())));
} else if (try_catch<rpc::canceled_error>(eptr)) {
// Translate low-level canceled_error into high-level abort_requested_exception.
return futurator::make_exception_future(abort_requested_exception{});


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3cbe2dd05945f8fb76ebce2ea70864063d2b282c4d5080af1f290ead43321ab3
size 6444732
oid sha256:9d387b5ff44094e9b6c587d3e0cb2e7098ea68924f3f9947ff7574be3c378a4e
size 6475784


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad1705d5c37cc6b6cd24354b83fee8da64a14f918351d357f21cf771a650ad3d
size 6452816
oid sha256:3b35c1ed982e025b4c3d079c2d14873a115ff8e8d364a19633bf83052e52a059
size 6473408


@@ -1179,6 +1179,7 @@ private:
bool full = is_incremental_repair_using_all_sstables();
auto& tinfo = tmap.get_tablet_info(id);
auto sstables_repaired_at = tinfo.sstables_repaired_at;
auto gid = locator::global_tablet_id{tid, id};
// Consider this:
// 1) n1 is the topology coordinator
// 2) n1 schedules and executes a tablet repair with session id s1 for a tablet on n3 and n4.
@@ -1190,14 +1191,16 @@ private:
// To avoid the deadlock, we can throw in step 7 so that n2 will
// proceed to the end_repair stage and release the lock. After that,
// the scheduler could schedule the tablet repair again.
if (_rs._repair_compaction_locks.contains(_frozen_topology_guard)) {
if (_rs._repair_compaction_locks.contains(gid)) {
auto msg = fmt::format("Tablet repair session={} table={} is in progress", _frozen_topology_guard, tid);
rlogger.info("{}", msg);
throw std::runtime_error(msg);
}
co_await utils::get_local_injector().inject("incremental_repair_prepare_wait", utils::wait_for_message(60s));
auto reenablers_and_holders = co_await table.get_compaction_reenablers_and_lock_holders_for_repair(_db.local(), _frozen_topology_guard, _range);
for (auto& lock_holder : reenablers_and_holders.lock_holders) {
_rs._repair_compaction_locks[_frozen_topology_guard].push_back(std::move(lock_holder));
_rs._repair_compaction_locks[gid].push_back(std::move(lock_holder));
}
auto sstables = co_await table.take_storage_snapshot(_range);
_incremental_repair_meta.sst_set = make_lw_shared<sstables::sstable_set>(sstables::make_partitioned_sstable_set(_schema, _range));
@@ -2836,9 +2839,20 @@ future<> repair_service::init_ms_handlers() {
auto& table = local_repair.get_db().local().find_column_family(gid.table);
auto erm = table.get_effective_replication_map();
auto& tmap = erm->get_token_metadata_ptr()->tablets().get_tablet_map(gid.table);
auto* trinfo = tmap.get_tablet_transition_info(gid.tablet);
if (!trinfo) {
auto msg = fmt::format("Skipped repair_update_compaction_ctrl gid={} session_id={} since tablet is not in transition", gid, topo_guard);
rlogger.warn("{}", msg);
throw std::runtime_error(msg);
}
if (trinfo->stage != locator::tablet_transition_stage::end_repair) {
auto msg = fmt::format("Skipped repair_update_compaction_ctrl gid={} session_id={} since tablet is not in tablet_transition_stage::end_repair", gid, topo_guard);
rlogger.warn("{}", msg);
throw std::runtime_error(msg);
}
auto range = tmap.get_token_range(gid.tablet);
co_await table.clear_being_repaired_for_range(range);
auto removed = local_repair._repair_compaction_locks.erase(topo_guard);
auto removed = local_repair._repair_compaction_locks.erase(gid);
rlogger.info("Got repair_update_compaction_ctrl gid={} session_id={} removed={}", gid, topo_guard, removed);
});
});

View File

@@ -154,7 +154,7 @@ class repair_service : public seastar::peering_sharded_service<repair_service> {
std::unordered_set<locator::host_id> ignore_nodes);
public:
std::unordered_map<service::session_id, std::vector<seastar::rwlock::holder>> _repair_compaction_locks;
std::unordered_map<locator::global_tablet_id, std::vector<seastar::rwlock::holder>> _repair_compaction_locks;
public:
repair_service(sharded<service::topology_state_machine>& tsm,

View File

@@ -84,6 +84,10 @@ class compaction_group {
seastar::named_gate _async_gate;
// Gates flushes.
seastar::named_gate _flush_gate;
// Gates sstable being added to the group.
// This prevents the group from being considered empty when sstables are being added.
// Crucial for tablet split which ACKs split for a table when all pre-split groups are empty.
seastar::named_gate _sstable_add_gate;
bool _tombstone_gc_enabled = true;
std::optional<compaction::compaction_backlog_tracker> _backlog_tracker;
repair_classifier_func _repair_sstable_classifier;
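The new `_sstable_add_gate` above keeps a compaction group from reporting itself empty while sstables are still in flight, which matters because tablet split is acknowledged once all pre-split groups are empty. A minimal stand-in for the idea, not Seastar's actual `named_gate`:

```cpp
#include <cstddef>
#include <cstdint>

// Simplified gate: a counter of in-flight sstable additions held via RAII.
struct add_gate {
    uint64_t count = 0;
    struct holder {
        add_gate* g;
        explicit holder(add_gate& gate) : g(&gate) { ++g->count; }
        ~holder() { --g->count; }
        holder(const holder&) = delete;
        holder& operator=(const holder&) = delete;
    };
};

// A group with zero sstables but a pending add is NOT empty, mirroring the
// change to compaction_group::empty() in this series.
struct group {
    std::size_t sstables = 0;
    add_gate gate;
    bool empty() const { return sstables == 0 && gate.count == 0; }
};
```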
@@ -248,6 +252,10 @@ public:
return _flush_gate;
}
seastar::named_gate& sstable_add_gate() noexcept {
return _sstable_add_gate;
}
compaction::compaction_manager& get_compaction_manager() noexcept;
const compaction::compaction_manager& get_compaction_manager() const noexcept;
@@ -306,8 +314,8 @@ public:
uint64_t live_disk_space_used() const;
void for_each_compaction_group(std::function<void(const compaction_group_ptr&)> action) const;
utils::small_vector<compaction_group_ptr, 3> compaction_groups();
utils::small_vector<const_compaction_group_ptr, 3> compaction_groups() const;
utils::small_vector<compaction_group_ptr, 3> compaction_groups_immediate();
utils::small_vector<const_compaction_group_ptr, 3> compaction_groups_immediate() const;
utils::small_vector<compaction_group_ptr, 3> split_unready_groups() const;
bool split_unready_groups_are_empty() const;
@@ -434,7 +442,7 @@ public:
virtual bool all_storage_groups_split() = 0;
virtual future<> split_all_storage_groups(tasks::task_info tablet_split_task_info) = 0;
virtual future<> maybe_split_compaction_group_of(size_t idx) = 0;
virtual future<std::vector<sstables::shared_sstable>> maybe_split_sstable(const sstables::shared_sstable& sst) = 0;
virtual future<std::vector<sstables::shared_sstable>> maybe_split_new_sstable(const sstables::shared_sstable& sst) = 0;
virtual dht::token_range get_token_range_after_split(const dht::token&) const noexcept = 0;
virtual lw_shared_ptr<sstables::sstable_set> make_sstable_set() const = 0;

View File

@@ -2816,7 +2816,7 @@ future<> database::snapshot_table_on_all_shards(sharded<database>& sharded_db, t
co_await flush_table_on_all_shards(sharded_db, uuid);
}
auto table_shards = co_await get_table_on_all_shards(sharded_db, uuid);
co_await table::snapshot_on_all_shards(sharded_db, table_shards, tag);
co_await snapshot_table_on_all_shards(sharded_db, table_shards, tag);
}
future<> database::snapshot_tables_on_all_shards(sharded<database>& sharded_db, std::string_view ks_name, std::vector<sstring> table_names, sstring tag, bool skip_flush) {
@@ -2952,7 +2952,7 @@ future<> database::truncate_table_on_all_shards(sharded<database>& sharded_db, s
auto truncated_at = truncated_at_opt.value_or(db_clock::now());
auto name = snapshot_name_opt.value_or(
format("{:d}-{}", truncated_at.time_since_epoch().count(), cf.schema()->cf_name()));
co_await table::snapshot_on_all_shards(sharded_db, table_shards, name);
co_await snapshot_table_on_all_shards(sharded_db, table_shards, name);
}
co_await sharded_db.invoke_on_all([&] (database& db) {

View File

@@ -95,7 +95,6 @@ class reconcilable_result;
namespace bi = boost::intrusive;
namespace tracing { class trace_state_ptr; }
namespace s3 { struct endpoint_config; }
namespace lang { class manager; }
@@ -604,9 +603,28 @@ public:
data_dictionary::table as_data_dictionary() const;
// The usage of these functions are restricted to preexisting sstables that aren't being
// moved anywhere, so should never be used in the context of file streaming and intra
// node migration. The only user today is distributed loader, which populates the
// sstables for each column family on boot.
future<> add_sstable_and_update_cache(sstables::shared_sstable sst,
sstables::offstrategy offstrategy = sstables::offstrategy::no);
future<> add_sstables_and_update_cache(const std::vector<sstables::shared_sstable>& ssts);
// Restricted to new sstables produced by external processes such as repair.
// The sstable might undergo split if table is in split mode.
// If no need for split, the input sstable will only be attached to the sstable set.
// If split happens, the output sstables will be attached and the input sstable unlinked.
// On failure, the input sstable is unlinked and exception propagated to the caller.
// The on_add callback will be called on all sstables to be added into the set.
[[nodiscard]] future<std::vector<sstables::shared_sstable>>
add_new_sstable_and_update_cache(sstables::shared_sstable new_sst,
std::function<future<>(sstables::shared_sstable)> on_add,
sstables::offstrategy offstrategy = sstables::offstrategy::no);
[[nodiscard]] future<std::vector<sstables::shared_sstable>>
add_new_sstables_and_update_cache(std::vector<sstables::shared_sstable> new_ssts,
std::function<future<>(sstables::shared_sstable)> on_add);
future<> move_sstables_from_staging(std::vector<sstables::shared_sstable>);
sstables::shared_sstable make_sstable();
void set_truncation_time(db_clock::time_point truncated_at) noexcept {
@@ -724,7 +742,9 @@ private:
return _config.enable_cache && _schema->caching_options().enabled();
}
void update_stats_for_new_sstable(const sstables::shared_sstable& sst) noexcept;
future<> do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_sstable sst, sstables::offstrategy, bool trigger_compaction);
// This function can throw even if the sstable was added into the set. When the sstable was successfully
// added, the sstable ptr @sst will be set to nullptr. Allowing caller to optionally discard the sstable.
future<> do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_sstable& sst, sstables::offstrategy, bool trigger_compaction);
future<> do_add_sstable_and_update_cache(sstables::shared_sstable sst, sstables::offstrategy offstrategy, bool trigger_compaction);
// Helpers which add sstable on behalf of a compaction group and refreshes compound set.
void add_sstable(compaction_group& cg, sstables::shared_sstable sstable);
@@ -1037,37 +1057,11 @@ public:
db::replay_position set_low_replay_position_mark();
db::replay_position highest_flushed_replay_position() const;
private:
using snapshot_file_set = foreign_ptr<std::unique_ptr<std::unordered_set<sstring>>>;
future<snapshot_file_set> take_snapshot(sstring jsondir);
// Writes the table schema and the manifest of all files in the snapshot directory.
future<> finalize_snapshot(const global_table_ptr& table_shards, sstring jsondir, std::vector<snapshot_file_set> file_sets);
static future<> seal_snapshot(sstring jsondir, std::vector<snapshot_file_set> file_sets);
public:
static future<> snapshot_on_all_shards(sharded<database>& sharded_db, const global_table_ptr& table_shards, sstring name);
future<std::pair<std::vector<sstables::shared_sstable>, sstable_list_permit>> snapshot_sstables();
future<std::unordered_map<sstring, snapshot_details>> get_snapshot_details();
static future<snapshot_details> get_snapshot_details(std::filesystem::path snapshot_dir, std::filesystem::path datadir);
/*!
* \brief write the schema to a 'schema.cql' file at the given directory.
*
* When doing a snapshot, the snapshot directory contains a 'schema.cql' file
* with a CQL command that can be used to generate the schema.
* The content is similar to the result of the CQL DESCRIBE command of the table.
*
* When a schema has indexes, local indexes or views, those indexes and views
* are represented by their own schemas.
* In those cases, the method would write the relevant information for each of the schemas:
*
* The schema of the base table would output a file with the CREATE TABLE command
* and the schema of the view that is used for the index would output a file with the
* CREATE INDEX command.
* The same is true for local index and MATERIALIZED VIEW.
*/
future<> write_schema_as_cql(const global_table_ptr& table_shards, sstring dir) const;
bool incremental_backups_enabled() const {
return _config.enable_incremental_backups;
}
@@ -1358,7 +1352,8 @@ public:
// Clones storage of a given tablet. Memtable is flushed first to guarantee that the
// snapshot (list of sstables) will include all the data written up to the time it was taken.
future<utils::chunked_vector<sstables::entry_descriptor>> clone_tablet_storage(locator::tablet_id tid);
// If leave_unsealed is set, all the destination sstables will be left unsealed.
future<utils::chunked_vector<sstables::entry_descriptor>> clone_tablet_storage(locator::tablet_id tid, bool leave_unsealed);
friend class compaction_group;
friend class compaction::compaction_task_impl;
@@ -2019,6 +2014,7 @@ private:
keyspace::config make_keyspace_config(const keyspace_metadata& ksm, system_keyspace is_system);
struct table_truncate_state;
static future<> snapshot_table_on_all_shards(sharded<database>& sharded_db, const global_table_ptr& table_shards, sstring name);
static future<> truncate_table_on_all_shards(sharded<database>& db, sharded<db::system_keyspace>& sys_ks, const global_table_ptr&, std::optional<db_clock::time_point> truncated_at_opt, bool with_snapshot, std::optional<sstring> snapshot_name_opt);
future<> truncate(db::system_keyspace& sys_ks, column_family& cf, std::vector<lw_shared_ptr<replica::table>>& views, const table_truncate_state&);
public:

View File

@@ -210,9 +210,9 @@ table::add_memtables_to_reader_list(std::vector<mutation_reader>& readers,
auto sgs = storage_groups_for_token_range(token_range);
reserve_fn(std::ranges::fold_left(sgs | std::views::transform(std::mem_fn(&storage_group::memtable_count)), uint64_t(0), std::plus{}));
for (auto& sg : sgs) {
for (auto& cg : sg->compaction_groups()) {
sg->for_each_compaction_group([&] (const compaction_group_ptr &cg) {
add_memtables_from_cg(*cg);
}
});
}
}
@@ -423,15 +423,27 @@ bool compaction_group::memtable_has_key(const dht::decorated_key& key) const {
}
api::timestamp_type storage_group::min_memtable_timestamp() const {
return std::ranges::min(compaction_groups() | std::views::transform(std::mem_fn(&compaction_group::min_memtable_timestamp)));
api::timestamp_type min_timestamp = api::max_timestamp;
for_each_compaction_group([&min_timestamp] (const compaction_group_ptr& cg) {
min_timestamp = std::min(min_timestamp, cg->min_memtable_timestamp());
});
return min_timestamp;
}
api::timestamp_type storage_group::min_memtable_live_timestamp() const {
return std::ranges::min(compaction_groups() | std::views::transform(std::mem_fn(&compaction_group::min_memtable_live_timestamp)));
api::timestamp_type min_timestamp = api::max_timestamp;
for_each_compaction_group([&min_timestamp] (const compaction_group_ptr& cg) {
min_timestamp = std::min(min_timestamp, cg->min_memtable_live_timestamp());
});
return min_timestamp;
}
api::timestamp_type storage_group::min_memtable_live_row_marker_timestamp() const {
return std::ranges::min(compaction_groups() | std::views::transform(std::mem_fn(&compaction_group::min_memtable_live_row_marker_timestamp)));
api::timestamp_type min_timestamp = api::max_timestamp;
for_each_compaction_group([&min_timestamp] (const compaction_group_ptr& cg) {
min_timestamp = std::min(min_timestamp, cg->min_memtable_live_row_marker_timestamp());
});
return min_timestamp;
}
api::timestamp_type table::min_memtable_timestamp() const {
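The three functions above replace `std::ranges::min` over a transformed view with a visitor-style accumulation starting from a sentinel maximum. Besides fitting the new `for_each_compaction_group` API, this makes the empty case well defined, since `std::ranges::min` on an empty range is undefined behavior. A simplified, runnable sketch of the pattern:

```cpp
#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

// Assumption: timestamp and group stand in for api::timestamp_type and
// compaction_group; max_timestamp plays the role of api::max_timestamp.
using timestamp = long long;
constexpr timestamp max_timestamp = std::numeric_limits<timestamp>::max();

struct group { timestamp min_ts; };

void for_each_group(const std::vector<group>& gs,
                    const std::function<void(const group&)>& fn) {
    for (const auto& g : gs) fn(g);
}

timestamp min_group_timestamp(const std::vector<group>& gs) {
    timestamp min_ts = max_timestamp;  // sentinel: correct even with no groups
    for_each_group(gs, [&min_ts] (const group& g) {
        min_ts = std::min(min_ts, g.min_ts);
    });
    return min_ts;
}
```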
@@ -721,7 +733,7 @@ public:
bool all_storage_groups_split() override { return true; }
future<> split_all_storage_groups(tasks::task_info tablet_split_task_info) override { return make_ready_future(); }
future<> maybe_split_compaction_group_of(size_t idx) override { return make_ready_future(); }
future<std::vector<sstables::shared_sstable>> maybe_split_sstable(const sstables::shared_sstable& sst) override {
future<std::vector<sstables::shared_sstable>> maybe_split_new_sstable(const sstables::shared_sstable& sst) override {
return make_ready_future<std::vector<sstables::shared_sstable>>(std::vector<sstables::shared_sstable>{sst});
}
dht::token_range get_token_range_after_split(const dht::token&) const noexcept override { return dht::token_range(); }
@@ -879,7 +891,7 @@ public:
bool all_storage_groups_split() override;
future<> split_all_storage_groups(tasks::task_info tablet_split_task_info) override;
future<> maybe_split_compaction_group_of(size_t idx) override;
future<std::vector<sstables::shared_sstable>> maybe_split_sstable(const sstables::shared_sstable& sst) override;
future<std::vector<sstables::shared_sstable>> maybe_split_new_sstable(const sstables::shared_sstable& sst) override;
dht::token_range get_token_range_after_split(const dht::token& token) const noexcept override {
return tablet_map().get_token_range_after_split(token);
}
@@ -933,7 +945,7 @@ void storage_group::for_each_compaction_group(std::function<void(const compactio
}
}
utils::small_vector<compaction_group_ptr, 3> storage_group::compaction_groups() {
utils::small_vector<compaction_group_ptr, 3> storage_group::compaction_groups_immediate() {
utils::small_vector<compaction_group_ptr, 3> cgs;
for_each_compaction_group([&cgs] (const compaction_group_ptr& cg) {
cgs.push_back(cg);
@@ -941,7 +953,7 @@ utils::small_vector<compaction_group_ptr, 3> storage_group::compaction_groups()
return cgs;
}
utils::small_vector<const_compaction_group_ptr, 3> storage_group::compaction_groups() const {
utils::small_vector<const_compaction_group_ptr, 3> storage_group::compaction_groups_immediate() const {
utils::small_vector<const_compaction_group_ptr, 3> cgs;
for_each_compaction_group([&cgs] (const compaction_group_ptr& cg) {
cgs.push_back(cg);
@@ -1130,7 +1142,8 @@ future<> tablet_storage_group_manager::maybe_split_compaction_group_of(size_t id
}
future<std::vector<sstables::shared_sstable>>
tablet_storage_group_manager::maybe_split_sstable(const sstables::shared_sstable& sst) {
tablet_storage_group_manager::maybe_split_new_sstable(const sstables::shared_sstable& sst) {
co_await utils::get_local_injector().inject("maybe_split_new_sstable_wait", utils::wait_for_message(120s));
if (!tablet_map().needs_split()) {
co_return std::vector<sstables::shared_sstable>{sst};
}
@@ -1138,8 +1151,7 @@ tablet_storage_group_manager::maybe_split_sstable(const sstables::shared_sstable
auto& cg = compaction_group_for_sstable(sst);
auto holder = cg.async_gate().hold();
auto& view = cg.view_for_sstable(sst);
auto lock_holder = co_await _t.get_compaction_manager().get_incremental_repair_read_lock(view, "maybe_split_sstable");
co_return co_await _t.get_compaction_manager().maybe_split_sstable(sst, view, co_await split_compaction_options());
co_return co_await _t.get_compaction_manager().maybe_split_new_sstable(sst, view, co_await split_compaction_options());
}
future<> table::maybe_split_compaction_group_of(locator::tablet_id tablet_id) {
@@ -1149,7 +1161,7 @@ future<> table::maybe_split_compaction_group_of(locator::tablet_id tablet_id) {
future<std::vector<sstables::shared_sstable>> table::maybe_split_new_sstable(const sstables::shared_sstable& sst) {
auto holder = async_gate().hold();
co_return co_await _sg_manager->maybe_split_sstable(sst);
co_return co_await _sg_manager->maybe_split_new_sstable(sst);
}
dht::token_range table::get_token_range_after_split(const dht::token& token) const noexcept {
@@ -1257,7 +1269,7 @@ future<> table::parallel_foreach_compaction_group(std::function<future<>(compact
tlogger.info("foreach_compaction_group_wait: released");
});
co_await coroutine::parallel_for_each(sg.compaction_groups(), [&] (compaction_group_ptr cg) -> future<> {
co_await coroutine::parallel_for_each(sg.compaction_groups_immediate(), [&] (compaction_group_ptr cg) -> future<> {
if (auto holder = try_hold_gate(cg->async_gate())) {
co_await action(*cg);
}
@@ -1330,7 +1342,7 @@ future<utils::chunked_vector<sstables::shared_sstable>> table::take_sstable_set_
}
future<utils::chunked_vector<sstables::entry_descriptor>>
table::clone_tablet_storage(locator::tablet_id tid) {
table::clone_tablet_storage(locator::tablet_id tid, bool leave_unsealed) {
utils::chunked_vector<sstables::entry_descriptor> ret;
auto holder = async_gate().hold();
@@ -1342,7 +1354,7 @@ table::clone_tablet_storage(locator::tablet_id tid) {
// by compaction while we are waiting for the lock.
auto deletion_guard = co_await get_sstable_list_permit();
co_await sg.make_sstable_set()->for_each_sstable_gently([&] (const sstables::shared_sstable& sst) -> future<> {
ret.push_back(co_await sst->clone(calculate_generation_for_new_table()));
ret.push_back(co_await sst->clone(calculate_generation_for_new_table(), leave_unsealed));
});
co_return ret;
}
@@ -1354,10 +1366,10 @@ void table::update_stats_for_new_sstable(const sstables::shared_sstable& sst) no
}
future<>
table::do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_sstable sst, sstables::offstrategy offstrategy,
table::do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_sstable& sst, sstables::offstrategy offstrategy,
bool trigger_compaction) {
auto permit = co_await seastar::get_units(_sstable_set_mutation_sem, 1);
co_return co_await get_row_cache().invalidate(row_cache::external_updater([&] () noexcept {
co_return co_await get_row_cache().invalidate(row_cache::external_updater([&] () mutable noexcept {
// FIXME: this is not really noexcept, but we need to provide strong exception guarantees.
// atomically load all opened sstables into column family.
if (!offstrategy) {
@@ -1369,6 +1381,8 @@ table::do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_ss
if (trigger_compaction) {
try_trigger_compaction(cg);
}
// Resetting sstable ptr to inform the caller the sstable has been loaded successfully.
sst = nullptr;
}), dht::partition_range::make({sst->get_first_decorated_key(), true}, {sst->get_last_decorated_key(), true}), [sst, schema = _schema] (const dht::decorated_key& key) {
return sst->filter_has_key(sstables::key::from_partition_key(*schema, key.key()));
});
@@ -1376,12 +1390,10 @@ table::do_add_sstable_and_update_cache(compaction_group& cg, sstables::shared_ss
future<>
table::do_add_sstable_and_update_cache(sstables::shared_sstable new_sst, sstables::offstrategy offstrategy, bool trigger_compaction) {
for (auto sst : co_await maybe_split_new_sstable(new_sst)) {
auto& cg = compaction_group_for_sstable(sst);
// Hold gate to make sure the compaction group is alive.
auto holder = cg.async_gate().hold();
co_await do_add_sstable_and_update_cache(cg, std::move(sst), offstrategy, trigger_compaction);
}
auto& cg = compaction_group_for_sstable(new_sst);
// Hold gate to make sure the compaction group is alive.
auto holder = cg.async_gate().hold();
co_await do_add_sstable_and_update_cache(cg, new_sst, offstrategy, trigger_compaction);
}
future<>
@@ -1399,6 +1411,85 @@ table::add_sstables_and_update_cache(const std::vector<sstables::shared_sstable>
trigger_compaction();
}
future<std::vector<sstables::shared_sstable>>
table::add_new_sstable_and_update_cache(sstables::shared_sstable new_sst,
std::function<future<>(sstables::shared_sstable)> on_add,
sstables::offstrategy offstrategy) {
std::vector<sstables::shared_sstable> ret, ssts;
std::exception_ptr ex;
try {
bool trigger_compaction = offstrategy == sstables::offstrategy::no;
auto& cg = compaction_group_for_sstable(new_sst);
// This prevents compaction group from being considered empty until the holder is released.
// Helpful for tablet split, where split is acked for a table when all pre-split groups are empty.
auto sstable_add_holder = cg.sstable_add_gate().hold();
ret = ssts = co_await maybe_split_new_sstable(new_sst);
// on successful split, input sstable is unlinked.
new_sst = nullptr;
for (auto& sst : ssts) {
auto& cg = compaction_group_for_sstable(sst);
// Hold gate to make sure compaction group is alive.
auto holder = cg.async_gate().hold();
co_await on_add(sst);
// If do_add_sstable_and_update_cache() throws after sstable has been loaded, the pointer
// sst passed by reference will be set to nullptr, so it won't be unlinked in the exception
// handler below.
co_await do_add_sstable_and_update_cache(cg, sst, offstrategy, trigger_compaction);
sst = nullptr;
}
} catch (...) {
ex = std::current_exception();
}
if (ex) {
// on failed split, input sstable is unlinked here.
if (new_sst) {
tlogger.error("Failed to load SSTable {} of origin {} due to {}, it will be unlinked...", new_sst->get_filename(), new_sst->get_origin(), ex);
co_await new_sst->unlink();
}
// on failure after successful split, sstables not attached yet will be unlinked
co_await coroutine::parallel_for_each(ssts, [&ex] (sstables::shared_sstable sst) -> future<> {
if (sst) {
tlogger.error("Failed to load SSTable {} of origin {} due to {}, it will be unlinked...", sst->get_filename(), sst->get_origin(), ex);
co_await sst->unlink();
}
});
co_await coroutine::return_exception_ptr(std::move(ex));
}
co_return std::move(ret);
}
future<std::vector<sstables::shared_sstable>>
table::add_new_sstables_and_update_cache(std::vector<sstables::shared_sstable> new_ssts,
std::function<future<>(sstables::shared_sstable)> on_add) {
std::exception_ptr ex;
std::vector<sstables::shared_sstable> ret;
// We rely on add_new_sstable_and_update_cache() to unlink the sstable fed into it,
// so the exception handling below will only have to unlink sstables not processed yet.
try {
for (auto& sst: new_ssts) {
auto ssts = co_await add_new_sstable_and_update_cache(std::exchange(sst, nullptr), on_add);
std::ranges::move(ssts, std::back_inserter(ret));
}
} catch (...) {
ex = std::current_exception();
}
if (ex) {
co_await coroutine::parallel_for_each(new_ssts, [&ex] (sstables::shared_sstable sst) -> future<> {
if (sst) {
tlogger.error("Failed to load SSTable {} of origin {} due to {}, it will be unlinked...", sst->get_filename(), sst->get_origin(), ex);
co_await sst->unlink();
}
});
co_await coroutine::return_exception_ptr(std::move(ex));
}
co_return std::move(ret);
}
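Both functions above use the same ownership idiom: each sstable pointer is nulled out the moment responsibility for it transfers (split consumed it, or it was attached to the set), so the error path only unlinks sstables that were never attached. A simplified sketch of that idiom without coroutines, using a hypothetical `fake_sst` in place of `sstables::shared_sstable`:

```cpp
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct fake_sst {
    std::string name;
    bool unlinked = false;  // stand-in flag for sst->unlink()
};
using sst_ptr = std::shared_ptr<fake_sst>;

// Attach each sstable; on failure, unlink only the ones not yet attached.
// fail_on_second simulates an attach error partway through.
std::vector<sst_ptr> add_all(std::vector<sst_ptr> ssts, bool fail_on_second) {
    std::vector<sst_ptr> ret = ssts;  // keep the full list for the caller
    try {
        std::size_t i = 0;
        for (auto& sst : ssts) {
            if (fail_on_second && i++ == 1) {
                throw std::runtime_error("attach failed");
            }
            sst = nullptr;  // attached: no longer our job to clean up
        }
    } catch (...) {
        for (auto& sst : ssts) {
            if (sst) {
                sst->unlinked = true;  // would be co_await sst->unlink()
            }
        }
        throw;
    }
    return ret;
}
```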
future<>
table::update_cache(compaction_group& cg, lw_shared_ptr<memtable> m, std::vector<sstables::shared_sstable> ssts) {
auto permit = co_await seastar::get_units(_sstable_set_mutation_sem, 1);
@@ -1892,7 +1983,7 @@ sstables::file_size_stats compaction_group::live_disk_space_used_full_stats() co
}
uint64_t storage_group::live_disk_space_used() const {
auto cgs = const_cast<storage_group&>(*this).compaction_groups();
auto cgs = const_cast<storage_group&>(*this).compaction_groups_immediate();
return std::ranges::fold_left(cgs | std::views::transform(std::mem_fn(&compaction_group::live_disk_space_used)), uint64_t(0), std::plus{});
}
@@ -2019,10 +2110,9 @@ future<std::vector<compaction::compaction_group_view*>> table::get_compaction_gr
auto sgs = storage_groups_for_token_range(range);
for (auto& sg : sgs) {
co_await coroutine::maybe_yield();
auto cgs = sg->compaction_groups();
for (auto& cg : cgs) {
sg->for_each_compaction_group([&ret] (const compaction_group_ptr& cg) {
ret.push_back(&cg->view_for_unrepaired_data());
}
});
}
co_return ret;
}
@@ -2049,7 +2139,7 @@ future<compaction_reenablers_and_lock_holders> table::get_compaction_reenablers_
future<> table::clear_being_repaired_for_range(dht::token_range range) {
auto sgs = storage_groups_for_token_range(range);
for (auto& sg : sgs) {
auto cgs = sg->compaction_groups();
auto cgs = sg->compaction_groups_immediate();
for (auto& cg : cgs) {
auto sstables = cg->all_sstables();
co_await coroutine::maybe_yield();
@@ -2491,9 +2581,11 @@ future<> table::drop_quarantined_sstables() {
}
bool storage_group::no_compacted_sstable_undeleted() const {
return std::ranges::all_of(compaction_groups(), [] (const_compaction_group_ptr& cg) {
return cg->compacted_undeleted_sstables().empty();
auto ret = true;
for_each_compaction_group([&ret] (const compaction_group_ptr& cg) {
ret &= cg->compacted_undeleted_sstables().empty();
});
return ret;
}
// Gets the list of all sstables in the column family, including ones that are
@@ -2612,8 +2704,8 @@ public:
sstables::sstables_manager& get_sstables_manager() noexcept override {
return _t.get_sstables_manager();
}
sstables::shared_sstable make_sstable() const override {
return _t.make_sstable();
sstables::shared_sstable make_sstable(sstables::sstable_state state) const override {
return _t.make_sstable(state);
}
sstables::sstable_writer_config configure_writer(sstring origin) const override {
auto cfg = _t.get_sstables_manager().configure_writer(std::move(origin));
@@ -2731,6 +2823,7 @@ future<> compaction_group::stop(sstring reason) noexcept {
auto flush_future = co_await seastar::coroutine::as_future(flush());
co_await _flush_gate.close();
co_await _sstable_add_gate.close();
// FIXME: indentation
_compaction_disabler_for_views.clear();
co_await utils::get_local_injector().inject("compaction_group_stop_wait", utils::wait_for_message(60s));
@@ -2744,7 +2837,7 @@ future<> compaction_group::stop(sstring reason) noexcept {
}
bool compaction_group::empty() const noexcept {
return _memtables->empty() && live_sstable_count() == 0;
return _memtables->empty() && live_sstable_count() == 0 && _sstable_add_gate.get_count() == 0;
}
const schema_ptr& compaction_group::schema() const {
@@ -2757,9 +2850,9 @@ void compaction_group::clear_sstables() {
}
void storage_group::clear_sstables() {
for (auto cg : compaction_groups()) {
for_each_compaction_group([] (const compaction_group_ptr& cg) {
cg->clear_sstables();
}
});
}
table::table(schema_ptr schema, config config, lw_shared_ptr<const storage_options> sopts, compaction::compaction_manager& compaction_manager,
@@ -3086,7 +3179,7 @@ future<> table::update_repaired_at_for_merge() {
for (auto& x : sgs) {
auto sg = x.second;
if (sg) {
auto cgs = sg->compaction_groups();
auto cgs = sg->compaction_groups_immediate();
for (auto& cg : cgs) {
auto cre = co_await cg->get_compaction_manager().stop_and_disable_compaction("update_repaired_at_for_merge", cg->view_for_unrepaired_data());
co_await cg->update_repaired_at_for_merge();
@@ -3200,7 +3293,7 @@ db::replay_position table::highest_flushed_replay_position() const {
}
struct manifest_json : public json::json_base {
json::json_chunked_list<sstring> files;
json::json_chunked_list<std::string_view> files;
manifest_json() {
register_params();
@@ -3219,22 +3312,25 @@ private:
}
};
future<>
table::seal_snapshot(sstring jsondir, std::vector<snapshot_file_set> file_sets) {
class snapshot_writer {
public:
virtual future<> init() = 0;
virtual future<> sync() = 0;
virtual future<output_stream<char>> stream_for(sstring component) = 0;
virtual ~snapshot_writer() = default;
};
using snapshot_file_set = foreign_ptr<std::unique_ptr<std::unordered_set<sstring>>>;
static future<> write_manifest(snapshot_writer& writer, std::vector<snapshot_file_set> file_sets) {
manifest_json manifest;
for (const auto& fsp : file_sets) {
for (auto& rf : *fsp) {
manifest.files.push(std::move(rf));
manifest.files.push(std::string_view(rf));
}
}
auto streamer = json::stream_object(std::move(manifest));
auto jsonfile = jsondir + "/manifest.json";
tlogger.debug("Storing manifest {}", jsonfile);
co_await io_check([jsondir] { return recursive_touch_directory(jsondir); });
auto f = co_await open_checked_file_dma(general_disk_error_handler, jsonfile, open_flags::wo | open_flags::create | open_flags::truncate);
auto out = co_await make_file_output_stream(std::move(f));
auto out = co_await writer.stream_for("manifest.json");
std::exception_ptr ex;
try {
co_await streamer(std::move(out));
@@ -3245,19 +3341,27 @@ table::seal_snapshot(sstring jsondir, std::vector<snapshot_file_set> file_sets)
if (ex) {
co_await coroutine::return_exception_ptr(std::move(ex));
}
co_await io_check(sync_directory, std::move(jsondir));
}
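The `snapshot_writer` interface introduced above decouples manifest and schema writing from where the bytes land: callers ask for a named output stream, and the backend decides whether that is a local directory, S3, or something else. A minimal synchronous sketch of the shape, with an in-memory backend (hypothetical, no Seastar streams) that is convenient for testing:

```cpp
#include <map>
#include <ostream>
#include <sstream>
#include <string>

// Simplified interface: real code returns future<output_stream<char>>.
class snapshot_writer {
public:
    virtual std::ostream& stream_for(const std::string& component) = 0;
    virtual ~snapshot_writer() = default;
};

// Test backend: each named component goes into an in-memory buffer.
class memory_snapshot_writer : public snapshot_writer {
    std::map<std::string, std::ostringstream> _files;
public:
    std::ostream& stream_for(const std::string& component) override {
        return _files[component];
    }
    std::string contents(const std::string& component) const {
        auto it = _files.find(component);
        return it == _files.end() ? "" : it->second.str();
    }
};

// Writer-agnostic producer, analogous to write_manifest() above.
void write_manifest(snapshot_writer& w, const std::string& json) {
    w.stream_for("manifest.json") << json;
}
```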
future<> table::write_schema_as_cql(const global_table_ptr& table_shards, sstring dir) const {
auto schema_desc = schema()->describe(
replica::make_schema_describe_helper(table_shards),
cql3::describe_option::STMTS);
/*!
* \brief write the schema to a 'schema.cql' file at the given directory.
*
* When doing a snapshot, the snapshot directory contains a 'schema.cql' file
* with a CQL command that can be used to generate the schema.
* The content is similar to the result of the CQL DESCRIBE command of the table.
*
* When a schema has indexes, local indexes or views, those indexes and views
* are represented by their own schemas.
* In those cases, the method would write the relevant information for each of the schemas:
*
* The schema of the base table would output a file with the CREATE TABLE command
* and the schema of the view that is used for the index would output a file with the
* CREATE INDEX command.
* The same is true for local index and MATERIALIZED VIEW.
*/
static future<> write_schema_as_cql(snapshot_writer& writer, cql3::description schema_desc) {
auto schema_description = std::move(*schema_desc.create_statement);
auto schema_file_name = dir + "/schema.cql";
auto f = co_await open_checked_file_dma(general_disk_error_handler, schema_file_name, open_flags::wo | open_flags::create | open_flags::truncate);
auto out = co_await make_file_output_stream(std::move(f));
auto out = co_await writer.stream_for("schema.cql");
std::exception_ptr ex;
auto view = managed_bytes_view(schema_description.as_managed_bytes());
@@ -3278,73 +3382,87 @@ future<> table::write_schema_as_cql(const global_table_ptr& table_shards, sstrin
}
}
// Runs the orchestration code on an arbitrary shard to balance the load.
future<> table::snapshot_on_all_shards(sharded<database>& sharded_db, const global_table_ptr& table_shards, sstring name) {
auto* so = std::get_if<storage_options::local>(&table_shards->get_storage_options().value);
if (so == nullptr) {
throw std::runtime_error("Snapshotting non-local tables is not implemented");
class local_snapshot_writer : public snapshot_writer {
std::filesystem::path _dir;
public:
local_snapshot_writer(std::filesystem::path dir, sstring name)
: _dir(dir / sstables::snapshots_dir / name)
{}
future<> init() override {
co_await io_check([this] { return recursive_touch_directory(_dir.native()); });
}
if (so->dir.empty()) { // virtual tables don't have initialized local storage
future<> sync() override {
co_await io_check([this] { return sync_directory(_dir.native()); });
}
future<output_stream<char>> stream_for(sstring component) override {
auto file_name = (_dir / component).native();
auto f = co_await open_checked_file_dma(general_disk_error_handler, file_name, open_flags::wo | open_flags::create | open_flags::truncate);
co_return co_await make_file_output_stream(std::move(f));
}
};
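The hunk above factors snapshot I/O behind a `snapshot_writer` interface (`init` / `sync` / `stream_for`) so the orchestration code no longer hard-codes a local directory. A minimal Python sketch of the same contract, with a local-directory implementation (names and layout here are illustrative, not taken from the Scylla codebase):

```python
import os
import tempfile

class SnapshotWriter:
    """Abstract writer: orchestration code only ever sees this interface."""
    def init(self):
        raise NotImplementedError
    def sync(self):
        raise NotImplementedError
    def stream_for(self, component):
        raise NotImplementedError

class LocalSnapshotWriter(SnapshotWriter):
    """Analogue of local_snapshot_writer: snapshot lives under <dir>/snapshots/<name>."""
    def __init__(self, base_dir, name):
        self._dir = os.path.join(base_dir, "snapshots", name)
    def init(self):
        os.makedirs(self._dir, exist_ok=True)
    def sync(self):
        pass  # a real implementation would fsync the directory here
    def stream_for(self, component):
        return open(os.path.join(self._dir, component), "w")

base = tempfile.mkdtemp()
writer = LocalSnapshotWriter(base, "snap1")
writer.init()
with writer.stream_for("schema.cql") as out:
    out.write("CREATE TABLE ks.t (pk int PRIMARY KEY);")
writer.sync()
with open(os.path.join(base, "snapshots", "snap1", "schema.cql")) as f:
    content = f.read()
```

A non-local backend (e.g. the S3 case, which the diff leaves unimplemented) would only need to provide another subclass; the orchestration loop stays unchanged.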
// Runs the orchestration code on an arbitrary shard to balance the load.
future<> database::snapshot_table_on_all_shards(sharded<database>& sharded_db, const global_table_ptr& table_shards, sstring name) {
auto writer = std::visit(overloaded_functor{
[&name] (const data_dictionary::storage_options::local& loc) -> std::unique_ptr<snapshot_writer> {
if (loc.dir.empty()) {
// virtual tables don't have initialized local storage
return nullptr;
}
return std::make_unique<local_snapshot_writer>(loc.dir, name);
},
[] (const data_dictionary::storage_options::s3&) -> std::unique_ptr<snapshot_writer> {
throw std::runtime_error("Snapshotting non-local tables is not implemented");
}
}, table_shards->get_storage_options().value);
if (!writer) {
co_return;
}
auto jsondir = (so->dir / sstables::snapshots_dir / name).native();
auto orchestrator = std::hash<sstring>()(jsondir) % smp::count;
auto orchestrator = std::hash<sstring>()(name) % smp::count;
co_await smp::submit_to(orchestrator, [&] () -> future<> {
auto& t = *table_shards;
auto s = t.schema();
tlogger.debug("Taking snapshot of {}.{}: directory={}", s->ks_name(), s->cf_name(), jsondir);
tlogger.debug("Taking snapshot of {}.{}: name={}", s->ks_name(), s->cf_name(), name);
std::vector<table::snapshot_file_set> file_sets;
file_sets.reserve(smp::count);
std::vector<snapshot_file_set> file_sets(smp::count);
co_await io_check([&jsondir] { return recursive_touch_directory(jsondir); });
co_await coroutine::parallel_for_each(smp::all_cpus(), [&] (unsigned shard) -> future<> {
file_sets.emplace_back(co_await smp::submit_to(shard, [&] {
return table_shards->take_snapshot(jsondir);
}));
co_await writer->init();
co_await smp::invoke_on_all([&] -> future<> {
auto& t = *table_shards;
auto [tables, permit] = co_await t.snapshot_sstables();
auto table_names = co_await t.get_sstables_manager().take_snapshot(std::move(tables), name);
file_sets[this_shard_id()] = make_foreign(std::make_unique<std::unordered_set<sstring>>(std::move(table_names)));
});
co_await io_check(sync_directory, jsondir);
co_await writer->sync();
co_await t.finalize_snapshot(table_shards, std::move(jsondir), std::move(file_sets));
std::exception_ptr ex;
tlogger.debug("snapshot {}: writing schema.cql", name);
auto schema_desc = s->describe(replica::make_schema_describe_helper(table_shards), cql3::describe_option::STMTS);
co_await write_schema_as_cql(*writer, std::move(schema_desc)).handle_exception([&] (std::exception_ptr ptr) {
tlogger.error("Failed writing schema file in snapshot in {} with exception {}", name, ptr);
ex = std::move(ptr);
});
tlogger.debug("snapshot {}: seal_snapshot", name);
co_await write_manifest(*writer, std::move(file_sets)).handle_exception([&] (std::exception_ptr ptr) {
tlogger.error("Failed to seal snapshot in {}: {}.", name, ptr);
ex = std::move(ptr);
});
if (ex) {
co_await coroutine::return_exception_ptr(std::move(ex));
}
co_await writer->sync();
});
}
future<table::snapshot_file_set> table::take_snapshot(sstring jsondir) {
tlogger.trace("take_snapshot {}", jsondir);
auto sstable_deletion_guard = co_await get_sstable_list_permit();
future<std::pair<std::vector<sstables::shared_sstable>, table::sstable_list_permit>> table::snapshot_sstables() {
auto permit = co_await get_sstable_list_permit();
auto tables = *_sstables->all() | std::ranges::to<std::vector<sstables::shared_sstable>>();
auto table_names = std::make_unique<std::unordered_set<sstring>>();
co_await _sstables_manager.dir_semaphore().parallel_for_each(tables, [&jsondir, &table_names] (sstables::shared_sstable sstable) {
table_names->insert(sstable->component_basename(sstables::component_type::Data));
return io_check([sstable, &dir = jsondir] {
return sstable->snapshot(dir);
});
});
co_return make_foreign(std::move(table_names));
}
future<> table::finalize_snapshot(const global_table_ptr& table_shards, sstring jsondir, std::vector<snapshot_file_set> file_sets) {
std::exception_ptr ex;
tlogger.debug("snapshot {}: writing schema.cql", jsondir);
co_await write_schema_as_cql(table_shards, jsondir).handle_exception([&] (std::exception_ptr ptr) {
tlogger.error("Failed writing schema file in snapshot in {} with exception {}", jsondir, ptr);
ex = std::move(ptr);
});
tlogger.debug("snapshot {}: seal_snapshot", jsondir);
co_await seal_snapshot(jsondir, std::move(file_sets)).handle_exception([&] (std::exception_ptr ptr) {
tlogger.error("Failed to seal snapshot in {}: {}.", jsondir, ptr);
ex = std::move(ptr);
});
if (ex) {
co_await coroutine::return_exception_ptr(std::move(ex));
}
co_return std::make_pair(std::move(tables), std::move(permit));
}
future<bool> table::snapshot_exists(sstring tag) {
@@ -3356,6 +3474,7 @@ future<bool> table::snapshot_exists(sstring tag) {
sstring jsondir = (so->dir / sstables::snapshots_dir / tag).native();
bool exists = false;
try {
future<stat_data> (&file_stat)(std::string_view, follow_symlink) noexcept = seastar::file_stat;
auto sd = co_await io_check(file_stat, jsondir, follow_symlink::no);
if (sd.type != directory_entry_type::directory) {
throw std::error_code(ENOTDIR, std::system_category());
@@ -3411,6 +3530,7 @@ future<table::snapshot_details> table::get_snapshot_details(fs::path snapshot_di
const auto& name = de->name;
// FIXME: optimize stat calls by keeping the base directory open and use statat instead, here and below.
// See https://github.com/scylladb/seastar/pull/3163
future<stat_data> (&file_stat)(std::string_view, follow_symlink) noexcept = seastar::file_stat;
auto sd = co_await io_check(file_stat, (snapshot_dir / name).native(), follow_symlink::no);
auto size = sd.allocated_size;
@@ -3425,7 +3545,7 @@ future<table::snapshot_details> table::get_snapshot_details(fs::path snapshot_di
details.live += size;
continue;
}
// If the number of linkes is greater than 1, it is still possible that the file is linked to another snapshot
// If the number of links is greater than 1, it is still possible that the file is linked to another snapshot
// So check the datadir for the file too.
} else {
continue;
@@ -3469,7 +3589,7 @@ future<> compaction_group::flush() noexcept {
}
future<> storage_group::flush() noexcept {
for (auto& cg : compaction_groups()) {
for (auto& cg : compaction_groups_immediate()) {
co_await cg->flush();
}
}
@@ -3487,7 +3607,11 @@ size_t compaction_group::memtable_count() const noexcept {
}
size_t storage_group::memtable_count() const {
return std::ranges::fold_left(compaction_groups() | std::views::transform(std::mem_fn(&compaction_group::memtable_count)), size_t(0), std::plus{});
size_t count = 0;
for_each_compaction_group([&count] (const compaction_group_ptr& cg) {
count += cg->memtable_count();
});
return count;
}
future<> table::flush(std::optional<db::replay_position> pos) {
@@ -3505,7 +3629,7 @@ future<> table::flush(std::optional<db::replay_position> pos) {
}
bool storage_group::can_flush() const {
return std::ranges::any_of(compaction_groups(), std::mem_fn(&compaction_group::can_flush));
return std::ranges::any_of(compaction_groups_immediate(), std::mem_fn(&compaction_group::can_flush));
}
bool table::can_flush() const {
@@ -3536,9 +3660,11 @@ bool storage_group::compaction_disabled() const {
// Compaction group that has been stopped will be excluded, since the group will not be available for a caller
// to disable compaction explicitly on it, e.g. on truncate, and the caller might want to perform a check
// that compaction was disabled on all groups. Stopping a group is equivalent to disabling compaction on it.
return std::ranges::all_of(compaction_groups()
| std::views::filter(std::not_fn(&compaction_group::stopped)), [] (const_compaction_group_ptr& cg) {
return cg->compaction_disabled(); });
bool all_disabled = true;
for_each_compaction_group([&all_disabled] (const compaction_group_ptr& cg) {
all_disabled &= cg->stopped() || cg->compaction_disabled();
});
return all_disabled;
}
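The hunks above replace ranges-based folds over `compaction_groups()` with callback-style `for_each_compaction_group` traversal, letting the storage group control which groups are visited. A toy Python sketch of the two aggregations (hypothetical classes, not Scylla code):

```python
class CompactionGroup:
    def __init__(self, memtables, stopped=False, disabled=False):
        self._memtables = memtables
        self._stopped = stopped
        self._disabled = disabled
    def memtable_count(self):
        return len(self._memtables)
    def stopped(self):
        return self._stopped
    def compaction_disabled(self):
        return self._disabled

class StorageGroup:
    def __init__(self, groups):
        self._groups = groups
    def for_each_compaction_group(self, fn):
        # Callback-style iteration: the storage group decides which
        # groups are visited, instead of exposing a raw range.
        for cg in self._groups:
            fn(cg)
    def memtable_count(self):
        count = 0
        def add(cg):
            nonlocal count
            count += cg.memtable_count()
        self.for_each_compaction_group(add)
        return count
    def compaction_disabled(self):
        # Stopped groups count as disabled, matching the C++ comment above.
        all_disabled = True
        def check(cg):
            nonlocal all_disabled
            all_disabled &= cg.stopped() or cg.compaction_disabled()
        self.for_each_compaction_group(check)
        return all_disabled

sg = StorageGroup([CompactionGroup(["m1", "m2"]), CompactionGroup(["m3"], stopped=True)])
count = sg.memtable_count()            # counts memtables across all groups
disabled = sg.compaction_disabled()    # first group is active, so False
```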
// NOTE: does not need to be futurized, but might eventually, depending on
@@ -4323,11 +4449,11 @@ std::vector<mutation_source> table::select_memtables_as_mutation_sources(dht::to
auto& sg = storage_group_for_token(token);
std::vector<mutation_source> mss;
mss.reserve(sg.memtable_count());
for (auto& cg : sg.compaction_groups()) {
sg.for_each_compaction_group([&mss] (const compaction_group_ptr &cg) {
for (auto& mt : *cg->memtables()) {
mss.emplace_back(mt->as_data_source());
}
}
});
return mss;
}
@@ -4487,7 +4613,7 @@ future<> compaction_group::cleanup() {
}
future<> table::clear_inactive_reads_for_tablet(database& db, storage_group& sg) {
for (auto& cg_ptr : sg.compaction_groups()) {
for (auto& cg_ptr : sg.compaction_groups_immediate()) {
co_await db.clear_inactive_reads_for_tablet(_schema->id(), cg_ptr->token_range());
}
}
@@ -4528,13 +4654,13 @@ future<> table::stop_compaction_groups(storage_group& sg) {
}
future<> table::flush_compaction_groups(storage_group& sg) {
for (auto& cg_ptr : sg.compaction_groups()) {
for (auto& cg_ptr : sg.compaction_groups_immediate()) {
co_await cg_ptr->flush();
}
}
future<> table::cleanup_compaction_groups(database& db, db::system_keyspace& sys_ks, locator::tablet_id tid, storage_group& sg) {
for (auto& cg_ptr : sg.compaction_groups()) {
for (auto& cg_ptr : sg.compaction_groups_immediate()) {
co_await cg_ptr->cleanup();
// FIXME: at this point _highest_rp might be greater than the replay_position of the last cleaned mutation,
// and can cover some mutations which weren't cleaned, causing them to be lost during replay.


@@ -1,6 +1,13 @@
find_program(CARGO cargo
REQUIRED)
# Set up RUSTC_WRAPPER for sccache support if configured
if(Scylla_RUSTC_WRAPPER)
set(RUSTC_WRAPPER_ENV "RUSTC_WRAPPER=${Scylla_RUSTC_WRAPPER}")
else()
set(RUSTC_WRAPPER_ENV "")
endif()
function(add_rust_library name)
# used for profiles defined in Cargo.toml
if(CMAKE_CONFIGURATION_TYPES)
@@ -16,7 +23,7 @@ function(add_rust_library name)
set(library ${target_dir}/lib${name}.a)
add_custom_command(
OUTPUT ${library}
COMMAND ${CMAKE_COMMAND} -E env CARGO_BUILD_DEP_INFO_BASEDIR=. ${CARGO} build --locked --target-dir=${target_dir} --profile=${profile}
COMMAND ${CMAKE_COMMAND} -E env CARGO_BUILD_DEP_INFO_BASEDIR=. ${RUSTC_WRAPPER_ENV} ${CARGO} build --locked --target-dir=${target_dir} --profile=${profile}
COMMAND ${CMAKE_COMMAND} -E copy ${target_dir}/${profile}/lib${name}.a ${library}
DEPENDS Cargo.lock
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}


@@ -390,9 +390,11 @@ dark_green = (195, 215, 195)
light_red = (255, 200, 200)
light_green = (200, 255, 200)
light_gray = (240, 240, 240)
scylla_blue = (87, 209, 229)
tablet_colors = {
(Tablet.STATE_NORMAL, None): GRAY,
(Tablet.STATE_NORMAL, 'repair'): scylla_blue,
(Tablet.STATE_JOINING, 'allow_write_both_read_old'): dark_green,
(Tablet.STATE_LEAVING, 'allow_write_both_read_old'): dark_red,
(Tablet.STATE_JOINING, 'write_both_read_old'): dark_green,
@@ -532,6 +534,8 @@ def update_from_cql(initial=False):
state = (Tablet.STATE_JOINING, tablet.stage)
elif replica in leaving:
state = (Tablet.STATE_LEAVING, tablet.stage)
elif tablet.stage == 'repair':
state = (Tablet.STATE_NORMAL, tablet.stage)
else:
state = (Tablet.STATE_NORMAL, None)


@@ -4109,6 +4109,16 @@ class scylla_fiber(gdb.Command):
return res
return None
# Coroutines need special handling as they allocate the future object on their frame.
if name.strip().endswith('[clone .resume]'):
self._maybe_log(f"Current task is a coroutine, trying to find the promise in the coroutine frame: 0x{ptr_meta.ptr:x}+{ptr_meta.size}\n", verbose)
# Skip the first two pointers, these are the coroutine resume and destroy function pointers.
for maybe_tptr in range(ptr_meta.ptr + 2 * _vptr_type().sizeof, ptr_meta.ptr + ptr_meta.size, _vptr_type().sizeof):
res = self._probe_pointer(maybe_tptr, scanned_region_size, using_seastar_allocator, verbose)
if res is not None:
return res
return None
if name.startswith('vtable for seastar::internal::when_all_state'):
when_all_state_base_ptr_type = gdb.lookup_type('seastar::internal::when_all_state_base').pointer()
when_all_state_base = gdb.Value(int(ptr_meta.ptr)).reinterpret_cast(when_all_state_base_ptr_type)
@@ -4195,6 +4205,9 @@ class scylla_fiber(gdb.Command):
parser.add_argument("--force-fallback-mode", action="store_true", default=False,
help="Force fallback mode to be used, that is, scan a fixed-size region of memory"
" (configurable via --scanned-region-size), instead of relying on `scylla ptr` for determining the size of the task objects.")
parser.add_argument("--direction", action="store", choices=['forward', 'backward', 'both'], default='both',
help="Direction in which to walk the continuation chain. 'forward' walks futures waiting on the given task,"
" 'backward' walks futures the given task is waiting on, 'both' does both.")
parser.add_argument("task", action="store", help="An expression that evaluates to a valid `seastar::task*` value. Cannot contain white-space.")
try:
@@ -4224,14 +4237,20 @@ class scylla_fiber(gdb.Command):
gdb.write("Provided pointer 0x{:016x} is not an object managed by seastar or not a task pointer\n".format(initial_task_ptr))
return
backwards_fiber = self._walk(self._walk_backward, this_task[0], this_task[2], args.max_depth, args.scanned_region_size, using_seastar_allocator, args.verbose)
if (args.direction == 'backward' or args.direction == 'both'):
backwards_fiber = self._walk(self._walk_backward, this_task[0], this_task[2], args.max_depth, args.scanned_region_size, using_seastar_allocator, args.verbose)
else:
backwards_fiber = []
for i, task_info in enumerate(reversed(backwards_fiber)):
format_task_line(i - len(backwards_fiber), task_info)
format_task_line(0, this_task)
forward_fiber = self._walk(self._walk_forward, this_task[0], this_task[2], args.max_depth, args.scanned_region_size, using_seastar_allocator, args.verbose)
if (args.direction == 'forward' or args.direction == 'both'):
forward_fiber = self._walk(self._walk_forward, this_task[0], this_task[2], args.max_depth, args.scanned_region_size, using_seastar_allocator, args.verbose)
else:
forward_fiber = []
for i, task_info in enumerate(forward_fiber):
format_task_line(i + 1, task_info)
@@ -5104,10 +5123,15 @@ class scylla_small_objects(gdb.Command):
span_end = int(span_start + span.size() * self._page_size)
# span's free list
span_next_free = span.page['freelist']
while span_next_free:
self._free_in_span.add(int(span_next_free))
span_next_free = span_next_free['next']
try:
span_next_free = span.page['freelist']
while span_next_free:
self._free_in_span.add(int(span_next_free))
span_next_free = span_next_free['next']
except gdb.error:
# This loop sometimes steps on "Cannot access memory at address", causing CI instability.
# Catch the exception and break the freelist traversal loop gracefully.
gdb.write(f"Warning: error traversing freelist of span [0x{span_start:x}, 0x{span_end:x}), some of the listed objects in this span may be free objects.\n")
return span_start, span_end
@@ -5850,6 +5874,18 @@ class scylla_read_stats(gdb.Command):
def __init__(self):
gdb.Command.__init__(self, 'scylla read-stats', gdb.COMMAND_USER, gdb.COMPLETE_COMMAND)
@staticmethod
def foreach_permit(semaphore, fn):
"""Mirror of reader_concurrency_semaphore::foreach_permit()"""
for permit_list in (
semaphore['_permit_list'],
semaphore['_wait_list']['_admission_queue'],
semaphore['_wait_list']['_memory_queue'],
semaphore['_ready_list'],
semaphore['_inactive_reads']):
for permit in intrusive_list(permit_list):
fn(permit)
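The new `foreach_permit` helper visits every list a permit can live on, mirroring `reader_concurrency_semaphore::foreach_permit()` on the C++ side. Stripped of the gdb value plumbing, the visitor shape is just (plain-Python sketch with made-up field names):

```python
class Semaphore:
    """Stand-in for the gdb.Value of a reader_concurrency_semaphore."""
    def __init__(self):
        self.permit_list = []
        self.admission_queue = []
        self.memory_queue = []
        self.ready_list = []
        self.inactive_reads = []

def foreach_permit(semaphore, fn):
    # A permit is always on exactly one of these lists; visiting all
    # of them enumerates every permit the semaphore knows about.
    for permit_list in (semaphore.permit_list,
                        semaphore.admission_queue,
                        semaphore.memory_queue,
                        semaphore.ready_list,
                        semaphore.inactive_reads):
        for permit in permit_list:
            fn(permit)

sem = Semaphore()
sem.permit_list = ["p1", "p2"]
sem.ready_list = ["p3"]
visited = []
foreach_permit(sem, visited.append)
```

This is why the diff turns the old inline loop over a single `permit_list` into a `summarize_permit` callback passed to `foreach_permit`: the same summary logic now runs over waiters, ready reads, and inactive reads too.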
@staticmethod
def dump_reads_from_semaphore(semaphore):
try:
@@ -5864,7 +5900,7 @@ class scylla_read_stats(gdb.Command):
permit_summaries = defaultdict(permit_stats)
total = permit_stats()
for permit in intrusive_list(permit_list):
def summarize_permit(permit):
schema_name = "*.*"
schema = permit['_schema']
try:
@@ -5884,6 +5920,8 @@ class scylla_read_stats(gdb.Command):
permit_summaries[(schema_name, description, state)].add(summary)
total.add(summary)
scylla_read_stats.foreach_permit(semaphore, summarize_permit)
if not permit_summaries:
return
@@ -5893,7 +5931,9 @@ class scylla_read_stats(gdb.Command):
inactive_read_count = len(intrusive_list(semaphore['_inactive_reads']))
waiters = int(semaphore["_stats"]["waiters"])
gdb.write("Semaphore {} with: {}/{} count and {}/{} memory resources, queued: {}, inactive={}\n".format(
gdb.write("Semaphore ({}*) 0x{:x} {} with: {}/{} count and {}/{} memory resources, queued: {}, inactive={}\n".format(
semaphore.type.name,
int(semaphore.address),
semaphore_name,
initial_count - int(semaphore['_resources']['count']), initial_count,
initial_memory - int(semaphore['_resources']['memory']), initial_memory,

Submodule seastar updated: 7ec14e836a...4dcd4df5e7


@@ -224,7 +224,13 @@ future<> service::client_state::has_access(const sstring& ks, auth::command_desc
ks + " can be granted only SELECT or DESCRIBE permissions to a non-superuser.");
}
if (cmd.resource.kind() == auth::resource_kind::data && cmd.permission == auth::permission::SELECT && is_vector_indexed.has_value() && is_vector_indexed.value()) {
static const std::unordered_set<auth::resource> vector_search_system_resources = {
auth::make_data_resource(db::system_keyspace::NAME, db::system_keyspace::GROUP0_HISTORY),
auth::make_data_resource(db::system_keyspace::NAME, db::system_keyspace::VERSIONS),
};
if ((cmd.resource.kind() == auth::resource_kind::data && cmd.permission == auth::permission::SELECT && is_vector_indexed.has_value() && is_vector_indexed.value()) ||
(cmd.permission == auth::permission::SELECT && vector_search_system_resources.contains(cmd.resource))) {
co_return co_await ensure_has_permission<auth::command_desc_with_permission_set>({auth::permission_set::of<auth::permission::SELECT, auth::permission::VECTOR_SEARCH_INDEXING>(), cmd.resource});


@@ -79,7 +79,8 @@ group0_state_machine::group0_state_machine(raft_group0_client& client, migration
// the node won't try to fetch a topology snapshot if the other
// node doesn't support it yet.
_topology_change_enabled = true;
})) {
}))
, _in_memory_state_machine_enabled(utils::get_local_injector().is_enabled("group0_enable_sm_immediately")) {
_state_id_handler.run();
}
@@ -154,6 +155,27 @@ static future<> notify_client_route_change_if_needed(storage_service& storage_se
}
}
// Meant to be used only in error injections.
static future<> maybe_partially_apply_cdc_generation_deletion_then_get_stuck(
std::function<future<>(utils::chunked_vector<frozen_mutation_and_schema>)> mutate,
const utils::chunked_vector<frozen_mutation_and_schema>& mutations) {
auto is_cdc_generation_data_clearing_mutation = [] (const frozen_mutation_and_schema& fm_s) {
return fm_s.s->id() == db::system_keyspace::cdc_generations_v3()->id()
&& !fm_s.fm.unfreeze(fm_s.s).partition().row_tombstones().empty();
};
if (std::any_of(mutations.begin(), mutations.end(), is_cdc_generation_data_clearing_mutation)) {
utils::chunked_vector<frozen_mutation_and_schema> filtered_mutations;
std::copy_if(mutations.begin(), mutations.end(), std::back_inserter(filtered_mutations), is_cdc_generation_data_clearing_mutation);
co_await mutate(std::move(filtered_mutations));
while (true) {
slogger.info("group0 has hung on error injection, waiting for the process to be killed");
co_await seastar::sleep(std::chrono::seconds(1));
}
}
}
future<> write_mutations_to_database(storage_service& storage_service, storage_proxy& proxy, gms::inet_address from, utils::chunked_vector<canonical_mutation> cms) {
utils::chunked_vector<frozen_mutation_and_schema> mutations;
client_routes_service::client_route_keys client_routes_update;
@@ -178,7 +200,13 @@ future<> write_mutations_to_database(storage_service& storage_service, storage_p
throw std::runtime_error(::format("Error while applying mutations: {}", e));
}
co_await proxy.mutate_locally(std::move(mutations), tracing::trace_state_ptr(), db::commitlog::force_sync::no);
auto mutate = [&proxy] (utils::chunked_vector<frozen_mutation_and_schema> mutations) {
return proxy.mutate_locally(std::move(mutations), tracing::trace_state_ptr(), db::commitlog::force_sync::no);
};
if (utils::get_local_injector().is_enabled("group0_simulate_partial_application_of_cdc_generation_deletion")) {
co_await maybe_partially_apply_cdc_generation_deletion_then_get_stuck(mutate, mutations);
}
co_await mutate(std::move(mutations));
if (need_system_topology_flush) {
slogger.trace("write_mutations_to_database: flushing {}.{}", db::system_keyspace::NAME, db::system_keyspace::TOPOLOGY);
@@ -271,42 +299,41 @@ future<> group0_state_machine::merge_and_apply(group0_state_machine_merger& merg
// If we crash before appending the state ID, when we reapply the command after restart, the change will be applied because
// the state ID was not yet appended so the above check will pass.
// TODO: reapplication of a command after a crash may require contacting a quorum (we need to learn that the command
// is committed from a leader). But we may want to ensure that group 0 state is consistent after restart even without
// access to quorum, which means we cannot allow partially applied commands. We need to ensure that either the entire
// change is applied and the state ID is updated or none of this happens.
// E.g. use a write-ahead-entry which contains all this information and make sure it's replayed during restarts.
std::optional<storage_service::state_change_hint> topology_state_change_hint;
modules_to_reload modules_to_reload;
co_await std::visit(make_visitor(
[&] (schema_change& chng) -> future<> {
auto modules_to_reload = get_modules_to_reload(chng.mutations);
modules_to_reload = get_modules_to_reload(chng.mutations);
co_await _mm.merge_schema_from(locator::host_id{cmd.creator_id.uuid()}, std::move(chng.mutations));
co_await reload_modules(std::move(modules_to_reload));
},
[&] (broadcast_table_query& query) -> future<> {
auto result = co_await service::broadcast_tables::execute_broadcast_table_query(_sp, query.query, cmd.new_state_id);
_client.set_query_result(cmd.new_state_id, std::move(result));
},
[&] (topology_change& chng) -> future<> {
auto modules_to_reload = get_modules_to_reload(chng.mutations);
auto tablet_keys = replica::get_tablet_metadata_change_hint(chng.mutations);
modules_to_reload = get_modules_to_reload(chng.mutations);
topology_state_change_hint = {.tablets_hint = replica::get_tablet_metadata_change_hint(chng.mutations)};
co_await write_mutations_to_database(_ss, _sp, cmd.creator_addr, std::move(chng.mutations));
co_await _ss.topology_transition({.tablets_hint = std::move(tablet_keys)});
co_await reload_modules(std::move(modules_to_reload));
},
[&] (mixed_change& chng) -> future<> {
auto modules_to_reload = get_modules_to_reload(chng.mutations);
modules_to_reload = get_modules_to_reload(chng.mutations);
topology_state_change_hint.emplace();
co_await _mm.merge_schema_from(locator::host_id{cmd.creator_id.uuid()}, std::move(chng.mutations));
co_await _ss.topology_transition();
co_await reload_modules(std::move(modules_to_reload));
},
[&] (write_mutations& muts) -> future<> {
auto modules_to_reload = get_modules_to_reload(muts.mutations);
modules_to_reload = get_modules_to_reload(muts.mutations);
co_await write_mutations_to_database(_ss, _sp, cmd.creator_addr, std::move(muts.mutations));
co_await reload_modules(std::move(modules_to_reload));
}
), cmd.change);
if (_in_memory_state_machine_enabled) {
if (topology_state_change_hint) {
co_await _ss.topology_transition(std::move(*topology_state_change_hint));
}
co_await reload_modules(std::move(modules_to_reload));
}
co_await _sp.mutate_locally({std::move(history)}, nullptr);
}
@@ -413,9 +440,23 @@ void group0_state_machine::drop_snapshot(raft::snapshot_id id) {
}
future<> group0_state_machine::load_snapshot(raft::snapshot_id id) {
// topology_state_load applies persisted state machine state into
// memory and thus needs to be protected with apply mutex
auto read_apply_mutex_holder = co_await _client.hold_read_apply_mutex(_abort_source);
if (_in_memory_state_machine_enabled) {
co_await reload_state();
}
}
future<> group0_state_machine::enable_in_memory_state_machine() {
auto read_apply_mutex_holder = co_await _client.hold_read_apply_mutex(_abort_source);
if (!_in_memory_state_machine_enabled) {
_in_memory_state_machine_enabled = true;
co_await reload_state();
}
}
future<> group0_state_machine::reload_state() {
// We assume that the apply mutex is held; topology_state_load applies the
// persisted state machine state into memory, so it needs to be protected by it.
co_await _ss.topology_state_load();
co_await _ss.view_building_state_load();
if (_feature_service.compression_dicts) {


@@ -113,9 +113,33 @@ class group0_state_machine : public raft_state_machine {
gms::feature_service& _feature_service;
gms::feature::listener_registration _topology_on_raft_support_listener;
// This boolean controls whether the in-memory data structures should be updated
// after snapshot transfer / command application.
//
// The reason for the flag is to protect from reading a partially applied state.
// A group0 command may consist of multiple mutations that are not applied
// in a single, atomic operation, but rather separately. A node can crash
// in the middle of applying such a command, leaving the group0 in an inconsistent
// state. Thanks to the idempotency of mutations, applying the group0 command
// again, fully, will make the state consistent again. Therefore, we use this
// flag to control when the in memory state machine should be updated from the
// on-disk state - we can only do that if we know that the group0 table state
// is consistent.
//
// The only exception to the above rule is the schema - the schema state is
// loaded into memory before group0 is initialized, and the in-memory state
// is reloaded even if _in_memory_state_machine_enabled is set to false.
// Resolving this exception should be possible, but would require considerable
// effort in refactoring the migration manager code. In the meantime, we are
// fine with this exception because the migration manager applies all schema
// mutations of a single command atomically, in a single commitlog entry -
// therefore, we should not observe broken invariants in the schema module.
bool _in_memory_state_machine_enabled;
modules_to_reload get_modules_to_reload(const utils::chunked_vector<canonical_mutation>& mutations);
future<> reload_modules(modules_to_reload modules);
future<> merge_and_apply(group0_state_machine_merger& merger);
future<> reload_state();
public:
group0_state_machine(raft_group0_client& client, migration_manager& mm, storage_proxy& sp, storage_service& ss,
gms::gossiper& gossiper, gms::feature_service& feat, bool topology_change_enabled);
@@ -125,6 +149,7 @@ public:
future<> load_snapshot(raft::snapshot_id id) override;
future<> transfer_snapshot(raft::server_id from_id, raft::snapshot_descriptor snp) override;
future<> abort() override;
future<> enable_in_memory_state_machine();
};
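The long comment above rests on one property: each mutation in a group0 command is idempotent, so a command that was only half-applied before a crash becomes consistent again once it is reapplied in full, and only then may the in-memory state be reloaded. A toy illustration of that recovery argument (hypothetical names, not Scylla code):

```python
class Group0Table:
    """On-disk state: idempotent mutations (last-write-wins per key)."""
    def __init__(self):
        self.rows = {}
    def apply(self, mutation):
        key, value = mutation
        self.rows[key] = value  # applying the same mutation twice is a no-op

def apply_command(table, command, crash_after=None):
    # A command is several mutations applied non-atomically; a crash
    # can leave only a prefix of them on disk.
    for i, mutation in enumerate(command):
        if crash_after is not None and i == crash_after:
            raise RuntimeError("simulated crash mid-command")
        table.apply(mutation)

table = Group0Table()
cmd = [("a", 1), ("b", 2), ("c", 3)]
try:
    apply_command(table, cmd, crash_after=2)   # crash before ("c", 3)
except RuntimeError:
    pass
partial = dict(table.rows)     # prefix only: an inconsistent on-disk state
apply_command(table, cmd)      # full, idempotent reapplication after restart
recovered = dict(table.rows)   # consistent again
```

This is the point at which `enable_in_memory_state_machine()` flips the flag: only after the server has finished replaying committed commands is the on-disk state known to be consistent enough to load into memory.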
bool should_flush_system_topology_after_applying(const mutation& mut, const data_dictionary::database db);


@@ -244,6 +244,7 @@ raft_server_for_group raft_group0::create_server_for_group0(raft::group_id gid,
service::migration_manager& mm, bool topology_change_enabled) {
auto state_machine = std::make_unique<group0_state_machine>(
_client, mm, qp.proxy(), ss, _gossiper, _feat, topology_change_enabled);
auto& state_machine_ref = *state_machine;
auto rpc = std::make_unique<group0_rpc>(_raft_gr.direct_fd(), *state_machine, _ms.local(), _raft_gr.failure_detector(), gid, my_id);
// Keep a reference to a specific RPC class.
auto& rpc_ref = *rpc;
@@ -277,6 +278,7 @@ raft_server_for_group raft_group0::create_server_for_group0(raft::group_id gid,
.ticker = std::move(ticker),
.rpc = rpc_ref,
.persistence = persistence_ref,
.state_machine = state_machine_ref,
.default_op_timeout_in_ms = qp.proxy().get_db().local().get_config().group0_raft_op_timeout_in_ms
};
}
@@ -471,8 +473,10 @@ future<> raft_group0::start_server_for_group0(raft::group_id group0_id, service:
auto srv_for_group0 = create_server_for_group0(group0_id, my_id, ss, qp, mm, topology_change_enabled);
auto& persistence = srv_for_group0.persistence;
auto& server = *srv_for_group0.server;
co_await with_scheduling_group(_sg, [this, srv_for_group0 = std::move(srv_for_group0)] () mutable {
return _raft_gr.start_server_for_group(std::move(srv_for_group0));
co_await with_scheduling_group(_sg, [this, &srv_for_group0] (this auto self) -> future<> {
auto& state_machine = dynamic_cast<group0_state_machine&>(srv_for_group0.state_machine);
co_await _raft_gr.start_server_for_group(std::move(srv_for_group0));
co_await state_machine.enable_in_memory_state_machine();
});
_group0.emplace<raft::group_id>(group0_id);


@@ -92,8 +92,6 @@ static logging::logger logger("group0_client");
*
* Furthermore, obtaining the guard ensures that we don't read partial state, since it holds a lock that is also taken
* during command application (`_read_apply_mutex_holder`). The lock is released just before sending the command to Raft.
* TODO: we may still read partial state if we crash in the middle of command application.
* See `group0_state_machine::apply` for a proposed fix.
*
* Obtaining the guard also ensures that there is no concurrent group 0 operation running on this node using another lock
* (`_operation_mutex_holder`); if we allowed multiple concurrent operations to run, some of them could fail


@@ -27,6 +27,7 @@ namespace service {
class raft_rpc;
class raft_sys_table_storage;
class raft_state_machine;
using raft_ticker_type = seastar::timer<lowres_clock>;
struct raft_group_not_found: public raft::error {
@@ -51,6 +52,7 @@ struct raft_server_for_group {
std::unique_ptr<raft_ticker_type> ticker;
raft_rpc& rpc;
raft_sys_table_storage& persistence;
raft_state_machine& state_machine;
std::optional<seastar::shared_future<>> aborted;
std::optional<utils::updateable_value<uint32_t>> default_op_timeout_in_ms;
};


@@ -534,12 +534,14 @@ future<storage_service::nodes_to_notify_after_sync> storage_service::sync_raft_t
std::vector<future<>> sys_ks_futures;
auto process_left_node = [&] (raft::server_id id, locator::host_id host_id, std::optional<gms::inet_address> ip) -> future<> {
auto process_left_node = [&] (raft::server_id id, locator::host_id host_id, std::optional<gms::inet_address> ip, bool notify) -> future<> {
if (ip) {
sys_ks_futures.push_back(_sys_ks.local().remove_endpoint(*ip));
co_await _gossiper.force_remove_endpoint(host_id, gms::null_permit_id);
nodes_to_notify.left.push_back({*ip, host_id});
if (notify) {
nodes_to_notify.left.push_back({*ip, host_id});
}
}
if (t.left_nodes_rs.find(id) != t.left_nodes_rs.end()) {
@@ -636,7 +638,7 @@ future<storage_service::nodes_to_notify_after_sync> storage_service::sync_raft_t
for (const auto& id: t.left_nodes) {
locator::host_id host_id{id.uuid()};
auto ip = _address_map.find(host_id);
co_await process_left_node(id, host_id, ip);
co_await process_left_node(id, host_id, ip, id_to_ip_map.find(host_id) != id_to_ip_map.end());
if (ip) {
sys_ks_futures.push_back(raft_topology_update_ip(host_id, *ip, id_to_ip_map, nullptr));
}
@@ -854,7 +856,7 @@ future<> storage_service::topology_state_load(state_change_hint hint) {
}
}
// Ban all left and ignord nodes. We do not allow them to go back online.
// Ban all left and ignored nodes. We do not allow them to go back online.
co_await _messaging.local().ban_hosts(boost::join(topology.left_nodes, topology.ignored_nodes)
| std::views::transform([] (auto id) { return locator::host_id{id.uuid()}; })
| std::ranges::to<utils::chunked_vector<locator::host_id>>());
@@ -5111,11 +5113,21 @@ semaphore& storage_service::get_do_sample_sstables_concurrency_limiter() {
return _do_sample_sstables_concurrency_limiter;
}
future<uint64_t> storage_service::estimate_total_sstable_volume(table_id t) {
future<uint64_t> storage_service::estimate_total_sstable_volume(table_id t, ignore_errors errors) {
co_return co_await seastar::map_reduce(
_db.local().get_token_metadata().get_host_ids(),
[&] (auto h) -> future<uint64_t> {
return ser::storage_service_rpc_verbs::send_estimate_sstable_volume(&_messaging.local(), h, t);
try {
co_return co_await ser::storage_service_rpc_verbs::send_estimate_sstable_volume(&_messaging.local(), h, t);
}
catch(...) {
if (errors == ignore_errors::yes) {
// If the call failed, we just return 0 for this host
slogger.info("call to estimate_total_sstable_volume failed for table {} and host {}, returning 0", t, h);
co_return 0;
}
throw;
}
},
uint64_t(0),
std::plus<uint64_t>()
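The new `ignore_errors` path above folds per-host results with `seastar::map_reduce`, swallowing a failed host as a zero contribution when requested. A minimal synchronous sketch of that fold, assuming per-host queries are modeled as callables (`estimate_total_volume` and the `ignore_errors` enum here are illustrative stand-ins, not the Scylla API):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <stdexcept>
#include <vector>

// Hypothetical stand-in for seastar::bool_class<ignore_errors_tag>.
enum class ignore_errors { no, yes };

// Query each "host"; either swallow a per-host failure as 0 or rethrow,
// mirroring the try/catch added inside the map_reduce lambda above.
uint64_t estimate_total_volume(const std::vector<std::function<uint64_t()>>& per_host,
                               ignore_errors errors) {
    uint64_t total = 0;
    for (const auto& query : per_host) {
        try {
            total += query();
        } catch (...) {
            if (errors == ignore_errors::yes) {
                continue;  // failed host contributes 0, as in the patch
            }
            throw;
        }
    }
    return total;
}
```

With `ignore_errors::no` the first failing host propagates its exception, matching the pre-patch behavior.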
@@ -5263,7 +5275,6 @@ future<> storage_service::raft_rebuild(utils::optional_param sdc_param) {
rtlogger.info("request rebuild for: {} source_dc={}", raft_server.id(), sdc_param);
topology_mutation_builder builder(guard.write_timestamp());
builder.set_session(session_id(guard.new_group0_state_id()));
sstring source_dc = sdc_param.value_or("");
if (sdc_param.force() && !source_dc.empty()) {
source_dc += ":force";
@@ -6526,14 +6537,19 @@ future<> storage_service::clone_locally_tablet_storage(locator::global_tablet_id
leaving.host, pending.host));
}
auto d = co_await smp::submit_to(leaving.shard, [this, tablet] () -> future<utils::chunked_vector<sstables::entry_descriptor>> {
// All sstables cloned locally will be left unsealed until they're loaded into the table.
// This is to guarantee no unsplit sstables will be left sealed on disk, which could
// cause problems if unsplit sstables are found after the split was ACKed to the coordinator.
bool leave_unsealed = true;
auto d = co_await smp::submit_to(leaving.shard, [this, tablet, leave_unsealed] () -> future<utils::chunked_vector<sstables::entry_descriptor>> {
auto& table = _db.local().find_column_family(tablet.table);
auto op = table.stream_in_progress();
co_return co_await table.clone_tablet_storage(tablet.tablet);
co_return co_await table.clone_tablet_storage(tablet.tablet, leave_unsealed);
});
rtlogger.debug("Cloned storage of tablet {} from leaving replica {}, {} sstables were found", tablet, leaving, d.size());
auto load_sstable = [] (const dht::sharder& sharder, replica::table& t, sstables::entry_descriptor d) -> future<sstables::shared_sstable> {
auto load_sstable = [leave_unsealed] (const dht::sharder& sharder, replica::table& t, sstables::entry_descriptor d) -> future<sstables::shared_sstable> {
auto& mng = t.get_sstables_manager();
auto sst = mng.make_sstable(t.schema(), t.get_storage_options(), d.generation, d.state.value_or(sstables::sstable_state::normal),
d.version, d.format, db_clock::now(), default_io_error_handler_gen());
@@ -6541,7 +6557,8 @@ future<> storage_service::clone_locally_tablet_storage(locator::global_tablet_id
// will still point to leaving replica at this stage in migration. If node goes down,
// SSTables will be loaded at pending replica and migration is retried, so correctness
// wise, we're good.
auto cfg = sstables::sstable_open_config{ .current_shard_as_sstable_owner = true };
auto cfg = sstables::sstable_open_config{ .current_shard_as_sstable_owner = true,
.unsealed_sstable = leave_unsealed };
co_await sst->load(sharder, cfg);
co_return sst;
};
@@ -6549,16 +6566,23 @@ future<> storage_service::clone_locally_tablet_storage(locator::global_tablet_id
co_await smp::submit_to(pending.shard, [this, tablet, load_sstable, d = std::move(d)] () mutable -> future<> {
// Loads cloned sstables from leaving replica into pending one.
auto& table = _db.local().find_column_family(tablet.table);
auto& sstm = table.get_sstables_manager();
auto op = table.stream_in_progress();
dht::auto_refreshing_sharder sharder(table.shared_from_this());
std::vector<sstables::shared_sstable> ssts;
ssts.reserve(d.size());
std::unordered_set<sstables::shared_sstable> ssts;
for (auto&& sst_desc : d) {
ssts.push_back(co_await load_sstable(sharder, table, std::move(sst_desc)));
ssts.insert(co_await load_sstable(sharder, table, std::move(sst_desc)));
}
co_await table.add_sstables_and_update_cache(ssts);
_view_building_worker.local().load_sstables(tablet.table, ssts);
auto on_add = [&ssts, &sstm] (sstables::shared_sstable loading_sst) -> future<> {
if (ssts.contains(loading_sst)) {
auto cfg = sstm.configure_writer(loading_sst->get_origin());
co_await loading_sst->seal_sstable(cfg.backup);
}
co_return;
};
auto loaded_ssts = co_await table.add_new_sstables_and_update_cache(std::vector(ssts.begin(), ssts.end()), on_add);
_view_building_worker.local().load_sstables(tablet.table, loaded_ssts);
});
rtlogger.debug("Successfully loaded storage of tablet {} into pending replica {}", tablet, pending);
}
@@ -7264,6 +7288,10 @@ future<locator::load_stats> storage_service::load_stats_for_tablet_based_tables(
tls.effective_capacity = si.available + sum_tablet_sizes;
}
utils::get_local_injector().inject("clear_tablet_stats_in_load_stats", [&] {
load_stats.tablet_stats.erase(this_host);
});
co_return std::move(load_stats);
}

View File

@@ -1143,7 +1143,11 @@ private:
utils::disk_space_monitor* _disk_space_monitor; // != nullptr only on shard0.
public:
future<uint64_t> estimate_total_sstable_volume(table_id);
struct ignore_errors_tag;
using ignore_errors = seastar::bool_class<ignore_errors_tag>;
// If ignore_errors is set to yes, the function will not throw an exception; any error will be ignored,
// and a node for which an error happened will be considered as having size 0.
future<uint64_t> estimate_total_sstable_volume(table_id, ignore_errors = ignore_errors::no);
future<std::vector<std::byte>> train_dict(utils::chunked_vector<temporary_buffer<char>> sample);
future<> publish_new_sstable_dict(table_id, std::span<const std::byte>, service::raft_group0_client&);
void set_train_dict_callback(decltype(_train_dict));

View File

@@ -204,6 +204,7 @@ struct colocated_tablets {
// to same destination.
struct migration_tablet_set {
std::variant<global_tablet_id, colocated_tablets> tablet_s;
uint64_t tablet_set_disk_size = 0;
table_id table() const {
return std::visit(
@@ -232,7 +233,9 @@ struct migration_tablet_set {
return std::holds_alternative<colocated_tablets>(tablet_s);
}
auto operator<=>(const migration_tablet_set&) const = default;
bool operator==(const migration_tablet_set& rhs) const {
return tablet_s == rhs.tablet_s;
}
};
struct migration_candidate {
@@ -526,16 +529,20 @@ class load_balancer {
// Represents metric for load which we want to equalize between shards or nodes.
// Load balancer equalizes storage utilization.
// It is assumed that each tablet has equal size and that shards and nodes can have different capacity.
// So we equalize: tablet_count * target_tablet_size / capacity_in_bytes.
// In case force_capacity_based_balancing is true, it is assumed that each tablet has equal size and that
// shards and nodes can have different capacity. If force_capacity_based_balancing is false,
// tablet sizes are fetched from load_stats.
// So we equalize: sum of tablet_sizes / capacity_in_bytes.
using load_type = double;
using table_candidates_map = std::unordered_map<table_id, std::unordered_set<migration_tablet_set>>;
struct shard_load {
size_t tablet_count = 0;
std::optional<disk_usage> dusage;
absl::flat_hash_map<table_id, size_t> tablet_count_per_table;
absl::flat_hash_map<table_id, uint64_t> tablet_sizes_per_table;
// Number of tablets which are streamed from this shard.
size_t streaming_read_load = 0;
@@ -583,15 +590,16 @@ class load_balancer {
host_id id;
uint64_t shard_count = 0;
uint64_t tablet_count = 0;
std::optional<uint64_t> capacity; // Invariant: bool(capacity) || drained.
std::optional<disk_usage> dusage; // Invariant: bool(dusage) || drained.
bool drained = false;
const locator::node* node; // never nullptr
// The average shard load on this node.
// Valid only when "capacity" is set.
// Valid only when "dusage" is set.
load_type avg_load = 0;
absl::flat_hash_map<table_id, size_t> tablet_count_per_table;
absl::flat_hash_map<table_id, uint64_t> tablet_sizes_per_table;
// heap which tracks most-loaded shards using shards_by_load_cmp().
// Valid during intra-node plan-making for nodes which are in the source node set.
@@ -602,8 +610,8 @@ class load_balancer {
utils::chunked_vector<skipped_candidate> skipped_candidates;
std::optional<double> capacity_per_shard() const {
return capacity.transform([&] (auto cap) {
return double(cap) / shard_count;
return dusage.transform([&] (auto du) {
return load_type(du.capacity) / shard_count;
});
}
@@ -620,16 +628,17 @@ class load_balancer {
}
// Call when tablet_count or capacity changes.
void update(uint64_t target_tablet_size) {
if (auto load = get_avg_load(tablet_count, target_tablet_size)) {
void update() {
if (auto load = get_avg_load()) {
avg_load = *load;
}
}
// Result engaged when !drained.
std::optional<load_type> get_avg_load(uint64_t tablets, uint64_t target_tablet_size) const {
return capacity.transform([&] (auto capacity) {
return load_type(tablets * target_tablet_size) / capacity;
std::optional<load_type> get_avg_load(uint64_t used_size_delta = 0) const {
return dusage.transform([&] (auto du) {
du.used += used_size_delta;
return du.get_load();
});
}
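The hunks above swap the plain `capacity` optional for a `disk_usage` pair and read load as a used/capacity ratio, optionally with a delta to simulate a placement before committing it. A simplified sketch, assuming `disk_usage::get_load()` is `used / capacity` (the struct is a stand-in for the Scylla type, and `std::optional::transform` is avoided for portability):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Hypothetical minimal disk_usage: the ratio the balancer equalizes.
struct disk_usage {
    uint64_t capacity = 0;
    uint64_t used = 0;
    double get_load() const { return double(used) / double(capacity); }
};

// Mirrors the new get_avg_load(): engaged only when usage is known,
// evaluated as if used_size_delta extra bytes were already placed here.
std::optional<double> get_avg_load(std::optional<disk_usage> dusage,
                                   uint64_t used_size_delta = 0) {
    if (!dusage) {
        return std::nullopt;  // drained node: load is undefined
    }
    disk_usage du = *dusage;
    du.used += used_size_delta;
    return du.get_load();
}
```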
@@ -642,20 +651,20 @@ class load_balancer {
}
// Result engaged for !drained nodes.
std::optional<load_type> shard_load(shard_id shard, uint64_t target_tablet_size) const {
return shard_load(shard, shards[shard].tablet_count, target_tablet_size);
}
// Result engaged for !drained nodes.
std::optional<load_type> shard_load(shard_id shard, uint64_t shard_tablet_count, uint64_t target_tablet_size) const {
return capacity_per_shard().transform([&] (auto cap_per_shard) {
return load_type(shard_tablet_count * target_tablet_size) / cap_per_shard;
std::optional<load_type> shard_load(shard_id shard, int64_t used_size_delta = 0) const {
return shards[shard].dusage.transform([&] (auto du) {
du.used += used_size_delta;
return du.get_load();
});
}
auto shards_by_load_cmp() {
return [this] (const auto& a, const auto& b) {
return shards[a].tablet_count < shards[b].tablet_count;
if (dusage) {
return shards[a].dusage->get_load() < shards[b].dusage->get_load();
} else {
return shards[a].tablet_count < shards[b].tablet_count;
}
};
}
@@ -811,6 +820,8 @@ class load_balancer {
std::unordered_set<global_tablet_id> _scheduled_tablets;
// Holds tablet replica count per table in the balanced node set (within a single DC).
absl::flat_hash_map<table_id, size_t> _tablet_count_per_table;
// Holds total used storage per table in the DC
absl::flat_hash_map<table_id, uint64_t> _disk_used_per_table;
dc_name _dc;
std::optional<sstring> _rack; // Set when plan making is limited to a single rack.
size_t _total_capacity_shards; // Total number of non-drained shards in the balanced node set.
@@ -821,6 +832,19 @@ class load_balancer {
std::unordered_set<host_id> _skiplist;
bool _use_table_aware_balancing = true;
double _initial_scale = 1;
// This is the maximum load delta between the most and least loaded nodes,
// below which the balancer considers the DC balanced
double _size_based_balance_threshold = 0.01;
// When this is set to true, the balancer assumes all tablets
// have the same size: _target_tablet_size
bool _force_capacity_based_balancing = false;
// The minimal tablet size the balancer will compute load with. For any tablet smaller than this,
// the balancer will use this size instead of the actual tablet size.
uint64_t _minimal_tablet_size = service::default_target_tablet_size / 100;
private:
tablet_replica_set get_replicas_for_tablet_load(const tablet_info& ti, const tablet_transition_info* trinfo) const {
// We reflect migrations in the load as if they already happened,
@@ -907,7 +931,16 @@ public:
, _table_load_stats(std::move(table_load_stats))
, _stats(stats)
, _skiplist(std::move(skiplist))
{ }
, _size_based_balance_threshold(db.get_config().size_based_balance_threshold_percentage() / 100.0)
, _force_capacity_based_balancing(db.get_config().force_capacity_based_balancing())
, _minimal_tablet_size(db.get_config().minimal_tablet_size_for_balancing()) {
// Force capacity based balancing until all the nodes have been upgraded
if (!_db.features().size_based_load_balancing && !_force_capacity_based_balancing) {
lblogger.info("Size based load balancing cluster feature disabled; forcing capacity based balancing");
_force_capacity_based_balancing = true;
}
}
bool ongoing_rack_list_colocation() const {
return _topology != nullptr && _sys_ks != nullptr && !_topology->paused_rf_change_requests.empty();
@@ -972,6 +1005,13 @@ public:
return (it != _table_load_stats->tables.end()) ? &it->second : nullptr;
}
std::optional<uint64_t> get_tablet_size(host_id host, const range_based_tablet_id& rb_tid, const tablet_info& ti, const tablet_transition_info* trinfo) const {
if (_table_load_stats) {
return _table_load_stats->get_tablet_size_in_transition(host, rb_tid, ti, trinfo);
}
return std::nullopt;
}
future<bool> needs_auto_repair(const locator::global_tablet_id& gid, const locator::tablet_info& info,
const locator::repair_scheduler_config& config, const db_clock::time_point& now, db_clock::duration& diff) {
co_return false;
@@ -990,15 +1030,25 @@ public:
load.id = host;
load.node = node;
load.shard_count = node->get_shard_count();
load.shards.resize(load.shard_count);
if (!load.shard_count) {
throw std::runtime_error(format("Shard count of {} not found in topology", host));
}
if (!_db.features().tablet_load_stats_v2) {
// This way load calculation will hold tablet count.
load.capacity = _target_tablet_size * load.shard_count;
} else if (_table_load_stats && _table_load_stats->capacity.contains(host)) {
load.capacity = _table_load_stats->capacity.at(host);
load.dusage = disk_usage{_target_tablet_size * load.shard_count, 0};
} else if (_table_load_stats) {
if (_table_load_stats->tablet_stats.contains(host) && !_force_capacity_based_balancing) {
load.dusage = disk_usage{_table_load_stats->tablet_stats.at(host).effective_capacity, 0};
} else if (_table_load_stats->capacity.contains(host)) {
load.dusage = disk_usage{_table_load_stats->capacity.at(host), 0};
}
}
load.shards.resize(load.shard_count);
if (load.dusage) {
for (auto& sload : load.shards) {
sload.dusage = disk_usage{ load.dusage->capacity / load.shard_count, 0 };
}
}
}
@@ -1931,6 +1981,10 @@ public:
const auto& table_groups = _tm->tablets().all_table_groups();
auto finalize_decision = [&] {
if (utils::get_local_injector().enter("tablet_resize_finalization_postpone")) {
return;
}
_stats.for_cluster().resizes_finalized++;
resize_plan.finalize_resize.insert(table);
};
@@ -2014,6 +2068,19 @@ public:
return utils::get_local_injector().enter("tablet_allocator_shuffle");
}
bool is_balanced(load_type min_load, load_type max_load) const {
if (_force_capacity_based_balancing) {
return min_load == max_load;
}
if (max_load == 0) {
return true;
}
const load_type load_delta = max_load - min_load;
return (load_delta / max_load) < _size_based_balance_threshold;
}
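The `is_balanced` check added above can be read in isolation: under capacity-based balancing only exact load equality counts as balanced, while size-based balancing tolerates a relative spread below the configured threshold (1% by default, from `size_based_balance_threshold_percentage`). A free-function restatement for illustration:

```cpp
#include <cassert>

// Illustrative restatement of the is_balanced() logic in the hunk above.
bool is_balanced(double min_load, double max_load,
                 bool force_capacity_based, double threshold = 0.01) {
    if (force_capacity_based) {
        return min_load == max_load;
    }
    if (max_load == 0) {
        return true;  // nothing stored anywhere: trivially balanced
    }
    return (max_load - min_load) / max_load < threshold;
}
```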
// If cluster cannot agree on tablet merge feature, then merge will not be finalized since
// not all nodes in the cluster can handle the finalization step.
bool bypass_merge_completion() const {
@@ -2057,14 +2124,14 @@ public:
return *shard_info.candidates_all_tables.begin();
}
// Evaluates impact on load balance of migrating a single tablet of a given table to dst.
migration_badness evaluate_dst_badness(node_load_map& nodes, table_id table, tablet_replica dst) {
// Evaluates impact on load balance of migrating a tablet set of a given table to dst.
migration_badness evaluate_dst_badness(node_load_map& nodes, table_id table, tablet_replica dst, uint64_t tablet_set_disk_size) {
_stats.for_dc(_dc).candidates_evaluated++;
auto& node_info = nodes[dst.host];
// Size of all tablet replicas of the table in bytes.
uint64_t table_size = _tablet_count_per_table[table] * _target_tablet_size;
uint64_t table_size = _disk_used_per_table[table];
if (node_info.drained) {
// Moving a tablet to a drained node is always bad.
@@ -2074,33 +2141,36 @@ public:
double ideal_table_load = double(table_size) / _total_capacity_storage;
// max number of tablets per shard to keep perfect distribution.
double shard_balance_threshold = ideal_table_load;
auto new_tablet_count_per_shard = node_info.shards[dst.shard].tablet_count_per_table[table] + 1;
auto new_shard_load = *node_info.shard_load(dst.shard, new_tablet_count_per_shard, _target_tablet_size);
auto dst_shard_badness = (new_shard_load - shard_balance_threshold) / table_size;
lblogger.trace("Table {} @{} shard balance threshold: {}, dst: {} ({:.4f})", table, dst,
shard_balance_threshold, new_shard_load, dst_shard_badness);
auto compute_load_and_dst_badness = [&] (uint64_t capacity, uint64_t new_used) {
double new_load = double(new_used) / capacity;
// Divide badness by table_size to take into account that moving a tablet of a small table has
// greater impact on balance of that table than moving a tablet of the same size of a larger table
return std::make_pair(new_load, (new_load - ideal_table_load) / table_size);
};
// max number of tablets per node to keep perfect distribution.
double node_balance_threshold = ideal_table_load;
size_t new_tablet_count_per_node = node_info.tablet_count_per_table[table] + 1;
load_type new_node_load = *node_info.get_avg_load(new_tablet_count_per_node, _target_tablet_size);
auto dst_node_badness = (new_node_load - node_balance_threshold) / table_size;
uint64_t capacity = node_info.shards[dst.shard].dusage->capacity;
uint64_t new_used = node_info.shards[dst.shard].tablet_sizes_per_table[table] + tablet_set_disk_size;
auto [new_shard_load, dst_shard_badness] = compute_load_and_dst_badness(capacity, new_used);
lblogger.trace("Table {} @{} shard balance threshold: {}, dst: {} ({:.4f})", table, dst,
ideal_table_load, new_shard_load, dst_shard_badness);
capacity = node_info.dusage->capacity;
new_used = node_info.tablet_sizes_per_table[table] + tablet_set_disk_size;
auto [new_node_load, dst_node_badness] = compute_load_and_dst_badness(capacity, new_used);
lblogger.trace("Table {} @{} node balance threshold: {}, dst: {} ({:.4f})", table, dst,
node_balance_threshold, new_node_load, dst_node_badness);
ideal_table_load, new_node_load, dst_node_badness);
return migration_badness{0, 0, dst_shard_badness, dst_node_badness};
}
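Both badness evaluators now share the same arithmetic, captured in the `compute_load_and_dst_badness` lambda above: the prospective load minus the ideal per-table load, divided by the table size so that moving a tablet of a small table weighs more than moving a same-sized tablet of a large table. A standalone sketch of that lambda:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Standalone version of the lambda: returns {new_load, badness}.
// Badness is how far the prospective load overshoots the ideal per-table
// load, normalized by table_size.
std::pair<double, double>
compute_load_and_dst_badness(uint64_t capacity, uint64_t new_used,
                             double ideal_table_load, uint64_t table_size) {
    double new_load = double(new_used) / double(capacity);
    return {new_load, (new_load - ideal_table_load) / double(table_size)};
}
```

The source-side variant in the next hunk flips the sign, `(ideal - new_load) / table_size`, since shedding load from an overloaded source is good rather than bad.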
// Evaluates impact on load balance of migrating a single tablet of a given table from src.
migration_badness evaluate_src_badness(node_load_map& nodes, table_id table, tablet_replica src) {
// Evaluates impact on load balance of migrating a tablet set of a given table from src.
migration_badness evaluate_src_badness(node_load_map& nodes, table_id table, tablet_replica src, uint64_t tablet_set_disk_size) {
_stats.for_dc(_dc).candidates_evaluated++;
auto& node_info = nodes[src.host];
// Size of all tablet replicas of the table in bytes.
uint64_t table_size = _tablet_count_per_table[table] * _target_tablet_size;
uint64_t table_size = _disk_used_per_table[table];
if (node_info.drained) {
// Moving a tablet away from a drained node is always good.
@@ -2109,28 +2179,32 @@ public:
double ideal_table_load = double(table_size) / _total_capacity_storage;
double leaving_shard_balance_threshold = ideal_table_load;
auto new_tablet_count_per_shard = node_info.shards[src.shard].tablet_count_per_table[table] - 1;
auto new_shard_load = *node_info.shard_load(src.shard, new_tablet_count_per_shard, _target_tablet_size);
auto src_shard_badness = (leaving_shard_balance_threshold - new_shard_load) / table_size;
lblogger.trace("Table {} @{} shard balance threshold: {}, src: {} ({:.4f})", table, src,
leaving_shard_balance_threshold, new_shard_load, src_shard_badness);
auto compute_load_and_src_badness = [&] (uint64_t capacity, uint64_t new_used) {
// Divide badness by table_size to take into account that moving a tablet of a small table has
// greater impact on balance of that table than moving a tablet of the same size of a larger table
double new_load = double(new_used) / capacity;
return std::make_pair(new_load, (ideal_table_load - new_load) / table_size);
};
// max number of tablets per node to keep perfect distribution.
double leaving_node_balance_threshold = ideal_table_load;
size_t new_tablet_count_per_node = node_info.tablet_count_per_table[table] - 1;
auto new_node_load = *node_info.get_avg_load(new_tablet_count_per_node, _target_tablet_size);
auto src_node_badness = (leaving_node_balance_threshold - new_node_load) / table_size;
uint64_t capacity = node_info.shards[src.shard].dusage->capacity;
uint64_t new_used = node_info.shards[src.shard].tablet_sizes_per_table[table] - tablet_set_disk_size;
auto [new_shard_load, src_shard_badness] = compute_load_and_src_badness(capacity, new_used);
lblogger.trace("Table {} @{} shard balance threshold: {}, src: {} ({:.4f})", table, src,
ideal_table_load, new_shard_load, src_shard_badness);
capacity = node_info.dusage->capacity;
new_used = node_info.tablet_sizes_per_table[table] - tablet_set_disk_size;
auto [new_node_load, src_node_badness] = compute_load_and_src_badness(capacity, new_used);
lblogger.trace("Table {} @{} node balance threshold: {}, src: {} ({:.4f})", table, src,
leaving_node_balance_threshold, new_node_load, src_node_badness);
ideal_table_load, new_node_load, src_node_badness);
return migration_badness{src_shard_badness, src_node_badness, 0, 0};
}
// Evaluates impact on load balance of migrating a single tablet of a given table from src to dst.
migration_badness evaluate_candidate(node_load_map& nodes, table_id table, tablet_replica src, tablet_replica dst) {
auto src_badness = evaluate_src_badness(nodes, table, src);
auto dst_badness = evaluate_dst_badness(nodes, table, dst);
migration_badness evaluate_candidate(node_load_map& nodes, table_id table, tablet_replica src, tablet_replica dst, uint64_t tablet_set_disk_size) {
auto src_badness = evaluate_src_badness(nodes, table, src, tablet_set_disk_size);
auto dst_badness = evaluate_dst_badness(nodes, table, dst, tablet_set_disk_size);
if (src.host == dst.host) {
src_badness.src_node_badness = 0;
@@ -2158,7 +2232,7 @@ public:
for (auto&& [table, tablets] : shard_info.candidates) {
if (!tablets.empty()) {
auto badness = evaluate_candidate(nodes, table, src, dst);
auto badness = evaluate_candidate(nodes, table, src, dst, tablets.begin()->tablet_set_disk_size);
auto candidate = migration_candidate{*tablets.begin(), src, dst, badness};
lblogger.trace("Candidate: {}", candidate);
if (!best_candidate || candidate.badness < best_candidate->badness) {
@@ -2232,9 +2306,9 @@ public:
// where tablets are moved back and forth between nodes and convergence is never reached.
//
// The assumption is that the algorithm moves tablets from more loaded nodes to less loaded nodes,
// so convergence is reached where the node we picked as source has lower load, or will have lower
// load post-movement, than the node we picked as the destination.
bool check_convergence(node_load& src_info, node_load& dst_info, unsigned delta = 1) {
// so convergence is reached when the node we picked as source has lower or equal load than the node
// we picked as the destination will have post-movement.
bool check_convergence(node_load& src_info, node_load& dst_info, uint64_t tablet_sizes) {
if (src_info.drained) {
return true;
}
@@ -2250,10 +2324,9 @@ public:
}
// Prevent load inversion post-movement which can lead to oscillations.
if (*src_info.get_avg_load(src_info.tablet_count - delta, _target_tablet_size) <
*dst_info.get_avg_load(dst_info.tablet_count + delta, _target_tablet_size)) {
lblogger.trace("Load inversion post-movement: src={} (avg_load={}), dst={} (avg_load={})",
src_info.id, src_info.avg_load, dst_info.id, dst_info.avg_load);
if (src_info.avg_load <= *dst_info.get_avg_load(tablet_sizes)) {
lblogger.trace("Load inversion post-movement: src={} (avg_load={}), dst={} (avg_load={}) tablet_sizes={}",
src_info.id, src_info.avg_load, dst_info.id, dst_info.avg_load, tablet_sizes);
return false;
}
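The rewritten convergence guard compares the source's current average load against the destination's simulated post-move load rather than counting tablets. A boiled-down sketch, assuming the caller has already computed the destination's post-move load (in the diff, via `dst_info.get_avg_load(tablet_sizes)`):

```cpp
#include <cassert>

// Illustrative core of check_convergence(): a move converges only if the
// source stays strictly more loaded than the destination will be after
// receiving the tablet set.
bool converges(bool src_drained, double src_avg_load,
               double dst_load_after_move) {
    if (src_drained) {
        return true;  // draining nodes must be emptied regardless of load
    }
    // Prevent load inversion post-movement, which can cause oscillations.
    return src_avg_load > dst_load_after_move;
}
```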
@@ -2261,76 +2334,83 @@ public:
}
bool check_convergence(node_load& src_info, node_load& dst_info, const migration_tablet_set& tablet_set) {
return check_convergence(src_info, dst_info, tablet_set.tablets().size());
return check_convergence(src_info, dst_info, tablet_set.tablet_set_disk_size);
}
// Checks whether moving a tablet from shard A to B (intra-node) would go against convergence.
// Returns false if the tablet should not be moved, and true if it may be moved.
// Can be called when node_info.drained.
bool check_intranode_convergence(const node_load& node_info, shard_id src_shard, shard_id dst_shard,
unsigned delta = 1) {
return node_info.shards[src_shard].tablet_count > node_info.shards[dst_shard].tablet_count + delta;
uint64_t used_size_delta) {
return node_info.shard_load(src_shard) > node_info.shard_load(dst_shard, int64_t(used_size_delta));
}
// Can be called when node_info.drained.
bool check_intranode_convergence(const node_load& node_info, shard_id src_shard, shard_id dst_shard,
const migration_tablet_set& tablet_set) {
return check_intranode_convergence(node_info, src_shard, dst_shard, tablet_set.tablets().size());
return check_intranode_convergence(node_info, src_shard, dst_shard, tablet_set.tablet_set_disk_size);
}
// Adjusts the load of the source and destination shards in the host where intra-node migration happens.
void update_node_load_on_migration(node_load& node_load, host_id host, shard_id src, shard_id dst, global_tablet_id tablet) {
auto& src_info = node_load.shards[src];
auto& dst_info = node_load.shards[dst];
dst_info.tablet_count++;
src_info.tablet_count--;
dst_info.tablet_count_per_table[tablet.table]++;
src_info.tablet_count_per_table[tablet.table]--;
void update_node_load_on_migration(node_load& node_load, host_id host, shard_id src, shard_id dst, const migration_tablet_set& tablet_set) {
auto tablet_count = tablet_set.tablets().size();
auto tablet_sizes = tablet_set.tablet_set_disk_size;
auto table = tablet_set.tablets().front().table;
auto& dst_shard = node_load.shards[dst];
dst_shard.tablet_count += tablet_count;
dst_shard.tablet_count_per_table[table] += tablet_count;
dst_shard.tablet_sizes_per_table[table] += tablet_sizes;
dst_shard.dusage->used += tablet_sizes;
auto& src_shard = node_load.shards[src];
src_shard.tablet_count -= tablet_count;
src_shard.tablet_count_per_table[table] -= tablet_count;
src_shard.tablet_sizes_per_table[table] -= tablet_sizes;
src_shard.dusage->used -= tablet_sizes;
}
// Adjusts the load of the source and destination (host:shard) that were picked for the migration.
void update_node_load_on_migration(node_load_map& nodes, tablet_replica src, tablet_replica dst, global_tablet_id source_tablet) {
{
auto& target_info = nodes[dst.host];
target_info.shards[dst.shard].tablet_count++;
target_info.shards[dst.shard].tablet_count_per_table[source_tablet.table]++;
target_info.tablet_count_per_table[source_tablet.table]++;
target_info.tablet_count += 1;
target_info.update(_target_tablet_size);
}
auto& src_node_info = nodes[src.host];
auto& src_shard_info = src_node_info.shards[src.shard];
src_shard_info.tablet_count -= 1;
src_shard_info.tablet_count_per_table[source_tablet.table]--;
src_node_info.tablet_count_per_table[source_tablet.table]--;
src_node_info.tablet_count -= 1;
src_node_info.update(_target_tablet_size);
}
void update_node_load_on_migration(node_load& node_load, host_id host, shard_id src, shard_id dst, const migration_tablet_set& tablet_set) {
for (auto tablet : tablet_set.tablets()) {
update_node_load_on_migration(node_load, host, src, dst, tablet);
}
}
void update_node_load_on_migration(node_load_map& nodes, tablet_replica src, tablet_replica dst, const migration_tablet_set& tablet_set) {
for (auto tablet : tablet_set.tablets()) {
update_node_load_on_migration(nodes, src, dst, tablet);
auto tablet_count = tablet_set.tablets().size();
auto tablet_sizes = tablet_set.tablet_set_disk_size;
auto table = tablet_set.tablets().front().table;
auto& dst_node = nodes[dst.host];
auto& dst_shard = dst_node.shards[dst.shard];
dst_shard.tablet_count += tablet_count;
dst_shard.tablet_count_per_table[table] += tablet_count;
dst_shard.tablet_sizes_per_table[table] += tablet_sizes;
dst_shard.dusage->used += tablet_sizes;
dst_node.tablet_count_per_table[table] += tablet_count;
dst_node.tablet_sizes_per_table[table] += tablet_sizes;
dst_node.tablet_count += tablet_count;
dst_node.dusage->used += tablet_sizes;
dst_node.update();
auto& src_node = nodes[src.host];
auto& src_shard = src_node.shards[src.shard];
src_shard.tablet_count -= tablet_count;
src_shard.tablet_count_per_table[table] -= tablet_count;
src_shard.tablet_sizes_per_table[table] -= tablet_sizes;
if (src_shard.dusage) {
src_shard.dusage->used -= tablet_sizes;
}
src_node.tablet_count_per_table[table] -= tablet_count;
src_node.tablet_sizes_per_table[table] -= tablet_sizes;
src_node.tablet_count -= tablet_count;
if (src_node.dusage) {
src_node.dusage->used -= tablet_sizes;
}
src_node.update();
}
static void unload(locator::load_sketch& sketch, host_id host, shard_id shard, const migration_tablet_set& tablet_set) {
for (auto _ : tablet_set.tablets()) {
sketch.unload(host, shard);
}
sketch.unload(host, shard, tablet_set.tablets().size(), tablet_set.tablet_set_disk_size);
}
static void pick(locator::load_sketch& sketch, host_id host, shard_id shard, const migration_tablet_set& tablet_set) {
for (auto _ : tablet_set.tablets()) {
sketch.pick(host, shard);
}
sketch.pick(host, shard, tablet_set.tablets().size(), tablet_set.tablet_set_disk_size);
}
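The `pick`/`unload` wrappers above now forward a whole tablet set to the sketch in one call, carrying both the tablet count and the total bytes, instead of looping once per tablet. A toy sketch type showing the batched accounting (the real `locator::load_sketch` is keyed by host and shard; that detail is dropped here):

```cpp
#include <cassert>
#include <cstdint>

// Toy stand-in for locator::load_sketch tracking one shard's load.
struct load_sketch {
    uint64_t tablets = 0;
    uint64_t bytes = 0;
    void pick(uint64_t count, uint64_t size)   { tablets += count; bytes += size; }
    void unload(uint64_t count, uint64_t size) { tablets -= count; bytes -= size; }
};
```

Batching keeps the sketch consistent with the size-based accounting elsewhere in the patch: a co-located set moves as one unit with one aggregate size.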
void mark_as_scheduled(const tablet_migration_info& mig) {
@@ -2363,7 +2443,7 @@ public:
}
std::make_heap(src_shards.begin(), src_shards.end(), node_load.shards_by_load_cmp());
size_t max_load = 0; // Tracks max load among shards which ran out of candidates.
load_type max_load = 0; // Tracks max load among shards which ran out of candidates.
while (true) {
co_await coroutine::maybe_yield();
@@ -2403,14 +2483,16 @@ public:
// Convergence check
// When in shuffle mode, exit condition is guaranteed by running out of candidates or by load limit.
if (!shuffle && (src == dst || !check_intranode_convergence(node_load, src, dst))) {
if (!shuffle && src == dst) {
lblogger.debug("Node {} is balanced", host);
break;
}
if (!src_info.has_candidates()) {
lblogger.debug("No more candidates on shard {} of {}", src, host);
max_load = std::max(max_load, src_info.tablet_count);
if (src_info.dusage) {
max_load = std::max(max_load, src_info.dusage->get_load());
}
src_shards.pop_back();
push_back.cancel();
continue;
@@ -2419,7 +2501,6 @@ public:
auto candidate = co_await peek_candidate(nodes, src_info, tablet_replica{host, src}, tablet_replica{host, dst});
auto tablets = candidate.tablets;
// Recheck convergence to avoid oscillations if co-located tablets are being migrated together.
if (!shuffle && (src == dst || !check_intranode_convergence(node_load, src, dst, tablets))) {
lblogger.debug("Node {} is balanced", host);
break;
@@ -2439,7 +2520,7 @@ public:
}
apply_load(nodes, mig_streaming_info);
lblogger.debug("Adding migration: {}", mig);
lblogger.debug("Adding migration: {} size: {}", mig, tablets.tablet_set_disk_size);
_stats.for_dc(node_load.dc()).migrations_produced++;
_stats.for_dc(node_load.dc()).intranode_migrations_produced++;
mark_as_scheduled(mig);
@@ -2448,8 +2529,8 @@ public:
erase_candidates(nodes, tmap, tablets);
update_node_load_on_migration(node_load, host, src, dst, tablets);
sketch.pick(host, dst);
sketch.unload(host, src);
pick(sketch, host, dst, tablets);
unload(sketch, host, src, tablets);
}
co_return plan;
@@ -2678,7 +2759,7 @@ public:
-> future<migration_candidate> {
if (drain_skipped) {
auto source_tablets = src_node_info.skipped_candidates.back().tablets;
auto badness = evaluate_candidate(nodes, source_tablets.table(), src, dst);
auto badness = evaluate_candidate(nodes, source_tablets.table(), src, dst, source_tablets.tablet_set_disk_size);
co_return migration_candidate{source_tablets, src, dst, badness};
} else {
auto&& src_shard_info = src_node_info.shards[src.shard];
@@ -2705,7 +2786,7 @@ public:
continue;
}
auto badness = evaluate_dst_badness(nodes, tablets.table(), tablet_replica{new_target, 0});
auto badness = evaluate_dst_badness(nodes, tablets.table(), tablet_replica{new_target, 0}, tablets.tablet_set_disk_size);
if (!min_dst_host || badness.dst_node_badness < min_dst_badness.dst_node_badness) {
min_dst_badness = badness;
min_dst_host = new_target;
@@ -2725,21 +2806,29 @@ public:
// Find the best shards on best targets.
std::vector<tablet_replica> best_dsts;
for (auto host : best_hosts) {
for (shard_id new_dst_shard = 0; new_dst_shard < nodes[host].shard_count; new_dst_shard++) {
co_await coroutine::maybe_yield();
auto new_dst = tablet_replica{host, new_dst_shard};
auto badness = evaluate_dst_badness(nodes, tablets.table(), new_dst);
auto badness = evaluate_dst_badness(nodes, tablets.table(), new_dst, tablets.tablet_set_disk_size);
if (!min_dst || badness < min_dst_badness) {
min_dst_badness = badness;
min_dst = new_dst;
best_dsts.clear();
}
if (badness.dst_shard_badness == min_dst_badness.dst_shard_badness) {
best_dsts.push_back(new_dst);
}
}
if (min_dst && !min_dst_badness.is_bad()) {
break;
}
}
if (best_dsts.size() > 1) {
min_dst = best_dsts[rand_int() % best_dsts.size()];
}
if (!min_dst) {
on_internal_error(lblogger, fmt::format("No destination shards on {}", best_hosts));
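The hunk above collects every destination shard whose badness ties the minimum into `best_dsts` and then picks one at random. A minimal sketch of that tie-breaking step (the struct and `rand_int()` are stand-ins for the balancer's own types and RNG, not the real API):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Illustrative stand-in for tablet_replica.
struct tablet_replica_sketch { int host; int shard; };

// Stand-in for the balancer's RNG.
inline int rand_int() { return std::rand(); }

// When several destinations share the minimal badness, pick one at random
// so repeated balancing rounds don't always funnel tablets to one shard.
tablet_replica_sketch pick_dst(const std::vector<tablet_replica_sketch>& best_dsts) {
    if (best_dsts.size() > 1) {
        return best_dsts[rand_int() % best_dsts.size()];
    }
    return best_dsts.front();
}
```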
@@ -2766,31 +2855,42 @@ public:
// Consider better alternatives.
if (drain_skipped) {
auto tablets = src_node_info.skipped_candidates.back().tablets;
auto badness = evaluate_src_badness(nodes, tablets.table(), src);
auto badness = evaluate_src_badness(nodes, tablets.table(), src, tablets.tablet_set_disk_size);
co_await evaluate_targets(tablets, src, badness);
} else {
// Find a better candidate.
// Consider different tables. For each table, first find the best source shard.
// Then find the best target node. Then find the best shard on the target node.
for (auto [table, load] : src_node_info.tablet_count_per_table) {
migration_badness min_src_badness;
std::optional<tablet_replica> min_src;
if (load == 0) {
for (auto [table, tablet_count] : src_node_info.tablet_count_per_table) {
if (tablet_count == 0) {
lblogger.trace("No src candidates for table {} on node {}", table, src.host);
continue;
}
migration_badness min_src_badness;
std::optional<tablet_replica> min_src;
std::optional<migration_tablet_set> min_tablet_set;
auto check_candidate = [&] (const tablet_replica& new_src, const migration_tablet_set& tablet_set) {
auto badness = evaluate_src_badness(nodes, table, new_src, tablet_set.tablet_set_disk_size);
if (!min_src || badness < min_src_badness) {
min_src_badness = badness;
min_src = new_src;
min_tablet_set = tablet_set;
}
};
for (auto new_src_shard: src_node_info.shards_by_load) {
auto new_src = tablet_replica{src.host, new_src_shard};
if (src_node_info.shards[new_src_shard].candidates[table].empty()) {
lblogger.trace("No src candidates for table {} on shard {}", table, new_src);
continue;
}
auto badness = evaluate_src_badness(nodes, table, new_src);
if (!min_src || badness < min_src_badness) {
min_src_badness = badness;
min_src = new_src;
co_await coroutine::maybe_yield();
if (_force_capacity_based_balancing) {
check_candidate(new_src, *src_node_info.shards[new_src_shard].candidates[table].begin());
} else {
for (const auto& tablet_set: src_node_info.shards[new_src_shard].candidates[table]) {
check_candidate(new_src, tablet_set);
}
}
}
@@ -2799,8 +2899,7 @@ public:
continue;
}
auto tablet = *src_node_info.shards[min_src->shard].candidates[table].begin();
co_await evaluate_targets(tablet, *min_src, min_src_badness);
co_await evaluate_targets(*min_tablet_set, *min_src, min_src_badness);
if (!min_candidate.badness.is_bad()) {
break;
}
@@ -2902,7 +3001,7 @@ public:
shard_id shard = 0;
for (auto&& shard_load : node_load.shards) {
lblogger.debug("shard {}: load: {}, tablets: {}, candidates: {}, tables: {}", tablet_replica {host, shard},
node_load.shard_load(shard, _target_tablet_size), shard_load.tablet_count,
node_load.shard_load(shard), shard_load.tablet_count,
shard_load.candidate_count(), shard_load.tablet_count_per_table);
shard++;
}
@@ -3020,7 +3119,7 @@ public:
// When draining nodes, disable convergence checks so that all tablets are migrated away.
bool can_check_convergence = !shuffle && nodes_to_drain.empty();
if (!shuffle && nodes_to_drain.empty()) {
if (can_check_convergence) {
// Check if all nodes reached the same avg_load. There are three sets of nodes: target, candidates (nodes_by_load)
// and off-candidates (removed from nodes_by_load). At any time, the avg_load for target is not greater than
// that of any candidate, and avg_load of any candidate is not greater than that of any in the off-candidates set.
@@ -3030,17 +3129,12 @@ public:
// is tracked in max_off_candidate_load. If max_off_candidate_load is equal to target's avg_load,
// it means that all nodes have equal avg_load. We take the maximum with the current candidate in src_node_info
// to handle the case of off-candidates being empty. In that case, max_off_candidate_load is 0.
if (std::max(max_off_candidate_load, src_node_info.avg_load) == target_info.avg_load) {
const load_type max_load = std::max(max_off_candidate_load, src_node_info.avg_load);
if (is_balanced(target_info.avg_load, max_load)) {
lblogger.debug("Balance achieved.");
_stats.for_dc(dc).stop_balance++;
break;
}
if (!check_convergence(src_node_info, target_info)) {
lblogger.debug("No more candidates. Load would be inverted.");
_stats.for_dc(dc).stop_load_inversion++;
break;
}
}
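The hunk above replaces the old exact-equality test (`max_off_candidate_load == target avg_load`) with a call to `is_balanced()`, whose definition is outside this diff. A minimal sketch of what such a tolerance-based convergence check could look like (the signature and the 5% tolerance are assumptions for illustration):

```cpp
#include <cassert>

using load_type = double;

// Treat two average loads as converged when the relative difference is
// within a small tolerance, rather than requiring exact equality.
bool is_balanced(load_type target_load, load_type max_load,
                 load_type tolerance = 0.05) {
    if (max_load <= target_load) {
        return true; // exact balance, or target already carries more load
    }
    return (max_load - target_load) <= tolerance * max_load;
}
```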
// Pick best target shard.
@@ -3048,13 +3142,13 @@ public:
auto dst = global_shard_id {target, _load_sketch->get_least_loaded_shard(target)};
lblogger.trace("target shard: {}, tablets={}, load={}", dst.shard,
target_info.shards[dst.shard].tablet_count,
target_info.shard_load(dst.shard, _target_tablet_size));
target_info.shard_load(dst.shard));
if (lblogger.is_enabled(seastar::log_level::trace)) {
shard_id shard = 0;
for (auto&& shard_load : target_info.shards) {
lblogger.trace("shard {}: load: {}, tablets: {}, candidates: {}, tables: {}", tablet_replica{dst.host, shard},
target_info.shard_load(shard, _target_tablet_size), shard_load.tablet_count,
target_info.shard_load(shard), shard_load.tablet_count,
shard_load.candidate_count(), shard_load.tablet_count_per_table);
shard++;
}
@@ -3070,7 +3164,6 @@ public:
dst = candidate.dst;
auto& tmap = tmeta.get_tablet_map(source_tablets.table());
// If best candidate is co-located sibling tablets, then convergence is re-checked to avoid oscillations.
if (can_check_convergence && !check_convergence(src_node_info, target_info, source_tablets)) {
lblogger.debug("No more candidates. Load would be inverted.");
_stats.for_dc(dc).stop_load_inversion++;
@@ -3118,7 +3211,7 @@ public:
if (can_accept_load(nodes, mig_streaming_info)) {
apply_load(nodes, mig_streaming_info);
lblogger.debug("Adding migration: {}", mig);
lblogger.debug("Adding migration: {} size: {}", mig, source_tablets.tablet_set_disk_size);
_stats.for_dc(dc).migrations_produced++;
mark_as_scheduled(mig);
plan.add(std::move(mig));
@@ -3240,8 +3333,8 @@ public:
auto shuffle = in_shuffle_mode();
_stats.for_dc(dc).calls++;
lblogger.debug("Examining DC {} rack {} (shuffle={}, balancing={}, tablets_per_shard_goal={})",
dc, rack, shuffle, _tm->tablets().balancing_enabled(), _tablets_per_shard_goal);
lblogger.debug("Examining DC {} rack {} (shuffle={}, balancing={}, tablets_per_shard_goal={}, force_capacity_based_balancing={})",
dc, rack, shuffle, _tm->tablets().balancing_enabled(), _tablets_per_shard_goal, _force_capacity_based_balancing);
const locator::topology& topo = _tm->get_topology();
@@ -3340,60 +3433,54 @@ public:
}
}
_load_sketch = locator::load_sketch(_tm, _table_load_stats, _force_capacity_based_balancing ? _target_tablet_size : 0);
_load_sketch->set_minimal_tablet_size(_minimal_tablet_size);
_load_sketch->set_force_capacity_based_load(_force_capacity_based_balancing);
co_await _load_sketch->populate_dc(dc);
// If we don't have nodes to drain, remove nodes which don't have complete tablet sizes
if (nodes_to_drain.empty()) {
for (auto nodes_i = nodes.begin(); nodes_i != nodes.end();) {
host_id host = nodes_i->first;
if (!_load_sketch->has_complete_data(host)) {
lblogger.info("Node {} does not have complete tablet stats, ignoring", nodes_i->first);
nodes_i = nodes.erase(nodes_i);
} else {
++nodes_i;
}
}
}
plan.set_has_nodes_to_drain(!nodes_to_drain.empty());
// Invariant: node.capacity || node.drained
// Invariant: node.dusage || node.drained
for (auto& [host, node] : nodes) {
if (node.drained) {
continue;
}
if (!node.capacity) {
if (!node.dusage) {
lblogger.info("Cannot balance because capacity of node {} (or more) is unknown", host);
co_return plan;
}
}
// Compute load imbalance.
_total_capacity_shards = 0;
_total_capacity_nodes = 0;
_total_capacity_storage = 0;
load_type max_load = 0;
load_type min_load = 0;
std::optional<host_id> min_load_node = std::nullopt;
for (auto&& [host, load] : nodes) {
load.update(_target_tablet_size);
_stats.for_node(dc, host).load = load.avg_load;
if (!load.drained) {
if (!min_load_node || load.avg_load < min_load) {
min_load = load.avg_load;
min_load_node = host;
// For size based balancing, only excluded nodes are allowed to have incomplete tablet stats
for (auto& [host, node] : nodes) {
if (!_load_sketch->has_complete_data(host)) {
if (!_force_capacity_based_balancing && node.drained && node.node->is_excluded()) {
_load_sketch->ignore_incomplete_data(host);
} else {
lblogger.info("Cannot balance because node {} (or more) has incomplete tablet stats", host);
co_return plan;
}
if (load.avg_load > max_load) {
max_load = load.avg_load;
}
_total_capacity_shards += load.shard_count;
_total_capacity_nodes++;
_total_capacity_storage += *load.capacity;
}
}
for (auto&& [host, load] : nodes) {
size_t read = 0;
size_t write = 0;
for (auto& shard_load : load.shards) {
read += shard_load.streaming_read_load;
write += shard_load.streaming_write_load;
}
auto level = (read + write) > 0 ? seastar::log_level::info : seastar::log_level::debug;
lblogger.log(level, "Node {}: dc={} rack={} load={} tablets={} shards={} tablets/shard={} state={} cap={}"
" stream_read={} stream_write={}",
host, dc, load.rack(), load.avg_load, load.tablet_count, load.shard_count,
load.tablets_per_shard(), load.state(), load.capacity, read, write);
}
if (!min_load_node) {
// Check if we have destination nodes
const bool has_dest_nodes = std::ranges::any_of(std::views::values(nodes), [&] (const auto& load) {
return !load.drained;
});
if (!has_dest_nodes) {
if (!nodes_to_drain.empty()) {
throw std::runtime_error(format("There are nodes with tablets to drain but no candidate nodes in DC {}."
" Consider adding new nodes or reducing replication factor.", dc));
@@ -3415,13 +3502,13 @@ public:
// Compute per-shard load and candidate tablets.
_load_sketch = locator::load_sketch(_tm);
co_await _load_sketch->populate_dc(dc);
_tablet_count_per_table.clear();
_disk_used_per_table.clear();
for (auto&& [table, tables] : _tm->tablets().all_table_groups()) {
const auto& tmap = _tm->tablets().get_tablet_map(table);
uint64_t total_load = 0;
uint64_t total_tablet_count = 0;
uint64_t total_tablet_sizes = 0;
auto get_replicas = [this] (std::optional<tablet_desc> t) -> tablet_replica_set {
return t ? sorted_replicas_for_tablet_load(*t->info, t->transition) : tablet_replica_set{};
@@ -3459,26 +3546,60 @@ public:
if (!nodes.contains(replica.host)) {
continue;
}
utils::small_vector<uint64_t, 2> tablet_sizes;
uint64_t tablet_sizes_sum = 0;
for (auto tid : tids) {
if (_force_capacity_based_balancing) {
tablet_sizes_sum += _target_tablet_size;
tablet_sizes.push_back(_target_tablet_size);
} else {
uint64_t tablet_group_size = 0;
auto token_range = tmap.get_token_range(tid);
for (auto group_member : tables) {
const range_based_tablet_id rb_tid {group_member, token_range};
auto& member_tmap = _tm->tablets().get_tablet_map(group_member);
auto& ti = member_tmap.get_tablet_info(tid);
auto trinfo = member_tmap.get_tablet_transition_info(tid);
auto tablet_size_opt = get_tablet_size(replica.host, rb_tid, ti, trinfo);
const uint64_t tablet_size = std::max(tablet_size_opt.value_or(_target_tablet_size), _minimal_tablet_size);
tablet_group_size += tablet_size;
tablet_sizes_sum += tablet_size;
}
tablet_sizes.push_back(tablet_group_size);
}
}
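The size rule in the loop above can be isolated: a tablet with no measured size falls back to the target tablet size, and the result is clamped from below by the minimal tablet size. A self-contained sketch of that rule (the function name is illustrative; the clamp mirrors the `std::max(tablet_size_opt.value_or(...), ...)` expression in the hunk):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <optional>

// Effective per-tablet size for load accounting: missing measurements fall
// back to the target size, and tiny tablets are rounded up to the minimum.
uint64_t effective_tablet_size(std::optional<uint64_t> measured,
                               uint64_t target_tablet_size,
                               uint64_t minimal_tablet_size) {
    return std::max(measured.value_or(target_tablet_size), minimal_tablet_size);
}
```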
auto& node_load_info = nodes[replica.host];
shard_load& shard_load_info = node_load_info.shards[replica.shard];
if (shard_load_info.tablet_count == 0) {
node_load_info.shards_by_load.push_back(replica.shard);
}
shard_load_info.tablet_count += tids.size();
if (shard_load_info.dusage) {
shard_load_info.dusage->used += tablet_sizes_sum;
}
shard_load_info.tablet_count_per_table[table] += tids.size();
shard_load_info.tablet_sizes_per_table[table] += tablet_sizes_sum;
node_load_info.tablet_count_per_table[table] += tids.size();
total_load += tids.size();
node_load_info.tablet_sizes_per_table[table] += tablet_sizes_sum;
if (node_load_info.dusage) {
node_load_info.dusage->used += tablet_sizes_sum;
}
total_tablet_count += tids.size();
total_tablet_sizes += tablet_sizes_sum;
if (tmap.needs_merge() && tids.size() == 2) {
// Exclude both sibling tablets if either hasn't finished migrating yet. That's to prevent the balancer
// from undoing the colocation.
if (!migrating(t1) && !migrating(t2)) {
auto candidate = colocated_tablets{global_tablet_id{table, t1.tid}, global_tablet_id{table, t2->tid}};
add_candidate(shard_load_info, migration_tablet_set{std::move(candidate)});
add_candidate(shard_load_info, migration_tablet_set{std::move(candidate), tablet_sizes_sum});
}
} else {
for (auto tid : tids) {
if (!migrating(get_table_desc(tid))) { // migrating tablets are not candidates
add_candidate(shard_load_info, migration_tablet_set{global_tablet_id{table, tid}});
if (tids.size() != tablet_sizes.size()) {
on_internal_error(lblogger, "Number of co-located tablets and their sizes don't match.");
}
for (size_t i = 0; i < tids.size(); i++) {
if (!migrating(get_table_desc(tids[i]))) { // migrating tablets are not candidates
add_candidate(shard_load_info, migration_tablet_set{global_tablet_id{table, tids[i]}, tablet_sizes[i]});
}
}
}
@@ -3486,10 +3607,51 @@ public:
return make_ready_future<>();
});
_tablet_count_per_table[table] = total_load;
_disk_used_per_table[table] = total_tablet_sizes;
_tablet_count_per_table[table] = total_tablet_count;
}
if (!nodes_to_drain.empty() || (_tm->tablets().balancing_enabled() && (shuffle || max_load != min_load))) {
// Compute load imbalance.
_total_capacity_shards = 0;
_total_capacity_nodes = 0;
_total_capacity_storage = 0;
load_type max_load = 0;
load_type min_load = 0;
std::optional<host_id> min_load_node = std::nullopt;
for (auto&& [host, load] : nodes) {
load.update();
_stats.for_node(dc, host).load = load.avg_load;
if (!load.drained) {
if (!min_load_node || load.avg_load < min_load) {
min_load = load.avg_load;
min_load_node = host;
}
if (load.avg_load > max_load) {
max_load = load.avg_load;
}
_total_capacity_shards += load.shard_count;
_total_capacity_nodes++;
_total_capacity_storage += load.dusage->capacity;
}
}
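The imbalance scan above walks all non-drained nodes, tracks the minimal and maximal average load, and remembers the least-loaded node as the migration target. A compact sketch of that scan (node keys, `load_type`, and the struct layout are simplified stand-ins, not the real types):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

using load_type = double;

struct node_info { load_type avg_load; bool drained; };

struct imbalance {
    load_type min_load = 0, max_load = 0;
    std::optional<std::string> min_load_node;
};

// Scan non-drained nodes for min/max average load; the least-loaded node
// becomes the preferred migration target.
imbalance compute_imbalance(const std::map<std::string, node_info>& nodes) {
    imbalance r;
    for (const auto& [host, load] : nodes) {
        if (load.drained) {
            continue; // drained nodes are sources only, never targets
        }
        if (!r.min_load_node || load.avg_load < r.min_load) {
            r.min_load = load.avg_load;
            r.min_load_node = host;
        }
        if (load.avg_load > r.max_load) {
            r.max_load = load.avg_load;
        }
    }
    return r;
}
```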
for (auto&& [host, load] : nodes) {
size_t read = 0;
size_t write = 0;
for (auto& shard_load : load.shards) {
read += shard_load.streaming_read_load;
write += shard_load.streaming_write_load;
}
auto level = (read + write) > 0 ? seastar::log_level::info : seastar::log_level::debug;
lblogger.log(level, "Node {}: dc={} rack={} load={} tablets={} shards={} tablets/shard={} state={} cap={}"
" stream_read={} stream_write={}",
host, dc, load.rack(), load.avg_load, load.tablet_count, load.shard_count,
load.tablets_per_shard(), load.state(), load.dusage->capacity, read, write);
}
if (!nodes_to_drain.empty() || (_tm->tablets().balancing_enabled() && (shuffle || !is_balanced(min_load, max_load)))) {
host_id target = *min_load_node;
lblogger.info("target node: {}, avg_load: {}, max: {}", target, min_load, max_load);
plan.merge(co_await make_internode_plan(dc, nodes, nodes_to_drain, target));


@@ -1243,6 +1243,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
// Record the repair_time returned by the repair_tablet rpc call
db_clock::time_point repair_time;
service::session_id session_id;
uint32_t repair_update_compaction_ctrl_retried = 0;
};
std::unordered_map<locator::global_tablet_id, tablet_migration_state> _tablets;
@@ -1836,22 +1837,37 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
break;
case locator::tablet_transition_stage::end_repair: {
if (do_barrier()) {
if (tablet_state.session_id.uuid().is_null()) {
tablet_state.session_id = trinfo.session_id;
}
if (action_failed(tablet_state.repair_update_compaction_ctrl)) {
rtlogger.warn("Failed to perform repair_update_compaction_ctrl for tablet repair tablet_id={}", gid);
_tablets.erase(gid);
updates.emplace_back(get_mutation_builder().del_transition(last_token).build());
break;
rtlogger.warn("Failed to perform repair_update_compaction_ctrl for tablet repair tablet_id={} session_id={} nr_retried={}",
gid, tablet_state.session_id, tablet_state.repair_update_compaction_ctrl_retried);
// Do not erase the tablet from _tablets or delete
// the transitions yet so we can retry the
// repair_update_compaction_ctrl verb
tablet_state.repair_update_compaction_ctrl_retried++;
}
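The behavioral change above: instead of dropping the tablet's transition state on the first RPC failure, the coordinator bumps a retry counter and re-issues the verb on the next pass. A trivial sketch of that bookkeeping (only the `repair_update_compaction_ctrl_retried` field mirrors the patch; the helper is illustrative):

```cpp
#include <cassert>
#include <cstdint>

struct tablet_migration_state_sketch {
    uint32_t repair_update_compaction_ctrl_retried = 0;
};

// On failure, keep the state and count the retry rather than erasing the
// tablet and deleting its transition. Returns true: caller should retry.
bool on_rpc_failure(tablet_migration_state_sketch& st) {
    st.repair_update_compaction_ctrl_retried++;
    return true;
}
```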
bool feature = _feature_service.tablet_incremental_repair;
if (advance_in_background(gid, tablet_state.repair_update_compaction_ctrl, "repair_update_compaction_ctrl", [ms = &_messaging,
gid = gid, sid = tablet_state.session_id, _replicas = tmap.get_tablet_info(gid.tablet).replicas, feature] () -> future<> {
if (advance_in_background(gid, tablet_state.repair_update_compaction_ctrl, "repair_update_compaction_ctrl", [this, ms = &_messaging,
gid = gid, sid = tablet_state.session_id, feature, &tmap] () -> future<> {
if (feature) {
auto replicas = std::move(_replicas);
co_await coroutine::parallel_for_each(replicas, [replicas, ms, gid, sid] (locator::tablet_replica& r) -> future<> {
if (utils::get_local_injector().enter("fail_rpc_repair_update_compaction_ctrl")) {
auto msg = fmt::format("Failed repair_update_compaction_ctrl for tablet repair tablet_id={} session_id={} due to error injection", gid, sid);
rtlogger.info("{}", msg);
throw std::runtime_error(msg);
}
auto& replicas = tmap.get_tablet_info(gid.tablet).replicas;
co_await coroutine::parallel_for_each(replicas, [this, ms, gid, sid] (locator::tablet_replica r) -> future<> {
if (!is_excluded(raft::server_id(r.host.uuid()))) {
co_await ser::repair_rpc_verbs::send_repair_update_compaction_ctrl(ms, r.host, gid, sid);
}
});
}
})) {
if (utils::get_local_injector().enter("log_tablet_transition_stage_end_repair")) {
rtlogger.info("The end_repair stage finished for tablet repair tablet_id={} session_id={}", gid, tablet_state.session_id);
}
_tablets.erase(gid);
updates.emplace_back(get_mutation_builder().del_transition(last_token).build());
}
@@ -1963,26 +1979,39 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
auto leaving = locator::get_leaving_replica(tinfo, trinfo);
auto pending = trinfo.pending_replica;
const dht::token_range trange {tmap.get_token_range(gid.tablet)};
if (leaving && pending) {
switch (trinfo.transition) {
case locator::tablet_transition_kind::migration:
// Handle tablet migration
new_load_stats = old_load_stats->migrate_tablet_size(leaving->host, pending->host, gid, trange);
} else if (!leaving && pending) {
// Handle rebuild: compute the average tablet size of existing replicas
new_load_stats = make_lw_shared<locator::load_stats>(*old_load_stats);
uint64_t tablet_size_sum = 0;
size_t replica_count = 0;
const locator::range_based_tablet_id rb_tid {gid.table, trange};
for (auto r : tinfo.replicas) {
auto tablet_size_opt = new_load_stats->get_tablet_size(r.host, rb_tid);
if (tablet_size_opt) {
tablet_size_sum += *tablet_size_opt;
replica_count++;
break;
case locator::tablet_transition_kind::rebuild:
[[fallthrough]];
case locator::tablet_transition_kind::rebuild_v2:
// Handle rebuild
if (pending && old_load_stats->tablet_stats.contains(pending->host)) {
// Compute the average tablet size of existing replicas
uint64_t tablet_size_sum = 0;
size_t replica_count = 0;
const locator::range_based_tablet_id rb_tid {gid.table, trange};
auto tsi = get_migration_streaming_info(get_token_metadata().get_topology(), tinfo, trinfo);
for (auto& r : tsi.read_from) {
auto tablet_size_opt = old_load_stats->get_tablet_size(r.host, rb_tid);
if (tablet_size_opt) {
tablet_size_sum += *tablet_size_opt;
replica_count++;
}
}
if (replica_count) {
new_load_stats = make_lw_shared<locator::load_stats>(*old_load_stats);
new_load_stats->tablet_stats.at(pending->host).tablet_sizes[gid.table][trange] = tablet_size_sum / replica_count;
}
}
if (replica_count && new_load_stats->tablet_stats.contains(pending->host)) {
new_load_stats->tablet_stats.at(pending->host).tablet_sizes[gid.table][trange] = tablet_size_sum / replica_count;
}
break;
case locator::tablet_transition_kind::repair:
[[fallthrough]];
case locator::tablet_transition_kind::intranode_migration:
break;
}
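The rebuild branch above seeds the pending replica's tablet size with the average over replicas whose size is known, and produces no estimate when none is known. A minimal sketch of that averaging step (function and parameter names are illustrative, not the real `load_stats` API):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Average the known replica sizes; nullopt entries (replicas without
// stats) are skipped. Returns nullopt if no size is known at all,
// guarding the division by zero.
std::optional<uint64_t>
estimate_rebuilt_tablet_size(const std::vector<std::optional<uint64_t>>& replica_sizes) {
    uint64_t sum = 0;
    size_t count = 0;
    for (const auto& sz : replica_sizes) {
        if (sz) {
            sum += *sz;
            count++;
        }
    }
    if (!count) {
        return std::nullopt;
    }
    return sum / count;
}
```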
if (new_load_stats) {
_tablet_allocator.set_load_stats(std::move(new_load_stats));
@@ -2623,7 +2652,10 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
co_await _voter_handler.on_node_removed(replaced_node_id, _as);
}
}
utils::get_local_injector().inject("crash_coordinator_before_stream", [] { abort(); });
utils::get_local_injector().inject("crash_coordinator_before_stream", [] {
rtlogger.info("crash_coordinator_before_stream: aborting");
abort();
});
raft_topology_cmd cmd{raft_topology_cmd::command::stream_ranges};
auto state = node.rs->state;
try {
@@ -3158,7 +3190,8 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
}
case topology_request::rebuild: {
topology_mutation_builder builder(node.guard.write_timestamp());
builder.with_node(node.id)
builder.set_session(session_id(node.guard.new_group0_state_id()))
.with_node(node.id)
.set("node_state", node_state::rebuilding)
.del("topology_request");
co_await update_topology_state(take_guard(std::move(node)), {builder.build(), rtbuilder.build()},


@@ -1696,7 +1696,9 @@ void writer::consume_end_of_stream() {
.map = _collector.get_ext_timestamp_stats()
});
_sst.write_scylla_metadata(_shard, std::move(identifier), std::move(ld_stats), std::move(ts_stats));
_sst.seal_sstable(_cfg.backup).get();
if (!_cfg.leave_unsealed) {
_sst.seal_sstable(_cfg.backup).get();
}
}
uint64_t writer::data_file_position_for_tests() const {


@@ -60,12 +60,17 @@ fmt::formatter<sstables::object_name>::format(const sstables::object_name& n, fm
return fmt::format_to(ctx.out(), "{}", n.str());
}
static shared_ptr<s3::client> make_s3_client(const db::object_storage_endpoint_param& ep, semaphore& memory, std::function<shared_ptr<s3::client>(std::string)> factory) {
auto& epc = ep.get_s3_storage();
return s3::client::make(epc.endpoint, epc.region, epc.iam_role_arn, memory, std::move(factory));
}
class s3_client_wrapper : public sstables::object_storage_client {
shared_ptr<s3::client> _client;
shard_client_factory _cf;
public:
s3_client_wrapper(const std::string& host, s3::endpoint_config_ptr cfg, semaphore& memory, shard_client_factory cf)
: _client(s3::client::make(host, cfg, memory, std::bind_front(&s3_client_wrapper::shard_client, this)))
s3_client_wrapper(const db::object_storage_endpoint_param& ep, semaphore& memory, shard_client_factory cf)
: _client(make_s3_client(ep, memory, std::bind_front(&s3_client_wrapper::shard_client, this)))
, _cf(std::move(cf))
{}
shared_ptr<s3::client> shard_client(std::string host) const {
@@ -98,8 +103,8 @@ public:
return _client->upload_file(std::move(path), name.str(), up, as);
}
future<> update_config(const db::object_storage_endpoint_param& ep) override {
auto s3_cfg = make_lw_shared<s3::endpoint_config>(ep.get_s3_storage().config);
return _client->update_config(std::move(s3_cfg));
auto& epc = ep.get_s3_storage();
return _client->update_config(epc.region, epc.iam_role_arn);
}
future<> close() override {
return _client->close();
@@ -290,9 +295,7 @@ public:
shared_ptr<object_storage_client> sstables::make_object_storage_client(const db::object_storage_endpoint_param& ep, semaphore& memory, shard_client_factory cf) {
if (ep.is_s3_storage()) {
auto& epc = ep.get_s3_storage();
auto s3_cfg = make_lw_shared<s3::endpoint_config>(epc.config);
return seastar::make_shared<s3_client_wrapper>(epc.endpoint, std::move(s3_cfg), memory, std::move(cf));
return seastar::make_shared<s3_client_wrapper>(ep, memory, std::move(cf));
}
if (ep.is_gs_storage()) {
return seastar::make_shared<gs_client_wrapper>(ep, memory, std::move(cf));


@@ -83,6 +83,8 @@ struct sstable_open_config {
bool current_shard_as_sstable_owner = false;
// Do not move the sharding metadata to the sharder, keeping it in the scylla metadata.
bool keep_sharding_metadata = false;
// Allows an unsealed sstable to be loaded; its components must be read from the temporary TOC instead.
bool unsealed_sstable = false;
};
}


@@ -719,7 +719,7 @@ future<> sstable_directory::filesystem_components_lister::cleanup_column_family_
fs::path dirpath = _directory / de->name;
if (dirpath.extension().string() == tempdir_extension) {
dirlog.info("Found temporary sstable directory: {}, removing", dirpath);
futures.push_back(io_check([dirpath = std::move(dirpath)] () { return lister::rmdir(dirpath); }));
futures.push_back(io_check([dirpath = std::move(dirpath)] () { return seastar::recursive_remove_directory(dirpath); }));
}
}
co_return futures;


@@ -836,13 +836,14 @@ future<std::vector<sstring>> sstable::read_and_parse_toc(file f) {
// This is small enough, and well-defined. Easier to just read it all
// at once
future<> sstable::read_toc() noexcept {
future<> sstable::read_toc(sstable_open_config cfg) noexcept {
if (_recognized_components.size()) {
co_return;
}
try {
co_await do_read_simple(component_type::TOC, [&] (version_types v, file f) -> future<> {
auto toc_type = cfg.unsealed_sstable ? component_type::TemporaryTOC : component_type::TOC;
co_await do_read_simple(toc_type, [&] (version_types v, file f) -> future<> {
auto comps = co_await read_and_parse_toc(f);
for (auto& c: comps) {
// accept trailing newlines
@@ -900,8 +901,8 @@ future<std::unordered_map<component_type, file>> sstable::readable_file_for_all_
co_return std::move(files);
}
future<entry_descriptor> sstable::clone(generation_type new_generation) const {
co_await _storage->snapshot(*this, _storage->prefix(), storage::absolute_path::yes, new_generation);
future<entry_descriptor> sstable::clone(generation_type new_generation, bool leave_unsealed) const {
co_await _storage->clone(*this, new_generation, leave_unsealed);
co_return entry_descriptor(new_generation, _version, _format, component_type::TOC, _state);
}
@@ -1725,7 +1726,7 @@ void sstable::disable_component_memory_reload() {
}
future<> sstable::load_metadata(sstable_open_config cfg) noexcept {
co_await read_toc();
co_await read_toc(cfg);
// read scylla-meta after toc. Might need it to parse
// rest (hint extensions)
co_await read_scylla_metadata();
@@ -2122,7 +2123,7 @@ future<> sstable::seal_sstable(bool backup)
_marked_for_deletion = mark_for_deletion::none;
}
if (backup) {
co_await _storage->snapshot(*this, "backups", storage::absolute_path::no);
co_await _storage->snapshot(*this, "backups");
}
}
@@ -2488,9 +2489,9 @@ std::vector<std::pair<component_type, sstring>> sstable::all_components() const
return all;
}
future<> sstable::snapshot(const sstring& dir) const {
future<> sstable::snapshot(const sstring& name) const {
auto lock = co_await get_units(_mutate_sem, 1);
co_await _storage->snapshot(*this, dir, storage::absolute_path::yes);
co_await _storage->snapshot(*this, format("{}/{}", sstables::snapshots_dir, name));
}
future<> sstable::change_state(sstable_state to, delayed_commit_changes* delay_commit) {
@@ -3960,11 +3961,13 @@ class sstable_stream_sink_impl : public sstable_stream_sink {
shared_sstable _sst;
component_type _type;
bool _last_component;
bool _leave_unsealed;
public:
sstable_stream_sink_impl(shared_sstable sst, component_type type, bool last_component)
sstable_stream_sink_impl(shared_sstable sst, component_type type, sstable_stream_sink_cfg cfg)
: _sst(std::move(sst))
, _type(type)
, _last_component(last_component)
, _last_component(cfg.last_component)
, _leave_unsealed(cfg.leave_unsealed)
{}
private:
future<> load_metadata() const {
@@ -4011,10 +4014,12 @@ public:
co_return co_await make_file_output_stream(std::move(f), stream_options);
}
future<shared_sstable> close_and_seal() override {
future<shared_sstable> close() override {
if (_last_component) {
// If we are the last component in a sequence, we can seal the table.
co_await _sst->_storage->seal(*_sst);
if (!_leave_unsealed) {
co_await _sst->_storage->seal(*_sst);
}
co_return std::move(_sst);
}
_sst = {};
@@ -4031,7 +4036,7 @@ public:
}
};
std::unique_ptr<sstable_stream_sink> create_stream_sink(schema_ptr schema, sstables_manager& sstm, const data_dictionary::storage_options& s_opts, sstable_state state, std::string_view component_filename, bool last_component) {
std::unique_ptr<sstable_stream_sink> create_stream_sink(schema_ptr schema, sstables_manager& sstm, const data_dictionary::storage_options& s_opts, sstable_state state, std::string_view component_filename, sstable_stream_sink_cfg cfg) {
auto desc = parse_path(component_filename, schema->ks_name(), schema->cf_name());
auto sst = sstm.make_sstable(schema, s_opts, desc.generation, state, desc.version, desc.format);
@@ -4042,7 +4047,7 @@ std::unique_ptr<sstable_stream_sink> create_stream_sink(schema_ptr schema, sstab
type = component_type::TemporaryTOC;
}
return std::make_unique<sstable_stream_sink_impl>(std::move(sst), type, last_component);
return std::make_unique<sstable_stream_sink_impl>(std::move(sst), type, cfg);
}
generation_type


@@ -109,6 +109,7 @@ struct sstable_writer_config {
size_t promoted_index_auto_scale_threshold;
uint64_t max_sstable_size = std::numeric_limits<uint64_t>::max();
bool backup = false;
bool leave_unsealed = false;
mutation_fragment_stream_validation_level validation_level;
std::optional<db::replay_position> replay_position;
std::optional<int> sstable_level;
@@ -417,8 +418,8 @@ public:
return component_basename(_schema->ks_name(), _schema->cf_name(), _version, _generation, _format, f);
}
component_name get_filename() const {
return component_name(*this, component_type::Data);
component_name get_filename(component_type f = component_type::Data) const {
return component_name(*this, f);
}
component_name toc_filename() const {
@@ -437,7 +438,7 @@ public:
std::vector<std::pair<component_type, sstring>> all_components() const;
future<> snapshot(const sstring& dir) const;
future<> snapshot(const sstring& name) const;
// Delete the sstable by unlinking all sstable files
// Ignores all errors.
@@ -693,7 +694,7 @@ private:
future<> update_info_for_opened_data(sstable_open_config cfg = {});
future<> read_toc() noexcept;
future<> read_toc(sstable_open_config cfg = {}) noexcept;
future<> read_summary() noexcept;
void write_summary() {
@@ -1069,8 +1070,9 @@ public:
future<std::unordered_map<component_type, file>> readable_file_for_all_components() const;
// Clones this sstable with a new generation, under the same location as the original one.
// If leave_unsealed is true, the destination sstable is left unsealed.
// Implementation is specific to the underlying storage.
future<entry_descriptor> clone(generation_type new_generation) const;
future<entry_descriptor> clone(generation_type new_generation, bool leave_unsealed = false) const;
struct lesser_reclaimed_memory {
// comparator class to be used by the _reclaimed set in sstables manager
@@ -1244,13 +1246,18 @@ public:
// closes this component. If this is the last component in a set (see "last_component" in creating method below)
// the table on disk will be sealed.
// Returns sealed sstable if last, or nullptr otherwise.
virtual future<shared_sstable> close_and_seal() = 0;
virtual future<shared_sstable> close() = 0;
virtual future<> abort() = 0;
};
struct sstable_stream_sink_cfg {
bool last_component = false;
bool leave_unsealed = false;
};
// Creates a sink object which can receive a component file sourced from above source object data.
std::unique_ptr<sstable_stream_sink> create_stream_sink(schema_ptr, sstables_manager&, const data_dictionary::storage_options&, sstable_state, std::string_view component_filename, bool last_component);
std::unique_ptr<sstable_stream_sink> create_stream_sink(schema_ptr, sstables_manager&, const data_dictionary::storage_options&, sstable_state, std::string_view component_filename, sstable_stream_sink_cfg cfg);
} // namespace sstables
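The hunk above replaces the trailing `bool last_component` parameter of `create_stream_sink` with a `sstable_stream_sink_cfg` struct, so call sites name their flags instead of passing bare booleans. A minimal Python sketch of the same pattern (all names and the `seal_on_close` semantics are hypothetical, not the actual ScyllaDB API):

```python
from dataclasses import dataclass

@dataclass
class StreamSinkCfg:
    # Mirrors sstable_stream_sink_cfg: both flags default to False,
    # matching the C++ member initializers.
    last_component: bool = False
    leave_unsealed: bool = False

def create_stream_sink(component_filename: str, cfg: StreamSinkCfg) -> dict:
    # Hypothetical stand-in: returns a description of the sink that
    # would be created instead of a real sink object. Sealing on close
    # only makes sense for the last component of a set that is not
    # explicitly left unsealed.
    return {
        "file": component_filename,
        "seal_on_close": cfg.last_component and not cfg.leave_unsealed,
    }

sink = create_stream_sink("md-1-big-Data.db",
                          StreamSinkCfg(last_component=True, leave_unsealed=True))
```

A later option only means adding one more field with a default, without touching existing call sites.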


@@ -11,6 +11,7 @@
#include <unordered_map>
#include "utils/log.hh"
#include "sstables/sstables_manager.hh"
#include "sstables/sstable_directory.hh"
#include "sstables/sstables_registry.hh"
#include "sstables/partition_index_cache.hh"
#include "sstables/sstables.hh"
@@ -20,6 +21,7 @@
#include "gms/feature.hh"
#include "gms/feature_service.hh"
#include "utils/assert.hh"
#include "utils/http.hh"
#include "exceptions/exceptions.hh"
namespace sstables {
@@ -79,7 +81,7 @@ storage_manager::object_storage_endpoint::object_storage_endpoint(db::object_sto
storage_manager::storage_manager(const db::config& cfg, config stm_cfg)
: _object_storage_clients_memory(stm_cfg.object_storage_clients_memory)
, _config_updater(this_shard_id() == 0 ? std::make_unique<config_updater>(cfg, *this) : nullptr)
, _config_updater(std::make_unique<config_updater>(cfg, *this))
{
for (auto& e : cfg.object_storage_endpoints()) {
_object_storage_endpoints.emplace(std::make_pair(e.key(), e));
@@ -137,6 +139,16 @@ future<> storage_manager::update_config(const db::config& cfg) {
auto storage_manager::get_endpoint(const sstring& endpoint) -> object_storage_endpoint& {
auto found = _object_storage_endpoints.find(endpoint);
if (found == _object_storage_endpoints.end() && maybe_legacy_endpoint_name(endpoint)) {
found = _object_storage_endpoints.begin();
while (found != _object_storage_endpoints.end()) {
auto uri = utils::http::parse_simple_url(found->first);
if (uri.host == endpoint) {
break;
}
found++;
}
}
if (found == _object_storage_endpoints.end()) {
smlogger.error("unable to find {} in configured object-storage endpoints", endpoint);
throw std::invalid_argument(format("endpoint {} not found", endpoint));
@@ -159,7 +171,20 @@ sstring storage_manager::get_endpoint_type(sstring endpoint) {
}
bool storage_manager::is_known_endpoint(sstring endpoint) const {
return _object_storage_endpoints.contains(endpoint);
if (_object_storage_endpoints.contains(endpoint)) {
return true;
}
if (maybe_legacy_endpoint_name(endpoint)) {
for (auto ep : _object_storage_endpoints) {
auto uri = utils::http::parse_simple_url(ep.first);
if (uri.host == endpoint) {
return true;
}
}
}
return false;
}
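The fallback above resolves a legacy endpoint name by parsing each configured endpoint URL and comparing its host component against the requested name. A sketch of that lookup in Python, using `urllib.parse` in place of `utils::http::parse_simple_url` (the real code additionally guards the fallback with `maybe_legacy_endpoint_name`, which is omitted here):

```python
from urllib.parse import urlparse

def is_known_endpoint(endpoints, name):
    # Exact match first, mirroring the fast path in
    # storage_manager::is_known_endpoint.
    if name in endpoints:
        return True
    # Legacy fallback: compare the name against the host component of
    # each configured endpoint URL.
    return any(urlparse(url).hostname == name for url in endpoints)

# Endpoints are keyed by their configured URL, as in _object_storage_endpoints.
endpoints = {"https://s3.us-east-1.amazonaws.com": {"type": "s3"}}
```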
std::vector<sstring> storage_manager::endpoints(sstring type) const noexcept {
@@ -170,9 +195,7 @@ std::vector<sstring> storage_manager::endpoints(sstring type) const noexcept {
storage_manager::config_updater::config_updater(const db::config& cfg, storage_manager& sstm)
: action([&sstm, &cfg] () mutable {
return sstm.container().invoke_on_all([&cfg](auto& sstm) -> future<> {
co_await sstm.update_config(cfg);
});
return sstm.update_config(cfg);
})
, observer(cfg.object_storage_endpoints.observe(action.make_observer()))
{}
@@ -405,6 +428,19 @@ future<> sstables_manager::delete_atomically(std::vector<shared_sstable> ssts) {
co_await storage.atomic_delete_complete(std::move(ctx));
}
future<std::unordered_set<sstring>> sstables_manager::take_snapshot(std::vector<shared_sstable> ssts, sstring name) {
std::unordered_set<sstring> table_names;
co_await _dir_semaphore.parallel_for_each(ssts, [&name, &table_names] (sstables::shared_sstable sstable) {
table_names.insert(sstable->component_basename(sstables::component_type::Data));
return io_check([sstable, &name] {
return sstable->snapshot(name);
});
});
co_return table_names;
}
future<> sstables_manager::close() {
_closing = true;
maybe_done();


@@ -248,6 +248,7 @@ public:
}
future<> delete_atomically(std::vector<shared_sstable> ssts);
future<std::unordered_set<sstring>> take_snapshot(std::vector<shared_sstable> ssts, sstring jsondir);
future<lw_shared_ptr<const data_dictionary::storage_options>> init_table_storage(const schema& s, const data_dictionary::storage_options& so);
future<> destroy_table_storage(const data_dictionary::storage_options& so);
future<> init_keyspace_storage(const data_dictionary::storage_options& so, sstring dir);


@@ -50,7 +50,11 @@ class filesystem_storage final : public sstables::storage {
std::optional<std::filesystem::path> _temp_dir; // Valid while the sstable is being created, until sealed
private:
using mark_for_removal = bool_class<class mark_for_removal_tag>;
enum class link_mode {
default_mode,
mark_for_removal,
leave_unsealed,
};
template <typename Comp>
requires std::is_same_v<Comp, component_type> || std::is_same_v<Comp, sstring>
@@ -61,8 +65,7 @@ private:
future<> check_create_links_replay(const sstable& sst, const sstring& dst_dir, generation_type dst_gen, const std::vector<std::pair<sstables::component_type, sstring>>& comps) const;
future<> remove_temp_dir();
virtual future<> create_links(const sstable& sst, const std::filesystem::path& dir) const override;
future<> create_links_common(const sstable& sst, sstring dst_dir, generation_type dst_gen, mark_for_removal mark_for_removal) const;
future<> create_links_common(const sstable& sst, const std::filesystem::path& dir, std::optional<generation_type> dst_gen) const;
future<> create_links_common(const sstable& sst, sstring dst_dir, generation_type dst_gen, link_mode mode) const;
future<> touch_temp_dir(const sstable& sst);
future<> move(const sstable& sst, sstring new_dir, generation_type generation, delayed_commit_changes* delay) override;
future<> rename_new_file(const sstable& sst, sstring from_name, sstring to_name) const;
@@ -83,7 +86,8 @@ public:
{}
virtual future<> seal(const sstable& sst) override;
virtual future<> snapshot(const sstable& sst, sstring dir, absolute_path abs, std::optional<generation_type> gen) const override;
virtual future<> snapshot(const sstable& sst, sstring name) const override;
virtual future<> clone(const sstable& sst, generation_type gen, bool leave_unsealed) const override;
virtual future<> change_state(const sstable& sst, sstable_state state, generation_type generation, delayed_commit_changes* delay) override;
// runs in async context
virtual void open(sstable& sst) override;
@@ -356,8 +360,13 @@ future<> filesystem_storage::check_create_links_replay(const sstable& sst, const
/// \param sst - the sstable to work on
/// \param dst_dir - the destination directory.
/// \param generation - the generation of the destination sstable
/// \param mark_for_removal - mark the sstable for removal after linking it to the destination dst_dir
future<> filesystem_storage::create_links_common(const sstable& sst, sstring dst_dir, generation_type generation, mark_for_removal mark_for_removal) const {
/// \param mode - what will be done after all components were linked
/// mark_for_removal - mark the sstable for removal after linking it to the destination dst_dir
/// leave_unsealed - leaves the destination sstable unsealed
future<> filesystem_storage::create_links_common(const sstable& sst, sstring dst_dir, generation_type generation, link_mode mode) const {
// The link modes are mutually exclusive, so at most one of these flags is set.
bool mark_for_removal = mode == link_mode::mark_for_removal;
bool leave_unsealed = mode == link_mode::leave_unsealed;
sstlog.trace("create_links: {} -> {} generation={} mark_for_removal={}", sst.get_filename(), dst_dir, generation, mark_for_removal);
auto comps = sst.all_components();
co_await check_create_links_replay(sst, dst_dir, generation, comps);
@@ -366,7 +375,11 @@ future<> filesystem_storage::create_links_common(const sstable& sst, sstring dst
co_await sst.sstable_write_io_check(idempotent_link_file, fmt::to_string(sst.filename(component_type::TOC)), std::move(dst));
auto dir = opened_directory(dst_dir);
co_await dir.sync(sst._write_error_handler);
co_await parallel_for_each(comps, [this, &sst, &dst_dir, generation] (auto p) {
co_await parallel_for_each(comps, [this, &sst, &dst_dir, generation, leave_unsealed] (auto p) {
// Skips the linking of TOC file if the destination will be left unsealed.
if (leave_unsealed && p.first == component_type::TOC) {
return make_ready_future<>();
}
auto src = filename(sst, _dir.native(), sst._generation, p.second);
auto dst = filename(sst, dst_dir, generation, p.second);
return sst.sstable_write_io_check(idempotent_link_file, std::move(src), std::move(dst));
@@ -379,9 +392,10 @@ future<> filesystem_storage::create_links_common(const sstable& sst, sstring dst
auto src_temp_toc = filename(sst, _dir.native(), sst._generation, component_type::TemporaryTOC);
co_await sst.sstable_write_io_check(rename_file, std::move(dst_temp_toc), std::move(src_temp_toc));
co_await _dir.sync(sst._write_error_handler);
} else {
} else if (!leave_unsealed) {
// Now that the source sstable is linked to dir, remove
// the TemporaryTOC file at the destination.
// This is bypassed if destination will be left unsealed.
co_await sst.sstable_write_io_check(remove_file, std::move(dst_temp_toc));
}
co_await dir.sync(sst._write_error_handler);
@@ -389,23 +403,17 @@ future<> filesystem_storage::create_links_common(const sstable& sst, sstring dst
sstlog.trace("create_links: {} -> {} generation={}: done", sst.get_filename(), dst_dir, generation);
}
future<> filesystem_storage::create_links_common(const sstable& sst, const std::filesystem::path& dir, std::optional<generation_type> gen) const {
return create_links_common(sst, dir.native(), gen.value_or(sst._generation), mark_for_removal::no);
}
future<> filesystem_storage::create_links(const sstable& sst, const std::filesystem::path& dir) const {
return create_links_common(sst, dir.native(), sst._generation, mark_for_removal::no);
return create_links_common(sst, dir.native(), sst._generation, link_mode::default_mode);
}
future<> filesystem_storage::snapshot(const sstable& sst, sstring dir, absolute_path abs, std::optional<generation_type> gen) const {
std::filesystem::path snapshot_dir;
if (abs) {
snapshot_dir = dir;
} else {
snapshot_dir = _dir.path() / dir;
}
future<> filesystem_storage::snapshot(const sstable& sst, sstring name) const {
std::filesystem::path snapshot_dir = _base_dir.path() / name;
co_await sst.sstable_touch_directory_io_check(snapshot_dir);
co_await create_links_common(sst, snapshot_dir, std::move(gen));
co_await create_links_common(sst, snapshot_dir.native(), sst._generation, link_mode::default_mode);
}
future<> filesystem_storage::clone(const sstable& sst, generation_type gen, bool leave_unsealed) const {
co_await create_links_common(sst, _dir.path().native(), std::move(gen), leave_unsealed ? link_mode::leave_unsealed : link_mode::default_mode);
}
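The `leave_unsealed` path above skips linking the TOC component, since an sstable without its TOC is not considered sealed. A small Python sketch of that clone-by-hard-link behavior (component file names are illustrative, not the real sstable naming scheme):

```python
import os
import tempfile

def clone_components(src_dir, dst_dir, components, leave_unsealed=False):
    # Hard-link every component into the destination directory; skip the
    # TOC when the clone is to be left unsealed, because without a TOC
    # the destination is not a sealed sstable.
    linked = []
    for comp in components:
        if leave_unsealed and comp == "TOC.txt":
            continue
        os.link(os.path.join(src_dir, comp), os.path.join(dst_dir, comp))
        linked.append(comp)
    return linked

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    for name in ("Data.db", "Index.db", "TOC.txt"):
        open(os.path.join(src, name), "w").close()
    linked = clone_components(src, dst, ["Data.db", "Index.db", "TOC.txt"],
                              leave_unsealed=True)
    cloned_files = sorted(os.listdir(dst))
```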
future<> filesystem_storage::move(const sstable& sst, sstring new_dir, generation_type new_generation, delayed_commit_changes* delay_commit) {
@@ -413,7 +421,7 @@ future<> filesystem_storage::move(const sstable& sst, sstring new_dir, generatio
sstring old_dir = _dir.native();
sstlog.debug("Moving {} old_generation={} to {} new_generation={} do_sync_dirs={}",
sst.get_filename(), sst._generation, new_dir, new_generation, delay_commit == nullptr);
co_await create_links_common(sst, new_dir, new_generation, mark_for_removal::yes);
co_await create_links_common(sst, new_dir, new_generation, link_mode::mark_for_removal);
co_await change_dir(new_dir);
generation_type old_generation = sst._generation;
co_await coroutine::parallel_for_each(sst.all_components(), [&sst, old_generation, old_dir] (auto p) {
@@ -598,7 +606,8 @@ public:
{}
future<> seal(const sstable& sst) override;
future<> snapshot(const sstable& sst, sstring dir, absolute_path abs, std::optional<generation_type>) const override;
future<> snapshot(const sstable& sst, sstring name) const override;
future<> clone(const sstable& sst, generation_type gen, bool leave_unsealed) const override;
future<> change_state(const sstable& sst, sstable_state state, generation_type generation, delayed_commit_changes* delay) override;
// runs in async context
void open(sstable& sst) override;
@@ -815,11 +824,16 @@ future<> object_storage_base::unlink_component(const sstable& sst, component_typ
}
}
future<> object_storage_base::snapshot(const sstable& sst, sstring dir, absolute_path abs, std::optional<generation_type> gen) const {
future<> object_storage_base::snapshot(const sstable& sst, sstring name) const {
on_internal_error(sstlog, "Snapshotting S3 objects not implemented");
co_return;
}
future<> object_storage_base::clone(const sstable& sst, generation_type gen, bool leave_unsealed) const {
on_internal_error(sstlog, "Cloning S3 objects not implemented");
co_return;
}
std::unique_ptr<sstables::storage> make_storage(sstables_manager& manager, const data_dictionary::storage_options& s_opts, sstable_state state) {
return std::visit(overloaded_functor {
[state] (const data_dictionary::storage_options::local& loc) mutable -> std::unique_ptr<sstables::storage> {


@@ -95,11 +95,11 @@ class storage {
public:
virtual ~storage() {}
using absolute_path = bool_class<class absolute_path_tag>; // FIXME -- should go away eventually
using sync_dir = bool_class<struct sync_dir_tag>; // meaningful only to filesystem storage
virtual future<> seal(const sstable& sst) = 0;
virtual future<> snapshot(const sstable& sst, sstring dir, absolute_path abs, std::optional<generation_type> gen = {}) const = 0;
virtual future<> snapshot(const sstable& sst, sstring name) const = 0;
virtual future<> clone(const sstable& sst, generation_type gen, bool leave_unsealed) const = 0;
virtual future<> change_state(const sstable& sst, sstable_state to, generation_type generation, delayed_commit_changes* delay) = 0;
// runs in async context
virtual void open(sstable& sst) = 0;


@@ -63,30 +63,45 @@ mutation_reader_consumer make_streaming_consumer(sstring origin,
}
schema_ptr s = reader.schema();
// The SSTable will only be sealed when it is added to the sstable set, so we make sure
// unsplit sstables aren't left sealed in the table directory.
auto cfg = cf->get_sstables_manager().configure_writer(origin);
cfg.leave_unsealed = true;
return sst->write_components(std::move(reader), adjusted_estimated_partitions, s,
cfg, encoding_stats{}).then([sst] {
return sst->open_data();
}).then([cf, sst, offstrategy, origin, repaired_at, sstable_list_to_mark_as_repaired, frozen_guard] -> future<> {
if (repaired_at && sstables::repair_origin == origin) {
sst->being_repaired = frozen_guard;
if (sstable_list_to_mark_as_repaired) {
sstable_list_to_mark_as_repaired->insert(sst);
}).then([cf, sst, offstrategy, origin, repaired_at, sstable_list_to_mark_as_repaired, frozen_guard, cfg] -> future<std::vector<sstables::shared_sstable>> {
auto on_add = [sst, origin, repaired_at, sstable_list_to_mark_as_repaired, frozen_guard, cfg] (sstables::shared_sstable loading_sst) -> future<> {
if (repaired_at && sstables::repair_origin == origin) {
loading_sst->being_repaired = frozen_guard;
if (sstable_list_to_mark_as_repaired) {
sstable_list_to_mark_as_repaired->insert(loading_sst);
}
}
}
if (loading_sst == sst) {
co_await loading_sst->seal_sstable(cfg.backup);
}
co_return;
};
if (offstrategy && sstables::repair_origin == origin) {
sstables::sstlog.debug("Enabled automatic off-strategy trigger for table {}.{}",
cf->schema()->ks_name(), cf->schema()->cf_name());
cf->enable_off_strategy_trigger();
}
co_await cf->add_sstable_and_update_cache(sst, offstrategy);
}).then([cf, s, sst, use_view_update_path, &vb, &vbw]() mutable -> future<> {
co_return co_await cf->add_new_sstable_and_update_cache(sst, on_add, offstrategy);
}).then([cf, s, sst, use_view_update_path, &vb, &vbw] (std::vector<sstables::shared_sstable> new_sstables) mutable -> future<> {
auto& vb_ = vb;
auto new_sstables_ = std::move(new_sstables);
auto table = cf;
if (use_view_update_path == db::view::sstable_destination_decision::staging_managed_by_vbc) {
return vbw.local().register_staging_sstable_tasks({sst}, cf->schema()->id());
co_return co_await vbw.local().register_staging_sstable_tasks(new_sstables_, cf->schema()->id());
} else if (use_view_update_path == db::view::sstable_destination_decision::staging_directly_to_generator) {
return vb.local().register_staging_sstable(sst, std::move(cf));
co_await coroutine::parallel_for_each(new_sstables_, [&vb_, &table] (sstables::shared_sstable sst) -> future<> {
return vb_.local().register_staging_sstable(sst, table);
});
}
return make_ready_future<>();
co_return;
});
};
if (!offstrategy) {


@@ -52,8 +52,16 @@ static future<> load_sstable_for_tablet(const file_stream_id& ops_id, replica::d
auto erm = t.get_effective_replication_map();
auto& sstm = t.get_sstables_manager();
auto sst = sstm.make_sstable(t.schema(), t.get_storage_options(), desc.generation, state, desc.version, desc.format);
co_await sst->load(erm->get_sharder(*t.schema()));
co_await t.add_sstable_and_update_cache(sst);
sstables::sstable_open_config cfg { .unsealed_sstable = true };
co_await sst->load(erm->get_sharder(*t.schema()), cfg);
auto on_add = [sst, &sstm] (sstables::shared_sstable loading_sst) -> future<> {
if (loading_sst == sst) {
auto cfg = sstm.configure_writer(sst->get_origin());
co_await loading_sst->seal_sstable(cfg.backup);
}
co_return;
};
auto new_sstables = co_await t.add_new_sstable_and_update_cache(sst, on_add);
blogger.info("stream_sstables[{}] Loaded sstable {} successfully", ops_id, sst->toc_filename());
if (state == sstables::sstable_state::staging) {
@@ -64,7 +72,7 @@ static future<> load_sstable_for_tablet(const file_stream_id& ops_id, replica::d
// so then, the view building coordinator can decide to process it once the migration
// is finished.
// (Instead of registering the sstable to view update generator which may process it immediately.)
co_await sharded_vbw.local().register_staging_sstable_tasks({sst}, t.schema()->id());
co_await sharded_vbw.local().register_staging_sstable_tasks(new_sstables, t.schema()->id());
}
});
}
@@ -343,7 +351,11 @@ future<> stream_blob_handler(replica::database& db, db::view::view_building_work
auto& table = db.find_column_family(meta.table);
auto& sstm = table.get_sstables_manager();
auto sstable_sink = sstables::create_stream_sink(table.schema(), sstm, table.get_storage_options(), sstable_state(meta), meta.filename, meta.fops == file_ops::load_sstables);
// The SSTable will only be sealed when it is added to the sstable set, so we make sure
// unsplit sstables aren't left sealed in the table directory.
sstables::sstable_stream_sink_cfg cfg { .last_component = meta.fops == file_ops::load_sstables,
.leave_unsealed = true };
auto sstable_sink = sstables::create_stream_sink(table.schema(), sstm, table.get_storage_options(), sstable_state(meta), meta.filename, cfg);
auto out = co_await sstable_sink->output(foptions, stream_options);
co_return output_result{
[sstable_sink = std::move(sstable_sink), &meta, &db, &vbw](store_result res) -> future<> {
@@ -351,7 +363,7 @@ future<> stream_blob_handler(replica::database& db, db::view::view_building_work
co_await sstable_sink->abort();
co_return;
}
auto sst = co_await sstable_sink->close_and_seal();
auto sst = co_await sstable_sink->close();
if (sst) {
blogger.debug("stream_sstables[{}] Loading sstable {} on shard {}", meta.ops_id, sst->toc_filename(), meta.dst_shard_id);
auto desc = sst->get_descriptor(sstables::component_type::TOC);


@@ -6,9 +6,6 @@ import run
import os
import requests
import time
import cassandra.cluster
import cassandra.auth
# When tests are to be run against AWS (the "--aws" option), it is not
# necessary to start Scylla at all. All we need to do is to run pytest.


@@ -33,7 +33,6 @@ import botocore
import gzip
import requests
import pytest
from contextlib import contextmanager
from .util import random_string
from .test_manual_requests import get_signed_request


@@ -21,7 +21,7 @@ from functools import cache
import re
from .util import is_aws, unique_table_name, random_string, new_test_table
from .util import unique_table_name, random_string, new_test_table
from .test_gsi_updatetable import wait_for_gsi, wait_for_gsi_gone
from .test_gsi import assert_index_query


@@ -17,8 +17,10 @@
import pytest
from botocore.exceptions import ClientError
import re
import time, datetime
from test.alternator.util import multiset, create_test_table, new_test_table
import time
from test.alternator.util import multiset, new_test_table, random_string, scylla_config_temporary
import requests
# Test that DescribeTable correctly returns the table's name and state
def test_describe_table_basic(test_table):
@@ -75,7 +77,7 @@ def test_describe_table_creation_time(dynamodb):
with new_test_table(dynamodb, **schema) as table1:
# let's sleep a few ms so table2's creation time is always bigger, as we now return CreationDateTime in ms precision
time.sleep(0.002)
time.sleep(0.002)
with new_test_table(dynamodb, **schema) as table2:
got1 = table1.meta.client.describe_table(TableName=table1.name)['Table']
got2 = table2.meta.client.describe_table(TableName=table2.name)['Table']
@@ -114,10 +116,54 @@ def test_describe_table_item_count(test_table):
# Similar test for estimated size in bytes - TableSizeBytes - which again,
# may reflect the size as long as six hours ago.
@pytest.mark.xfail(reason="DescribeTable does not return table size")
def test_describe_table_size(test_table):
got = test_table.meta.client.describe_table(TableName=test_table.name)['Table']
assert 'TableSizeBytes' in got
assert got['TableSizeBytes'] >= 0
# This is a scylla-only test - it uses Scylla's configuration option to stabilize the test.
# We also don't have any guarantees about the sizes returned by DynamoDB.
@pytest.mark.parametrize("cache_validity_in_seconds", [0, 3600])
def test_describe_table_size_with_N_timeout(scylla_only, dynamodb, rest_api, cache_validity_in_seconds):
'''
This tests a side effect of how DescribeTable works in ScyllaDB - it caches the
calculated values. The test is parametrized with a cache validity of either 0
(caching effectively disabled) or 1 hour (long enough to cover the whole test
duration). We call DescribeTable twice; with the 1-hour validity the second
value must equal the first due to the cache, while with validity 0 it must
reflect the newly written data.
'''
schema = {
'KeySchema': [ { 'AttributeName': 'p', 'KeyType': 'HASH' },
{ 'AttributeName': 'c', 'KeyType': 'RANGE' }
],
'AttributeDefinitions': [
{ 'AttributeName': 'p', 'AttributeType': 'S' },
{ 'AttributeName': 'c', 'AttributeType': 'S' },
],
}
with scylla_config_temporary(dynamodb, 'alternator_describe_table_info_cache_validity_in_seconds', str(cache_validity_in_seconds)):
with new_test_table(dynamodb, **schema) as test_table:
got1 = test_table.meta.client.describe_table(TableName=test_table.name)['Table']
assert 'TableSizeBytes' in got1
assert got1['TableSizeBytes'] >= 0
p = random_string()
c = random_string()
v = random_string()
test_table.put_item(Item={'p': p, 'c': c, 'v': v})
ks = 'alternator_' + test_table.name
cf = test_table.name
# We need to flush memtables to make sure the size is updated, as the current
# implementation calculates the size based on sstables only.
response = requests.post(rest_api+f'/storage_service/keyspace_flush/{ks}', params={'cf' : cf})
assert response.ok
got2 = test_table.meta.client.describe_table(TableName=test_table.name)['Table']
assert 'TableSizeBytes' in got2
if cache_validity_in_seconds == 0:
assert got2['TableSizeBytes'] > got1['TableSizeBytes']
else:
assert got2['TableSizeBytes'] == got1['TableSizeBytes']
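The cached behavior this test exercises can be sketched as a simple validity-window cache; this is a hypothetical stand-in, not Scylla's implementation:

```python
import time

class CachedValue:
    # Minimal sketch of a validity-window cache like the one controlled by
    # alternator_describe_table_info_cache_validity_in_seconds: a cached
    # result is recomputed only once the validity window has elapsed.
    def __init__(self, compute, validity_seconds):
        self._compute = compute
        self._validity = validity_seconds
        self._value = None
        self._stamp = None

    def get(self):
        now = time.monotonic()
        if self._stamp is None or now - self._stamp >= self._validity:
            self._value = self._compute()
            self._stamp = now
        return self._value

size = {"v": 100}
cache = CachedValue(lambda: size["v"], validity_seconds=3600)
first = cache.get()
size["v"] = 200                # underlying size grows, like after the flush
second = cache.get()           # still the cached value: within the window
uncached = CachedValue(lambda: size["v"], validity_seconds=0).get()
```

With validity 0 every `get()` recomputes, matching the `cache_validity_in_seconds == 0` branch of the test.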
# Test the ProvisionedThroughput attribute returned by DescribeTable.
# This is a very partial test: Our test table is configured without


@@ -9,7 +9,6 @@
# various cases of that issue.
import pytest
import time
from botocore.exceptions import ClientError
from .util import random_string, full_scan, full_query, multiset, \
new_test_table, wait_for_gsi, wait_for_gsi_gone


@@ -8,8 +8,7 @@
import pytest
from botocore.exceptions import ClientError
from decimal import Decimal
from test.alternator.util import random_string, random_bytes, new_test_table
from test.alternator.util import new_test_table
# When creating a table with PROVISIONED billing mode, ProvisionedThroughput must be explicitly set,
# and the same values should be reflected when the table is described.


@@ -7,7 +7,6 @@
import pytest
from botocore.exceptions import ClientError
from test.alternator.util import random_string, random_bytes, new_test_table
import decimal
from decimal import Decimal
KB = 1024


@@ -155,11 +155,11 @@ def test_delete_item_returnvalues(test_table_s):
# Other ReturnValues options - UPDATED_OLD, ALL_NEW, UPDATED_NEW -
# are supported by other operations but not by DeleteItem:
with pytest.raises(ClientError, match='ValidationException'):
test_table_s.delete_item(Key={'p': p}, ReturnValues='UPDATE_OLD')
test_table_s.delete_item(Key={'p': p}, ReturnValues='UPDATED_OLD')
with pytest.raises(ClientError, match='ValidationException'):
test_table_s.delete_item(Key={'p': p}, ReturnValues='ALL_NEW')
with pytest.raises(ClientError, match='ValidationException'):
test_table_s.delete_item(Key={'p': p}, ReturnValues='UPDATE_NEW')
test_table_s.delete_item(Key={'p': p}, ReturnValues='UPDATED_NEW')
# Also, obviously, a non-supported setting "DOG" results in an error:
with pytest.raises(ClientError, match='ValidationException'):
test_table_s.delete_item(Key={'p': p}, ReturnValues='DOG')
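The server-side check this test exercises amounts to validating the argument against a per-operation allow-list; for DeleteItem that list is NONE and ALL_OLD. A hedged sketch (function and constant names are illustrative, not the executor.cc identifiers):

```python
# Per-operation allow-list; DeleteItem accepts only these two values.
DELETE_ITEM_RETURN_VALUES = {"NONE", "ALL_OLD"}

def validate_delete_item_return_values(rv: str) -> None:
    # Raise the equivalent of a DynamoDB ValidationException for any
    # value outside the DeleteItem allow-list (UPDATED_OLD, ALL_NEW,
    # UPDATED_NEW, or nonsense like "DOG").
    if rv not in DELETE_ITEM_RETURN_VALUES:
        raise ValueError(
            f"ValidationException: ReturnValues {rv} not supported by DeleteItem")
```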


@@ -5,7 +5,6 @@
# Tests for the Scan operation
import pytest
import time
from boto3.dynamodb.conditions import Attr
from botocore.exceptions import ClientError


@@ -5,13 +5,9 @@
import pytest
from test.alternator.test_metrics import metrics, get_metrics, check_increases_metric
from contextlib import contextmanager
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT, ConsistencyLevel
from cassandra.policies import RoundRobinPolicy
import time
import re
from .util import random_string, is_aws, unique_table_name
from .util import random_string, unique_table_name
from .test_cql_rbac import new_role, new_dynamodb
# new_service_level() is a context manager for temporarily creating a new


@@ -11,7 +11,7 @@ import requests
from botocore.exceptions import ClientError
from boto3.dynamodb.conditions import Key
from .util import full_scan, scylla_config_read, scylla_config_write, scylla_config_temporary
from .util import full_scan, scylla_config_read, scylla_config_temporary
internal_prefix = '.scylla.alternator.'


@@ -14,7 +14,7 @@ from re import fullmatch
import pytest
from botocore.exceptions import ClientError
from test.alternator.util import list_tables, multiset, unique_table_name, create_test_table, random_string, new_test_table, is_aws, scylla_config_read
from .util import list_tables, unique_table_name, create_test_table, random_string, new_test_table, is_aws, scylla_config_read
# Utility function for create a table with a given name and some valid


@@ -14,10 +14,9 @@
# will probably go away eventually.
import pytest
import boto3
from botocore.exceptions import ClientError
from .util import new_test_table, wait_for_gsi, random_string, full_scan, full_query, multiset, scylla_config_read, scylla_config_temporary
from .util import new_test_table, scylla_config_read, scylla_config_temporary
# All tests in this file are scylla-only
@pytest.fixture(scope="function", autouse=True)


@@ -998,7 +998,7 @@ def test_transact_get_items_projection_expression(test_table_s):
# ProjectionExpression also supports ExpressionAttributeNames.
@pytest.mark.xfail(reason="#5064 - transactions not yet supported")
def test_transact_get_items_projection_expression(test_table_s):
def test_transact_get_items_projection_expression_attribute_names(test_table_s):
p = random_string()
item = {'p': p, 'x': 1, 'y': 2, 'z': 3}
test_table_s.put_item(Item=item)


@@ -8,7 +8,6 @@ import string
import random
import collections
import time
import re
import requests
import json
import pytest


@@ -365,6 +365,7 @@ add_scylla_test(combined_tests
schema_registry_test.cc
secondary_index_test.cc
sessions_test.cc
simple_value_with_expiry_test.cc
sstable_compaction_test.cc
sstable_compressor_factory_test.cc
sstable_compression_config_test.cc


@@ -110,7 +110,7 @@ public:
virtual compaction::compaction_strategy_state& get_compaction_strategy_state() noexcept override { return _compaction_strategy_state; }
virtual reader_permit make_compaction_reader_permit() const override { return _semaphore.make_permit(); }
virtual sstables::sstables_manager& get_sstables_manager() noexcept override { return _sst_man; }
virtual sstables::shared_sstable make_sstable() const override { return _sstable_factory(); }
virtual sstables::shared_sstable make_sstable(sstables::sstable_state) const override { return _sstable_factory(); }
virtual sstables::sstable_writer_config configure_writer(sstring origin) const override { return _sst_man.configure_writer(std::move(origin)); }
virtual api::timestamp_type min_memtable_timestamp() const override { return api::min_timestamp; }
virtual api::timestamp_type min_memtable_live_timestamp() const override { return api::min_timestamp; }


@@ -387,4 +387,27 @@ SEASTAR_TEST_CASE(select_from_vector_indexed_table) {
enable_tablets(db_config_with_auth()));
}
SEASTAR_TEST_CASE(select_from_vector_search_system_table) {
return do_with_cql_env_thread(
[](auto&& env) {
create_user_if_not_exists(env, bob);
with_user(env, bob, [&env] {
BOOST_REQUIRE_EXCEPTION(env.execute_cql("SELECT * FROM system.group0_history").get(), exceptions::unauthorized_exception,
exception_predicate::message_contains("User bob has none of the permissions (VECTOR_SEARCH_INDEXING, SELECT) on"));
});
with_user(env, bob, [&env] {
BOOST_REQUIRE_EXCEPTION(env.execute_cql("SELECT * FROM system.versions").get(), exceptions::unauthorized_exception,
exception_predicate::message_contains("User bob has none of the permissions (VECTOR_SEARCH_INDEXING, SELECT) on"));
});
cquery_nofail(env, "GRANT VECTOR_SEARCH_INDEXING ON ALL KEYSPACES TO bob");
with_user(env, bob, [&env] {
cquery_nofail(env, "SELECT * FROM system.group0_history");
});
with_user(env, bob, [&env] {
cquery_nofail(env, "SELECT * FROM system.versions");
});
},
db_config_with_auth());
}
BOOST_AUTO_TEST_SUITE_END()


@@ -152,9 +152,14 @@ SEASTAR_TEST_CASE(test_inject_future_disabled) {
utils::error_injection<true> errinj;
auto start_time = steady_clock::now();
return errinj.inject("futid", sleep_msec).then([start_time] {
static constexpr milliseconds long_sleep_msec(10000);
return errinj.inject("futid", long_sleep_msec).then([start_time] {
auto wait_time = steady_clock::now() - start_time;
BOOST_REQUIRE_LT(wait_time, sleep_msec);
// Because the injection "futid" was not enabled, we expect the
// sleep to have not happened. If we measure the time that passed,
// it's obviously not zero (especially in a slow debug build on a
// busy test machine), but certainly not the full long_sleep_msec.
BOOST_REQUIRE_LT(wait_time, long_sleep_msec);
return make_ready_future<>();
});
}
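The fixed test's reasoning (a disabled injection returns immediately instead of sleeping, so the elapsed time stays far below the requested sleep) can be sketched in Python; this is an illustrative stand-in, not the `utils::error_injection` API:

```python
import time

class ErrorInjection:
    # Minimal sketch: inject_sleep() sleeps only when the named injection
    # has been enabled, mirroring the disabled-injection test above.
    def __init__(self):
        self._enabled = set()

    def enable(self, name):
        self._enabled.add(name)

    def inject_sleep(self, name, seconds):
        if name in self._enabled:
            time.sleep(seconds)

inj = ErrorInjection()
start = time.monotonic()
inj.inject_sleep("futid", 10.0)   # not enabled: returns immediately
elapsed = time.monotonic() - start
```

The elapsed time is not exactly zero on a busy machine, but it is certainly nowhere near the full ten seconds, which is all the test asserts.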


@@ -234,13 +234,13 @@ SEASTAR_THREAD_TEST_CASE(test_load_sketch) {
     std::vector<unsigned> node3_shards(node3_shard_count, 0);
     for (unsigned i = 0; i < node1_shard_count * 3; ++i) {
-        node1_shards[load.next_shard(host1)] += 1;
+        node1_shards[load.next_shard(host1, 1, service::default_target_tablet_size)] += 1;
     }
     for (unsigned i = 0; i < node2_shard_count * 3; ++i) {
-        node2_shards[load.next_shard(host2)] += 1;
+        node2_shards[load.next_shard(host2, 1, service::default_target_tablet_size)] += 1;
     }
     for (unsigned i = 0; i < node3_shard_count * 3; ++i) {
-        node3_shards[load.next_shard(host3)] += 1;
+        node3_shards[load.next_shard(host3, 1, service::default_target_tablet_size)] += 1;
     }
     for (unsigned i = 1; i < node1_shard_count; ++i) {
for (unsigned i = 1; i < node1_shard_count; ++i) {
@@ -300,7 +300,7 @@ SEASTAR_THREAD_TEST_CASE(test_load_sketch) {
     // host3 has max shard load of 3 and 3 shards, and 4 tablets allocated.
     // So to achieve even load we need to allocate 3 * 3 - 4 = 5 more tablets.
     for (int i = 0; i < 5; ++i) {
-        auto s = load.next_shard(host3);
+        auto s = load.next_shard(host3, 1, service::default_target_tablet_size);
         node3_shards[s] += 1;
     }


@@ -7,7 +7,9 @@
*/
#include "utils/assert.hh"
#include <seastar/core/sstring.hh>
#include <fmt/ranges.h>
#include <fmt/format.h>
#include <seastar/core/future.hh>
#include <seastar/testing/test_case.hh>
@@ -27,7 +29,7 @@ namespace fs = std::filesystem;
using namespace sstables;
using namespace tests;
#if 1
static future<> create_file_of_size(fs::path file, size_t dest_size) {
    auto f = co_await seastar::open_file_dma(file.string(), open_flags::wo|open_flags::create);
    auto os = co_await make_file_output_stream(std::move(f));
@@ -46,7 +48,7 @@ static future<> create_file_of_size(fs::path file, size_t dest_size) {
    co_await os.flush();
    co_await os.close();
}
#endif
static future<> compare_streams(input_stream<char>& is1, input_stream<char>& is2, size_t total) {
    uint64_t read = 0;
    while (!is1.eof()) {
@@ -58,6 +60,7 @@ static future<> compare_streams(input_stream<char>& is1, input_stream<char>& is2
        BOOST_REQUIRE_EQUAL(buf, buf2);
        read += buf.size();
    }
    BOOST_REQUIRE((co_await is1.read()).empty());
    BOOST_REQUIRE((co_await is2.read()).empty());
    BOOST_REQUIRE_EQUAL(read, total);
}
@@ -70,8 +73,10 @@ future<> test_file_upload(test_env_config cfg, size_t size) {
     auto client = env.manager().get_endpoint_client(ep);
     tmpdir tmp;
-    auto path = tmp.path() / "testfile";
-    object_name name(bucket, "testfile");
+    // For multiple tests that may run in CI at the same time, we want a different file to be used by each test.
+    sstring test_file_name = fmt::format("testfile-{}-{}", std::chrono::system_clock::now().time_since_epoch().count(), tests::random::get_int(0, 1000000));
+    auto path = tmp.path() / test_file_name;
+    object_name name(bucket, test_file_name);
     utils::upload_progress up;
     create_file_of_size(path, size).get();
