Commit Graph

47449 Commits

Benny Halevy
f8d5835cab query_processor: remote: use named gate
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-12 11:28:48 +03:00
Benny Halevy
747ae5e1c4 compaction: compaction_state: use named gate
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-12 11:28:48 +03:00
Benny Halevy
879811e0d2 alternator/server: use named_gate
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-12 11:28:48 +03:00
David Garcia
cf11d5eb69 fix: openapi not rendering in docs.scylladb.com/manual
Closes scylladb/scylladb#23686
2025-04-10 17:47:58 +03:00
Patryk Jędrzejczak
07a7a75b98 Merge 'raft: implement the limited voters feature' from Emil Maskovsky
Currently, if raft is enabled, all nodes are voters in group0. However, it is not necessary for all nodes to be voters - it only slows down raft group operation (since the quorum is large) and makes deployments with asymmetrical DCs problematic (2 DCs with 5 nodes each alongside 1 DC with 10 nodes will lose the majority if the large DC is isolated).

The topology coordinator will now maintain a state where there is only a limited number of voters, evenly distributed across the DCs and racks.

After each node addition or removal the voters are recalculated and rebalanced if necessary. That means:
* When a new node is added, it might become a voter depending on the current distribution of voters - either if there are still some voter "slots" available, or if the new node is a better candidate than some existing voter (in which case the existing node's voter status might be revoked).
* When a voter node is removed or stopped (shut down), its voter status is revoked and another node might become a voter instead (this can also depend on other circumstances, e.g. a change in the number of DCs).
* If a node addition or removal causes a change in the number of data centers (DCs) or racks, the rebalancing might be broader (special rules apply for 1 vs. 2 vs. more DCs, and changing the number of racks can similarly affect the voter distribution).

Special conditions for various number of DCs:
* 1 DC: Can have up to the maximum allowed number of voters (5 - see below)
* 2 DCs: The distribution of the voters will be asymmetric (if possible), meaning that we can tolerate the loss of the DC with the smaller number of voters (if both had the same number of voters, we'd lose the majority if either DC were lost). For example, if we have 2 DCs with 2 nodes each, one of them will only have 1 voter (despite the limit of 5). Also, if one of the 2 DCs has more racks than the other and the node count allows it, the DC with more racks will have more voters.
* 3 and more DCs: The distribution of the voters will be such that every DC has strictly less than half of the total voters (so the loss of any single DC cannot lead to the loss of majority). Again, DCs with more racks are preferred in the voter distribution.
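
To make these rules concrete, here is a minimal, self-contained sketch of the per-DC assignment (illustrative only -- not the actual group0_voter_registry implementation, and ignoring rack awareness):

```
// Illustrative sketch of the voter-distribution rules above -- not the
// actual group0_voter_registry code, and ignoring rack awareness.
#include <cstddef>
#include <cstdio>
#include <vector>

std::vector<int> distribute_voters(const std::vector<int>& nodes_per_dc,
                                   int max_voters = 5) {
    const std::size_t n_dcs = nodes_per_dc.size();
    std::vector<int> voters(n_dcs, 0);
    int total = 0;
    // Round-robin one voter per DC at a time, bounded by node counts.
    for (bool assigned = true; assigned && total < max_voters;) {
        assigned = false;
        for (std::size_t dc = 0; dc < n_dcs && total < max_voters; ++dc) {
            if (voters[dc] < nodes_per_dc[dc]) {
                ++voters[dc];
                ++total;
                assigned = true;
            }
        }
    }
    if (n_dcs == 2 && total > 0 && total % 2 == 0) {
        // 2 DCs: keep the split asymmetric, so losing the DC with fewer
        // voters preserves the majority.
        --voters[voters[0] >= voters[1] ? 0 : 1];
        --total;
    }
    if (n_dcs >= 3) {
        // 3+ DCs: every DC must hold strictly less than half the voters.
        for (bool changed = true; changed;) {
            changed = false;
            for (std::size_t dc = 0; dc < n_dcs; ++dc) {
                if (voters[dc] > 0 && 2 * voters[dc] >= total) {
                    --voters[dc];
                    --total;
                    changed = true;
                }
            }
        }
    }
    return voters;
}

int main() {
    // 2 DCs with 2 nodes each: expect an asymmetric split (3 voters total).
    for (int v : distribute_voters({2, 2})) std::printf("%d ", v);
    std::printf("\n");
}
```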

At the moment we handle zero-token nodes in the same way as regular nodes (i.e. zero-token nodes do not take any priority in the voter distribution). Technically it doesn't make much sense to have a zero-token node that is not a voter (when there are regular nodes in the same DC that are voters), but currently the intended purpose of zero-token nodes is to form an "arbiter DC" (in the case of 2 DCs, creating a third DC with zero-token nodes only), and for that intended purpose no special handling is needed - it works out of the box. If a preference for zero-token nodes is eventually needed/requested, it will be added separately from this PR.

The maximum of 5 voters has been chosen as the smallest "safe" value. We can lose the majority when multiple nodes (possibly in different DCs and racks) die independently within a short time span. With fewer than 5 voters, we would lose the majority if 2 voters died, which is very unlikely but not entirely impossible. With 5 voters, at least 3 voters must die to lose the majority, which can safely be considered impossible in the case of independent failures.

Currently the limit is not configurable (we might introduce configurable limits later if needed/requested).

Tests added:
* boost/group0_voter_registry_test.cc: run time on CI: ~3.5s
* topology_custom/test_raft_voters.py: parametrized with 1 or 3 nodes per DC; run time on CI: ~20s (1 node per DC), ~40s (3 nodes per DC), approx. 1 min total

Fixes: scylladb/scylladb#18793

No backport: This is a new feature that will not be backported.

Closes scylladb/scylladb#21969

* https://github.com/scylladb/scylladb:
  raft: distribute voters by rack inside DC
  raft/test: fix lint warnings in `test_raft_no_quorum`
  raft/test: add the upgrade test for limited voters feature
  raft topology: handle on_up/on_down to add/remove node from voters
  raft: fix the indentation after the limited voters changes
  raft: implement the limited voters feature
  raft: drop the voter removal from the decommission
  raft/test: disable the `stop_before_becoming_raft_voter` test
  raft/test: stop the server less gracefully in the voters test
2025-04-10 15:29:15 +02:00
Avi Kivity
9559e53f55 Merge 'Adjust tablet-mon.py for capacity-aware load balancing' from Tomasz Grabiec
After the load balancer was made capacity-aware, it no longer equalizes the tablet count per shard, but rather the utilization of each shard's storage. This makes the old presentation mode unhelpful for assessing whether balance was reached, since nodes with less capacity get fewer tablets in the balanced state. This PR adds a new default presentation mode which scales tablet size by its storage utilization, so that tablets with equal shard utilization take equal space on the graph.

To facilitate that, a new virtual table was added: system.load_per_node, which allows the tool to learn about the load balancer's view of per-node capacity. It can also serve as a debugging interface for viewing the current balance as seen by the load balancer.

Closes scylladb/scylladb#23584

* github.com:scylladb/scylladb:
  tablet-mon.py: Add presentation mode which scales tablet size by its storage utilization
  tablet-mon.py: Center tablet id text properly in the vertical axis
  tablet-mon.py: Show migration stage tag in table mode only when migrating
  virtual-tables: Introduce system.load_per_node
  virtual_tables: memtable_filling_virtual_table: Propagate permit to execute()
  docs: virtual-tables: Fix instructions
  service: tablets: Keep load_stats inside tablet_allocator
2025-04-10 14:59:08 +03:00
Avi Kivity
885838fc46 Merge 'scylla-gdb.py: improve scylla repairs command' from Botond Dénes
Make output more readable by:
* grouping follower/master repair instances separately
* splitting repair details into one line for the repair summary, then one line per host
* adding indentation to make the output easier to follow

Also add `-m|--memory` option to calculate memory usage of repair buffers.

Example output:

    (gdb) scylla repairs -m
    Repairs for which this node is leader:
      (repair_meta*) 0x60503ab7f7b0: {id: 19197, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 30, memory: 48208512}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::get_combined_row_hash_finished
        host: ce4413ab-33d9-40f8-b13e-d14af8511dda, shard: 4294967295, state: repair_state::put_row_diff_with_rpc_stream_started
      (repair_meta*) 0x60503717f7b0: {id: 19211, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 28, memory: 63863265}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::get_combined_row_hash_finished
        host: c4936a19-41da-4260-971e-651445d740fd, shard: 4294967295, state: repair_state::get_row_diff_with_rpc_stream_finished
      (repair_meta*) 0x60502ddff7b0: {id: 19231, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 0, memory: 0}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::row_level_stop_started
        host: 039494b6-9d35-4f34-82c4-3c79c1d97175, shard: 4294967295, state: repair_state::row_level_stop_finished
      (repair_meta*) 0x60501db3f7b0: {id: 19234, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 0, memory: 0}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::get_sync_boundary_started
        host: 039494b6-9d35-4f34-82c4-3c79c1d97175, shard: 4294967295, state: repair_state::get_sync_boundary_finished
      (repair_meta*) 0x60501c81f7b0: {id: 19236, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 28, memory: 42696821}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::get_combined_row_hash_finished
        host: ce4413ab-33d9-40f8-b13e-d14af8511dda, shard: 4294967295, state: repair_state::put_row_diff_with_rpc_stream_started
      (repair_meta*) 0x60503f65f7b0: {id: 19238, table: large_collection_test.table_with_large_collection, reason: decommission, row_buf: {len: 0, memory: 0}, working_row_buf: {len: 28, memory: 47785163}, same_shard: True, tablet: False}
        host: 496e8b0c-50bf-4ada-b8f9-3d167138e908, shard: 5, state: repair_state::get_combined_row_hash_finished
        host: ce4413ab-33d9-40f8-b13e-d14af8511dda, shard: 4294967295, state: repair_state::get_row_diff_with_rpc_stream_finished
    Repairs for which this node is follower:

Closes scylladb/scylladb#23075

* github.com:scylladb/scylladb:
  scylla-gdb.py: improve scylla repairs command
  scylla-gdb.py: seastar_lw_shared_ptr: add __nonzero__ and __bool__
  scylla-gdb.py: introduce managed_bytes
2025-04-10 14:52:43 +03:00
Dani Tweig
e92740cc2b .github: update bug_report.yml
Perform a YAML "facelift" on the old Markdown bug report template, making bug reporting more efficient.

- Add dedicated textarea fields for problem description and expected behavior
- Include pre-filled placeholders to guide issue reporting
- Add formatted log output section with shell syntax highlighting

Closes: #21532
2025-04-10 14:26:00 +03:00
Pavel Emelyanov
88318d3b50 topology_coordinator: Use shorter fault-injection overloads
There are a few places that want to pause until a message is received from
the test. There's a convenient one-line sugar to do it.

One test needs to update its expectations about the log message that appears
when scylla steps on it and actually starts waiting.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#23390
2025-04-10 14:05:46 +03:00
Botond Dénes
d67202972a mutation/frozen_mutation: frozen_mutation_consumer_adaptor: fix end-of-partition handling
This adaptor adapts a mutation reader pausable consumer to the frozen
mutation visitor interface. The pausable consumer protocol allows the
consumer to skip the remaining parts of the partition and resume the
consumption with the next one. To do this, the consumer just has to
return stop_iteration::yes from one of the consume() overloads for
clustering elements, then return stop_iteration::no from
consume_end_of_partition(). Due to a bug in the adaptor, this sequence
leads to terminating the consumption completely -- so any remaining
partitions are also skipped.
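
As a rough illustration of the protocol contract described above (a simplified sketch, not the actual frozen_mutation_consumer_adaptor code), the corrected end-of-partition handling looks like this:

```
// Simplified sketch of the consume_pausable protocol -- not the actual
// frozen_mutation_consumer_adaptor code.
#include <vector>

enum class stop_iteration { no, yes };

template <typename Consumer, typename Partition>
void consume_partitions(Consumer& c, const std::vector<Partition>& partitions) {
    for (const auto& p : partitions) {
        c.consume_new_partition(p);
        for (const auto& row : p.rows) {
            if (c.consume(row) == stop_iteration::yes) {
                break;  // skip the rest of THIS partition only
            }
        }
        // The fix: respect the consumer's verdict here. stop_iteration::no
        // means "continue with the next partition"; the buggy adaptor
        // terminated the whole consumption regardless of the return value.
        if (c.consume_end_of_partition() == stop_iteration::yes) {
            return;
        }
    }
}
```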

This protocol implementation bug has user-visible effects when the
only user of the adaptor -- read repair -- happens during a query which
limits the amount of content in each partition.
There are two such queries: select distinct ... and select ... with
partition limit. When converting the repaired mutation to a query
result, these queries will trigger the skip sequence in the consumer and,
due to the bug described above, will skip the remaining partitions in
the results, omitting them from the final query result.

This patch fixes the protocol bug: the return value of the underlying
consumer's consume_end_of_partition() is now respected.

A unit test is also added which reproduces the problem both with select
distinct ... and select ... per partition limit.

Follow-up work:
* frozen_mutation_consumer_adaptor::on_end_of_partition() calls the
  underlying consumer's on_end_of_stream(), so when consuming multiple
  frozen mutations, the underlying's on_end_of_stream() is called for
  each partition. This is incorrect but benign.
* Improve documentation of mutation_reader::consume_pausable().

Fixes: #20084

Closes scylladb/scylladb#23657
2025-04-10 13:19:57 +03:00
Pavel Emelyanov
4de48a9d24 encryption: Mark parts of encrypted_data_sink private
Nowadays the whole class is public, but in fact it isn't meant to be.
Remove the suddenly unused private _flush_pos member to please the
compiler.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#23677
2025-04-10 12:42:57 +03:00
Dawid Mędrek
0ed21d9cc1 test/cluster/test_tablets.py: Fix erroneous test indentation
Some of the statements in the test are not indented properly
and, as a result, are never run. It's most likely a small mistake,
so let's fix it.

Closes scylladb/scylladb#23659
2025-04-10 11:06:01 +03:00
Nadav Har'El
258213f73b Merge 'Alternator batch count histograms' from Amnon Heiman
This series adds a histogram for get and write batch sizes.
It uses the estimated_histogram implementation, which starts at 1 with an exponential
factor of 1.2; this gives very tight resolution up to 20 while still covering values all the way to 100.
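
For a feel of the bucketing, a small sketch that prints the bucket boundaries (illustrative; the actual estimated_histogram may round its offsets differently):

```
// Exponential bucket boundaries starting at 1 with factor 1.2 --
// an illustration, not the actual estimated_histogram implementation.
#include <cstdio>

int main() {
    double boundary = 1.0;
    int bucket = 0;
    while (boundary <= 100.0) {
        std::printf("bucket %2d: up to %.1f\n", bucket++, boundary);
        boundary *= 1.2;
    }
    // ~17 buckets cover values up to 20 (tight resolution there), and
    // only ~9 more extend the range all the way to 100.
}
```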

Histograms will be reported per node.

**Backport to 2025.1 so we'll have information about user batch size limitation**

Closes scylladb/scylladb#23379

* github.com:scylladb/scylladb:
  alternator: Add tests for the batch items histograms
  alternator: Add histogram for batch item count
2025-04-09 22:41:14 +03:00
Tomasz Grabiec
b5211cca85 Merge 'tablets: rebuild: use repair for tablet rebuild' from Aleksandra Martyniuk
Currently, when we rebuild a tablet, we stream data from all
replicas. This creates a lot of redundancy and wastes bandwidth
and CPU resources.

In this series, we split the streaming stage of tablet rebuild into
two phases: first we stream tablet's data from only one replica
and then repair the tablet.

Fixes: https://github.com/scylladb/scylladb/issues/17174.

Needs backport to 2025.1 to prevent running out of space during streaming.

Closes scylladb/scylladb#23187

* github.com:scylladb/scylladb:
  test: add test for rebuild with repair
  locator: service: move to rebuild_v2 transition if cluster is upgraded
  locator: service: add transition to rebuild_repair stage for rebuild_v2
  locator: service: add rebuild_repair tablet transition stage
  locator: add maybe_get_primary_replica
  locator: service: add rebuild_v2 tablet transition kind
  gms: add REPAIR_BASED_TABLET_REBUILD cluster feature
2025-04-09 21:35:37 +02:00
Avi Kivity
ed3e4f33fd Merge 'generic_server: throttle and shed incoming connections according to semaphore limit' from Marcin Maliszkiewicz
Adds a new live-updatable config option: uninitialized_connections_semaphore_cpu_concurrency.

It should help reduce cpu usage by limiting cpu concurrency for new connections. As a last resort, when those connections have been waiting too long for initial processing (over 1 min), they are shed.

New connections_shed and connections_blocked metrics are added for tracking.

Testing:
 - manually via simple program creating high number of connection and constantly re-connecting
 - added benchmark

Following are benchmark results:

Before:
```
> build/release/test/perf/perf_generic_server --smp=1
170101.41 tps ( 13.1 allocs/op,   0.0 logallocs/op,   7.0 tasks/op,    4695 insns/op,    3178 cycles/op,        0 errors)
[...]
throughput: mean=173850.06 standard-deviation=1844.48 median=174509.66 median-absolute-deviation=874.23 maximum=175087.49 minimum=170588.54
instructions_per_op: mean=4725.59 standard-deviation=13.35 median=4729.38 median-absolute-deviation=12.49 maximum=4738.61 minimum=4709.96
  cpu_cycles_per_op: mean=3135.08 standard-deviation=32.13 median=3122.68 median-absolute-deviation=22.29 maximum=3179.38 minimum=3103.15
```

After:
```
> build/release/test/perf/perf_generic_server --smp=1
167373.19 tps ( 13.1 allocs/op,   0.0 logallocs/op,   7.0 tasks/op,    4821 insns/op,    3371 cycles/op,        0 errors)
[...]
throughput:
  mean=   171199.55 standard-deviation=2484.58
  median= 171667.06 median-absolute-deviation=2087.63
  maximum=173689.11 minimum=167904.76
instructions_per_op:
  mean=   4801.90 standard-deviation=16.54
  median= 4796.78 median-absolute-deviation=9.32
  maximum=4830.71 minimum=4789.81
cpu_cycles_per_op:
  mean=   3245.26 standard-deviation=32.28
  median= 3230.44 median-absolute-deviation=16.52
  maximum=3297.39 minimum=3215.62
```

The patch adds around 67 insns/op, so its effect on performance should be negligible.

Fixes: https://github.com/scylladb/scylladb/issues/22844

Closes scylladb/scylladb#22828

* github.com:scylladb/scylladb:
  transport: move on_connection_close into connection destructor
  test: perf: make aggregated_perf_results formatting more human readable
  transport: add blocked and shed connection metrics
  generic_server: throttle and shed incoming connections according to semaphore limit
  generic_server: add data source and sink wrappers bookkeeping network IO
  generic_server: coroutinize part of server::do_accepts
  test: add benchmark for generic_server
  test: perf: add option to count multiple ops per time_parallel iteration
  generic_server: add semaphore for limiting new connections concurrency
  generic_server: add config to the constructor
  generic_server: add on_connection_ready handler
2025-04-09 21:41:38 +03:00
Tomasz Grabiec
5b5ada1743 tablet-mon.py: Add presentation mode which scales tablet size by its storage utilization
Per-node capacity is queried from system.load_per_node

Tablet height in each node is scaled so that equal height = equal node
utilization.

The nominal height is assigned to the node which has the smallest
capacity, so nodes with higher capacity will have smaller tablets than
normal.
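
The scaling boils down to simple arithmetic; a sketch (in C++ for illustration -- tablet-mon.py itself is a Python tool):

```
// Illustrative arithmetic for the scaling above -- not the actual
// tablet-mon.py code. A tablet's drawn height is its size divided by the
// node's capacity, normalized so the smallest-capacity node is nominal.
#include <algorithm>
#include <vector>

double tablet_height(double tablet_size, double node_capacity,
                     const std::vector<double>& all_capacities) {
    double min_capacity = *std::min_element(all_capacities.begin(),
                                            all_capacities.end());
    // Equal height <=> equal contribution to node utilization; nodes with
    // higher capacity draw the same tablet proportionally smaller.
    return tablet_size * (min_capacity / node_capacity);
}
```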
2025-04-09 20:21:51 +02:00
Tomasz Grabiec
217184f16b tablet-mon.py: Center tablet id text properly in the vertical axis
Was too low due to not subtracting frame size from height
2025-04-09 20:21:51 +02:00
Tomasz Grabiec
20cac72056 tablet-mon.py: Show migration stage tag in table mode only when migrating
It's the gray bar at the top of the tablet. It doesn't show useful
information when the tablet is not migrating.
2025-04-09 20:21:51 +02:00
Tomasz Grabiec
0b9a75d7b6 virtual-tables: Introduce system.load_per_node
Can be used to query per-node stats about load as seen by the load
balancer.

In particular, the node's capacity will be used by tablet-mon.py to
scale tablet columns so that equal height means equal node utilization.
2025-04-09 20:21:51 +02:00
Tomasz Grabiec
668094dc58 virtual_tables: memtable_filling_virtual_table: Propagate permit to execute()
So that population can access the read's timeout and mark the permit as awaiting.
2025-04-09 20:21:51 +02:00
Tomasz Grabiec
34beaa30b5 docs: virtual-tables: Fix instructions 2025-04-09 20:21:51 +02:00
Tomasz Grabiec
76bc11c78c service: tablets: Keep load_stats inside tablet_allocator
So that virtual tables can pick them up.

It's a better place to keep them than in topology_coordinator.
2025-04-09 20:21:51 +02:00
Pavel Emelyanov
d9853efa7c Merge '[Out-of-space prevention] db: backup: prioritize sstables that were deleted from the table' from Benny Halevy
The motivation behind this change is to free up disk space as early as possible.
The reason is that a snapshot locks the space of all SSTables in the snapshot,
and deleting them from the table, for example by compaction or tablet migration,
won't free up their capacity until they are uploaded to object storage and deleted from the snapshot.

This series adds prioritization of deleted sstables in two cases:
First, after the snapshot dir is processed, the list of SSTable generations is cross-referenced with the
list of SSTables presently in the table, and any generation that is not in the table is prioritized to
be uploaded earlier.
In addition, a subscription mechanism was added to sstables_manager,
and it is used in backup to prioritize SSTables that get deleted from the table directory
during backup.

This is particularly important when backup happens during high disk utilization (e.g. 90%).
Without it, even if the cluster is scaled up and tablets are migrated away from the full nodes
to new nodes, tablet cleanup might not free any space if all the tablet's sstables are hard-linked to the
snapshot taken for backup.

* Enhancement, no backport needed

Closes scylladb/scylladb#23241

* github.com:scylladb/scylladb:
  db: snapshot: backup_task: prioritize sstables deleted during upload
  sstables_manager: add subscriptions
  db: snapshot: backup_task: limit concurrency
  sstables: directory_semaphore: expose get_units
  db: snapshot: backup_task: add sharded sstables_manager
  database: expose get_sstables_manager(schema)
  db: snapshot: backup_task: do_backup: prioritize sstables that are already deleted from the table
  db: snapshot-ctl: pass table_id to backup_task
  db: snapshot-ctl: expose sharded db() getter
  db: snapshot: backup_task: do_backup: organize components by sstable generation
  db: snapshot: coroutinize backup_task
  db: snapshot: backup_task: refactor backup_file out of uploads_worker
  db: snapshot: backup_task: refactor uploads_worker out of do_backup
  db: snapshot: backup_task: process_snapshot_dir: initialize total progress
  utils/s3: upload_progress: init members to 0
  db: snapshot: backup_task: do_backup: refactor process_snapshot_dir
  db: snapshot: backup_task: keep exception as member
2025-04-09 15:32:11 +03:00
Marcin Maliszkiewicz
ce18909688 transport: move on_connection_close into connection destructor
To make the code more robust by ensuring the closing code is always executed.
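
This is the usual RAII pattern -- roughly (a sketch, not the actual transport connection class):

```
// Sketch of the RAII idea -- not the actual transport connection code.
struct connection {
    ~connection() {
        // Running the close hook from the destructor guarantees it runs on
        // every exit path, including early returns and exceptions.
        on_connection_close();
    }
    void on_connection_close() noexcept { /* metrics, bookkeeping, ... */ }
};
```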
2025-04-09 13:50:19 +02:00
Pavel Emelyanov
35dfc8c782 Merge 'audit: add semaphore to audit_syslog_storage_helper' from Andrzej Jackowski
audit_syslog_storage_helper::syslog_send_helper uses Seastar's
net::datagram_channel to write to the syslog device (usually /dev/log).
However, datagram_channel.send() is not fiber-safe (ref seastar#2690),
so unserialized use of send() results in concurrent sends overwriting the
channel's state. This, in turn, causes corruption of audit logs, as well
as assertion failures.

To work around the problem, a new semaphore is introduced in
audit_syslog_storage_helper. As the storage_helper is a member of the
sharded audit service, the semaphore allows for one datagram_channel.send()
at a time on each shard. Each audit_syslog_storage_helper stores its own
datagram_channel, therefore concurrent sends to a datagram_channel are
eliminated.
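
Conceptually, the fix serializes the sends on each shard like this (a sketch using a plain C++ semaphore -- the actual code uses Seastar's future-based semaphore rather than blocking primitives):

```
// Sketch: serialize datagram sends with a count=1 semaphore -- not the
// actual audit_syslog_storage_helper code.
#include <semaphore>
#include <string_view>

class syslog_sender {
    std::counting_semaphore<1> _sem{1};  // one in-flight send per shard
public:
    void send(std::string_view datagram) {
        _sem.acquire();      // wait for any in-flight send to finish
        raw_send(datagram);  // the non-fiber-safe send, now serialized
        _sem.release();
    }
private:
    void raw_send(std::string_view) { /* datagram_channel.send() ... */ }
};
```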

This change:
 - Moves syslog_send_helper to audit_syslog_storage_helper
 - Coroutinizes audit_syslog_storage_helper
 - Introduces a semaphore with count=1 in audit_syslog_storage_helper.

See https://github.com/scylladb/scylla-dtest/pull/5749 for the related dtest.
Fixes: scylladb#22973

Backport to 2025.1 should be considered, as https://github.com/scylladb/scylladb/issues/22973 is known to cause crashes of 2025.1.

Closes scylladb/scylladb#23464

* github.com:scylladb/scylladb:
  audit: add semaphore to audit_syslog_storage_helper
  audit: coroutinize audit_syslog_storage_helper
  audit: moved syslog_send_helper to audit_syslog_storage_helper
2025-04-09 12:39:06 +03:00
Marcin Maliszkiewicz
619944555f test: perf: make aggregated_perf_results formatting more human readable
Before:
throughput: mean=170728.58 standard-deviation=1921.76 median=171084.16 median-absolute-deviation=1501.58 maximum=172913.36 minimum=167288.97
instructions_per_op: mean=4685.89 standard-deviation=12.46 median=4683.92 median-absolute-deviation=9.68 maximum=4706.53 minimum=4666.70
cpu_cycles_per_op: mean=3090.94 standard-deviation=52.69 median=3103.43 median-absolute-deviation=24.55 maximum=3192.99 minimum=3003.00

After:
throughput:
	mean=   168224.81 standard-deviation=854.48
	median= 168829.02 median-absolute-deviation=604.21
	maximum=168829.02 minimum=167620.60
instructions_per_op:
	mean=   4837.02 standard-deviation=20.89
	median= 4851.79 median-absolute-deviation=14.77
	maximum=4851.79 minimum=4822.24
cpu_cycles_per_op:
	mean=   3271.42 standard-deviation=46.29
	median= 3304.16 median-absolute-deviation=32.73
	maximum=3304.16 minimum=3238.69
2025-04-09 10:49:20 +02:00
Marcin Maliszkiewicz
599f4d312b transport: add blocked and shed connection metrics
This adds some visibility into connection storm mitigations
added in following commits.
2025-04-09 10:49:18 +02:00
Marcin Maliszkiewicz
26518704ab generic_server: throttle and shed incoming connections according to semaphore limit
If we have uninitialized_connections_semaphore_cpu_concurrency (default
2) connections being processed, we start delaying accepting new connections.

Connections which are in the network-IO state are not counted towards this
limit and can proceed to the cpu phase without blocking. So it can happen
that we process more concurrent new connections, but that's a necessary
tradeoff to make progress during a storm without implementing more advanced
machinery (e.g. a priority queue).
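
A rough sketch of the accept-side throttling described above (illustrative, using blocking std primitives instead of Seastar futures):

```
// Sketch: throttle new connections behind a concurrency semaphore and
// shed those waiting too long -- not the actual generic_server code.
#include <chrono>
#include <semaphore>

constexpr int cpu_concurrency = 2;                      // config default
constexpr auto shed_timeout = std::chrono::minutes(1);  // last resort

std::counting_semaphore<cpu_concurrency> uninitialized_slots{cpu_concurrency};

bool admit_connection() {
    // Delay accepting while cpu_concurrency connections are already in
    // their cpu-intensive initialization phase; shed after the timeout.
    if (!uninitialized_slots.try_acquire_for(shed_timeout)) {
        return false;  // shed: would bump the connections_shed metric
    }
    return true;       // proceed with initial processing
}
```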
2025-04-09 10:48:51 +02:00
Marcin Maliszkiewicz
9f5de2c256 generic_server: add data source and sink wrappers bookkeeping network IO
They release semaphore units when we start network IO and acquire them
when we enter the cpu-intensive phase. We use consume() so it doesn't block,
because we don't want connections we already started processing to compete
with new incoming connections. Otherwise, during a connection storm, we
wouldn't make much progress.

There is a simplification here in that we treat disk IO (if there is any)
as cpu work.
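
Conceptually, the wrappers hand the concurrency unit back and forth like this (a sketch; Seastar's semaphore::consume() takes units without waiting, so a plain atomic counter that may go negative models it more closely than a std semaphore would):

```
// Sketch of the unit exchange between the network-IO and cpu phases --
// not the actual generic_server wrappers.
#include <atomic>

std::atomic<int> available_units{2};  // cpu_concurrency

void on_start_network_io() {
    // The connection is idle on the network: return its unit so a
    // brand-new connection can be admitted in the meantime.
    available_units.fetch_add(1);
}

void on_enter_cpu_phase() {
    // Take the unit back WITHOUT blocking (the count may go negative):
    // connections already being processed must not queue behind new ones.
    available_units.fetch_sub(1);
}
```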
2025-04-09 10:48:42 +02:00
Marcin Maliszkiewicz
c56116372e generic_server: coroutinize part of server::do_accepts 2025-04-09 10:48:42 +02:00
Marcin Maliszkiewicz
719d04d501 test: add benchmark for generic_server
Changes in configure.py are needed because we don't want to embed
this benchmark in the scylla binary like perf_simple_query or perf_alternator;
it doesn't directly translate to Scylla performance. But we want to use
aggregated_perf_results for precise cpu measurements, so we need
different dependencies.
2025-04-09 10:48:42 +02:00
Marcin Maliszkiewicz
b957cedace test: perf: add option to count multiple ops per time_parallel iteration 2025-04-09 10:30:58 +02:00
Marcin Maliszkiewicz
ed82bede39 generic_server: add semaphore for limiting new connections concurrency
It will be used in the following commits.
2025-04-09 10:30:58 +02:00
Marcin Maliszkiewicz
33122d3f93 generic_server: add config to the constructor 2025-04-09 10:30:58 +02:00
Marcin Maliszkiewicz
474e84199c generic_server: add on_connection_ready handler
This patch cleans up the code a bit so that the ready state is set in a single place.
It also adds a handler which will allow adding logic when a connection is made
ready; this will be used in the following commits.
2025-04-09 10:30:58 +02:00
Benny Halevy
1ab3ec061b db: snapshot: backup_task: prioritize sstables deleted during upload
Subscribe on each shard's sstables_manager to get
callback notifications and keep the generation numbers
of deleted sstables in a vector so they can be prioritized
first, to free up their disk space as soon as possible.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
d8b0c661e4 sstables_manager: add subscriptions
Allow other submodules to subscribe for added/deleted
notifications. This will be used in a later patch
to prioritize unlinked sstables for backup.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
d3b4874ec3 db: snapshot: backup_task: limit concurrency
Otherwise, once all the background tasks are created,
we have no way to reorder the queue.
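
The idea is that bounded concurrency leaves a pending queue that can still be reordered; a sketch (hypothetical helper, not the actual backup_task code):

```
// Sketch: with bounded concurrency, uploads not yet started sit in a
// queue that can still be reprioritized -- not the actual backup_task code.
#include <algorithm>
#include <cstdint>
#include <deque>

using generation = std::uint64_t;
std::deque<generation> pending;  // uploads not yet started

void prioritize(generation gen) {
    // Move a deleted sstable's upload to the front of the queue. Had we
    // spawned one background task per sstable up front, there would be no
    // queue left to reorder.
    auto it = std::find(pending.begin(), pending.end(), gen);
    if (it != pending.end()) {
        pending.erase(it);
        pending.push_front(gen);
    }
}
```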

Fixes #23239

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
e60fcc58b7 sstables: directory_semaphore: expose get_units
To be used by a following patch for
backup concurrency control.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
b7807ec165 db: snapshot: backup_task: add sharded sstables_manager
Get a reference to the table's sstables_manager
on each shard. This will be used by later patches
to limit concurrency and to subscribe for notifications.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
b270d552fb database: expose get_sstables_manager(schema)
Return either the system or user sstables manager.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
9a4b4afade db: snapshot: backup_task: do_backup: prioritize sstables that are already deleted from the table
Detect SSTables that are already deleted from the table
in process_snapshot_dir when their number_of_links is equal to 1.

Note that the SSTable may be hard-linked by more than one snapshot,
so even after it is deleted from the table, its number of links
would be greater than one. In that case, however, uploading it
earlier won't help to free up its capacity since it is still held
by other snapshots.
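
On POSIX filesystems the link count is available via stat(); a sketch of the detection (hypothetical helper, not the actual process_snapshot_dir code):

```
// Sketch: detect sstable components already unlinked from the table
// directory by their hard-link count -- not the actual
// process_snapshot_dir code.
#include <sys/stat.h>

// True when the snapshot's hard link is the only one left, i.e. the file
// was deleted from the table (and no other snapshot holds it), so
// uploading it first frees disk space soonest.
bool deleted_from_table(const char* snapshot_file_path) {
    struct stat st;
    if (::stat(snapshot_file_path, &st) != 0) {
        return false;
    }
    return st.st_nlink == 1;
}
```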

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
4b8699e278 db: snapshot-ctl: pass table_id to backup_task
To be used by the following patches to get
to the table's sstables_manager for concurrency
control and for notifications (TBD).

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
d646603bfd db: snapshot-ctl: expose sharded db() getter
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
63bc1d4626 db: snapshot: backup_task: do_backup: organize components by sstable generation
Do not rely on the snapshot directory listing order.
This will become useful for prioritizing unlinked
sstables in a following patch.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:06 +03:00
Benny Halevy
a731c1b33d db: snapshot: coroutinize backup_task
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
189075b885 db: snapshot: backup_task: refactor backup_file out of uploads_worker
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
e3ba425c2b db: snapshot: backup_task: refactor uploads_worker out of do_backup
Let do_backup deal only with the high level coordination.
A future patch will follow this structure to run
uploads_worker on each shard.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
ff25b4c97f db: snapshot: backup_task: process_snapshot_dir: initialize total progress
Now we can calculate in advance how much data we intend to upload
before we start uploading it.

This will also be used later when uploading in parallel
on all shards, so we can collect the progress from all
shards in get_progress().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:51 +03:00
Benny Halevy
6da215e8af utils/s3: upload_progress: init members to 0
For default construction.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:44:52 +03:00