Commit Graph

36 Commits

Author SHA1 Message Date
Calle Wilund
144b550e4f object_storage_client: Add object_name wrapper type
Remains S3-centric, but abstracts the object name to allow for
possible implementations that do not format names the same way.
2025-10-13 08:53:25 +00:00
Calle Wilund
5d4558df3b sstables: Use object_storage_client for remote storage
Replaces direct s3 interfaces with the abstraction layer, opening the
door to multiple implementations/backends.
2025-10-13 08:53:25 +00:00
Benny Halevy
b7212620f9 backup_task: upload on all shards
Use all shards to upload snapshot files to S3,
using the sharded sstables_manager_for_table
infrastructure.

Refs #22460

Quick perf comparison
===========================================
 Release build, master, smp-16, mem-32GiB
 Bytes: 2342880184, backup time: 9.51 s
===========================================
 Release build, this PR, smp-16, mem-32GiB
 Bytes: 2342891015, backup time: 1.23 s
===========================================

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Co-authored-by: Ernest Zaslavsky <ernest.zaslavsky@scylladb.com>
2025-04-17 16:31:42 +03:00
Ernest Zaslavsky
428f673ca2 backup_task: integrate sharded storage manager for upload
Introduce the sharded storage manager and use it to instantiate upload
clients. Full functionality will be implemented in subsequent changes.
2025-04-16 18:18:58 +03:00
Benny Halevy
da492231df db: snapshot: backup_task: use named gate
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-12 11:28:48 +03:00
Benny Halevy
1ab3ec061b db: snapshot: backup_task: prioritize sstables deleted during upload
Subscribe to each shard's sstables_manager to get
callback notifications and keep the generation numbers
of deleted sstables in a vector, so they can be prioritized
first to free up their disk space as soon as possible.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
d3b4874ec3 db: snapshot: backup_task: limit concurrency
Otherwise, once all the background tasks are created
we have no way to reorder the queue.

Fixes #23239

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
b7807ec165 db: snapshot: backup_task: add sharded sstables_manager
Get a reference to the table's sstables_manager
on each shard.  This will be used by later patches
to limit concurrency and to subscribe for notifications.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
9a4b4afade db: snapshot: backup_task: do_backup: prioritize sstables that are already deleted from the table
Detect SSTables that are already deleted from the table
in process_snapshot_dir when their number_of_links is equal to 1.

Note that the SSTable may be hard-linked by more than one snapshot,
so even after it is deleted from the table, its number of links
would be greater than one.  In that case, however, uploading it
earlier won't help to free up its capacity since it is still held
by other snapshots.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
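The hard-link rule in the commit above can be sketched as a single predicate. This is an illustrative sketch, not the actual ScyllaDB code; the function name is assumed.

```cpp
#include <cstdint>

// Hypothetical sketch of the prioritization rule: a snapshot component whose
// link count is exactly 1 has already been deleted from the table (the
// snapshot's hard link is the only one left), so uploading and removing it
// first actually reclaims disk space. With nlink > 1 the data is still pinned
// by the table or by another snapshot, so an early upload frees nothing.
bool upload_first(uint64_t number_of_links) {
    return number_of_links == 1;
}
```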
Benny Halevy
4b8699e278 db: snapshot-ctl: pass table_id to backup_task
To be used by the following patches to get
to the table's sstables_manager for concurrency
control and for notifications (TBD).

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:07 +03:00
Benny Halevy
63bc1d4626 db: snapshot: backup_task: do_backup: organize components by sstable generation
Do not rely on the snapshot directory listing order.
This will become useful for prioritizing unlinked
sstables in a following patch.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:54:06 +03:00
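The ordering described above (organize components by sstable generation rather than directory listing order) amounts to grouping files into a map keyed by generation. A minimal sketch, assuming a simplified `<generation>-<component>` naming scheme for illustration (real SSTable file names are more elaborate):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Group component files by generation; std::map keeps generations ordered,
// so iteration no longer depends on the directory listing order.
std::map<int64_t, std::vector<std::string>>
group_by_generation(const std::vector<std::string>& files) {
    std::map<int64_t, std::vector<std::string>> by_gen;
    for (const auto& f : files) {
        // Assumed simplified name format: "<generation>-<component>".
        int64_t gen = std::stoll(f.substr(0, f.find('-')));
        by_gen[gen].push_back(f);
    }
    return by_gen;
}
```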
Benny Halevy
a731c1b33d db: snapshot: coroutinize backup_task
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
189075b885 db: snapshot: backup_task: refactor backup_file out of uploads_worker
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
e3ba425c2b db: snapshot: backup_task: refactor uploads_worker out of do_backup
Let do_backup deal only with the high-level coordination.
A future patch will follow this structure to run
uploads_worker on each shard.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:53 +03:00
Benny Halevy
ff25b4c97f db: snapshot: backup_task: process_snapshot_dir: initialize total progress
Now we can calculate in advance how much data we intend to upload
before we start uploading it.

This will also be used later when uploading in parallel
on all shards, so we can collect the progress from all
shards in get_progress().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:49:51 +03:00
Benny Halevy
6da215e8af utils/s3: upload_progress: init members to 0
For default construction.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:44:52 +03:00
Benny Halevy
70307e8120 db: snapshot: backup_task: do_backup: refactor process_snapshot_dir
Do a preliminary listing of the snapshot dir.

While at it, simplify the loop as follows:
The optional directory_entry returned by snapshot_dir_lister.get()
can be checked as part of the loop condition expression,
and with that, error handling can be simplified and moved
out of the loop body.

A followup patch will organize the component files
by their sstable generation.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

db: snapshot: backup_task: process_snapshot_dir: simplify loop

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:44:52 +03:00
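The loop simplification described in the commit above — testing the optional returned by the lister directly in the loop condition — can be sketched as follows. `fake_lister` is a stand-in for the real snapshot directory lister, and the names are illustrative:

```cpp
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Stand-in for the snapshot directory lister: get() yields the next entry,
// or an empty optional at end of stream.
struct fake_lister {
    std::vector<std::string> entries;
    std::size_t i = 0;
    std::optional<std::string> get() {
        if (i == entries.size()) {
            return std::nullopt;
        }
        return entries[i++];
    }
};

// The optional is checked as part of the loop condition, so the body needs
// no separate end-of-stream branch and error handling can live outside
// the loop body.
std::size_t count_components(fake_lister lister) {
    std::size_t n = 0;
    while (auto ent = lister.get()) {
        ++n;
    }
    return n;
}
```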
Benny Halevy
8a4b6b9614 db: snapshot: backup_task: keep exception as member
As part of refactoring do_backup().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2025-04-09 08:44:52 +03:00
Kefu Chai
42335baec5 backup_task: Use INFO level for upload abort during shutdown
When a backup upload is aborted due to instance shutdown, change the log
level from ERROR to INFO since this is expected behavior. Previously,
`abort_requested_exception` during upload would trigger an ERROR log, causing
test failures, since error logs indicate unexpected issues.

This change:
- Catches `abort_requested_exception` specifically during file uploads
- Logs these shutdown-triggered aborts at INFO level instead of ERROR
- Aligns with how `abort_requested_exception` is handled elsewhere in the service

This prevents false test failures while still informing administrators
about aborted uploads during shutdown.

Fixes scylladb/scylladb#22391

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#22995
2025-02-25 10:32:10 +03:00
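The log-level policy above can be illustrated with a small classifier. This is a sketch only: `abort_requested_exception` here is a local stand-in for Seastar's type, and the real code logs a message rather than returning a level string:

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Local stand-in for seastar::abort_requested_exception.
struct abort_requested_exception : std::runtime_error {
    abort_requested_exception() : std::runtime_error("abort requested") {}
};

// Classify an upload failure: an abort during shutdown is expected behavior
// and should be logged at INFO; anything else is a genuine ERROR.
std::string classify_upload_failure(const std::exception_ptr& ep) {
    try {
        std::rethrow_exception(ep);
    } catch (const abort_requested_exception&) {
        return "INFO";   // shutdown-triggered abort; not a test failure
    } catch (...) {
        return "ERROR";  // unexpected upload failure
    }
}
```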
Kefu Chai
8080658df7 backup_task: remove a component once it is uploaded
Previously, during backup, SSTable components were preserved in the
snapshot directory even after being uploaded. This led to redundant
uploads in case of failed backups or restarts, wasting time and
resources (S3 API calls).

This change

- adds an optional query parameter named "move_files" to the
  "/storage_service/backup" API. If it is set to "true", SSTable
  components are removed once they are backed up to object storage.
- conditionally removes SSTable components from the snapshot directory once
  they are successfully uploaded to the target location. This prevents
  re-uploading the same files and reduces disk usage.

This change only "Refs" #20655 because we can further optimize
the backup process; consider:

- Sending HEAD requests to S3 to check for existing files before uploading.
- Implementing support for resuming partially uploaded files.

Fixes #21799
Refs #20655

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2025-01-22 11:17:01 +08:00
Kefu Chai
32d22371b9 backup_task: extract component upload logic into dedicated function
Extract upload_component() from backup_task_impl::do_backup() to improve
readability and prepare for optional post-upload cleanup. This refactoring
simplifies the main backup flow by isolating the upload logic into its own
function.

The change is motivated by an upcoming feature that will allow optional
deletion of components after successful upload, which would otherwise add
complexity to do_backup().

Refs scylladb/scylladb#21799

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2025-01-22 11:17:01 +08:00
Avi Kivity
f3eade2f62 treewide: relicense to ScyllaDB-Source-Available-1.0
Drop the AGPL license in favor of a source-available license.
See the blog post [1] for details.

[1] https://www.scylladb.com/2024/12/18/why-were-moving-to-a-source-available-license/
2024-12-18 17:45:13 +02:00
Kefu Chai
f436edfa22 mutation: remove unused "#include"s
These unused includes were identified by clang-include-cleaner. After
auditing the source files, all of the reports have been confirmed.

Please note, because `mutation/mutation.hh` does not include
`seastar/coroutine/maybe_yield.hh` anymore, and quite a few source
files were relying on this header to bring in the declaration of
`maybe_yield()`, we have to include this header in the places where
this symbol is used. The same applies to `seastar/core/when_all.hh`.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2024-11-29 14:01:44 +08:00
Aleksandra Martyniuk
292d00463a tasks: add task_manager::task::is_user_task method 2024-11-25 14:21:53 +01:00
Calle Wilund
0db4b9fd94 backup_task: Use task abort source in s3 client call
Fixes #20716

Propagates the abort source in the task object to the actual network call,
thus making the upload workload abortable more quickly.

v2: Fix test to handle two versions after each other
2024-11-06 15:20:23 +00:00
Botond Dénes
0ee0dd3ef4 Merge 'Collect and report backup progress' from Pavel Emelyanov
The task manager GET /status method returns two counters that reflect task progress -- total and completed. To let the caller reason about their meaning, there is additionally a progress_units field next to those counters.

This patch implements this progress report for the backup task. The units are bytes, the total counter is the total size of the files being uploaded, and the completed counter is the total number of bytes successfully sent with PUT requests. To get the counters, client::upload_file() is extended to calculate them.

fixes #20653

Closes scylladb/scylladb#21144

* github.com:scylladb/scylladb:
  backup_task: Report uploading progress
  s3/client: Account upload progress for real
  s3/client: Introduce upload_progress
  s3: Extract client_fwd.hh
2024-11-01 10:57:12 +02:00
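The progress report described in the merge above boils down to a pair of byte counters accumulated across uploads. A minimal sketch with assumed field names (the real `s3::upload_progress` lives in ScyllaDB's s3 client):

```cpp
#include <cstdint>

// Two counters in bytes: total is computed up front from the files to be
// uploaded; uploaded grows as PUT requests complete. get_progress() would
// report these to the task manager with progress_units set to bytes.
struct upload_progress {
    uint64_t total = 0;
    uint64_t uploaded = 0;
};

// Fraction complete; guarded against a zero total so a default-constructed
// progress (members initialized to 0) is well-defined.
double completion_ratio(const upload_progress& p) {
    return p.total ? double(p.uploaded) / double(p.total) : 0.0;
}
```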
Pavel Emelyanov
39cb93be3c treewide,error_injection: Use inject(wait_for_message) and fix tests
This is a continuation of the previous patch; this time we also update
tests that wait for a specific message in the logs (to make sure the
injection handler was called and paused the code execution).

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-10-30 16:53:33 +03:00
Pavel Emelyanov
25ae3d0aed backup_task: Report uploading progress
Do it by passing a reference to an s3::upload_progress_monitor object that
sits on the task impl itself. Different files' uploads then update the
monitor with their sizes and uploaded counters. The structure is
reported by the get_progress() method. The unit is bytes. The test is
updated.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-10-29 08:40:35 +03:00
Pavel Emelyanov
f9a5e02b53 s3: Extract client_fwd.hh
This is to export some simple structures to users without the need to
include client.hh itself (rather large already)

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-10-29 08:38:39 +03:00
Kefu Chai
9f8adcd207 backup_task: track the first failure uploading sstables
Before this change, we only recorded the exception returned
by `upload_file()` and rethrew it, but the exception
thrown by `upload_file()` was not propagated to its caller. Instead, the
exceptional future was ignored on purpose -- we need to perform
the uploads in parallel.  This is why the task was not marked as failed
even if some of the uploads it performed failed.

In this change, we

- coroutinize `backup_task_impl::do_backup()`. Strictly speaking,
  this is not necessary to propagate the exception, but in order
  to ensure that the possible exception is captured before the
  gate is closed, and to reduce the indentation, the teardown
  steps are performed explicitly.
- in addition to noting the exception in the logging message,
  we also store it in a local variable, which is rethrown
  before the function returns.

Fixes scylladb/scylladb#21248
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#21254
2024-10-28 12:54:27 +03:00
Kefu Chai
e7d6ab576b backup_task: remove unused member variable
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#21258
2024-10-25 11:49:06 +03:00
Kefu Chai
54858b8242 backup_task: fix the indent
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2024-09-25 09:11:26 +08:00
Kefu Chai
d663b6c13b treewide: add "table" parameter to "backup" API
With this parameter, the "backup" API can back up the given table. This
enables it to be a drop-in replacement for the existing rclone API used by
Scylla Manager.

in this change:

* api/storage_service: add "table" parameter to "backup" API.
* snapshot_ctl: compose the full path of the snapshot directory in
  `snapshot_ctl::start_backup`. Since we have all the information
  for composing the snapshot directory, and all the `backup_task_impl`
  class is interested in is the snapshot directory, we just pass
  the path to it instead of the individual components of the directory.
* backup_task_impl: instead of scanning the whole keyspace recursively,
  only scan the specified snapshot directory.

Fixes scylladb/scylladb#20636
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2024-09-25 09:11:26 +08:00
Ernest Zaslavsky
924325fd25 treewide: add "prefix" parameter to backup API
Allow the caller to pass the prefix when performing backup and restore

Fixes scylladb/scylladb#20335

Closes scylladb/scylladb#20413
2024-09-18 08:25:00 +03:00
Pavel Emelyanov
ed6e6700ab backup-task: Make it abortable (almost)
Make the impl::is_abortable() return 'yes' and check the impl::_as in
the file-listing loop. It's not a real abort, since the file-listing loop
is expected to be fast and most of the time will be spent in s3::client
code reading data from disk and sending it to S3, but the client doesn't
support aborting its requests yet. That work remains to be done.

Also add injection for future testing.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-08-22 19:47:06 +03:00
Pavel Emelyanov
a812f13ddd code: Introduce backup API method
The method starts a task that uploads all files from the given
keyspace's snapshot to the requested endpoint/bucket. The task runs in
the background; its task_id is returned from the method once it's
spawned, and should be used via the /task_manager API to track the task's
execution and completion (hint: it's good to have a non-zero TTL value to
make sure fast backups don't finish before the caller manages to call the
wait_task API).

If the snapshot doesn't exist, nothing happens (FIXME, we need to return
an error in that case).

If endpoint is not configured locally, the API call resolves with
bad-request instantly.

Sstable components are scanned for all tables in the keyspace and are
uploaded into the /bucket/${cf_name}/${snapshot_name}/ path.

The task is not abortable (FIXME -- to be added) and doesn't really report
its progress other than its running/done state (FIXME -- to be added too).

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-08-22 19:47:06 +03:00