# Keeping sstables on S3/GS

One of the ways to use object storage is to keep sstables directly on it as objects.

## Enabling the feature

Currently the object-storage backend works if `keyspace-storage-options` is listed in `experimental_features` in `scylla.yaml`, like:

```yaml
experimental_features:
  - keyspace-storage-options
```

It can also be enabled with the `--experimental-features=keyspace-storage-options` command-line option when launching scylla.
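
For illustration, a minimal launch sketch (all other options elided):

```sh
# Enable the feature at launch time instead of via scylla.yaml
scylla --experimental-features=keyspace-storage-options
```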

## Configuring AWS S3 access

You can define endpoint details in the `scylla.yaml` file. For example:

```yaml
object_storage_endpoints:
  - name: s3.us-east-1.amazonaws.com
    port: 443
    https: true
    aws_region: us-east-1
```

## Configuring GCP storage access

Similarly to AWS, define endpoint details in `scylla.yaml`, like:

```yaml
object_storage_endpoints:
  - name: https://storage.googleapis.com
    type: gs
    credentials_file: <gcp account credentials json file>
```

Google Cloud Storage typically uses the same endpoint URI everywhere (unless you are using a private proxy or mock server), so `name` can also be left as the default moniker, `default` (see the example below).

`credentials_file` can be omitted, in which case the default credentials on the machine will be used, i.e. the current user's credentials are resolved, falling back to machine credentials if running on a GCP instance.

If set, the environment variable `GOOGLE_APPLICATION_CREDENTIALS` can point to a credentials file.

If no credentials file is set, the default credentials will be searched for, i.e. `application_default_credentials.json` in the local gcloud data folder.

You can also set `credentials_file` to `none` to completely skip authentication. This is useful for testing against mock servers.
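
For illustration, a minimal sketch of the three credential modes (the file path is a placeholder):

```yaml
object_storage_endpoints:
  - name: default
    type: gs
    # Option 1: explicit service account credentials
    credentials_file: /path/to/service-account.json
    # Option 2: omit credentials_file entirely to use the
    #           default credentials available on the machine
    # Option 3: skip authentication (mock servers only)
    # credentials_file: none
```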

## Local/Development Environment

In a local or development environment, you usually need to set AWS authentication tokens in environment variables to ensure the client works properly. For instance:

```sh
export AWS_ACCESS_KEY_ID=EXAMPLE_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=EXAMPLE_SECRET_ACCESS_KEY
```

Additionally, you may include an `AWS_SESSION_TOKEN`, although this is not typically necessary for local or development environments:

```sh
export AWS_ACCESS_KEY_ID=EXAMPLE_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=EXAMPLE_SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=EXAMPLE_TEMPORARY_SESSION_TOKEN
```

For GS, authentication is normally not used when running against a local mock server (set `credentials_file: none`).

## Important Note

The examples above are intended for development or local environments. You should never use this approach in production. The Scylla S3 client will first attempt to access credentials from environment variables. If it fails to obtain credentials, it will then try to retrieve them from the AWS Security Token Service (STS) or the EC2 Instance Metadata Service.

For the EC2 Instance Metadata Service to function correctly, no additional configuration is required. However, STS requires the IAM Role ARN to be defined in the `scylla.yaml` file, as shown below:

```yaml
object_storage_endpoints:
  - name: s3.us-east-1.amazonaws.com
    port: 443
    https: true
    aws_region: us-east-1
    iam_role_arn: arn:aws:iam::123456789012:instance-profile/my-instance-instance-profile
```

## Creating keyspace with S3

The sstables location is keyspace-scoped. In order to create a keyspace with S3 storage, use `CREATE KEYSPACE` with `STORAGE = { 'type': 'S3', 'endpoint': '$endpoint_name', 'bucket': '$bucket' }` parameters, where `$endpoint_name` must match the name of the corresponding endpoint configured in the YAML file above.

In the following example, an endpoint named "s3.us-east-2.amazonaws.com" is defined in scylla.yaml, and this endpoint is used when creating the keyspace "ks".

In `scylla.yaml`:

```yaml
object_storage_endpoints:
  - name: s3.us-east-2.amazonaws.com
    port: 443
    https: true
    aws_region: us-east-2
```

and when creating the keyspace:

```cql
CREATE KEYSPACE ks
  WITH REPLICATION = {
   'class' : 'NetworkTopologyStrategy',
   'replication_factor' : 1
  }
  AND STORAGE = {
   'type' : 'S3',
   'endpoint' : 's3.us-east-2.amazonaws.com',
   'bucket' : 'bucket-for-testing'
  };
```

## Creating keyspace with GS

This mirrors the AWS S3 configuration.

In `scylla.yaml` (note `type: gs`, as in the GCP endpoint example above):

```yaml
object_storage_endpoints:
  - name: default
    type: gs
    credentials_file: <credentials file>|none
```

and when creating the keyspace:

```cql
CREATE KEYSPACE ks
  WITH REPLICATION = {
   'class' : 'NetworkTopologyStrategy',
   'replication_factor' : 1
  }
  AND STORAGE = {
   'type' : 'GS',
   'endpoint' : 'default',
   'bucket' : 'bucket-for-testing'
  };
```

## Copying sstables on S3/GS (backup)

It's possible to upload sstables from the `data/` directory to S3 via the API. This is preferable because all the resources needed for the operation (disk IO bandwidth and IOPS, CPU time, network bandwidth) stay under Seastar's control, so the regular Scylla workload is not randomly affected.

The API endpoint name is `/storage_service/backup`; see its Swagger description for details. Accepted parameters are:

* `keyspace`: the keyspace to copy sstables from
* `table`: the table to copy sstables from
* `snapshot`: the snapshot name to copy sstables from
* `endpoint`: the key in the object storage configuration file; can be either an AWS or GCP endpoint
* `bucket`: the bucket name to put the sstables' files in
* `prefix`: the prefix to put the sstables' files under

Currently only snapshot backup is possible, so one first needs to take a snapshot.
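
For illustration, a hypothetical invocation might look like the sketch below. The snapshot tag, bucket, and prefix are placeholders, and the REST API is assumed to listen on the default `localhost:10000`; check the Swagger description for the exact method and parameter set:

```sh
# Take a snapshot first (the tag name is illustrative)
nodetool snapshot -t backup-snap ks

# Then ask Scylla to upload it to the configured endpoint
curl -X POST "http://localhost:10000/storage_service/backup?keyspace=ks&snapshot=backup-snap&endpoint=s3.us-east-2.amazonaws.com&bucket=bucket-for-testing&prefix=backups/ks"
```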

All tables in a keyspace are uploaded; the destination object names will look like `s3://bucket/some/prefix/to/store/data/.../sstable` or `gs://bucket/some/prefix/to/store/data/.../sstable`.

## Manipulating S3 data

This section gives an overview of where, when and how we store data in S3, and provides a quick set of commands
which help gain local access to the data in case there is a need for manual intervention.

Most of the time it won't be necessary to touch the data on S3 directly: there are transparent REST APIs and Scylla Manager
commands for backup and restore, and Scylla can operate normally with S3 storage configured in the
`CREATE KEYSPACE` cql documented in the ScyllaDB CQL Extensions docs.

However, if for some reason the SSTables become corrupted and need an offline scrub before re-uploading
or if a bug investigation leads to the need to analyze the backup data, follow the information below to access
that data.

Issue tracking the document here.

## Object Storage Layout

There are currently three mechanisms in Scylla which write data to S3/GS:

1. Scylla Manager backup

When performing a backup with `sctool`, a backup prefix is created within the bucket passed as argument, and
under that prefix Scylla Manager stores all the backup data of all the backup tasks, organized by cluster name,
datacenter, keyspace, etc.

Follow the Specification page in the Scylla Manager documentation for the exact layout
under the backup prefix.

2. `/storage_service/backup` REST API

When using the `/storage_service/backup` REST API, the data is stored under the prefix passed as argument to the API.
The structure under this prefix is identical to what you'd find in a typical Scylla snapshot:
there is a manifest file which contains the list of Data files for each SSTable, the schema file, and all the SSTable
components, stored flat under the prefix.

```
scylla-bucket/prefix/
├── manifest.json
├── schema.cql
├── me-3gqe_1lnj_4sbpc2ezoscu9hhtor-big-Data.db
├── me-3gqe_1lnj_4sbpc2ezoscu9hhtor-big-Index.db
├── me-3gqe_1lnj_4sbpc2ezoscu9hhtor-big-Summary.db
├── ...
├── ma-1abx_k29m_9fyug3sdtjwj8krpqh-big-Data.db
├── ma-1abx_k29m_9fyug3sdtjwj8krpqh-big-Index.db
├── ma-1abx_k29m_9fyug3sdtjwj8krpqh-big-Summary.db
└── ... (more SSTable components)
```

See the API documentation for more details about the actual backup request.

3. `CREATE KEYSPACE` with S3/GS storage

When creating a keyspace with S3/GS storage, the data is stored in the bucket passed as argument to the `CREATE KEYSPACE` statement.
Once the statement is issued, Scylla will transparently use the S3/GS bucket as the location of the SSTables for that keyspace.
Like in the case above, there is no keyspace/table hierarchy for the data; each SSTable's components are stored under a prefix named after its generation, directly within the bucket.

```
scylla-sstables-bucket/
├── 3gqe_1lnj_4sbpc2ezoscu9hhtor/
│   ├── Data.db
│   ├── Index.db
│   ├── Summary.db
│   └── ...
├── 1abx_k29m_9fyug3sdtjwj8krpqh/
│   ├── Data.db
│   ├── Index.db
│   ├── Summary.db
│   └── ...
└── ... (other SSTable folders)
```

## Downloading, deleting, uploading SSTables

To manually manage sstables on S3, AWS CLI commands can be used, but first it's mandatory to have `awscli`
installed (see the installation guide) and to have the proper credentials set up in order to be able to access ScyllaDB S3 buckets.

Please make sure your `~/.aws/credentials` file points to a valid set of S3 credentials.
Either refresh the credentials if you use an Okta-based fetching tool, or make sure they point to a valid IAM user with S3 access.
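
As a reminder, the file uses the standard AWS CLI INI format (the values below are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = EXAMPLE_ACCESS_KEY_ID
aws_secret_access_key = EXAMPLE_SECRET_ACCESS_KEY
```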

Provided all the prerequisites above are fulfilled and you're able to run

```sh
aws s3 ls s3://your-bucket/
```

and see something (or at least not see an error if the bucket is empty), you're all set for the next commands.

NOTE: Please refer to the sections above for the prefix layout of each S3 use case.

### Downloading SSTables

Fetching the SSTables of your backup can easily be done by e.g. copying each individual component:

```sh
aws s3 cp s3://your-bucket/path/to/sstable/me-3gqb_1izi_0pxn421yzymfw5c8zf-big-Data.db /local/path/to/sstable/component
```

or by downloading an entire SSTable using globs:

```sh
aws s3 cp s3://your-bucket/path-to-sstables/ /local/path/for/sstables --exclude "*" --include 'some-sstable-generation-big-*' --recursive
```

### Deleting SSTables

Components can be removed individually:

```sh
aws s3 rm s3://your-bucket/path/to/sstable/me-3gqb_1izi_0pxn421yzymfw5c8zf-big-Data.db
```

or an entire SSTable can be removed using globs:

```sh
aws s3 rm s3://your-bucket/path-to-sstables/ --exclude "*" --include 'some-sstable-generation-big-*' --recursive
```

### Uploading SSTables

Components can be uploaded individually:

```sh
aws s3 cp /local/path/to/sstable/me-3gqb_1izi_0pxn421yzymfw5c8zf-big-Data.db s3://your-bucket/path/to/sstable/component
```

or an entire SSTable using globs:

```sh
aws s3 cp /local/path/for/sstables s3://your-bucket/path-to-sstables/ --exclude "*" --include 'some-sstable-generation-big-*' --recursive
```
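
For GS buckets, the same operations can be performed with `gsutil`; a minimal sketch, where the bucket, paths, and generation names are illustrative:

```sh
# List bucket contents
gsutil ls gs://your-bucket/

# Download a single component
gsutil cp gs://your-bucket/path/to/sstable/me-3gqb_1izi_0pxn421yzymfw5c8zf-big-Data.db /local/path/to/sstable/component

# Delete or upload an entire SSTable using wildcards
gsutil rm 'gs://your-bucket/path-to-sstables/some-sstable-generation-big-*'
gsutil cp /local/path/for/sstables/some-sstable-generation-big-* gs://your-bucket/path-to-sstables/
```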

## Metadata touchups

In case of Scylla Manager backups, if manual scrubbing is needed and SSTables will be re-uploaded,
multiple things need to be changed; the same applies if you need to drop some SSTables altogether.
As you might have seen in the Scylla Manager Specification docs, we keep a JSON manifest per node,
and that manifest file contains lots of SSTable-dependent information:

* the list of SSTables per table owned by the node
* the total size of the SSTables in the chunk of the table owned
* the total size of all chunks of tables owned
* the list of tokens owned by the node

As the names of the fields suggest, all the information in the list above depends on the SSTables' content, so any attempt
to locally fix a corrupt SSTable and re-upload it will most probably force you to update the manifest file of the node.
There is a high likelihood that a scrubbed SSTable results in different values for all the fields specified above.

For the `/storage_service/backup` REST API, in theory only removing an entire SSTable from the backup would require changing
the manifest file, to remove the corresponding entry for the SSTable; in all other cases, no metadata changes are needed.
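
For illustration, a minimal sketch of such a manifest touch-up with the AWS CLI (the bucket and prefix are placeholders):

```sh
# Fetch the manifest, edit it locally to drop the removed SSTable's entries,
# then upload it back in place of the old one
aws s3 cp s3://your-bucket/path-to-backup/manifest.json .
# ... edit manifest.json ...
aws s3 cp manifest.json s3://your-bucket/path-to-backup/manifest.json
```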

For the `CREATE KEYSPACE` on S3, there is no need to update any metadata, as we currently don't have any.

NOTE: It goes without saying that re-uploading a scrubbed SSTable means re-uploading all of its components, as most of them were likely changed.