Alternator's implementation of TagResource, UntagResource and UpdateTimeToLive (the latter uses tags to store the TTL configuration) was unsafe for concurrent modifications - some of these modifications could be lost. This short series fixes the bug, and also adds (in the last patch) a test that reproduces the bug and verifies that it's fixed.
The cause of the incorrect isolation was that we read the old tags and wrote the modified tags as separate steps. In this series we introduce a new function, `modify_tags()`, which can do both under one lock, so concurrent tag operations are serialized and therefore isolated as expected.
Fixes #6389.
Closes #13150
* github.com:scylladb/scylladb:
test/alternator: test concurrent TagResource / UntagResource
db/tags: drop unsafe update_tags() utility function
alternator: isolate concurrent modification to tags
db/tags: add safe modify_tags() utility functions
migration_manager: expose access to storage_proxy
This is a translation of Cassandra's CQL unit test source file
validation/operations/DeleteTest.java into our cql-pytest framework.
There are 51 tests, and they did not reproduce any previously-unknown
bug, but did provide additional reproducers for three known issues:
Refs #4244 Add support for mixing token, multi- and single-column
restrictions
Refs #12474 DELETE prints misleading error message suggesting ALLOW
FILTERING would work
Refs #13250 one-element multi-column restriction should be handled like
a single-column restriction
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #13436
SSTable summary is one of the components fully loaded into memory that may have a significant footprint.
This series reduces the summary footprint by reducing the amount of token information that we need to keep
in memory for each summary entry.
Of course, the benefit of this size optimization is proportional to the number of summary entries, which
in turn is proportional to the number of partitions in an SSTable.
Therefore this optimization will most benefit tables that have many small
partitions, which result in big summaries.
Results:
```
BEFORE
[1000000 pkeys] data size: 4035888890, summary -> memory footprint: 5843232, entries: 88158
[10000000 pkeys] data size: 40368888890, summary -> memory footprint: 55787128, entries: 844925
AFTER
[1000000 pkeys] data size: 4035888890, summary -> memory footprint: 4351536, entries: 88158
[10000000 pkeys] data size: 40368888890, summary -> memory footprint: 42211984, entries: 844925
```
That shows a 25% reduction in footprint, for both 1 and 10 million pkeys.
Closes #13447
* github.com:scylladb/scylladb:
sstables: Store raw token into summary entries
sstables: Don't store token data into summary's memory pool
The PR adds an sstables storage backend that keeps all component files as S3 objects, and a system.sstables_registry ownership table that keeps track of which sstable objects belong to the local node and their names.
When a keyspace is configured with `STORAGE = { 'type': 'S3' }`, the respective table object eventually gets a storage_options instance pointing to the target S3 endpoint and bucket. All the sstables created for that table attach the S3 storage implementation, which maintains component files as S3 objects. Writing to and reading from components is handled by the S3 client facilities from utils/. Changing the sstable state -- that is, moving between the normal, staging and quarantine states -- is not yet implemented, but will eventually happen by updating entries in the sstables registry.
To keep track of which node owns which objects, to provide bucket-wide uniqueness of object names, and to maintain sstable state, the storage driver keeps records in the system.sstables_registry ownership table. The table maps sstable location and generation to the object format, version, status and state (*), and a unique identifier (some time soon this identifier is supposed to be replaced with UUID sstable generations). The component object name is thus s3://bucket/uuid/component_basename. The registry is also used on boot: the distributed loader picks up sstables from all the tables found in the schema, and for S3-backed keyspaces it lists entries in the registry to a) identify those sstables and b) get their unique S3-side identifiers to open them by name.
(*) About the sstable's status and state.
The state field is part of today's sstable path on disk -- staging, quarantine, normal (the table's root data dir), etc. Since S3 doesn't have a renaming facility, moving an sstable between those states is only possible by updating the entry in the registry. This is not yet implemented in this set (#13017).
The status field tracks the sstable's transition through its creation and deletion. It first starts with the 'creating' status, which corresponds to today's TemporaryTOC file. After being created and written to, the sstable moves into the 'sealed' status, which corresponds to today's normal sstable with its TOC file. To delete an sstable atomically, it first moves into the 'removing' status, which is equivalent to being in the deletion log for an on-disk sstable. Once removed from the bucket, the entry is removed from the registry.
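The status lifecycle described above can be sketched as a small state machine. This is an illustration only, with hypothetical names, not the actual Scylla code:

```cpp
#include <cassert>

// Hypothetical sketch of the registry status lifecycle:
// creating -> sealed -> removing -> (entry deleted from the registry)
enum class sstable_status { creating, sealed, removing };

// Returns whether the registry allows moving between two statuses.
bool can_transition(sstable_status from, sstable_status to) {
    switch (from) {
    case sstable_status::creating: return to == sstable_status::sealed;   // TOC written
    case sstable_status::sealed:   return to == sstable_status::removing; // deletion started
    case sstable_status::removing: return false; // next step deletes the registry entry
    }
    return false;
}
```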
To play with:
1. Start minio (installed by install-dependencies.sh)
```
export MINIO_ROOT_USER=${root_user}
export MINIO_ROOT_PASSWORD=${root_pass}
mkdir -p ${root_directory}
minio server ${root_directory}
```
2. Configure minio CLI, create anonymous bucket
```
mc config host rm local
mc config host add local http://127.0.0.1:9000 ${root_user} ${root_pass}
mc mb local/sstables
mc anonymous set public local/sstables
```
3. Start Scylla with object-storage feature enabled
```
scylla ... --experimental-features=keyspace-storage-options --workdir ${as_usual}
```
4. Create KS with S3 storage
```
create keyspace ... storage = { 'type': 'S3', 'endpoint': '127.0.0.1:9000', 'bucket': 'sstables' };
```
The S3 client has a logger named "s3"; it's useful to turn it on with `trace` verbosity.
Closes #12523
* github.com:scylladb/scylladb:
test: Add object-storage test
distributed_loader: Print storage type when populating
sstable_directory: Add ownership table components lister
sstable_directory: Make components_lister an API
sstable_directory: Create components lister based on storage options
sstables: Add S3 storage implementation
system_keyspace: Add ownership table
system_keyspace: Plug to user sstables manager too
sstable: Make storage instance based on storage options
sstable_directory: Keep storage_options aboard
sstable: Virtualize the helper that gets on-disk stats for sstable
sstable, storage: Virtualize data sink making for small components
sstable, storage: Virtualize data sink making for Data and Index
sstable/writer: Shuffle writer::init_file_writers()
sstable: Make storage an API
utils: Add S3 readable file impl for random reads
utils: Add S3 data sink for multipart upload
utils: Add S3 client with basic ops
cql-pytest: Add option to run scylla over stable directory
test.py: Equip it with minio server
sstables: Detach write_toc() helper
this is part of a series migrating from `operator<<(ostream&, ..)`
based formatting to fmtlib based formatting. the goal here is to enable
fmtlib to print `auth::auth_authentication_options` and `auth::resource_kind`
without the help of fmt::ostream. their `operator<<(ostream,..)` overloads are
dropped, as there are no users of them anymore.
Refs #13245
Closes #13460
* github.com:scylladb/scylladb:
auth: remove unused operator<<(.., resource_kind)
auth: specialize fmt::formatter<resource_kind>
auth: remove unused operator<<(.., authentication_option)
auth: specialize fmt::formatter<authentication_option>
The test does
- starts scylla (over a stable directory)
- creates an S3-backed keyspace (minio is up and running by test.py
already)
- creates table in that keyspace and populates it with several rows
- flushes the keyspace to make sstables hit the storage
- checks that the ownership table is populated properly
- restarts scylla
- makes sure old entries exist
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
On boot it's very useful to know which storage a table comes from, so
add the respective info to existing log messages.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When sstables are stored on object storage, they are "registered" in the
system.sstables_registry ownership table. The sstable_directory is
supposed to list sstables from this table, so here's the respective
components lister.
The lister is created by sstables_manager; by the time it's requested,
the system keyspace is already plugged in. The lister only handles
"sealed" sstables. Dangling ones are still ignored; this is to be fixed
later.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Now the lister is filesystem-specific. There will soon come another one
for S3, so the sstable_directory should be prepared for that by making
the lister an abstract class.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The directory's lister is storage-specific and should be created
differently for different storage options.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The driver puts all components into
s3://bucket/uuid/component_name
objects where 'bucket' is the keyspace options configuration parameter,
and the 'uuid' is the value obtained from the ownership table.
E.g.
s3://test_bucket/d0a743b0-ad38-11ed-85b5-39b6b0998182/Data.db
The life-time is straightforward. Until sealed, the sstable has
'creating' status in the table, then it's updated to be 'sealed'. Prior
to removing the objects, the status is set to 'deleting', thus allowing
the distributed loader to pick up the dangling objects on re-load (not
yet implemented). Finally, the entry is deleted from the table.
It needs the PR #12648 not to generate empty ks/cf directories on the
local filesystem.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The schema is
CREATE TABLE system.sstables (
location text,
generation bigint,
format text,
status text,
uuid uuid,
version text,
PRIMARY KEY (location, generation)
)
A sample entry looks like:
location | generation | format | status | uuid | version
---------------------------------------------------------------------+------------+--------+--------+--------------------------------------+---------
/data/object_storage_ks/test_table-d096a1e0ad3811ed85b539b6b0998182 | 2 | big | sealed | d0a743b0-ad38-11ed-85b5-39b6b0998182 | me
The uuid field points to the "folder" on the storage where the sstable
components are. Like this:
s3
`- test_bucket
`- f7548f00-a64d-11ed-865a-0c1fbc116bb3
`- Data.db
- Index.db
- Filter.db
- ...
It's not very nice that the whole /var/lib/... path is in fact used as
location, it needs the PR #12707 to fix this place.
Also, the "status" part is not yet fully functional, it only supports
three options:
- creating -- the same as TemporaryTOC file exists on disk
- sealed -- default state
- deleting -- the analogy for the deletion log on disk
The latter needs support from the distributed_loader, which is not yet
there. In fact, distributed_loader also needs to be patched to actually
select entries from this table on load. Also it needs the mentioned
PR #12707 to support staging and quarantine sstables.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The sharded<sys_ks> instances are plugged into the large data handler and
compaction manager to maintain the circular dependency between these
components via the interposing database instance. Do the same for the user
sstables manager, because S3 driver will need to update the local
ownership table.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This patch adds a storage options lw-ptr to sstables_manager::make_sstable
and makes the storage instance creation depend on the options. For local
storage it just creates the filesystem storage instance; for S3 it throws,
but the next patch will fix that.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The class in question will need to know the table's storage from which
it will list sstables. For that, construct it with the storage options
taken from the table.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When opening an existing (or just sealed) sstable its components are
stat()-ed to get the on-disk sizes and a bit more. Stat-ing a file by
name on S3 is not (yet) implemented and doing it file-by-file can be
quite terrible. So add a method to return sstable stats in a
storage-specific manner. For S3 this can be implemented by getting the
info from the ownership table (in the future).
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This time sstable needs to create a data sink for a component without
having the file at hand. That's pretty much the same as in the previous
patch, but the method declaration differs slightly.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The method needs to create two data sinks -- for the Data and Index
files -- and then wrap them with more stuff (compression, checksums,
streams, etc.). With the S3 backend, using file-output-stream won't work,
because S3 storage cannot provide a writable file API (it has data_sink
instead).
This patch extracts file_data_sink creation so that it could be
virtualized with storage API later.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Currently sstable carries a filesystem_storage instance on board. Next
patches will make it possible to use some other storage with different
data accessing methods. This patch makes sstable carry an abstract storage
interface and makes the existing filesystem_storage implement it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Sometimes an sstable is used for random read, sometimes -- for streamed
read using the input stream. For both cases the file API can be
provided, because S3 API allows random reads of arbitrary lengths.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Putting a large object into S3 using plain PUT is a bad choice -- one
needs to collect the whole object in memory, then send it as a
content-length request with a plain body. Multipart upload causes less
memory stress, but it has its limitation -- each part must be at least
5Mb in size. For that reason the file API doesn't work -- the file IO
API operates with external memory buffers, and the file impl would only
have raw pointers to them. To collect a 5Mb chunk in RAM, the impl would
have to copy the memory, which is not good. Unlike the file API, the
data_sink API is more flexible, as it has temporary buffers at hand and
can cache them in a zero-copy manner.
Having said that, the S3 data_sink implementation is like this:
* put(buffer):
move the buffer into local cache, once the local cache grows above 5Mb
send out the part
* flush:
send out whatever is in cache, then send upload completion request
* close:
check that the upload finished (in flush), abort the upload otherwise
User of the API may (actually should) wrap the sink with output_stream
and use it as any other output_stream.
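As a rough illustration of the put/flush/close behavior described above, here is a hedged, synchronous sketch. `upload_part()` and the completion flag are stand-ins for the real S3 requests, and none of these names are the actual Scylla API:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch of the multipart-upload data_sink behavior.
class s3_sink_sketch {
    static constexpr size_t min_part_size = 5 * 1024 * 1024; // S3 multipart minimum
    std::vector<std::string> _cache;  // buffers are moved in: zero-copy caching
    size_t _cached = 0;               // bytes currently cached
    size_t _parts_sent = 0;
    bool _completed = false;

    void upload_part() {              // stand-in for the part-upload request
        _parts_sent++;
        _cache.clear();
        _cached = 0;
    }
public:
    void put(std::string buf) {       // move the buffer into the local cache
        _cached += buf.size();
        _cache.push_back(std::move(buf));
        if (_cached >= min_part_size) {   // cache grew above 5Mb -- send the part
            upload_part();
        }
    }
    void flush() {                    // send the tail, then complete the upload
        if (_cached > 0) {
            upload_part();
        }
        _completed = true;            // stand-in for the upload-completion request
    }
    void close() {                    // abort if flush() never completed the upload
        if (!_completed) {
            /* abort_upload(); */
        }
    }
    size_t parts_sent() const { return _parts_sent; }
};
```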
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Those include -- HEAD to get the size, PUT to upload an object in one go, GET
to read the object as a contiguous buffer, and DELETE to drop one.
The client uses http client from seastar and just implements the S3
protocol using it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The facilities in the run.py script allow launching scylla over a temporary
directory, waiting for it to come alive, killing it, etc. Their limitation
is that the work-dir created for scylla is tightly coupled with its
pid. The object-storage test in the next patches will need to check that
the sstables are preserved on scylla restart, and this hard binding of
workdir to pid won't work.
This patch generalizes the scylla run/abort helpers to accept an
external directory to work on and adds a call to restart scylla process
over existing directory.
And one small related change here -- log file is opened in O_APPEND mode
so that restarted scylla process continues writing into the old file.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When test.py starts, it activates a minio server inside the test dir and
configures an anonymous bucket for test cases to run on.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When an sstable is opened, it generates certain content into the TOC file.
In filesystem storage this first goes into a TemporaryTOC one. The future
S3 driver will need to put the same content into the TOC object. To avoid
producing duplicate code, detach the content generation into a helper.
Next patches will make use of it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Scylla stores a dht::token into each summary entry, for convenience.
But that costs us 16 bytes for each summary entry. That's because
dht::token has a kind field in addition to data, both 64 bits.
With 1M partitions, each averaging 4k bytes, a summary may end up
with ~90k summary entries. So dht::token alone will add ~1.5M to the
memory footprint of the summary.
We know summary samples index keys, therefore all tokens in all
summary entries cannot have any token kind other than 'key'.
Therefore, we can save 8 bytes for each summary entry by storing
a 64-bit raw token and converting it back into token whenever
needed.
Memory footprint of summary entries in a summary goes from
sizeof(summary_entry) * entries.size(): 1771520
to
sizeof(summary_entry) * entries.size(): 1417216
which is explained by the 8 bytes reduction per summary entry.
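To illustrate where the 8 bytes come from, here is a hypothetical layout sketch. These are not the actual Scylla types (the real `dht::token` and `summary_entry` contain more fields), just a model of the kind+data versus raw-value tradeoff described above:

```cpp
#include <cassert>
#include <cstdint>

// Old model: a token carries a kind field in addition to the data, both 64 bits.
enum class token_kind : uint64_t { before_all_keys, key, after_all_keys };

struct token_with_kind {       // 8 (kind) + 8 (data) = 16 bytes
    token_kind kind;
    uint64_t data;
};

// Since summary samples index keys, every stored token has kind 'key',
// so the kind field carries no information and can be dropped.
struct summary_entry_old { token_with_kind token; /* key view, position, ... */ };
struct summary_entry_new { uint64_t raw_token;    /* key view, position, ... */ };

static_assert(sizeof(token_with_kind) == 16, "kind + data, both 64-bit");
static_assert(sizeof(summary_entry_new::raw_token) == 8, "raw token only");
```

Reconstructing the full token when needed is just attaching the known 'key' kind back to the raw 64-bit value.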
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
summary has a memory pool, which is implemented as a set of contiguous
buffers of exponentially increasing size, with a max size of 128k.
This pool served for storing both the keys of summary entries and their
respective tokens. The summary entry itself just stores a string_view,
which points to the actual data in the memory pool.
Since commit 31593e1451, which removed token_view, summary_entry
stores the actual token, not just a view.
Therefore, memory is being wasted, as the SSTable loader / writer is
unnecessarily storing the token data in the pool.
With 11k summary entries, the footprint drops from 756004 to 624932,
an 18% reduction. Of course, the reduction depends on factors like key
size, which can significantly outweigh this waste.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Use token_metadata get_endpoint_to_host_id_map_for_reading
to get all normal token owners for all node operations,
rather than using gossip for some operation and
token_metadata for others.
Fixes #12862
Closes #13256
* github.com:scylladb/scylladb:
storage_service: node ops: standardize sync_nodes selection
storage_service: get_ignore_dead_nodes_for_replace: make static and rename to parse_node_list
This is not really an error, so print it in debug log_level
rather than error log_level.
Fixes#13374
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes#13462
Except where usage of `std::regex` is required by 3rd-party library interfaces.
As demonstrated countless times, std::regex's practice of using recursion for pattern matching can result in stack overflow, especially on AARCH64. The most recent incident happened after merging https://github.com/scylladb/scylladb/pull/13075, which (indirectly) uses `sstables::make_entry_descriptor()` to test whether a certain path is a valid scylla table path in a trial-and-error manner. This resulted in stacks blowing up on AARCH64.
To prevent this, use the already tried and tested method of switching from `std::regex` to `boost::regex`. Don't wait until each of the `std::regex` sites explodes; replace them all preemptively.
Refs: https://github.com/scylladb/scylladb/issues/13404
Closes #13452
* github.com:scylladb/scylladb:
test: s/std::regex/boost::regex/
utils: s/std::regex/boost::regex/
db/commitlog: s/std::regex/boost::regex/
types: s/std::regex/boost::regex/
index: s/std::regex/boost::regex/
duration.cc: s/std::regex/boost::regex/
cql3: s/std::regex/boost::regex/
thrift: s/std::regex/boost::regex/
sstables: use s/std::regex/boost::regex/
in c642ca9e73, a reference to the parameter `config` passed to the
`thrift_server`'s constructor is passed down to
`create_handler_factory()`, which keeps it so it can create connection
handlers on demand. but unfortunately,
- the `config` parameter is a temporary variable
- the `config` parameter is moved away in the constructor after
`create_handler_factory()` is called
hence we have a dangling reference when the factory created by
`create_handler_factory()` tries to dereference it when
handling a new incoming connection.
in this change,
- the definitions of `_config` and `_handler_factory` member
variables are transposed, so that the former is initialized
first.
- `_handler_factory` now keeps a reference to `_config`'s member
variable, so that the weak reference it holds is always valid.
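The fix relies on the C++ rule that non-static data members are initialized in declaration order. A minimal sketch of the corrected layout, with hypothetical names (the real classes hold a thrift config and a handler factory):

```cpp
#include <cassert>
#include <string>

// Members initialize in declaration order: the member holding the data
// must be declared before the member keeping a reference to it.
class server {
    std::string _config;             // declared first, so initialized first
    const std::string& _config_ref;  // safe: bound to the member, not a temporary
public:
    explicit server(std::string cfg)
        : _config(std::move(cfg))    // config moved into the long-lived member...
        , _config_ref(_config)       // ...before the reference is bound to it
    {}
    const std::string& config() const { return _config_ref; }
};
```

Had `_config_ref` been bound to the constructor parameter `cfg` (or declared before `_config`), it would dangle once the constructor returns, which is the bug fixed here.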
Fixes#13455
Branches: none
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes#13456
since the only user of operator<<(..., resource_kind) is now
`auth_resource_test`, let's just move it into this test. and
there is no need to keep this operator in the header file where
`resource_kind` is defined.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
this is part of a series migrating from `operator<<(ostream&, ..)`
based formatting to fmtlib based formatting. the goal here is to enable
fmtlib to print `auth::resource_kind`
without the help of fmt::ostream. its `operator<<(ostream,..)` is
reimplemented using fmtlib accordingly to ease the review.
Refs #13245
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
since we already have fmt::formatter<authentication_option>, and
there are no existing users of `operator<<(ostream&,
authentication_option)`, let's just drop it.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
this is part of a series migrating from `operator<<(ostream&, ..)`
based formatting to fmtlib based formatting. the goal here is to enable
fmtlib to print `auth::auth_authentication_options`
without the help of fmt::ostream. its `operator<<(ostream,..)` is
reimplemented using fmtlib accordingly to ease the review.
Refs #13245
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
The former is prone to producing stack overflow, as it uses recursion in
its match implementation.
The migration is entirely mechanical for the most part.
escape() needs some special treatment; it looks like boost::regex wants
a double-escaped backspace.
The system_keyspace.hh now includes raft stuff, topology changes stuff, task_manager stuff, etc. It's going to include tablets.hh (but maybe not). Anything that deals with system keyspace, and includes system_keyspace.hh, would transitively pull these too. This header is becoming a central hub for all the features.
This PR removes all the headers from system_keyspace.hh that correspond to other "subsystems" keeping only generic mutations/querying and seastar ones.
Closes#13450
* github.com:scylladb/scylladb:
system_keyspace.hh: Remove unneeded headers
system_keyspace: Move topology_mutation_builder to storage_service
system_keyspace: Move group0_upgrade_state conversions to group0 code
After a failed topology operation, like bootstrap / decommission /
removenode, the cluster might contain a garbage entry in either token
ring or group 0. This entry can be cleaned up by executing removenode on
any other node, pointing to the node that failed to bootstrap or leave
the cluster.
Document this procedure, including a method of finding the host ID of a
garbage entry.
Add references in other documents.
Fixes: #13122
Closes #13186
As a first step towards using host_id to identify nodes instead of ip addresses
this series introduces a node abstraction, kept in topology,
indexed by both host_id and endpoint.
The revised interface also allows callers to handle cases where nodes
are not found in the topology more gracefully, by introducing `find_node()` functions
that look up nodes by host_id or inet_address and take a `must_exist` parameter:
if false (the default), `find_node` returns nullptr when the node is not found;
if true, it throws an internal error, since this indicates a violation of the internal
assumption that the node must exist in the topology.
Callers that can handle missing nodes should use the more permissive flavor
and handle the !find_node() case gracefully.
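A hedged sketch of the `find_node()` convention described above. The real interface lives in `locator::topology` and indexes nodes by both host_id and endpoint; the types here are simplified illustrations:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Simplified stand-ins for locator's node and topology types.
struct node { std::string host_id; };

class topology_sketch {
    std::map<std::string, node> _nodes_by_host_id;
public:
    void add_node(node n) { _nodes_by_host_id.emplace(n.host_id, std::move(n)); }

    // must_exist=false (default): return nullptr when the node is missing,
    // letting the caller handle the absence gracefully.
    // must_exist=true: a missing node violates an internal assumption -- throw.
    const node* find_node(const std::string& host_id, bool must_exist = false) const {
        auto it = _nodes_by_host_id.find(host_id);
        if (it == _nodes_by_host_id.end()) {
            if (must_exist) {
                throw std::runtime_error("node not found in topology: " + host_id);
            }
            return nullptr;
        }
        return &it->second;
    }
};
```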
Closes#11987
* github.com:scylladb/scylladb:
topology: add node state
topology: remove dead code
locator: add class node
topology: rename update_endpoint to add_or_update_endpoint
topology: define get_{rack,datacenter} inline
shared_token_metadata: mutate_token_metadata: replicate to all shards
locator: endpoint_dc_rack: refactor default_location
locator: endpoint_dc_rack: define default operator==
test: storage_proxy_test: provide valid endpoint_dc_rack