The current method of segregating partitions doesn't work well for a huge
number of small partitions. For especially bad input, it can produce
hundreds or even thousands of buckets. This patch adds a new segregator
specialized for this use-case. This segregator uses a memtable to sort
out-of-order partitions in-memory. When the memtable size reaches the
provided max-memory limit, it is flushed to disk and a new empty one is
created. In-order partitions bypass the sorting altogether and go to the
fast-path bucket.
The new method is not used yet; it will be put to use in the next patch.
The partition based splitting writer (used by scrub) was found to be
exception-unsafe, converting an `std::bad_alloc` into an assert failure.
This series fixes the problem and adds a unit test checking exception
safety against `std::bad_alloc`, fixing any other related problems found
along the way.
Fixes: https://github.com/scylladb/scylla/issues/9452
Closes #9453
* github.com:scylladb/scylla:
test: mutation_writer_test: add exception safety test for segregate_by_partition()
mutation_writer: segregate_by_partition(): make exception safe
mutation_reader: queue_reader_handle: make abandoned() exception safe
mutation_writer: feed_writers(): make it a coroutine
mutation_writer: partition_based_splitting_writer: erase old bucket if we fail to create replacement
It's much more efficient to have a separate compaction task that consists
entirely of expired sstables and make sure it gets a unique "weight" than
to mix expired sstables with non-expired ones, adding unpredictable
latency to the eviction of an expired sstable.
This change also improves the visibility of eviction events because now
they are always going to appear in the log as compactions that compact into
an empty set.
Fixes #9533
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Closes #9534
Although the DynamoDB API responses are JSON, additional conventions apply
to these responses - such as how error codes are encoded in JSON. For this
reason, DynamoDB uses the content type `application/x-amz-json-1.0` instead
of the standard `application/json` in its responses.
Until this patch, Scylla used `application/json` in its responses. This
unexpected content-type didn't bother any of the AWS libraries which we
tested, but it does bother the aiodynamo library (see HENNGE/aiodynamo#27).
Moreover, we should return the x-amz-json-1.0 content type for
future-proofing: it turns out that AWS already defined x-amz-json-1.1 - see:
https://awslabs.github.io/smithy/1.0/spec/aws/aws-json-1_1-protocol.html
The 1.1 content type differs (only) in how it encodes error replies.
If one day DynamoDB starts to use this new reply format (it doesn't yet)
and DynamoDB libraries need to differentiate between the two
reply formats, Alternator had better return the right one.
This patch also includes a new test that the Content-Type header is
returned with the expected value. The test passes on DynamoDB, and
after this patch it starts to pass on Alternator as well.
Fixes #9554.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211031094621.1193387-1-nyh@scylladb.com>
Setting a value of "warn" will still allow the create or alter commands,
but will warn the user with a message that appears both in the log
and in cqlsh, for example.
This is another step towards deprecating DTCS. Users need to know we're
moving towards this direction, and setting the default value to warn
is needed for this. The next step is to set it to false, and finally remove
it from the code base.
Refs #8914.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20211029184503.102936-1-raphaelsc@scylladb.com>
Objects of type "api_error" are used in Alternator when throwing an
error which will be reported as-is to the user as part of the official
DynamoDB protocol.
Although api_error objects are often thrown, the api_error class was not
derived from std::exception, because that's not necessary in C++.
However, it is *useful* for this exception to derive from std::exception,
so this is what this patch does. It is useful for api_error to inherit
from std::exception because then our logging and debugging code knows
how to print this exception with all its details. All we need to do is
to implement a what() virtual function for api_error.
Before this patch, logging an api_error just logs the type's name
(i.e., the string "api_error"). After this patch, we get the full
information stored in the api_error - the error's type and its message.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211017150555.225464-1-nyh@scylladb.com>
sprint() uses the printf-style formatting language while most of our
code uses the Python-derived format language from fmt::format().
The last mass conversion of sprint() to fmt (in 1129134a4a)
missed some callers (principally those that were on multiple lines, and
so the automatic converter missed them). Convert the remainder to
fmt::format(), and some sprintf() and printf() calls, so we have just
one format language in the code base. seastar::sprint() ought to be
deprecated and removed.
Test: unit (dev)
Closes #9529
* github.com:scylladb/scylla:
utils: logalloc: convert debug printf to fmt::print()
utils: convert fmt::fprintf() to fmt::print()
main: convert fprint() to fmt::print()
compress: convert fmt::sprintf() to fmt::format()
tracing: replace seastar::sprint() with fmt::format()
thrift: replace seastar::sprint() with fmt::format()
test: replace seastar::sprint() with fmt::format()
streaming: replace seastar::sprint() with fmt::format()
storage_service: replace seastar::sprint() with fmt::format()
repair: replace seastar::sprint() with fmt::format()
redis: replace seastar::sprint() with fmt::format()
locator: replace seastar::sprint() with fmt::format()
db: replace seastar::sprint() with fmt::format()
cql3: replace seastar::sprint() with fmt::format()
cdc: replace seastar::sprint() with fmt::format()
auth: replace seastar::sprint() with fmt::format()
Alternator auth module used to piggy-back on top of CQL query processor
to retrieve authentication data, but it's no longer the case.
Instead, storage proxy is used directly.
Closes #9538
Convert storage_service::describe_ring to a coroutine
to prevent reactor stalls as seen in #9280.
Fixes #9280
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes#9282
With 062436829c,
we return all input sstables in strict mode
if they are disjoint, even if they don't need reshaping at all.
This leads to an infinite reshape loop when uploading sstables
with TWCS.
The optimization for disjoint sstables is worth it
in relaxed mode too, so this change first makes sorting of the input
sstables by first_key independent of reshape_mode,
and then adds a check for sstable_set_overlapping_count
before trimming either the multi_window vector or
any single_window bucket, so that we don't trim the list
if the candidates are disjoint.
Adjust twcs_reshape_with_disjoint_set_test accordingly.
And also add some debug logging in
time_window_compaction_strategy::get_reshaping_job
so one can figure out what's going on there.
Test: unit(dev)
DTest: cdc_snapshot_operation.py:CDCSnapshotOperationTest.test_create_snapshot_with_collection_list_with_base_rows_delete_type
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211025071828.509082-1-bhalevy@scylladb.com>
The cdc stream rewrite code launches work in the background. It's careful
to make sure all the dependencies are preserved via shared pointers, but
there are implicit dependencies (system_keyspace) that are not. To fix this
problem, this series changes the lifetime of the background work from
unbounded to be bounded by the lifetime of storage_service.
[ xemul: Not the storage_service, but the cdc_generation_service, now
all the dances with rewriting and its futures happen inside the cdc
part. Storage_service coroutinization and call to .stop() from main
are here for several reasons
- They are awesome
- Splitting a PR into two is frustrating (as compared to ML threads)
- Shia LaBeouf is a good motivational speaker
]
As explained in the patches, I don't expect this to be a real problem and the
series just paves the way for making system_keyspace an explicit
dependency.
Test: unit (dev)
Closes #9356
* github.com:scylladb/scylla:
cdc: don't allow background streams description rewrite to delay too far
storage_service: coroutinize stop()
main: stop storage_service on shutdown
'system.status' and 'system.describe_ring' are imperfect names for
what they do, so rename them. Fortunately they aren't exposed in any
released version so there is no compatibility concern.
Closes #9530
* github.com:scylladb/scylla:
system_keyspace: rename 'system.describe_ring' to 'system.token_ring'
system_keyspace: rename 'system.status' to 'system.cluster_status'
Currently, aws_instance.ephemeral_disks() returns both ephemeral disks
and EBS disks on Nitro System.
Both are attached as NVMe disks, so we need to add disk-type
detection to the NVMe handling logic.
Fixes #9440
Closes #9462
Some time ago we started gathering stats for system tables in a separate
class in order to be able to distinguish which queries come from the
user - e.g. if the unpaged queries are internal or not.
Originally, only local system tables were moved into this class,
i.e. system and system_schema. It would make sense, however, to also
include other internal keyspaces in this separate class - which includes
system_distributed, system_traces, etc.
Fixes #9380
Closes #9490
If mean() is called when there are no elements in the histogram,
a runtime error occurs due to division by zero.
approx_exponential_histogram::mean() handles this, but for some
reason we forgot to do the same for estimated_histogram.
This problem was found when adding a unit test which calls
mean() on an empty histogram.
Fixes #9531.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20211027142813.56969-1-raphaelsc@scylladb.com>
Throw a proper exception from do_load_schemas if parse_statements
fails to parse the schema cql.
Catch it in the scylla-sstable main() function so
it won't be reported as a seastar unhandled exception.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211027124032.1787347-1-bhalevy@scylladb.com>
Table names are usually nouns, so SELECT/INSERT statements
sound natural: "SELECT * FROM pets". 'system.describe_ring' defies
this convention. Rename it to 'system.token_ring' so selects are natural.
The name is not in any released version, so we can safely rename it.
'system.status' is too generic, it doesn't explain the status of what.
'system.node_status' is also ambiguous (this node? all nodes?) so I
picked 'system.cluster_status'.
The internal name, nodetool_status_table, was even worse (we're
not querying the status of nodetool!) but fortunately wasn't exposed.
The name is not in any released version, so we can safely rename it.
sprint() is obsolete. Note that some calls were to helper functions that
use sprint(), not to sprint() directly, so both the helpers and
their callers were modified.
We have two identical "Truncated frame" errors, at:
* read_frame_size() in serialization_visitors.hh;
* cql_server::connection::read_and_decompress_frame() in
transport/server.cc;
When such an exception is thrown, it is impossible to tell where it was
thrown from, and it doesn't carry any further information
(beyond the basic information its being thrown implies).
This patch solves both problems: it makes the exception messages unique
per location and it adds information about why it was thrown (the
expected vs. actual size of the frame).
Ref: #9482
Closes #9520
In the definition for /column_family/major_compaction/{name} there is an
incorrect use of "bool" instead of "boolean".
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Closes #9516
Although the sstable name is part of the system.large_* records,
it is not printed in the log.
In particular, this is essential for the "too many rows" warning,
which currently does not record a row in any large_* table,
so we can't correlate it with an sstable.
Fixes #9524
Test: unit(dev)
DTest: wide_rows_test.py
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211027074104.1753093-1-bhalevy@scylladb.com>
Currently row marker shadowing the shadowable tombstone is only checked
in `apply(row_marker)`. This means that shadowing will only be checked
if the shadowable tombstone and row marker are set in the correct order.
This can, at the very least, cause flakiness in tests, when a mutation
produced in just the right way has a shadowable tombstone that can be
eliminated when the mutation is reconstructed in a different way,
leading to artificial differences when comparing those mutations.
This patch fixes this by checking shadowing in
`apply(shadowable_tombstone)` too, making the shadowing check symmetric.
There is still one vulnerability left: `row_marker& row_marker()`, which
allows overwriting the marker without triggering the corresponding
checks. We cannot remove this overload as it is used by compaction, so we
just add a comment to it warning that `maybe_shadow()` has to be invoked
manually if it is used to mutate the marker (compaction takes care of
that). A caller which didn't do the manual check is
mutation_source_test: this patch updates it to use `apply(row_marker)`
instead.
Fixes: #9483
Tests: unit(dev)
Closes #9519
Make sure to close the reader created by flush_fragments
if an exception occurs before it's moved to `populate_views`.
Note that it is also ok to close the reader _after_ it has been
moved, in case populate_views itself throws after closing the
reader that was moved into it. For convenience, flat_mutation_reader::close
supports close-after-move.
Fixes #9479
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211024164138.1100304-1-bhalevy@scylladb.com>
Allocating the exception might fail, terminating the application, as
`abandoned()` is called in a noexcept context. Handle this case by
catching the bad_alloc and aborting the reader with it instead.
Functions `upper_bound` and `lower_bound` had signatures:
```
template<typename T, typename... Args>
static rows_iter_type lower_bound(const T& t, Args... args);
```
This caused a decay from `const schema&` to `schema` as one of the args,
which in turn copied the schema in a fair number of the queries. Fix
that by changing the parameter type to `Args&&`, which doesn't discard
the reference.
Fixes #9502
Closes #9507