"
There are 3 overlapping problems with the test case. It
has a use-after-move that hides wrong window selection and
relies on the time-since-epoch being aligned with the time
window by chance.
tests: unit(dev)
"
* 'br-twcs-test-fixes' of https://github.com/xemul/scylla:
test, compaction: Do not rely on random timestamp
test, compaction: Fix use after move in twcs reshape
There are two APIs for checking the repair status, and they behave
differently when the id is not found.
```
{"host": "192.168.100.11:10001", "method": "GET", "uri":
"/storage_service/repair_async/system_auth?id=999", "duration": "1ms",
"status": 400, "bytes": 49, "dump": "HTTP/1.1 400 Bad
Request\r\nContent-Length: 49\r\nContent-Type: application/json\r\nDate:
Wed, 03 Nov 2021 10:49:33 GMT\r\nServer: Seastar
httpd\r\n\r\n{\"message\": \"unknown repair id 999\", \"code\": 400}"}
{"host": "192.168.100.11:10001", "method": "GET", "uri":
"/storage_service/repair_status?id=999&timeout=1", "duration": "0ms",
"status": 500, "bytes": 49, "dump": "HTTP/1.1 500 Internal Server
Error\r\nContent-Length: 49\r\nContent-Type: application/json\r\nDate:
Wed, 03 Nov 2021 10:49:33 GMT\r\nServer: Seastar
httpd\r\n\r\n{\"message\": \"unknown repair id 999\", \"code\": 500}"}
```
The correct status code is 400, as this is a parameter error and should
not be retried.
Returning status code 500 makes smarter HTTP clients retry the request
in the hope that the server recovers.
After this patch:
```
$ curl -X GET 'http://127.0.0.1:10000/storage_service/repair_async/system_auth?id=9999'
{"message": "unknown repair id 9999", "code": 400}
$ curl -X GET 'http://127.0.0.1:10000/storage_service/repair_status?id=9999'
{"message": "unknown repair id 9999", "code": 400}
```
Fixes #9576
Closes #9578
Again, there's a sub-case with sequential timestamps that still
works by chance. This time it's because splitting 256 sstables
into buckets of at most 8 is allowed to leave the first and the
last bucket with fewer than 8 items, e.g. 3, 8, ..., 8, 5. The
exact generation depends on the time-since-epoch at which it
starts.
When all the cases are run together, this time luckily happens
to be well-aligned with the 8-hour window and the generated buckets
are filled perfectly. When this particular test case is run alone
(e.g. by --run_test or --parallel-cases), the starting time
becomes different and the first bucket gets fewer than 4 sstables.
The fix is to adjust the starting time to be aligned with the
8-hour window.
Actually, the 8-hour window appeared in the previous patch; before
that it was 24 hours. Nonetheless, the above reasoning applies to any
time-window size smaller than 256 hours, so it's still an
independent fix.
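A minimal sketch of the alignment idea, assuming the test's start time is expressed as hours since epoch (`align_to_window()` is an illustrative helper, not the actual test code):

```cpp
#include <chrono>

// Snap the starting time-since-epoch down to a window boundary, so the
// first generated bucket is always full regardless of when the test runs.
std::chrono::hours align_to_window(std::chrono::hours since_epoch,
                                   std::chrono::hours window) {
    return since_epoch - (since_epoch % window);
}
```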
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The options are std::move-d twice -- first into the schema builder, then
into the compaction strategy. Surprisingly, it is the 2nd move that makes
the test work.
There's a sub-case in this test that checks sstables with incremental
timestamps in 1-hour steps -- 0h, 1h, 2h, ..., 255h. Next, the twcs
bucket generator obeys a minimal threshold of 4 sstables per bucket;
buckets with fewer sstables are not included in the job. Finally,
since the options used to create the twcs are empty after the 1st
move, the default window of 24 hours is used. If they were configured
correctly with a 1-hour window, then all buckets would contain 1 sstable
and the generated job would be empty.
So the fix is twofold -- don't move after move, and make the window size
large enough to fit more sstables than the mentioned minimum.
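A simplified illustration of the double-move bug, assuming the options are a plain string map (`consume()` stands in for the schema builder and the strategy constructor; this is not the actual test code):

```cpp
#include <map>
#include <string>
#include <utility>

using options_map = std::map<std::string, std::string>;

// Stand-in for any sink that takes ownership of the options.
options_map consume(options_map opts) { return opts; }

std::pair<options_map, options_map> double_move() {
    options_map opts{{"compaction_window_size", "1"}};
    auto to_schema_builder = consume(std::move(opts));  // 1st move: ok
    auto to_strategy = consume(std::move(opts));        // use-after-move
    return {to_schema_builder, to_strategy};
}
```

In practice the moved-from map is left empty, so the strategy silently falls back to its defaults -- the 24-hour window described above.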
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
In UpdateItem's AttributeUpdates (old-style parameter) we were missing
support for the ADD operation - which can increment a number, or add
items to sets (or to lists, even though this fact isn't documented).
This two-patch series adds this missing feature. The first patch just moves
an existing function to where we can reuse it, and the second patch is
the actual implementation of the feature (and enabling its test).
Fixes #5893
Closes #9574
* github.com:scylladb/scylla:
alternator: add support for AttributeUpdates ADD operation
alternator: move list_concatenate() function
Currently, scylla-server fails to start on ARM instances because scylla_io_setup does not have preset parameters even though the instance type was added to the 'supported instances' list.
To fix this, we need to add io parameter presets to scylla_io_setup.
Also, we mistakenly added EBS-only instances in a004b1da30; they need to be removed.
Instances that do not have an ephemeral disk should be 'unsupported instances': we still run our AMI on them, but we print a warning message on the login prompt and the user is required to run scylla_io_setup.
Fixes #9493
Closes #9532
* github.com:scylladb/scylla:
scylla_util.py: remove EBS only ARM instances from support instance list
scylla_io_setup: support ARM instances on AWS
In UpdateItem's AttributeUpdates (old-style parameter) we were missing
support for the ADD operation - which can increment a number, or add
items to sets (or to lists, even though this fact isn't documented).
This patch adds this feature, and the test for it begins to pass so its
"xfail" marker is removed.
Fixes#5893
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
The list_concatenate() function was only used for UpdateExpression's
ADD operation, so we made it a static function in the source file where
it was used. In the next patch, we'll want to use it in another place
(AttributeUpdates' ADD operation), so let's move it to the same file
where similar functions for sets exist.
This patch is almost entirely a code move, but also makes one small
change: list_concatenate() used to throw an exception if one of the
arguments wasn't a list, but the text of this exception was specific to
UpdateExpression. So in the new version, we return a null value in this
case - and the caller checks for it and throws the right exception.
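A hedged sketch of the resulting calling convention (illustrative types; the real function works on Alternator's serialized values):

```cpp
#include <optional>
#include <vector>

using list_value = std::vector<int>;  // stand-in for a serialized list

// Returns std::nullopt when either argument is not a list; each caller
// (UpdateExpression ADD, AttributeUpdates ADD) then throws its own
// context-specific exception.
std::optional<list_value> list_concatenate(const std::optional<list_value>& a,
                                           const std::optional<list_value>& b) {
    if (!a || !b) {
        return std::nullopt;
    }
    list_value out = *a;
    out.insert(out.end(), b->begin(), b->end());
    return out;
}
```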
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
The "Authorization" HTTP header is used in DynamoDB API to sign
requests. Our parser for this header, in server::verify_signature(),
required the different components of this header to be separated by
a comma followed by a whitespace - but it turns out that in DynamoDB
both spaces and commas are optional - one of them is enough.
At least one DynamoDB client library - the old "boto" (which predated
boto3) - builds this header without spaces.
In this patch we add a test that shows that an Authorization header
with spaces removed works fine in DynamoDB but didn't work in
Alternator, and after this patch modifies the parsing code for this
header, the test begins to pass (and the other tests show that the
previously-working cases didn't break).
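The tolerant tokenization can be sketched as follows, assuming the header is a flat string of `key=value` components (illustrative, not the actual verify_signature() code):

```cpp
#include <string>
#include <vector>

// Split on commas and/or spaces, so "a=1, b=2", "a=1,b=2" and "a=1 b=2"
// all produce the same component list.
std::vector<std::string> split_header_components(const std::string& header) {
    std::vector<std::string> out;
    std::string cur;
    for (char c : header) {
        if (c == ',' || c == ' ') {
            if (!cur.empty()) {
                out.push_back(cur);
                cur.clear();
            }
        } else {
            cur += c;
        }
    }
    if (!cur.empty()) {
        out.push_back(cur);
    }
    return out;
}
```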
Fixes#9568
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211101214114.35693-1-nyh@scylladb.com>
Since we require ephemeral disks for our AMI, these EBS-only ARM
instances cannot be added to the 'supported instances' list.
We are still able to run our AMI on these instance types, but the login
message warns that it is an 'unsupported instance type' and requires
running scylla_io_setup manually.
* seastar 083898a172...a189cdc45d (7):
> print: deprecate print() family
> treewide: replace uses of fmt_print() and fprint() with direct fmt calls
> circular_buffer: mark clear noexcept
> circular_buffer: mark trivial methods noexcept
> Merge "file: allow destroying append_challenged_posix_file_impl following a close failure" from Benny
> merge: Add parsing HTTP response status
> inet_address: fix usage of `htonl` for clang
scrub_compaction assumes that `make_interposer_consumer()` is called
only when `use_interposer_consumer()` returns true. This is false, so in
effect scrub always ends up using the segregating interposer. Fix this
by short-circuiting the former method when the latter returns false,
returning the passed-in consumer unchanged.
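A minimal sketch of the shape of the fix (illustrative signatures; the real consumers operate on mutation fragment streams):

```cpp
#include <functional>
#include <utility>

using consumer_fn = std::function<int(int)>;

// The factory now checks the predicate itself instead of trusting its
// callers: when no interposer is wanted, the consumer passes through
// unchanged.
consumer_fn make_interposer_consumer(bool use_interposer, consumer_fn c) {
    if (!use_interposer) {
        return c;  // previously the segregator was installed unconditionally
    }
    // Stand-in for wrapping c in the segregating interposer.
    return [c = std::move(c)](int x) { return c(x) + 1000; };
}
```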
Tests: unit(dev)
Fixes #9541
Closes #9564
The current disk-based segregation method works well enough for most cases, but it struggles greatly when there are a lot of partitions in the input. When this is the case it will produce tons of buckets (sstables), on the order of hundreds or even thousands. This puts a huge strain on different parts of the system.
This series introduces a new segregation method which specializes in the case of many small partitions. If the conditions are right, it can drastically reduce the number of buckets. In one case I tested, a 1.1GB sstable with 3.6M partitions in it produced just 2 output sstables, down from the 500+ with the on-disk method.
This new method uses a memtable to sort out-of-order partitions. In-order partitions bypass this sorting altogether and go to disk directly. This method is not suitable for cases where either the partitions are large or the total amount of data is large; for those the disk-based method should be used. Scrub compaction decides on the method to use based on heuristics.
Tests: unit(dev)
Closes #9548
* github.com:scylladb/scylla:
compaction: scrub_compaction: add bucket count to finish message
test/boost: mutation_writer_test: harden the partition-based segregator test
mutation_writer: remove now unused on-disk partition segregator
compaction,test: use the new in-memory segregator for scrub
mutation_writer/partition_based_splitting_writer: add memtable-based segregator
We introduce a new operation to the framework: `reconfiguration`.
The operation sends a reconfiguration request to a Raft cluster. It
bounces a few times in case of `not_a_leader` results.
A side effect of the operation is modifying a `known` set of nodes which
the operation's state has a reference to. This `known` set can then be
used by other operations (such as `raft_call`s) to find the current
leader.
For now we assume that reconfigurations are performed sequentially. If a
reconfiguration succeeds, we change `known` to the new configuration. If
it fails, we change `known` to be the set sum of the previous
configuration and the current configuration (because we don't know what
the configuration will eventually be - the old or the attempted one - so
any member of the set sum may eventually become a leader).
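The update rule for the `known` set can be sketched as follows (illustrative types; the real state tracks Raft server ids):

```cpp
#include <set>
#include <string>

using node_set = std::set<std::string>;

// On success the attempted configuration is now authoritative; on failure
// either configuration may win, so keep the union of both.
node_set update_known(const node_set& current, const node_set& attempted,
                      bool success) {
    if (success) {
        return attempted;
    }
    node_set both = current;
    both.insert(attempted.begin(), attempted.end());
    return both;
}
```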
We use a dedicated thread (similarly to the network partitioning thread)
to periodically perform random reconfigurations.
* kbr/reconfig-v2:
test: raft: randomized_nemesis_test: perform reconfigurations in basic_generator_test
test: raft: randomized_nemesis_test: improve the bouncing algorithm
test: raft: randomized_nemesis_test: handle more error types
test: raft: randomized_nemesis_test put `variant` and `monostate` `ostream` `operator<<`s into `std` namespace
test: raft: randomized_nemesis_test: `reconfiguration` operation
It is useful to know how many buckets (output sstables) scrub produced
in total. The end compaction message will only report those still open
when the scrub finished, but will omit those that were closed in the
middle.
Test both methods -- the "old" disk-based one and the recently added
in-memory one -- with different configurations, and also add additional
checks to ensure they don't lose data.
The current method of segregating partitions doesn't work well for a huge
number of small partitions. For especially bad input, it can produce
hundreds or even thousands of buckets. This patch adds a new segregator
specialized for this use-case. This segregator uses a memtable to sort
out-of-order partitions in-memory. When the memtable size reaches the
provided max-memory limit, it is flushed to disk and a new empty one is
created. In-order partitions bypass the sorting altogether and go to the
fast-path bucket.
The new method is not used yet, this will come in the next patch.
range::subtract() was used as a trick to implement
range::intersection() back when intersection() was not yet available.
The following code is problematic:
dht::token_range given_range_complement(tok_end, tok_start);
because dht::token_range should be non-wrapping range.
To fix, use compat::unwrap_into() to unwrap the range and use
the range::intersection() to calculate the intersection.
Note that even though given_range_complement is problematic, the current
code generates correct intersections.
Example 1:
```
$ curl -X POST 'http://127.0.0.1:10000/storage_service/repair_async/keyspace1?startToken=5&endToken=100'
[shard 0] repair - starting user-requested repair for keyspace keyspace1, repair id [id=1, uuid=aa71a192-5967-4f05-99b8-5febd9d81d50], options {{ endToken -> 100}, { startToken -> 5}}
[shard 0] repair - start=5, end=100, given_range_complement=(100, 5], wrange=(100, 5], is_wrap_around=true, ranges={(-inf, -7612759882658906007], (-7612759882658906007, -6766703710995023384], (-6766703710995023384, 2918449800065200439], (2918449800065200439, 8039072586540417979], (8039072586540417979, +inf)}, intersections={(5, 100]}
[shard 0] repair - New method intersections={(5, 100]}
```
Example 2:
```
$ curl -X POST 'http://127.0.0.1:10000/storage_service/repair_async/keyspace1?startToken=100&endToken=5'
[shard 0] repair - starting user-requested repair for keyspace keyspace1, repair id [id=1, uuid=f6076438-015c-4bdc-8ebd-0a55664365fa], options {{ endToken -> 5}, { startToken -> 100}}
[shard 0] repair - start=100, end=5, given_range_complement=(5, 100], wrange=(5, 100], is_wrap_around=false, ranges={(-inf, -7612759882658906007], (-7612759882658906007, -6766703710995023384], (-6766703710995023384, 2918449800065200439], (2918449800065200439, 8039072586540417979], (8039072586540417979, +inf)}, intersections={
(-inf, -7612759882658906007],
(-7612759882658906007, -6766703710995023384],
(-6766703710995023384, 5],
(100, 2918449800065200439],
(2918449800065200439, 8039072586540417979],
(8039072586540417979, +inf)}
[shard 0] repair - New method intersections={
(-inf, -7612759882658906007],
(-7612759882658906007, -6766703710995023384],
(-6766703710995023384, 5],
(100, 2918449800065200439],
(2918449800065200439, 8039072586540417979],
(8039072586540417979, +inf)}
```
Fixes #9560
Closes #9561
The two cql-pytest tests:
test_frozen_collection.py::test_wrong_set_order_in_nested
test_frozen_collection.py::test_wrong_set_order_in_nested_2
which used to fail, and were therefore marked "xfail", were fixed by commit
5589f348e7 ("cql3: expr: Implement evaluate(expr::bind_variable)").
That commit made the handling of bound variables in prepared statements
more rigorous, and in particular made sure that sets are re-sorted not
only if they are at the top level of the value (as happened in the old
code), but also if they are nested inside some other container. This
explains the surprising fact that we could only reproduce the bug with
prepared statements, and only with nested sets - while top-level sets
worked correctly.
As the tests no longer failed and the bug tested by them really did
get fixed, in this patch we remove the "xfail" marker from these
tests.
Closes #7856. This issue was really fixed by the aforementioned commit,
but let's close it now.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211029221731.1113554-1-nyh@scylladb.com>
We shouldn't be using Seastar as a text formatting library; that's
not its focus. Use fmt directly instead. fmt::print() doesn't return
the output stream which is a minor inconvenience, but that's life.
Closes#9556
The partition based splitting writer (used by scrub) was found to be exception-unsafe, converting an `std::bad_alloc` into an assert failure. This series fixes the problem and adds a unit test checking the exception safety against `std::bad_alloc`, fixing any other related problems found.
Fixes: https://github.com/scylladb/scylla/issues/9452
Closes #9453
* github.com:scylladb/scylla:
test: mutation_writer_test: add exception safety test for segregate_by_partition()
mutation_writer: segregate_by_partition(): make exception safe
mutation_reader: queue_reader_handle: make abandoned() exception safe
mutation_writer: feed_writers(): make it a coroutine
mutation_writer: partition_based_splitting_writer: erase old bucket if we fail to create replacement
It's much more efficient to have a separate compaction task that consists
entirely of expired sstables and make sure it gets a unique "weight",
rather than mixing expired sstables with non-expired sstables and adding
an unpredictable latency to the eviction of an expired sstable.
This change also improves the visibility of eviction events because now
they are always going to appear in the log as compactions that compact into
an empty set.
Fixes#9533
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Closes#9534
Although the DynamoDB API responses are JSON, additional conventions apply
to these responses - such as how error codes are encoded in JSON. For this
reason, DynamoDB uses the content type `application/x-amz-json-1.0` instead
of the standard `application/json` in its responses.
Until this patch, Scylla used `application/json` in its responses. This
unexpected content-type didn't bother any of the AWS libraries which we
tested, but it does bother the aiodynamo library (see HENNGE/aiodynamo#27).
Moreover, we should return the x-amz-json-1.0 content type for future
proofing: It turns out that AWS already defined x-amz-json-1.1 - see:
https://awslabs.github.io/smithy/1.0/spec/aws/aws-json-1_1-protocol.html
The 1.1 content type differs (only) in how it encodes error replies.
If one day DynamoDB starts to use this new reply format (it doesn't yet)
and DynamoDB libraries need to differentiate between the two
reply formats, Alternator had better return the right one.
This patch also includes a new test that the Content-Type header is
returned with the expected value. The test passes on DynamoDB, and
after this patch it starts to pass on Alternator as well.
Fixes#9554.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211031094621.1193387-1-nyh@scylladb.com>
Setting a value of "warn" will still allow the create or alter commands,
but will warn the user, with a message that will appear both at the log
and also at cqlsh for example.
This is another step towards deprecating DTCS. Users need to know we're
moving towards this direction, and setting the default value to warn
is needed for this. Next step is to set it to false, and finally remove
it from the code base.
Refs #8914.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20211029184503.102936-1-raphaelsc@scylladb.com>
Objects of type "api_error" are used in Alternator when throwing an
error which will be reported as-is to the user as part of the official
DynamoDB protocol.
Although api_error objects are often thrown, the api_error class was not
derived from std::exception, because that's not necessary in C++.
However, it is *useful* for this exception to derive from std::exception,
so this is what this patch does. It is useful for api_error to inherit
from std::exception because then our logging and debugging code knows
how to print this exception with all its details. All we need to do is
implement a what() virtual function for api_error.
Before this patch, logging an api_error just logs the type's name
(i.e., the string "api_error"). After this patch, we get the full
information stored in the api_error - the error's type and its message.
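The core of the change can be sketched like this (simplified; the real api_error carries more fields, such as the HTTP status):

```cpp
#include <exception>
#include <string>

class api_error : public std::exception {
    std::string _type;
    std::string _msg;
    std::string _what;  // cached: what() must return a stable C string
public:
    api_error(std::string type, std::string msg)
            : _type(std::move(type))
            , _msg(std::move(msg))
            , _what(_type + ": " + _msg) {
    }
    // Generic logging code catches std::exception and prints what(),
    // so it now sees the full error instead of just the type name.
    const char* what() const noexcept override { return _what.c_str(); }
};
```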
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211017150555.225464-1-nyh@scylladb.com>
sprint() uses the printf-style formatting language while most of our
code uses the Python-derived format language from fmt::format().
The last mass conversion of sprint() to fmt (in 1129134a4a)
missed some callers (principally those that were on multiple lines, and
so the automatic converter missed them). Convert the remainder to
fmt::format(), and some sprintf() and printf() calls, so we have just
one format language in the code base. seastar::sprint() ought to be
deprecated and removed.
Test: unit (dev)
Closes#9529
* github.com:scylladb/scylla:
utils: logalloc: convert debug printf to fmt::print()
utils: convert fmt::fprintf() to fmt::print()
main: convert fprint() to fmt::print()
compress: convert fmt::sprintf() to fmt::format()
tracing: replace seastar::sprint() with fmt::format()
thrift: replace seastar::sprint() with fmt::format()
test: replace seastar::sprint() with fmt::format()
streaming: replace seastar::sprint() with fmt::format()
storage_service: replace seastar::sprint() with fmt::format()
repair: replace seastar::sprint() with fmt::format()
redis: replace seastar::sprint() with fmt::format()
locator: replace seastar::sprint() with fmt::format()
db: replace seastar::sprint() with fmt::format()
cql3: replace seastar::sprint() with fmt::format()
cdc: replace seastar::sprint() with fmt::format()
auth: replace seastar::sprint() with fmt::format()
The Alternator auth module used to piggy-back on top of the CQL query
processor to retrieve authentication data, but that's no longer the case.
Instead, the storage proxy is used directly.
Closes#9538
Convert storage_service::describe_ring to a coroutine
to prevent reactor stalls as seen in #9280.
Fixes #9280
Closes #9282
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes#9282
With 062436829c,
we return all input sstables in strict mode
if they are disjoint, even if they don't need reshaping at all.
This leads to an infinite reshape loop when uploading sstables
with TWCS.
The optimization for disjoint sstables is worth it
also in relaxed mode, so this change first makes the sorting of the input
sstables by first_key order independent of reshape_mode,
and then adds a check for sstable_set_overlapping_count
before trimming either the multi_window vector or
any single_window bucket, such that we don't trim the list
if the candidates are disjoint.
Adjust twcs_reshape_with_disjoint_set_test accordingly.
And also add some debug logging in
time_window_compaction_strategy::get_reshaping_job
so one can figure out what's going on there.
Test: unit(dev)
DTest: cdc_snapshot_operation.py:CDCSnapshotOperationTest.test_create_snapshot_with_collection_list_with_base_rows_delete_type
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211025071828.509082-1-bhalevy@scylladb.com>
The cdc stream rewrite code launches work in the background. It's careful
to make sure all the dependencies are preserved via shared pointers, but
there are implicit dependencies (system_keyspace) that are not. To fix this
problem, this series changes the lifetime of the background work from
unbounded to bounded by the lifetime of storage_service.
[ xemul: Not the storage_service, but the cdc_generation_service, now
all the dances with rewriting and its futures happen inside the cdc
part. Storage_service coroutinization and call to .stop() from main
are here for several reasons
- They are awesome
- Splitting a PR into two is frustrating (as compared to ML threads)
- Shia LaBeouf is a good motivational speaker
]
As explained in the patches, I don't expect this to be a real problem and the
series just paves the way for making system_keyspace an explicit
dependency.
Test: unit (dev)
Closes#9356
* github.com:scylladb/scylla:
cdc: don't allow background streams description rewrite to delay too far
storage_service: coroutinize stop()
main: stop storage_service on shutdown
'system.status' and 'system.describe_ring' are imperfect names for
what they do, so rename them. Fortunately they aren't exposed in any
released version so there is no compatibility concern.
Closes#9530
* github.com:scylladb/scylla:
system_keyspace: rename 'system.describe_ring' to 'system.token_ring'
system_keyspace: rename 'system.status' to 'system.cluster_status'
Currently, aws_instance.ephemeral_disks() returns both ephemeral disks
and EBS disks on the Nitro System.
This is because both are attached as NVMe disks; we need to add disk
type detection code to the NVMe handling logic.
Fixes #9440
Closes #9462
Some time ago we started gathering stats for system tables in a separate
class in order to be able to distinguish which queries come from the
user - e.g. if the unpaged queries are internal or not.
Originally, only local system tables were moved into this class,
i.e. system and system_schema. It would make sense, however, to also
include other internal keyspaces in this separate class - which includes
system_distributed, system_traces, etc.
Fixes #9380
Closes #9490
If mean() is called when there are no elements in the histogram,
a runtime error will happen due to division by zero.
approx_exponential_histogram::mean() handles it, but for some
reason we forgot to do the same for estimated_histogram.
This problem was found when adding a unit test which calls
mean() on an empty histogram.
Fixes #9531.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20211027142813.56969-1-raphaelsc@scylladb.com>
Throw a proper exception from do_load_schemas if parse_statements
fails to parse the schema cql.
Catch it in the scylla-sstable main() function so
it won't be reported as a seastar unhandled exception.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211027124032.1787347-1-bhalevy@scylladb.com>
Table names are usually nouns, so SELECT/INSERT statements
sound natural: "SELECT * FROM pets". 'system.describe_ring' defies
this convention. Rename it to 'system.token_ring' so selects are natural.
The name is not in any released version, so we can safely rename it.
'system.status' is too generic, it doesn't explain the status of what.
'system.node_status' is also ambiguous (this node? all nodes?) so I
picked 'system.cluster_status'.
The internal name, nodetool_status_table, was even worse (we're
not querying the status of nodetool!) but fortunately wasn't exposed.
The name is not in any released version, so we can safely rename it.