There's a perf_bptree test that compares the B+ tree collection with
std::set and std::map. More collections will come, and the "patterns"
to compare are not just "fill with keys" and "drain to empty", so
here's the perf_collection test, which measures timings of
- fill with keys
- drain key by key
- empty with .clear() call
- full scan with iterator
- insert-and-remove of a single element
for the currently used collections:
- std::set
- std::map
- intrusive_set_external_comparator
- bplus::tree
* https://github.com/xemul/scylla/tree/br-perf-collection-test:
test: Generalize perf_bptree into perf_collection
perf_collection: Clear collection between iterations
perf_collection: Add intrusive_set_external_comparator
perf_collection: Add test for single element insertion
perf_collection: Add test for destruction with .clear()
perf_collection: Add test for full scan time
Although the code for it already existed, the validation function
wasn't invoked properly. This change fixes that, adding
a validating check when converting from text to the specific value
type and throwing a marshal exception if some characters
are not ASCII.
Fixes #5421
Closes #7532
The main goal of this series is to fix issue #6951 - a Query (or Scan) with
a combination of filtering and projection parameters produced wrong results if
the filter needs some attributes which weren't projected.
This series also adds new tests for various corner cases of this issue. These
new tests either pass after this fix, or still fail because of some other missing
feature (namely, nested attributes). These additional tests will be important if
we ever want to refactor or optimize this code, because they exercise some rare
corner code paths at the intersection of filtering and projection.
This series also fixes some additional problems related to this issue, like
combining old and new filtering/projection syntaxes (should be forbidden), and
even one fix to a wrong comment.
Closes #7328
* github.com:scylladb/scylla:
alternator test: tests for nested attributes in FilterExpression
alternator test: fix comment
alternator tests: additional tests for filter+projection combination
alternator: forbid combining old and new-style parameters
alternator: fix query with both projection and filtering
In certain CQL statements it's possible to provide a custom timestamp via the USING TIMESTAMP clause. These values are accepted in microseconds; however, there is no limit on the timestamp (apart from the type's size constraint), and providing a timestamp in a different unit, like nanoseconds, can create an entry with a timestamp far in the future, thus compromising the table.
To avoid this, this change introduces a sanity check for modification and batch statements that raises an error when a timestamp of more than 3 days into the future is provided.
Fixes #5619
Closes #7475
An upcoming change in Seastar only initializes the Seastar allocator in
reactor threads. This causes imr_test and double_decker_test to fail:
1. Those tests rely on LSA working
2. LSA requires the Seastar allocator
3. Seastar is not initialized, so the Seastar allocator is not initialized.
Fix by switching to the Seastar test framework, which initializes Seastar.
Closes #7486
gcc collects all the initialization code for thread-local storage
and puts it in one giant function. In combination with debug mode,
this creates a very large stack frame that overflows the stack
on aarch64.
Work around the problem by placing each initializer expression in
its own function, thus reusing the stack.
Closes #7509
Add additional comments to select_statement_utils, fix formatting, add
missing #pragma once and introduce set_internal_paging_size_guard to
set internal_paging in RAII fashion.
Closes #7507
Extensively tests queries on tables with secondary indices with
aggregates and GROUP BYs. Tests three cases that are implemented
in indexed_table_select_statement::do_execute - partition_slices,
whole_partitions and (non-partition_slices and non-whole_partitions).
As some of the issues found were related to paging, the tests check
scenarios where the inserted data is smaller than a page, larger than
a page and larger than two pages (and some boundary scenarios).
Before this change, the internal page size when doing aggregate, GROUP BY,
or nonpaged filtering queries was hard-coded to DEFAULT_COUNT_PAGE_SIZE.
This made testing hard (timeouts in debug build), because the tests had
to be large to test cases when there are multiple internal pages.
This change adds a new internal_paging_size variable, which is
configurable by set_internal_paging_size and reset_internal_paging_size
free functions. This functionality is only meant for testing purposes.
We already have a test for the behavior of a closed shard and how
iterators previously created for it are still valid. In this patch
we also check that the shard id itself, not just the
iterator, is still valid.
Additionally, although the aforementioned test used a disabled stream
to create a closed shard, it was not a complete test for the behavior
of a disabled stream, and this patch adds such a test. We check that
although the stream is disabled, it is still fully usable (for 24 hours) -
its original ARN is still listed on ListStreams, the ARN is still usable,
its shards can be listed, all are marked as closed but still fully readable.
Both tests pass on DynamoDB, and xfail on Alternator because of
issue #7239 - CDC drops the CDC log table as soon as CDC is disabled,
so the stream data is lost immediately instead of being retained for
24 hours.
Refs #7239
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20201006183915.434055-1-nyh@scylladb.com>
Refs #7364
The number of tombstones can be large. As a stopgap measure, short of
returning a source range (with keepalive), we can at least
alleviate the problem by using a chunked vector.
Closes #7433
Fixes #7435
Adds an "eor" (end-of-record) column to the cdc log. It is non-null only on
last-in-timestamp-group rows, i.e. at the end of a single source "event".
A client can use this as a shortcut to know whether or not it has a
full cdc "record" for a given source mutation (single row change).
Closes #7436
Currently, we linearize large UTF8 cells in order to validate them.
This can cause large latency spikes if the cell is large.
This series changes UTF8 validation to work on fragmented buffers.
This is somewhat tricky since the validation routines are optimized
for single-instruction-multiple-data (SIMD) architectures.
The unit tests are expanded to cover the new functionality.
Fixes #7448.
Closes #7449
* github.com:scylladb/scylla:
types: don't linearize utf8 for validation
test: utf8: add fragmented buffer validation tests
utils: utf8: add function to validate fragmented buffers
utils: utf8: expose validate_partial() in a header
utils: utf8: introduce validate_partial()
utils: utf8: extract a function to evaluate a single codepoint
Since there are a huge number of variations, we use random
testing. Each test case is composed of a random number of valid
code points, with a possible invalid code point somewhere. The
test case is broken up into a random number of fragments. We
test both validation success and error position indicator.
Clang does not yet implement p1091r3, which allows lambdas
to capture structured bindings. To accommodate it, don't
use structured bindings for variables that are later
captured.
Hopefully, most of these lambda captures will be replaced by
coroutines.
Closes #7445
* github.com:scylladb/scylla:
test: mutation_reader_test: don't capture structured bindings in lambdas
api: column_family: don't capture structured bindings in lambdas
thrift: don't capture structured bindings in lambdas
test: partition_data_test: don't capture structured bindings in lambdas
test: querier_cache_test: don't capture structured bindings in lambdas
test: mutation_test: don't capture structured bindings in lambdas
storage_proxy: don't capture structured bindings in lambdas
db: hints/manager: don't capture structured bindings in lambdas
db: commitlog_replayer: don't capture structured bindings in lambdas
cql3: select_statement: don't capture structured bindings in lambdas
cql3: statement_restrictions: don't capture structured bindings in lambdas
cdc: log: don't capture structured bindings in lambdas
Before this change, an invalid query exception on selects with both normal
and token restrictions was only thrown when the token restriction came after
the normal restriction.
This change adds proper validation when the token restriction comes before the normal restriction.
**Before the change - the last query does not return an error; it returns wrong results:**
```
cqlsh> CREATE TABLE ks.t(pk int, PRIMARY KEY(pk));
cqlsh> INSERT INTO ks.t(pk) VALUES (1);
cqlsh> INSERT INTO ks.t(pk) VALUES (2);
cqlsh> INSERT INTO ks.t(pk) VALUES (3);
cqlsh> INSERT INTO ks.t(pk) VALUES (4);
cqlsh> SELECT pk, token(pk) FROM ks.t WHERE pk = 2 AND token(pk) > 0;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Columns "ColumnDefinition{name=pk, type=org.apache.cassandra.db.marshal.Int32Type, kind=PARTITION_KEY, componentIndex=0, droppedAt=-9223372036854775808}" cannot be restricted by both a normal relation and a token relation"
cqlsh> SELECT pk, token(pk) FROM ks.t WHERE token(pk) > 0 AND pk = 2;
pk | system.token(pk)
----+---------------------
3 | 9010454139840013625
(1 rows)
```
Closes #7441
* github.com:scylladb/scylla:
tests: Add token and non-token conjunction tests
token_restriction: Add non-token merge exception
This patch introduces a new system table: `system.scylla_table_schema_history`,
which is used to keep track of column mappings for obsolete table
schema versions (i.e. a schema becomes obsolete when it's changed
by a `CREATE TABLE` or `ALTER TABLE` DDL operation).
It is populated automatically when a new schema version is
pulled from a remote node in get_schema_definition() at migration_manager.cc,
and also when a schema change is propagated to the system schema tables
in do_merge_schema() at schema_tables.cc.
The data referring to the most recent table schema version is always
present. Other entries are garbage-collected when the corresponding
table schema version is obsoleted (they will be updated with a TTL equal
to `DEFAULT_GC_GRACE_SECONDS` on `ALTER TABLE`).
If we fail to persist the column mapping after a schema change,
the missing entries will be recreated on node boot.
Later, the information from this table is used in `paxos_state::learn`
callback in case we have a mismatch between the most recent schema
version and the one that is stored inside the `frozen_mutation`
for the accepted proposal.
Such a situation may arise under the following circumstances:
1. The previous LWT operation crashed on the "accept" stage,
leaving behind a stale accepted proposal, which waits to be
repaired.
2. The table affected by the LWT operation is altered, so that the
schema version is now different. The stored proposal now references
an obsolete schema.
3. LWT query is retried, so that Scylla tries to repair the
unfinished Paxos round and apply the mutation in the learn stage.
When such a mismatch happens, prior to this patch the stored
`frozen_mutation` could be applied only if we were lucky
and the column_mapping in the mutation was "compatible" with the new
table schema.
It wouldn't work if, for example, the columns were reordered, or
some columns referenced by the LWT query were dropped.
With this patch we look up the column mapping for
the obsolete schema version, upgrade the stored mutation
using the obtained column mapping, and apply the upgraded mutation instead.
* git@github.com:ManManson/scylla.git feature/table_schema_history_v7:
lwt: add column_mapping history persistence tests
schema: add equality operator for `column_mapping` class
lwt: store column_mapping's for each table schema version upon a DDL change
schema_tables: extract `fill_column_info` helper
frozen_mutation: introduce `unfreeze_upgrading` method
There are two basic tests, which check that:
* column mappings are serialized and deserialized
properly on both CREATE TABLE and ALTER TABLE
* column mappings for obsoleted schema versions are updated
with a TTL value on schema change
Tests: unit(dev)
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Support snapshotting for raft. The patch series only concerns itself
with raft logic, not how a specific state machine implements
take_snapshot() callback.
* scylla-dev/raft-snapshots-v2:
raft: test: add tests for snapshot functionality
raft: preserve trailing raft log entries during snapshotting
raft: implement periodic snapshotting of a state machine
raft: add snapshot transfer logic
Checks for invalid_request_exception when trying to run a query
with both normal and token relations. Tests both orderings of those
relations (normal or token relation first).
When Alternator is enabled over HTTPS - by setting the
"alternator_https_port" option - it needs to know some SSL-related options,
most importantly where to pick up the certificate and key.
Before this patch, we used the "server_encryption_options" option for that.
However, this was a mistake: Although it sounds like these are the "server's
options", in fact prior to Alternator this option was only used when
communicating with other servers - i.e., connections between Scylla nodes.
For CQL connections with the client, we used a different option -
"client_encryption_options".
This patch introduces a third option "alternator_encryption_options", which
controls only Alternator's HTTPS server. Making it separate from the
existing CQL "client_encryption_options" allows both Alternator and CQL to
be active at the same time but with different certificates (if the user
so wishes).
For backward compatibility, we temporarily continue to allow
server_encryption_options to control the Alternator HTTPS server if
alternator_encryption_options is not specified. However, this generates
a warning in the log, urging the user to switch. This temporary workaround
should be removed in a future version.
This patch also:
1. fixes the test run code (which has an "--https" option to test over
https) to use the new name of the option.
2. Adds documentation of the new option in alternator.md and protocols.md -
previously the information on how to control the location of the
certificate was missing from these documents.
Fixes #7204.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200930123027.213587-1-nyh@scylladb.com>
Fixes #7424
The AWS SDK (Kinesis) assumes SequenceNumbers are monotonically
growing bigints. Since we sort on, and use, timeuuids, a
"raw" bit representation of these will _not_ fulfill the
requirement. However, we can "unwrap" the timestamp from the uuid
msb and give the value as timestamp<<64|lsb, which will
ensure sort order == bigint order.
Fixes #7409
The AWS Kinesis Java SDK requires/expects shards to be reported in
lexical order and, even worse, ignores lastevalshard. Thus, not
upholding said order badly breaks its stream introspection.
Added asserts to unit tests.
v2:
* Added more comments
* use unsigned_cmp
* unconditional check in streams_test
Tests that the exceptional future returned by the serialized action
is propagated to trigger, reproducing #7352.
The test fails without the previous patch:
"serialized_action: trigger: include also semaphore status to promise"
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
"
The reader concurrency semaphore timing out or its queue overflowing
are fairly common events, both in production and in testing. At the same
time, it is a hard-to-diagnose problem that often has a benign cause
(especially during testing), but it is equally possible that it points
to something serious. So when this error starts to appear in the logs,
we usually want to investigate, and the investigation is lengthy...
it involves looking at metrics, coredumps, or both.
This patch intends to jumpstart that process by dumping diagnostics on
semaphore timeout or queue overflow. The diagnostics are printed to the
log at debug level to avoid excessive spamming. They contain a
histogram of all the permits associated with the problematic semaphore,
organized by table, operation and state.
Example:
DEBUG 2020-10-08 17:05:26,115 [shard 0] reader_concurrency_semaphore -
Semaphore _read_concurrency_sem: timed out, dumping permit diagnostics:
Permits with state admitted, sorted by memory
memory count name
3499M 27 ks.test:data-query
3499M 27 total
Permits with state waiting, sorted by count
count memory name
1 0B ks.test:drain
7650 0B ks.test:data-query
7651 0B total
Permits with state registered, sorted by count
count memory name
0 0B total
Total: permits: 7678, memory: 3499M
This allows determining several things at a glance:
* What are the tables involved
* What are the operations involved
* Where is the memory
This can speed up a follow-up investigation greatly, or it can even be
enough on its own to determine that the issue is benign.
Tests: unit(dev, debug)
"
* 'dump-diagnostics-on-semaphore-timeout/v2' of https://github.com/denesb/scylla:
reader_concurrency_semaphore: dump permit diagnostics on timeout or queue overflow
utils: add to_hr_size()
reader_concurrency_semaphore: link permits into an intrusive list
reader_concurrency_semaphore: move expiry_handler::operator()() out-of-line
reader_concurrency_semaphore: move constructors out-of-line
reader_concurrency_semaphore: add state to permits
reader_concurrency_semaphore: name permits
querier_cache_test: test_immediate_evict_on_insert: use two permits
multishard_combining_reader: reader_lifecycle_policy: add permit param to create_reader()
multishard_combining_reader: add permit parameter
multishard_combining_reader: shard_reader: use multishard reader's permit
Require a schema and an operation name to be given to each permit when
created. The schema is of the table the read is executed against, and
the operation name, which is some name identifying the operation the
permit is part of. Ideally this should be different for each site the
permit is created at, to be able to discern not only different kinds of
reads, but also different code paths a read can take.
As not all reads can be associated with one schema, the schema is allowed
to be null.
The name will be used for debugging purposes, both for coredump
debugging and runtime logging of permit-related diagnostics.
The test currently uses a single permit shared between two simulated
reads (to wait for admission twice). This is not a supported way of using a
permit and will stop working soon, as we make the states a permit
goes through more pronounced.
Allow the evictable reader managing the underlying reader to pass its
own permit to it when creating it, making sure they share the same
permit. Note that the two parts can still end up using different
permits, when the underlying reader is kept alive between two pages of a
paged read and thus keeps using the permit received on the previous
page.
Also adjust the `reader_context` in multishard_mutation_query.cc to use
the passed-in permit instead of creating a new one when creating a new
reader.
Don't create a permit of its own; take one as a parameter, like all other
readers do, so the permit can be provided by the higher layer, making
sure all parts of the logical read use the same permit.
The remains of the defunct #7246.
Fixes #7344
Fixes #7345
Fixes #7346
Fixes #7347
- Shard ID length is now within limits.
- Shard end sequence number is set when appropriate.
- Shard parent is selected a bit more carefully (sorting).
- Shards are filtered by time to exclude cdc generations we cannot get data from (too old).
- Shard paging is improved.
Closes #7348
* github.com:scylladb/scylla:
test_streams: Add some more sanity asserts
alternator::streams: Set dynamodb data TTL explicitly in cdc options
alternator::streams: Improve paging and fix parent-child calculation
alternator::streams: Remove table from shard_id
alternator::streams: Filter out cdc streams older than data/table
alternator::error: Add a few dynamo exception types
"
max_concurrent_for_each was added to Seastar to replace
sstable_directory::parallel_for_each_restricted, using
more efficient concurrency control that doesn't create an
unlimited number of continuations.
The series replaces the use of sstable_directory::parallel_for_each_restricted
with max_concurrent_for_each and exposes sstable_directory::do_for_each_sstable
via a static method.
This method is used here by table::snapshot to limit the concurrency
of snapshot operations, which suffer from the same unbounded
concurrency problem sstable_directory solved.
In addition, sstable_directory::_load_semaphore, which was used
across calls to do_for_each_sstable, was replaced by a static per-shard
semaphore that caps concurrency across all calls to `do_for_each_sstable`
on that shard. This makes sense since the disk is a shared resource.
In the future, we may want to have a load semaphore per device rather than
a single global one. We should experiment with that.
Test: unit(dev)
"
* tag 'max_concurrent_for_each-v5' of github.com:bhalevy/scylla:
table: snapshot: use max_concurrent_for_each
sstable_directory: use an external load_semaphore
test: sstable_directory_test: extract sstable_directory creation into with_sstable_directory
distributed_loader: process_upload_dir: use initial_sstable_loading_concurrency
sstables: sstable_directory: use max_concurrent_for_each
database_test contains several instances of calling do_with_cql_test_env()
with a function that expects to be called in a thread. This mostly works
because there is an internal thread in do_with_cql_test_env(), but it is not
guaranteed to keep working.
Fix by switching to the more appropriate do_with_cql_test_env_thread().
Closes #7333
The variable 'observer' (a std::optional) may be left disengaged
if 'incremental_enabled' is false. However, it is used afterwards
with a call to disconnect(), accessing garbage.
Fix by accessing it via the optional wrapper. A call to optional::reset()
destroys the observable, which in turn calls disconnect().
Closes #7380
libstdc++'s std::regex uses recursion[1], with a depth controlled by the
input. Together with clang's debug mode, this overflows the stack.
Use boost::regex instead, which is immune to the problem.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86164
Closes #7378
d2t() scales a fraction in the range [0, 1] to the range of
a biased token (same as unsigned long). But x86 doesn't support
conversion to unsigned, only signed, so this is a truncating
conversion. Clang's ubsan correctly warns about it.
Fix by reducing the range before converting, and expanding it
afterwards.
Closes #7376
Values seen by nodes were so far summed up, but this does not
guarantee that the order of these values was respected.
Use a digest to check the output, implicitly checking order.
On the other hand, a sum or a simple positional checksum like Fletcher's
is easier to debug, as a rolling sum is evident.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
For convenience in writing Raft tests, use declarative structures.
Servers are set up and initialized, and then updates are processed.
For now, updates are just adding entries to the leader and changing the leader.
Updates and leader changes can be specified to run after the initial test setup.
An example test for 3 nodes: node 0 starts as leader, having two entries
0 and 1 for term 1, with current term 2; then 12 entries are added,
the leader is changed to node 1, and 12 more entries are added. The test will
automatically add more entries to the last leader until the test limit
of total_values (default 100):
{.name = "test_name", .nodes = 3, .initial_term = 2,
 .initial_states = {{.le = {{1,0},{1,1}}}},
 .updates = {entries{12}, new_leader{1}, entries{12}}}
The leader is isolated before a change via is_leader returning false.
The initial leader (default: server 0) will be set with this method, too.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>