Alternator uses a single column, a map, with the deliberately strange
name ":attrs", to hold all the schema-less attributes of an item.
The existing code is buggy when the user tries to write to an attribute
with this strange name ":attrs". Although it is extremely unlikely that
any user would happen to choose such a name, it is nevertheless a legal
attribute name in DynamoDB, and should definitely not cause Scylla to crash
as it does in some cases today.
The bug was caused by the code assuming that to check whether an attribute
is stored in its own column in the schema, we just need to check whether
a column with that name exists. This is almost true, except for the name
":attrs" - a column with this name exists, but it is a map - the attribute
with that name should be stored *in* the map, not as the map. The fix
is to modify that check to special-case ":attrs".
This fix makes the relevant tests, which used to crash or fail, now pass.
This fix solves most of #5009, but one point is not yet solved (and
perhaps does not need to be): it is still not allowed to use the
name ":attrs" for a **key** attribute. But trying to do that fails cleanly
(during table creation) with an appropriate error message, so this is only
a very minor compatibility issue.
Refs #5009
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
As explained in issue #5009, Alternator currently forbids the special
attribute name ":attrs", whereas DynamoDB allows any string of appropriate
length (including the specific string ":attrs") to be used.
We had only a partial test for this incompatibility, and this patch
improves the testing of this issue. In particular, we were missing a
test for the case that the name ":attrs" was used for a non-key
attribute (we only tested the case it was used as a sort key).
It turns out that Alternator crashes on the new test, when the test tries
to write to a non-key attribute called ":attrs", so we needed to mark
the new test with "skip". Moreover, it turns out that different code paths
handle the attribute name ":attrs" differently, and also crash or fail
in other ways - so we added several xfailing and skipped tests
that each fail in a different place (and also a few tests that do pass).
As usual, we checked that the new tests pass on DynamoDB.
Refs #5009
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Messaging service checks the dc/rack of the target node when creating a
socket. However, this information is not available for all verbs; in
particular, the gossiper uses RPC to get topology from other nodes.
This creates a chicken-and-egg problem -- to create a socket, the
messaging service needs topology information, but to get that information,
the gossiper needs to create a socket.
Other than the gossiper, raft starts sending its APPEND_ENTRY messages
early enough that topology info is not available either.
The situation is further complicated by the fact that sockets are not
created for individual verbs. Instead, verbs are grouped into several
"indices" and a socket is created per index. Thus, the "gossiping" index
that includes non-gossiper verbs will create a topology-less socket for
all verbs in it. Worse -- when raft sends messages without topology
having been obtained, the corresponding socket is created with the
assumption that the peer lives in the default dc and rack, which doesn't
match the local node's dc/rack, and the whole index group gets the
"randomly" configured socket.
Also, the tcp-nodelay code tries to implement a similar check, but uses
the wrong index of 1, so it's also fixed here.
* 'br-messaging-topology-ignoring-clients' of https://github.com/xemul/scylla:
messaging_service: Fix gossiper verb group
messaging_service: Mind the absence of topology data when creating sockets
messaging_service: Templatize and rename remove_rpc_client_one
There's a bunch of helpers for CDC gen service in db/system_keyspace.cc. All are static and use global qctx to make queries. Fortunately, both callers -- storage_service and cdc_generation_service -- already have local system_keyspace references and can call the methods via it, thus reducing the global qctx usage.
Closes #11557
* github.com:scylladb/scylladb:
system_keyspace: De-static get_cdc_generation_id()
system_keyspace: De-static cdc_is_rewritten()
system_keyspace: De-static cdc_set_rewritten()
system_keyspace: De-static update_cdc_generation_id()
- Raise on response not HTTP 200 for `.get_text()` helper
- Fix API paths
- Close and start a fresh driver when restarting a server and it's the only server in the cluster
- Fix stop/restart response as text instead of inspecting (errors are status 500 and raise exceptions)
Closes #11496
* github.com:scylladb/scylladb:
test.py: handle duplicate result from driver
test.py: log server restarts for topology tests
test.py: log actions for topology tests
Revert "test.py: restart stopped servers before...
test.py: ManagerClient API fix return text
test.py: ManagerClient raise on HTTP != 200
test.py: ManagerClient fix paths to updated resource
When cross-shard barrier is abort()-ed it spawns a background fiber
that will wake-up other shards (if they are sleeping) with exception.
This fiber is implicitly waited by the owning sharded service .stop,
because barrier usage is like this:
sharded<service> s;
co_await s.invoke_on_all([] {
...
barrier.abort();
});
...
co_await s.stop();
If abort happens, the invoke_on_all() will only resolve _after_ it
queues up the waking lambdas into smp queues, thus the subsequent stop
will queue its stopping lambdas after the barrier's ones.
However, in debug mode the queue can be shuffled, so the owning service
can suddenly be freed from under the barrier's feet, causing a
use-after-free. Fortunately, this is easily fixed by capturing a shared
pointer to the shared barrier instead of a regular pointer to the
shard-local barrier.
fixes: #11303
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #11553
The test is supposed to give a helpful error message when the user forgets to
run --populate before the benchmark. But this must have become broken at some
point, because execute_cql() terminates the program with an unhelpful
("unconfigured table config") message, which doesn't mention --populate.
Fix that by catching the exception and adding the helpful tip.
Closes #11533
The logger is proof against allocation failures, except if
--abort-on-seastar-bad-alloc is specified. If it is, it will crash.
The reclaim stall report is likely to be called in low-memory conditions
(reclaim's job is to alleviate these conditions, after all), so we're
likely to crash here if we're reclaiming in a very low memory condition
and have a large stall simultaneously (AND we're running in a debug
environment).
Prevent all this by disabling --abort-on-seastar-bad-alloc temporarily.
Fixes #11549
Closes #11555
Long-term index caching in the global cache, as introduced in 4.6, is a major
pessimization for workloads where accesses to the index are (spatially) sparse.
We want to have a way to disable it for the affected workloads.
There is already infrastructure in place for disabling it for BYPASS CACHE
queries. One way of solving the issue is hijacking that infrastructure.
This patch adds a global flag (and a corresponding CLI option) which controls
index caching. Setting the flag to `false` causes all index reads to behave
like they would in BYPASS CACHE queries.
Consequences of this choice:
- The per-SSTable partition_index_cache is unused. Every index_reader has
its own, and they die together. Independent reads can no longer reuse the
work of other reads which hit the same index pages. This is not crucial,
since partition accesses have no (natural) spatial locality. Note that
the original reason for partition_index_cache -- the ability to share
reads for the lower and upper bound of the query -- is unaffected.
- The per-SSTable cached_file is unused. Every index_reader has its own
(uncached) input stream from the index file, and every
bsearch_clustered_cursor has its own cached_file, which dies together with
the cursor. Note that the cursor still can perform its binary search with
caching. However, it won't be able to reuse the file pages read by
index_reader. In particular, if the promoted index is small, and fits inside
the same file page as its index_entry, that page will be re-read.
It can also happen that index_reader will read the same index file page
multiple times. When the summary is so dense that multiple index pages fit in
one index file page, advancing the upper bound, which reads the next index
page, will read the same index file page. Since summary:disk ratio is 1:2000,
this is expected to happen for partitions with size greater than 2000
partition keys.
Fixes #11202
Sometimes the driver calls the callback on the ready/done future twice,
with a None result. Log it and avoid setting the local future twice.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Introduces support for splitting large partitions during compaction. Today, compaction can only split input data at partition boundaries, so a large partition is stored in a single file. That can cause many problems, like memory pressure (e.g.: https://github.com/scylladb/scylladb/issues/4217), and incremental compaction also cannot fulfill its promise, as the file storing the large partition can only be released once exhausted.
The first step was to add clustering range metadata for first and last partition keys (retrieved from promoted index), which is crucial to determine disjointness at clustering level, and also the order at which the disjoint files should be opened for incremental reading.
The second step was to extend sstable_run to look at clustering dimension, so a set of files storing disjoint ranges for the same partition can live in the same sstable run.
The final step was to introduce the option for compaction to split large partition being written if it has exceeded the size threshold.
What's next? Following this series, a reader will be implemented for sstable_run that will incrementally open the readers. It can safely be built on the disjointness invariant established by the second step above.
Closes #11233
* github.com:scylladb/scylladb:
test: Add test for large partition splitting on compaction
compaction: Add support to split large partitions
sstable: Extend sstable_run to allow disjointness on the clustering level
sstables: simplify will_introduce_overlapping()
test: move sstable_run_disjoint_invariant_test into sstable_datafile_test
test: lib: Fix inefficient merging of mutations in make_sstable_containing()
sstables: Keep track of first partition's first pos and last partition's last pos
sstables: Rename min/max position_range to a descriptive name
sstables_manager: Add sstable metadata reader concurrency semaphore
sstables: Add ability to find first or last position in a partition
teardown..."
This reverts commit df1ca57fda.
In order to prevent timeouts on teardown queries, the previous commit
added functionality to restart servers that were down. This issue is
fixed in fc0263fc9b so there's no longer need to restart stopped servers
on test teardown.
For the ManagerClient request API, don't return the status; raise an
exception instead. Server-side errors are signaled by status 500, not by
the text body.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Halted background fibers render raft server effectively unusable, so
report this explicitly to the clients.
Fix: #11352
Closes #11370
* github.com:scylladb/scylladb:
raft server, status metric
raft server, abort group0 server on background errors
raft server, provide a callback to handle background errors
raft server, check aborted state on public server public api's
Pool.get() might have waiting callers, so if an item is not returned
to the pool after use, tell the pool to add a new one and record that
an entry was taken (used to count total running entries, i.e. clusters).
Use this when a ScyllaCluster is dirty and not returned.
While at it, improve logging and docstrings.
Issue reported by @kbr-.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Closes #11546
These two are just getting in the way when touching inter-component
dependencies around the messaging service. Without them, messaging
service start/stop looks just like that of any other service out there.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #11535
It's been ~1 year (2bf47c902e) since we set restrict_dtcs
config option to WARN, meaning users have been warned about the
deprecation process of DTCS.
Let's set the config to TRUE, meaning that create and alter statements
specifying DTCS will be rejected at the CQL level.
Existing tables will still be supported. But the next step will
be to throw the DTCS code into the shadow realm, after which
Scylla will automatically fall back to STCS (or ICS) for users who
ignored the deprecation process.
Refs #8914.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes #11458
When a server is down, the driver expects multiple schema timeouts
within the same request to handle it properly.
Found by @kbr-
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Closes #11544
If the user stops off-strategy via the API, the compaction manager can
decide to give up on it completely, so data will sit unreshaped in the
maintenance set, preventing it from being compacted with data
in the main set. That's problematic because it will probably lead
to a significant increase in read and space amplification until
off-strategy is triggered again, which cannot happen anytime
soon.
Let's handle it by moving data in the maintenance set into the main one,
even if unreshaped. Then regular compaction will be able to
continue from where off-strategy left off.
Fixes #11543.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes #11545
When configuring tcp-nodelay unconditionally, the messaging service
assumes the gossiper uses group index 1, though that changed some time
ago and those verbs now belong to group 0.
fixes: #11465
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When a socket is created to serve a verb, there may be no topology
information regarding the target node. In this case the current code
configures the socket as if the peer node lived in the "default" dc and a
rack of the same name. If topology information appears later, the client
is not re-connected, even though reconnecting could provide more relevant
configuration (e.g. -- without encryption).
This patch checks whether the topology info is needed (sometimes it's not)
and, if it is missing, configures the socket in the most restrictive
manner, but notes that the socket ignored the topology on creation. When
topology info appears -- and this happens when a node joins the cluster
-- the messaging service is kicked to drop all sockets that ignored the
topology, so that they reconnect later.
The mentioned "kick" comes from the storage service on-join notification.
A more correct fix would be for topology to have an on-change notification
that the messaging service subscribes to, but there are two cons:
- currently dc/rack do not change on the fly (though they can, e.g. if
  the gossiping property file snitch is updated without restart) and
  the topology update effectively comes from a single place
- updating topology on token-metadata is not like a topology.update()
  call. Instead, a clone of the token metadata is created, the update
  happens on the clone, then the clone is committed into t.m. It is
  possible to find out at commit time which nodes changed their
  topology, but since this only happens on join, the complexity is
  likely not worth the effort (yet)
fixes: #11514
fixes: #11492
fixes: #11483
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It actually finds and removes a client, and in its new form it also
applies a filtering function to it, so a better name is called for.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Adds support for splitting large partitions during compaction.
Large partitions introduce many problems, like memory overhead, and
they break incremental compaction's promise. We want to split large
partitions across fixed-size fragments. We'll allow a partition
to exceed the size limit by 10%, as we don't want to unnecessarily split
partitions that just crossed the limit boundary.
To avoid having to open a minimum of 2 fragments in a read, the
partition tombstone will be replicated to every fragment storing the
partition.
The splitting isn't enabled by default, and can be used by run-aware
strategies like ICS. LCS still cannot support it, as it's still using
physical-level metadata, not run ids.
An incremental reader for sstable runs will follow soon.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
After commit 0796b8c97a, sstable_run won't accept a fragment
that introduces overlapping keys. But once we split large partitions,
fragments in the same run may store disjoint clustering ranges
of the same partition. So we're extending sstable_run to look
at the clustering dimension, so that fragments storing disjoint
clustering ranges of the same large partition can co-exist in the same run.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
An element S1 is completely ordered before S2 if S1's last key is
lower than S2's first key.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
make_sstable_containing() was absurdly slow when merging thousands of
mutations belonging to the same key, as it was unnecessarily copying
the mutation for every merge, producing quadratic complexity.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
With the first partition's first position and the last partition's last
position, we'll be able to determine which fragments composing a
sstable run store a large partition that was split.
Then the sstable run will be able to detect whether all fragments storing
a given large partition are disjoint at the clustering level.
Fixes #10637.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
The new descriptive name is important to make a distinction when
sstable stores position range for first and last rows instead
of min and max.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Let's introduce a reader_concurrency_semaphore for reading sstable
metadata, to avoid an OOM due to unlimited concurrency.
The concurrency on startup is not controlled, so it's important
to enforce a limit on the amount of memory used by the parallel
readers.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
This new method allows the sstable to load the first row of the first
partition and the last row of the last partition.
That's useful for incremental reading of a sstable run which is
split at a clustering boundary.
To get the first row, it consumes the first row (which can be
either a clustering row or range tombstone change) and returns
its position_in_partition.
To get the last row, it does the same as above but in reverse
mode instead.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
We introduce `server_get_config` to fetch the entire configuration dict
and `update_config` to update a value under the given key.
Closes #11493
* github.com:scylladb/scylladb:
test/pylib: APIs to read and modify configuration from tests
test/pylib: ScyllaServer: extract _write_config_file function
test/pylib: ScyllaCluster: extend ActionReturn with dict data
test/pylib: ManagerClient: introduce _put_json
test/pylib: ManagerClient: replace `_request` with `_get`, `_get_text`
test: pylib: store server configuration in `ScyllaServer`
`_request` performed a GET request and extracted a text body out of the
response.
Split it into `_get`, which only performs the request, and `_get_text`,
which calls `_get` and extracts the body as text.
Also extract a `_resource_uri` function which will be used for other
request types.
Add a suite which is basically equivalent to `topology` except that it
doesn't start servers with Raft enabled.
The suite will be used to test the Raft upgrade procedure.
The suite contains a basic test just to check the suite itself can run;
the test will be removed when 'real' tests are added.
Closes #11487
* github.com:scylladb/scylladb:
test.py: PythonTestSuite: sum default config params with user-provided ones
test: add a topology suite with Raft disabled
test: pylib: use Python dicts to manipulate `ScyllaServer` configuration
test: pylib: store `config_options` in `ScyllaServer`