Querying the virtual table system.token_ring fails if there is a
tablet-based table, due to an attempt to obtain a per-keyspace erm.
Fix by not showing such keyspaces.
This will make it easier to access table properties in places which
only have a schema_ptr. This is particularly useful when replacing
dht::shard_of() uses with s->table().shard_of(), now that sharding is
no longer static, but table-specific.
Also, it allows us to install a guard which catches invalid uses of
schema::get_sharder() on tablet-based tables.
It will be helpful for other uses as well. For example, we can now get
rid of the static_props hack.
The entry may exist, but its schema may not yet be loaded. learn()
didn't take that into account. This problem is not reachable in
production code, which currently always calls get_or_load() before
learn(), except for boot, but there's no concurrency at that point.
Exposed by unit test added later.
System tables have static schemas and code uses those static schemas
instead of looking them up in the database. We want those schemas to
have a valid table() once the table is created, so we need to attach
the registry entry to the target schema rather than to a schema duplicate.
This PR implements the storage part of the cluster features on raft functionality, as described in the "Cluster features on raft v2" doc. These changes will be useful for later PRs that will implement the remaining parts of the feature.
Two new columns are added to `system.topology`:
- `supported_features set<text>` is a new clustering column which holds the features that a given node advertises as supported. It will first be initialized when the node joins the cluster, and then updated every time the node reboots and its supported features set changes.
- `enabled_features set<text>` is a new static column which holds the features that are considered enabled by the cluster. Unlike in the current gossip-based implementation, features will not be enabled implicitly when all nodes support them, but rather via an explicit action of the topology coordinator.
These columns are reflected in the `topology_state_machine` structure and are populated when the topology state is loaded. Appropriate methods are added to the `topology_mutation_builder` and `topology_node_mutation_builder` in order to allow setting/modifying those columns.
During startup, nodes update their corresponding `supported_features` column to reflect their current feature set. For now it is done unconditionally, but in the future appropriate checks will be added which will prevent nodes from joining / starting their server for group 0 if they can't guarantee that they support all enabled features.
Closes #14232
* github.com:scylladb/scylladb:
storage_service: update supported cluster features in group0 on start
storage_service: add methods for features to topology mutation builder
storage_service: use explicit ::set overload instead of a template
storage_service: reimplement mutation builder setters
storage_service: introduce topology_mutation_builder_base
topology_state_machine: include information about features
system_keyspace: introduce deserialize_set_column
db/system_keyspace: add storage for cluster features managed in group 0
Now, when a node starts, it will update its `supported_features` row in
`system.topology` via `update_topology_with_local_metadata`.
At this point, the functionality behind cluster features on raft is
mostly incomplete and the state of the `supported_features` column does
not influence anything so it's safe to update this column
unconditionally. In the future, the node will only join / start group0
server if it is sure that it supports all enabled features and it can
safely update the `supported_features` parameter.
The newly added `supported_features` and `enabled_features` columns can
now be modified via topology mutation builders:
- `supported_features` can now be overwritten via a new overload of
`topology_node_mutation_builder::set`.
- `enabled_features` can now be extended (i.e. more elements can be
added to it) via `topology_mutation_builder::add_enabled_features`. As
the set of enabled features only grows, this should be sufficient.
The `topology_node_mutation_builder::set` function has an overload which
accepts any type that can be converted to a string via `::format`. Its
presence can lead to easy mistakes which are only detected at runtime
rather than at compile time. A concrete example: I wrote a function that
accepts an std::set<S> where S is convertible to sstring; it turns out
that std::string_view is not std::convertible_to sstring, and overload
resolution fell back to the catch-all overload.
This commit gets rid of the catch-all overload and replaces it with
explicit ones. Fortunately, it was used for only two enums, so it wasn't
much work.
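The pitfall described above can be reproduced with a minimal, self-contained sketch (the `builder` type and its members are illustrative, not Scylla's actual class): a catch-all template overload silently wins whenever the intended overload's parameter type doesn't match exactly, and nothing fails at compile time.

```cpp
#include <set>
#include <string>

// Hypothetical builder mirroring the problem: a non-template overload
// for the intended type, plus a catch-all template standing in for the
// `::format`-based overload.
struct builder {
    std::string last_overload;

    // Intended overload: only for sets of strings.
    void set(const std::set<std::string>&) {
        last_overload = "set-of-strings";
    }

    // Catch-all: accepts anything; the mistake only shows up at runtime.
    template <typename T>
    void set(const T&) {
        last_overload = "catch-all";
    }
};
```

Calling `set` with a `std::set` whose element type is not exactly `std::string` (and is not convertible as a whole set) quietly selects the catch-all, which is why the commit replaces it with explicit overloads.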
As promised in the previous commit which introduced
topology_mutation_builder_base, this commit adjusts existing setters of
topology mutation builder and topology node mutation builder to use
helper methods defined in the base class.
Note that the `::set` method for the unordered set of tokens no longer
deletes the column when an empty value is set; instead it just
writes an empty set. This semantic is arguably clearer given that we
have an explicit `::del` method, and it shouldn't affect the existing
implementation - we never intentionally insert an empty set of tokens.
Introduces `topology_mutation_builder_base` which will be a base class
for both topology mutation builder and topology node mutation builder.
Its purpose is to abstract away the details of setting/deleting
columns in the mutation; the actual topology (node) mutation builder will
only have to care about converting types and/or allowing only particular
columns to be set. The class uses CRTP: derived classes provide
access to the row being modified, the schema, and the timestamp.
For the sake of commit diff readability, this commit only introduces this
class and changes the builders to derive from it; no setter
implementations are modified - this will be done in the next commit.
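The CRTP arrangement can be sketched roughly as follows (all names are illustrative; the real builders operate on mutations and schema columns, not a `std::map`): the base class owns the generic cell-writing logic, while the derived builder supplies the row and restricts which columns callers may set.

```cpp
#include <map>
#include <string>

// Base class: generic "write a cell" logic, shared by all builders.
// CRTP: the derived class provides row() access.
template <typename Derived>
class builder_base {
protected:
    Derived& apply_set(const std::string& column, const std::string& value) {
        static_cast<Derived*>(this)->row()[column] = value;
        return static_cast<Derived&>(*this);
    }
};

// Derived builder: only converts types and names the allowed columns.
class node_builder : public builder_base<node_builder> {
    std::map<std::string, std::string> _row;
public:
    std::map<std::string, std::string>& row() { return _row; }

    node_builder& set_supported_features(const std::string& joined) {
        return apply_set("supported_features", joined);
    }
};
```

The static cast is safe because `node_builder` passes itself as the template argument, so the base always knows the concrete type at compile time.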
There are three places in system_keyspace.cc which deserialize a column
holding a set of tokens and convert it to an unordered set of
dht::token. The deserialization process involves a small number of steps
that are the same in all of those places, so they can be
abstracted away.
This commit adds `deserialize_set_column` function which takes care of
deserializing the column to `set_type_impl::native_type`, which can
then be passed to `decode_tokens`. The new function will also be useful for
decoding set columns with cluster features, which will be handled in the
next commit.
This PR fixes some problems found after the PR was merged:
* missed `node_to_work_on` assignment in `handle_topology_transition`;
* change error reporting in `update_fence_version` from `on_internal_error` to regular exceptions, since these exceptions can happen during normal operation;
* `update_fence_version` has been moved after `group0_service.setup_group0_if_exist` in `main.cc`; otherwise we use an uninitialized `token_metadata::version` and get an error.
Fixes: #14303
Closes #14292
* github.com:scylladb/scylladb:
main.cc: move update_fence_version after group0_service.setup_group0_if_exist
shared_token_metadata: update_fence_version: on_internal_error -> throw
storage_service: handle_topology_transition: fix missed node assignment
Currently, when two cells have the same write timestamp
and both are alive or expiring, we compare their value first,
before checking if either of them is expiring
and if both are expiring, comparing their expiration time
and ttl value to determine which of them will expire
later or was written later.
This was based on an early version of Cassandra.
However, the Cassandra implementation rightfully changed in
e225c88a65 ([CASSANDRA-14592](https://issues.apache.org/jira/browse/CASSANDRA-14592)),
where the cell expiration is considered before the cell value.
To summarize, the motivation for this change is threefold:
1. Cassandra compatibility
2. Prevent an edge case where a null value is returned by a select query when an expired cell has a larger value than a cell with a later expiration.
3. A generalization of the above: value-based reconciliation may cause a select query to return a mixture of upserts if multiple upserts use the same timestamp but have different expiration times. If the cell value is considered before expiration, the select result may contain cells from different inserts, while reconciling based on the expiration times will choose cells consistently from a single upsert, as all cells in the respective upsert will carry the same expiration time.
Fixes #14182
Also, this series:
- updates dml documentation
- updates internal documentation
- updates and adds unit tests and cql pytest reproducing #14182
Closes #14183
* github.com:scylladb/scylladb:
docs: dml: add update ordering section
cql-pytest: test_using_timestamp: add tests for rewrites using same timestamp
mutation_partition: compare_row_marker_for_merge: consider ttl in case expiry is the same
atomic_cell: compare_atomic_cell_for_merge: update and add documentation
compare_atomic_cell_for_merge: compare value last for live cells
mutation_test: test_cell_ordering: improve debuggability
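The reconciliation order motivated above can be sketched as a plain comparison over simplified cells (the `cell` type and field names are illustrative, not Scylla's actual representation): for equal write timestamps, an expiring cell beats a non-expiring one, a later expiry wins, a smaller ttl wins (same expiry with smaller ttl implies a later write time), and the value is compared only as a last resort.

```cpp
#include <cstdint>
#include <optional>
#include <string>

struct cell {
    int64_t timestamp;
    std::optional<int64_t> expiry;  // nullopt => never expires
    int64_t ttl = 0;                // 0 when not expiring
    std::string value;
};

// Returns true if `left` wins the merge over `right`.
bool left_wins(const cell& left, const cell& right) {
    if (left.timestamp != right.timestamp) {
        return left.timestamp > right.timestamp;
    }
    // Expiring beats non-expiring: the expiring cell becomes a tombstone
    // later and would win then, so for consistency it must win now too.
    if (left.expiry.has_value() != right.expiry.has_value()) {
        return left.expiry.has_value();
    }
    if (left.expiry && *left.expiry != *right.expiry) {
        return *left.expiry > *right.expiry;  // later expiry wins
    }
    if (left.ttl != right.ttl) {
        // Same expiry but smaller ttl => written later => wins.
        return left.ttl < right.ttl;
    }
    return left.value > right.value;  // value is only the final tie-breaker
}
```

The old behavior compared `value` immediately after `timestamp`, which produced the null-value edge case and the mixed-upsert results described above.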
Otherwise, the validation
new_fence_version <= token_metadata::version
inside update_fence_version will use an uninitialized
token_metadata::version == 0
and we will get an error.
The test_topology_ops test was improved to
catch this problem.
Fixes: #14303
on_internal_error is wrong for fence_version
condition violation, since in case the
topology change coordinator migrates to another
node we can have the raft_topology_cmd::command::fence
command from the old coordinator running in
parallel with the fence command (or topology version
upgrading raft command) from the new one.
The comment near the raft_topology_cmd::command::fence
handling describes this situation, assuming an exception
is thrown in this case.
It is currently located in query_class_config.hh, which is named after a
now-defunct struct. This arrangement is unintuitive and has no
upside. The main user of max_result_size is query_command, so
colocate it next to the latter.
Closes #14268
and add docs/dev/timestamp-conflict-resolution.md
to document the details of the conflict resolution algorithm.
Refs scylladb/scylladb#14063
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Add reproducers for #14182:
test_rewrite_different_values_using_same_timestamp verifies
expiration-based cell reconciliation.
test_rewrite_different_values_using_same_timestamp_and_expiration
is a scylla_only test, verifying that when
two cells with same timestamp and same expiration
are compared, the one with the lesser ttl prevails.
test_rewrite_using_same_timestamp_select_after_expiration
reproduces the specific issue hit in #14182
where a cell is selected after it expires since
it has a lexicographically larger value than
the other cell with later expiration.
test_rewrite_multiple_cells_using_same_timestamp verifies
atomicity of inserts of multiple columns, with a TTL.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
As in compare_atomic_cell_for_merge, we want to consider
the row marker ttl for ordering, in case both are expiring
and have the same expiration time.
This was missed in a57c087c89
and a085ef74ff.
With that in mind, add documentation to compare_row_marker_for_merge
and a note to both functions about their
equivalence.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Currently, when two cells have the same write timestamp
and both are alive or expiring, we compare their value first,
before checking if either of them is expiring
and if both are expiring, comparing their expiration time
and ttl value to determine which of them will expire
later or was written later.
This was changed in CASSANDRA-14592
for consistency with the preference for dead cells over live cells,
as expiring cells will become tombstones at a future time
and then they'd win over live cells with the same timestamp,
hence they should win also before expiration.
In addition, comparing the cell value before expiration
can lead to unintuitive corner cases where rewriting
a cell using the same timestamp but different TTL
may cause scylla to return the cell with null value
if it expired in the meanwhile.
Also, when multiple columns are written using two upserts
with the same write timestamp but different expirations,
selecting cells by their value may return a mixed result
where each cell is selected individually from either upsert,
by picking the cell with the largest value for each column,
while using the expiration time to break the tie leads
to a more consistent result where a set of cells from
only one of the upserts is selected.
Fixes scylladb/scylladb#14182
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Currently, it is hard to tell which of the many sub-cases
fails in this unit test, in case any of them fails.
This change uses logging at debug and trace levels
to help with that; the error can be reproduced
with --logger-log-level testlog=trace.
(The cases are deterministic, so reproducing should not
be a problem.)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Split off do_execute() into a fast path and slow(ish) path, and
coroutinize the latter.
perf-simple-query shows no change in performance (which is
unsurprising since it picks the fast path which is essentially unchanged).
Closes #14246
* github.com:scylladb/scylladb:
cql3: select_statement: reindent execute_without_checking_exception_message_aggregate_or_paged()
cql3: select_statement: coroutinize execute_without_checking_exception_message_aggregate_or_paged()
cql3: select_statement: split do_execute into fast-path and slow/slower paths
cql3: select_statement: disambiguate execute() overloads
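The fast-path/slow-path split described above can be illustrated with plain functions (the real code returns seastar futures and coroutinizes only the slow path; all names here are illustrative): the hot path stays a regular function with no coroutine frame allocation, and only rare requests pay the coroutine cost.

```cpp
#include <string>

// Counts how often the slow path runs (for demonstration only).
static int slow_calls = 0;

// Slow(ish) path: aggregation/paging handling; a coroutine in the
// real code, so it may allocate a frame and suspend.
std::string execute_slow_path(const std::string& query) {
    ++slow_calls;
    return "slow:" + query;
}

// Fast path dispatcher: cheap requests are handled inline, so the
// common case avoids the coroutine machinery entirely.
std::string do_execute(const std::string& query,
                       bool needs_aggregation_or_paging) {
    if (!needs_aggregation_or_paging) {
        return "fast:" + query;  // essentially unchanged hot path
    }
    return execute_slow_path(query);
}
```

This structure explains why perf-simple-query is unaffected: it always takes the first branch.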
`query_partition_range_concurrent` implements an optimization when
querying a token range that intersects multiple vnodes. Instead of
sending a query for each vnode separately, it sometimes sends a single
query to cover multiple vnodes - if the intersection of replica sets for
those vnodes is large enough to satisfy the CL and good enough in terms
of the heat metric. To check the latter condition, the code would take
the smallest heat metric of the intersected replica set and compare it
to the smallest heat metrics of the replica sets calculated separately
for each vnode.
Unfortunately, there was an edge case that the code didn't handle: the
intersected replica set might be empty and the code would access an
empty range.
This was caught by an assertion added in
8db1d75c6c by the dtest
`test_query_dc_with_rf_0_does_not_crash_db`.
The fix is simple: check if the intersected set is empty - if so, don't
calculate the heat metrics because we can decide early that the
optimization doesn't apply.
Also change the `assert` to `on_internal_error`.
Fixes #14284
Closes #14300
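The fix described above amounts to an early-out before any heat-metric computation. A sketch with simplified types (replicas as integers; `intersect` and `can_merge_ranges` are hypothetical names, not the real functions):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Intersect two replica sets (order-insensitive).
std::vector<int> intersect(std::vector<int> a, std::vector<int> b) {
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(out));
    return out;
}

// Decide whether the multi-vnode query optimization applies.
bool can_merge_ranges(const std::vector<int>& merged_replicas) {
    if (merged_replicas.empty()) {
        // Previously the code went on to take the minimum heat metric
        // of an empty range, which tripped the assertion; now we just
        // decide early that the optimization doesn't apply.
        return false;
    }
    // ... CL and heat-metric checks would go here ...
    return true;
}
```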
In its current state, the s3 client uses a single default-configured http client, making different sched classes' workloads compete with each other for sockets to make requests on. There's an attempt to handle that in the upload-sink implementation, which limits itself to some small number of concurrent PUT requests, but that doesn't help much as many sinks don't share this limit.
This PR makes the s3 client maintain a set of http clients, one per sched-group, configures the maximum number of TCP connections proportionally to a group's shares, and removes the artificial limit from sinks, thus making them share the group's http concurrency limit.
As a side effect, this fixes the upload-sink's no-writes-after-flush protection -- if it's violated, the write will result in an exception, whereas currently it just hangs on a semaphore forever.
Fixes: #13458
Fixes: #13320
Fixes: #13021
Closes #14187
* github.com:scylladb/scylladb:
s3/client: Replace sink flush semaphore with gate
s3/client: Configure different max-connections on http clients
s3/client: Maintain several http clients on-board
s3/client: Remove now unused http reference from sink and file
s3/client: Add make_request() method
This patch adds some minimal tests for the "with compression = {..}" table
configuration. These tests reproduce three known bugs:
Refs #6442: Always print all schema parameters (including default values)
Scylla doesn't return the default chunk_length_in_kb, but Cassandra
does.
Refs #8948: Cassandra 3.11.10 uses "class" instead of "sstable_compression"
for compression settings by default
Cassandra switched, long ago, the "sstable_compression" attribute's
name to "class". This can break Cassandra applications that create
tables (where we won't understand the "class" parameter) and applications
that inquire about the configuration of existing tables. This patch adds
tests for both problems.
Refs #9933: ALTER TABLE with "chunk_length_kb" (compression) of 1MB caused a
core dump on all nodes
Our test for this issue hangs Scylla (or crashes it, depending on the test
environment configuration) when a huge allocation is attempted during
memtable flush, so this test is marked "skip" instead of xfail.
The tests included here also uncovered a new minor/insignificant bug,
where Scylla allows floating point numbers as chunk_length_in_kb - this
number is truncated to an integer, and allowed, unlike Cassandra or
common sense.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #14261
Since `mvcc: make schema upgrades gentle` (51e3b9321b),
rows pointed to by the cursor can have different (older) schema
than the schema of the cursor's snapshot.
However, one place in the code wasn't updated accordingly,
causing a row to be processed with the wrong schema under certain
circumstances.
This passed through unit testing because it requires
a digest-computing cache read after a schema change,
and no test exercised this.
This series fixes the bug and adds a unit test which reproduces the issue.
Fixes #14110
Closes #14305
* github.com:scylladb/scylladb:
test: boost/row_cache_test: add a reproducer for #14110
cache_flat_mutation_reader: use the correct schema in prepare_hash
mutation: mutation_cleaner: add pause()
Since `mvcc: make schema upgrades gentle` (51e3b9321b),
rows pointed to by the cursor can have different (older) schema
than the schema of the cursor's snapshot.
However, one place in the code wasn't updated accordingly,
causing a row to be processed with the wrong schema under certain
circumstances.
This passed through unit testing because it requires
a digest-computing cache read after a schema change,
and no test exercised this.
Fixes #14110
In unit tests, we sometimes want to delay the merging of some MVCC
versions to test transient scenarios with multiple versions present.
In many cases this can be done by holding snapshots to all versions.
But sometimes (e.g. during schema upgrades) versions are added and
scheduled for merge immediately, without a window for the test to
grab a snapshot to the new version.
This patch adds a pause() method to mutation_cleaner, which ensures
that no asynchronous/implicit MVCC version merges happen within
the scope of the call.
This functionality will be used by a test added in an upcoming patch.
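The pause() idea can be sketched under simplifying assumptions (the real mutation_cleaner runs merges asynchronously on a reactor; here work is a synchronous callback and all names are illustrative): an RAII guard defers merge work while alive and flushes it when the last guard is released.

```cpp
#include <functional>
#include <utility>
#include <vector>

class cleaner {
    int _paused = 0;
    std::vector<std::function<void()>> _deferred;
public:
    // RAII guard: merges are deferred for as long as any guard lives.
    struct pause_guard {
        cleaner& c;
        explicit pause_guard(cleaner& c_) : c(c_) { ++c._paused; }
        ~pause_guard() {
            if (--c._paused == 0) {
                for (auto& f : c._deferred) f();  // run deferred merges
                c._deferred.clear();
            }
        }
    };
    pause_guard pause() { return pause_guard(*this); }

    // Schedule a version merge: runs immediately unless paused.
    void merge(std::function<void()> work) {
        if (_paused) {
            _deferred.push_back(std::move(work));
        } else {
            work();
        }
    }
};
```

Within the scope of `pause()`, a test can safely grab snapshots of freshly added versions before any implicit merge collapses them.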
Exposing scrub compaction to the command line allows for offline scrub of sstables in cases where online scrubbing (via scylla itself) is not possible or not desired. One such case recently was an sstable from a backup which turned out to be corrupt, with `nodetool refresh --load-and-stream` refusing to load it.
Fixes: #14203
Closes #14260
* github.com:scylladb/scylladb:
docs/operating-scylla/admin-tools: scylla-sstable: document scrub operation
test/cql-pytest: test_tools.py: add test for scylla sstable scrub
tools/scylla-sstable: add scrub operation
tools/scylla-sstable: write operation: add none to valid validation levels
tools/scylla-sstable: handle errors thrown by the operation
test/cql-pytest: add option to omit scylla's output from the test output
tools/scylla-sstable: s/option/operation_option/
tool/scylla-sstable: add missing comments
This mini-series updates the expected errors in `test/cql-pytest/test-timestamp.py`
to the ones changed in b7bbcdd178.
Then, it renames the test to `test_using_timestamp.py` so it
runs automatically with `test.py`.
Closes #14293
* github.com:scylladb/scylladb:
cql-pytest: rename test-timestamp.py to test_using_timestamp.py
cql-pytest: test-timestamp: test_key_writetime: update expected errors
This reverts commit 8a54e478ba. As
commit 7dadd38161 ("Revert "configure: Switch debug build from
-O0 to -Og") was reverted (by b7627085cb, "Revert "Revert
"configure: Switch debug build from -O0 to -Og"""), we do the
same to cmake to keep the two build systems in sync.
Closes #14286
There are tons of wrappers that help test cases make sstables for their needs, and lots of code duplication in test cases that do parts of those helpers' work on their own. This set cleans up some of that.
Closes #14280
* github.com:scylladb/scylladb:
test/utils: Generalize making memtable from vector<mutation>
test/util: Generalize make_sstable_easy()-s
test/sstable_mutation: Remove useless helper
test/sstable_mutation: Make writer config in make_sstable_mutation_source()
test/utils: De-duplicate make_sstable_containing-s
test/sstable_compaction: Remove useless one-line local lambda
test/sstable_compaction: Simplify sstable making
test/sstables*: Make sstable from vector of mutations
test/mutation_reader: Remove create_sstable() helper from test
It's excessive; a test case that needs it can get the storage prefix without
this fancy wrapper-helper.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #14273
Commit 4e205650 (test: Verify correctness of sstable::bytes_on_disk())
added a test to verify that sstable::bytes_on_disk() is equal to the
real size of real files. The same test case makes sense for S3-backed
sstables as well.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #14272
Followup to 9bfa63fe37. Like in
`test_downgrade_after_successful_upgrade_fails`, the test
`test_joining_old_node_fails` also restarts all nodes at once and is prone
to a bug in the Python driver which can prevent the session from
reconnecting to any of the nodes. This commit applies the same
workaround to the other test (manual reconnect by recreating the Python
driver session).
Closes #14291
Another node can stop after it has joined group 0 but before it
advertised itself in gossip. `get_inet_addrs` will try to resolve all
IPs and `wait_for_peers_to_enter_synchronize_state` will loop
indefinitely.
But `wait_for_peers_to_enter_synchronize_state` can return early if one
of the nodes confirms that the upgrade procedure has finished. For that,
it doesn't need the IPs of all group 0 members - only the IP of some
nodes which can do the confirmation.
This PR restructures the code so that the IPs of nodes are resolved inside
the `max_concurrent_for_each` that
`wait_for_peers_to_enter_synchronize_state` performs. Then, even if some
IPs can't be resolved, but one of the nodes confirms a successful
upgrade, we can continue.
Fixes #13543
Closes #14046
* github.com:scylladb/scylladb:
raft topology: test: check if aborted node replacing blocks bootstrap
raft topology: `wait_for_peers_to_enter_synchronize_state` doesn't need to resolve all IPs
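The restructuring described above can be sketched with simplified synchronous stand-ins (all names and types are illustrative; the real code resolves gossip addresses and runs a seastar `max_concurrent_for_each`): resolve each member inside the loop, skip unresolvable ones, and return as soon as any node confirms the upgrade finished.

```cpp
#include <optional>
#include <string>
#include <vector>

// Stand-in for gossip-based address resolution: a node that died
// before advertising itself cannot be resolved.
std::optional<std::string> resolve(const std::string& host) {
    if (host == "dead") return std::nullopt;
    return "ip-of-" + host;
}

// Stand-in for asking a node whether the upgrade procedure finished.
bool confirms_upgrade_done(const std::string& ip) {
    return ip == "ip-of-upgraded";
}

// One confirmation is enough; unresolvable members no longer block us.
bool wait_for_peers(const std::vector<std::string>& members) {
    for (const auto& m : members) {  // max_concurrent_for_each in reality
        auto ip = resolve(m);
        if (!ip) continue;           // skip nodes we cannot resolve
        if (confirms_upgrade_done(*ip)) {
            return true;
        }
    }
    return false;
}
```

Contrast with the old shape, which resolved all IPs up front and therefore looped indefinitely if any single member was unresolvable.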
1. Otherwise test.py doesn't recognize it.
2. As it represents what the test does in a better way.
3. Following `test_using_timeout.py` naming convention.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The error messages were changed in
b7bbcdd178.
Extend the `match` regular expression param
to pytest.raises to include both old and new message
to remain backward compatible also with Cassandra,
as this test is run against both Cassandra and Scylla.
Note that the test didn't run automatically
since it's named `test-timestamp.py` and test.py
looks up only test scripts beginning with `test_`.
The test will be renamed in the next patch.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
In preparation for converting selectors to evaluate expressions,
add support for evaluating column_mutation_attribute (representing
the WRITETIME/TTL pseudo-functions).
A unit test is added.
Fixes #12906
Closes #14287
* github.com:scylladb/scylladb:
test: expr: test evaluation of column_mutation_attribute
test: lib: enhance make_evaluation_inputs() with support for ttls/timestamps
cql3: expr: evaluate() column_mutation_attribute