Commit Graph

37458 Commits

Author SHA1 Message Date
Tomasz Grabiec
d92287f997 db: multishard: Obtain sharder from erm
This is not strictly necessary, as the multishard reader will later be
avoided altogether for tablet-based tables, but it is a step towards
converting all code to use erm->get_sharder() instead of
schema::get_sharder().
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
18f567385c sstable_directory: Improve trace-level logging 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
34ba8a6a53 db: table: Introduce shard_of() helper
Saves some boilerplate code.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
36da062bcb db: Use table sharder in compaction 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
ad983ac23d sstables: Compute sstable shards using sharder from erm when loading
schema::get_sharder() does not use the correct sharder for
tablet-based tables.  Code which is supposed to work with all kinds of
tables should obtain the sharder from erm::get_sharder().
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
17d6163548 sstables: Generate sharding metadata using sharder from erm when writing
We need to keep sharding metadata consistent with tablet mapping to
shards in order for node restart to detect that those sstables belong
to a single shard and that resharding is not necessary. Resharding of
sstables based on tablet metadata is not implemented yet and will
abort after this series.

Keeping sharding metadata accurate for tablets is only necessary until
compaction group integration is finished. After that, we can use the
sstable token range to determine the owning tablet and thus the owning
shard. Before that, we can't, because a single sstable may contain
keys from different tablets, and the whole key range may overlap with
keys which belong to other shards.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
36e12020b9 test: partitioner: Test split_range_to_single_shard() on tablet-like sharder 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
28b972a588 dht: Make split_range_to_single_shard() prepared for tablet sharder
The function currently assumes that shard assignment for subsequent
tokens is round-robin, which will not be the case for tablets. This
can lead to an incorrect split calculation or an infinite loop.

Another assumption was that subsequent splits returned by the sharder
have distinct shards. This also doesn't hold for tablets, which may
return the same shard for subsequent tokens. This assumption was
embedded in the following line:

  start_token = sharder.token_for_next_shard(end_token, shard);

If the range which starts with end_token is also owned by "shard",
token_for_next_shard() would skip over it.
2023-06-21 00:58:24 +02:00
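
A minimal sketch of the fixed iteration, using toy stand-in types (toy_sharder, next_boundary() and the range layout below are hypothetical, not the real dht:: API). The key point matches the commit above: after closing a sub-range, re-check the shard of the next range instead of jumping straight to token_for_next_shard(), which would skip an adjacent range owned by the same shard.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <optional>
#include <utility>
#include <vector>

// Toy stand-ins; the real code works on dht::token and the sharder interface.
using token = int64_t;
using shard_id = unsigned;

// A tablet-like sharder: each (start_token, shard) entry opens a range that
// runs until the next entry. Consecutive ranges MAY map to the same shard,
// so shard assignment is not round-robin.
struct toy_sharder {
    std::vector<std::pair<token, shard_id>> ranges; // sorted by start token

    shard_id shard_of(token t) const {
        auto it = std::upper_bound(ranges.begin(), ranges.end(), t,
            [](token x, const auto& r) { return x < r.first; });
        return std::prev(it)->second;
    }

    // End of the range containing t, or nullopt if it is the last range.
    std::optional<token> next_boundary(token t) const {
        auto it = std::upper_bound(ranges.begin(), ranges.end(), t,
            [](token x, const auto& r) { return x < r.first; });
        return it == ranges.end() ? std::nullopt : std::optional(it->first);
    }
};

// Split [start, end) into sub-ranges owned by `target`. After closing a
// sub-range we re-check the shard of the next range; jumping straight to
// token_for_next_shard() would skip an adjacent range owned by `target`.
std::vector<std::pair<token, token>>
split_range_to_single_shard(const toy_sharder& s, token start, token end, shard_id target) {
    std::vector<std::pair<token, token>> out;
    while (start < end) {
        token stop = std::min(s.next_boundary(start).value_or(end), end);
        if (s.shard_of(start) == target) {
            if (!out.empty() && out.back().second == start) {
                out.back().second = stop; // merge adjacent runs of `target`
            } else {
                out.push_back({start, stop});
            }
        }
        start = stop;
    }
    return out;
}

int main() {
    // [0,10) and [10,20) both map to shard 0; [20,30) maps to shard 1.
    toy_sharder s{{{0, 0}, {10, 0}, {20, 1}}};
    for (auto [a, b] : split_range_to_single_shard(s, 0, 30, 0)) {
        std::cout << '[' << a << ", " << b << ")\n"; // prints [0, 20)
    }
}
```
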
Tomasz Grabiec
fe7922d65c sstables: Move compute_shards_for_this_sstable() to load()
Soon, compute_shards_for_this_sstable() will need to take a sharder object.

open_data() is called indirectly from sstable::load() and directly
after writing an sstable from various paths. The latter don't really
need to compute shards, since the field is already set by the writer. In
order to reduce code churn, move compute_shards_for_this_sstable() to
the load() path only so that only load() needs to take the sharder.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
390bcf3fae dht: Take sharder externally in splitting functions
We need those functions to work with tablet sharder, which is not
accessible through schema::get_sharder(). In order to propagate the
right sharder, those functions need to take it externally rather than from
the schema object. The sharder will come from the
effective_replication_map attached to the table object.

Those splitting functions are used when generating sharding metadata
of an sstable. We need to keep this sharding metadata consistent with
tablet mapping to shards in order for node restart to detect that
those sstables belong to a single shard and that resharding is not
necessary. Resharding of sstables based on tablet metadata is not
implemented yet and will abort after this series.

Keeping sharding metadata accurate for tablets is only necessary until
compaction group integration is finished. After that, we can use the
sstable token range to determine the owning tablet and thus the owning
shard. Before that, we can't, because a single sstable may contain
keys from different tablets, and the whole key range may overlap with
keys which belong to other shards.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
353ce1a6d1 locator: Make sharder accessible through effective_replication_map
For tablets, sharding depends on the replication map, so the scope of the
sharder should be effective_replication_map rather than the schema
object.

Existing users will be transitioned incrementally in later patches.
2023-06-21 00:58:24 +02:00
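
As a hedged sketch, the access pattern this series converges on could look like the following (illustrative types only, not the real locator definitions):

```cpp
#include <cstdint>
#include <memory>

// Illustrative stand-ins only; not the real locator definitions.
using token = int64_t;

struct sharder {
    virtual unsigned shard_of(token t) const = 0;
    virtual ~sharder() = default;
};

struct effective_replication_map {
    // For tablets, sharding depends on replication state, so the sharder is
    // scoped to the erm rather than to the schema object.
    std::shared_ptr<const sharder> _sharder;
    const sharder& get_sharder() const { return *_sharder; }
};

// Code meant to work with all kinds of tables takes the sharder from the erm
// instead of calling schema::get_sharder().
unsigned shard_for(const effective_replication_map& erm, token t) {
    return erm.get_sharder().shard_of(t);
}
```
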
Tomasz Grabiec
606a8ee2da dht: sharder: Document guarantees about mapping stability 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
22ab100b41 tablets: Implement tablet sharder 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
e44e6033d8 tablets: Include pending replica in get_shard()
We need to move get_shard() from tablet_info to tablet_map in order to
have access to transition_info.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
e8dd5e34c3 dht: sharder: Introduce next_shard()
The logic was extracted from ring_position_range_sharder::next(), and
the latter was changed to rely on sharder::next_shard().

The tablet sharder will have a different implementation for
next_shard(). This way, ring_position_range_sharder can work with both
the current sharder and the tablet sharder.
2023-06-21 00:58:24 +02:00
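
A rough sketch of the resulting shape, with simplified, hypothetical signatures rather than the actual dht::sharder interface: the range walker relies only on next_shard(), so it no longer cares whether shard assignment is round-robin.

```cpp
#include <cstdint>
#include <optional>
#include <utility>

// Illustrative types; the real interface lives in dht::sharder.
using token = int64_t;

struct shard_span {
    unsigned shard; // shard owning the queried token
    token boundary; // first token past this shard's contiguous run
};

struct sharder {
    // Each sharder kind (classic round-robin, tablet) overrides this;
    // a tablet sharder may return the same shard for consecutive calls.
    virtual shard_span next_shard(token t) const = 0;
    virtual ~sharder() = default;
};

// The logic extracted from ring_position_range_sharder::next(): walk a range,
// yielding (sub-range, shard) pairs, relying only on next_shard().
struct range_sharder {
    const sharder& s;
    token pos;
    token end;

    std::optional<std::pair<std::pair<token, token>, unsigned>> next() {
        if (pos >= end) {
            return std::nullopt;
        }
        auto span = s.next_shard(pos);
        token stop = span.boundary < end ? span.boundary : end;
        auto result = std::make_pair(std::make_pair(pos, stop), span.shard);
        pos = stop;
        return result;
    }
};
```
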
Tomasz Grabiec
16797c2d1a db: token_ring_table: Filter out tablet-based keyspaces
Querying from virtual table system.token_ring fails if there is a
tablet-based table due to an attempt to obtain a per-keyspace erm.

Fix by not showing such keyspaces.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
2303466375 db: schema: Attach table pointer to schema
This will make it easier to access table properties in places which
only have a schema_ptr. This is particularly useful when replacing
dht::shard_of() uses with s->table().shard_of(), now that sharding is
no longer static, but table-specific.

Also, it allows us to install a guard which catches invalid uses of
schema::get_sharder() on tablet-based tables.

It will be helpful for other uses as well. For example, we can now get
rid of the static_props hack.
2023-06-21 00:58:24 +02:00
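
Sketched with hypothetical simplifications of the real schema/table types, the direction looks roughly like this: the schema carries a back-pointer to its table, shard computation goes through the table, and schema::get_sharder() can guard against misuse on tablet-based tables.

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical simplifications of the real schema/table/sharder types.
using token = int64_t;

struct sharder {
    virtual unsigned shard_of(token t) const = 0;
    virtual ~sharder() = default;
};

struct table {
    const sharder* _sharder;  // tablet- or vnode-aware, per table
    bool uses_tablets = false;
    unsigned shard_of(token t) const { return _sharder->shard_of(t); }
};

struct schema {
    table* _table = nullptr;  // attached once the table object exists

    table& get_table() const {
        if (!_table) {
            throw std::logic_error("schema is not attached to a table");
        }
        return *_table;
    }

    // The guard: the static sharder is invalid for tablet-based tables.
    const sharder& get_sharder() const {
        if (_table && _table->uses_tablets) {
            throw std::logic_error("schema::get_sharder() called on a tablet-based table");
        }
        return *get_table()._sharder;
    }
};

// Before: dht::shard_of(*s, t)       -- static, wrong for tablet tables.
// After:  s->get_table().shard_of(t) -- resolved through the owning table.
unsigned shard_of(const schema& s, token t) {
    return s.get_table().shard_of(t);
}
```
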
Tomasz Grabiec
84cb0f5df7 schema_registry: Fix SIGSEGV in learn() when concurrent with get_or_load()
The entry may exist, but its schema may not yet be loaded. learn()
didn't take that into account. This problem is not reachable in
production code, which currently always calls get_or_load() before
learn(), except for boot, but there's no concurrency at that point.

Exposed by unit test added later.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
053484e762 schema_registry: Make learn(schema_ptr) attach entry to the target schema
System tables have static schemas and code uses those static schemas
instead of looking them up in the database. We want those schemas to
have a valid table() once the table is created, so we need to attach
the registry entry to the target schema rather than to a schema duplicate.
2023-06-21 00:58:24 +02:00
Tomasz Grabiec
ebc49e89ab test: lib: cql_test_env: Expose feature_service 2023-06-21 00:58:24 +02:00
Tomasz Grabiec
ad6d2b42f2 test: Extract throttle object to separate header 2023-06-21 00:58:24 +02:00
Kamil Braun
643e69af89 Merge 'Cluster features on raft: add storage for supported and enabled features' from Piotr Dulikowski
This PR implements the storage part of the cluster features on raft functionality, as described in the "Cluster features on raft v2" doc. These changes will be useful for later PRs that will implement the remaining parts of the feature.

Two new columns are added to `system.topology`:

- `supported_features set<text>` is a new clustering column which holds the features that a given node advertises as supported. It will first be initialized when the node joins the cluster, and then updated every time the node reboots and its supported feature set changes.
- `enabled_features set<text>` is a new static column which holds the features that are considered enabled by the cluster. Unlike in the current gossip-based implementation, the features will not be enabled implicitly when all nodes support a feature, but rather via an explicit action of the topology coordinator.

These columns are reflected in the `topology_state_machine` structure and are populated when the topology state is loaded. Appropriate methods are added to the `topology_mutation_builder` and `topology_node_mutation_builder` in order to allow setting/modifying those columns.

During startup, nodes update their corresponding `supported_features` column to reflect their current feature set. For now it is done unconditionally, but in the future appropriate checks will be added which will prevent nodes from joining / starting their server for group 0 if they can't guarantee that they support all enabled features.

Closes #14232

* github.com:scylladb/scylladb:
  storage_service: update supported cluster features in group0 on start
  storage_service: add methods for features to topology mutation builder
  storage_service: use explicit ::set overload instead of a template
  storage_service: reimplement mutation builder setters
  storage_service: introduce topology_mutation_builder_base
  topology_state_machine: include information about features
  system_keyspace: introduce deserialize_set_column
  db/system_keyspace: add storage for cluster features managed in group 0
2023-06-20 18:32:00 +02:00
Avi Kivity
453bbc1115 cql3: expr: improve error message when rejecting aggregation functions in illegal contexts
Fix a small grammatical error, and capitalize WHERE in accordance
with SQL tradition.

Closes #14288
2023-06-20 17:52:53 +03:00
Piotr Dulikowski
3e955945de storage_service: update supported cluster features in group0 on start
Now, when a node starts, it will update its `supported_features` row in
`system.topology` via `update_topology_with_local_metadata`.

At this point, the functionality behind cluster features on raft is
mostly incomplete and the state of the `supported_features` column does
not influence anything, so it's safe to update this column
unconditionally. In the future, the node will only join / start group0
server if it is sure that it supports all enabled features and it can
safely update the `supported_features` parameter.
2023-06-20 16:41:08 +02:00
Piotr Dulikowski
707e929831 storage_service: add methods for features to topology mutation builder
The newly added `supported_features` and `enabled_features` columns can
now be modified via topology mutation builders:

- `supported_features` can now be overwritten via a new overload of
  `topology_node_mutation_builder::set`.
- `enabled_features` can now be extended (i.e. more elements can be
  added to it) via `topology_mutation_builder::add_enabled_features`. As
  the set of enabled features only grows, this should be sufficient.
2023-06-20 16:41:08 +02:00
Piotr Dulikowski
2a4462a01f storage_service: use explicit ::set overload instead of a template
The `topology_node_mutation_builder::set` function has an overload which
accepts any type which can be converted to string via `::format`. Its
presence can lead to easy mistakes which can only be detected at runtime
rather than at compile time. A concrete example: I wrote a function that
accepts a std::set<S> where S is convertible to sstring; it turns out
that std::string_view does not satisfy std::convertible_to<sstring>, and
overload resolution fell back to the catch-all overload.

This commit gets rid of the catch-all overload and replaces it with
explicit ones. Fortunately, it was used for only two enums, so it wasn't
much work.
2023-06-20 16:41:08 +02:00
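
The pitfall is easy to reproduce in isolation. A toy sketch (std::string standing in for sstring; the builder and its methods are made up): the conversion from std::string_view to std::string is explicit, so the constrained overload is non-viable and the call silently lands on the catch-all.

```cpp
#include <concepts>
#include <iostream>
#include <set>
#include <string>
#include <string_view>

struct builder {
    // Intended overload for sets of string-like values.
    template <std::convertible_to<std::string> S>
    void set(const std::set<S>& values) {
        std::cout << "set-of-strings overload: " << values.size() << " values\n";
    }

    // Catch-all: accepts almost anything and stringifies it, so a wrong
    // argument type compiles fine and only misbehaves at runtime.
    template <typename T>
    void set(const T&) {
        std::cout << "catch-all overload\n";
    }
};

int main() {
    builder b;
    b.set(std::set<std::string>{"a", "b"});      // intended overload
    // std::string -> std::string_view conversion is implicit, but the
    // reverse is explicit, so std::convertible_to fails here and the call
    // silently picks the catch-all:
    b.set(std::set<std::string_view>{"a", "b"}); // catch-all overload!
}
```

Replacing the catch-all with one explicit overload per accepted type, as this commit does, turns such mistakes into compile errors.
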
Piotr Dulikowski
a8aaeabfac storage_service: reimplement mutation builder setters
As promised in the previous commit which introduced
topology_mutation_builder_base, this commit adjusts existing setters of
topology mutation builder and topology node mutation builder to use
helper methods defined in the base class.

Note that the `::set` method for the unordered set of tokens no longer
deletes the column when an empty value is set; instead it just writes
an empty set. This semantic is arguably clearer given that we have an
explicit `::del` method, and it shouldn't affect the existing
implementation - we never intentionally insert an empty set of tokens.
2023-06-20 16:41:08 +02:00
Piotr Dulikowski
ee12192125 storage_service: introduce topology_mutation_builder_base
Introduces `topology_mutation_builder_base` which will be a base class
for both topology mutation builder and topology node mutation builder.
Its purpose is to abstract away the details of setting/deleting/etc.
columns in the mutation; the actual topology (node) mutation builders
only have to care about converting types and/or allowing only particular
columns to be set. The class uses CRTP: derived classes provide
access to the row being modified, the schema, and the timestamp.

For the sake of commit diff readability, this commit only introduces this
class and changes the builders to derive from it, but no setter
implementations are modified - this will be done in the next commit.
2023-06-20 16:41:08 +02:00
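
A compact sketch of the CRTP arrangement with heavily simplified types (a std::map stands in for the mutation row; all names are illustrative): the base class supplies the set/delete machinery, while the derived builder exposes the row and timestamp and whitelists its columns.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

using api_timestamp = int64_t;
// value + write timestamp; the real code builds serialized cells.
using cell = std::pair<std::string, api_timestamp>;
using row = std::map<std::string, cell>;

template <typename Derived>
struct builder_base {
    // Shared machinery: write a named column into the row that the derived
    // class exposes, at the derived class's timestamp.
    Derived& set_cell(const std::string& column, std::string value) {
        auto& self = static_cast<Derived&>(*this);
        self.mutable_row()[column] = {std::move(value), self.timestamp()};
        return self;
    }
    Derived& del_cell(const std::string& column) {
        auto& self = static_cast<Derived&>(*this);
        self.mutable_row().erase(column); // real code writes a tombstone
        return self;
    }
};

struct topology_mutation_builder : builder_base<topology_mutation_builder> {
    row static_row;
    api_timestamp ts = 0;

    // CRTP hooks used by the base class.
    row& mutable_row() { return static_row; }
    api_timestamp timestamp() const { return ts; }

    // Only column whitelisting and type conversion live here.
    topology_mutation_builder& set_version(int64_t v) {
        return set_cell("version", std::to_string(v));
    }
};

int main() {
    topology_mutation_builder b;
    b.set_version(7).del_cell("transition_state");
}
```
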
Piotr Dulikowski
bc84d59665 topology_state_machine: include information about features
Now, the newly added `supported_features` and `enabled_features` columns
are reflected in the `topology_state_machine` structure.
2023-06-20 16:41:05 +02:00
Piotr Dulikowski
e527e63abc system_keyspace: introduce deserialize_set_column
There are three places in system_keyspace.cc which deserialize a column
holding a set of tokens and convert it to an unordered set of
dht::token. The deserialization process involves a small number of steps
that are the same in all of those places; therefore, they can be
abstracted away.

This commit adds `deserialize_set_column` function which takes care of
deserializing the column to `set_type_impl::native_type`, which can then
be passed to `decode_tokens`. The new function will also be useful for
decoding set columns with cluster features, which will be handled in the
next commit.
2023-06-20 16:37:09 +02:00
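
A sketch of the helper's shape (the serialization format here is a fake comma-separated string; the real helper deserializes the column through its CQL type into set_type_impl::native_type): the shared step lives in one function, and call sites differ only in how they decode the elements.

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <unordered_set>
#include <vector>

// Fake "serialized set": comma-separated values. The real code deserializes
// the column's bytes through its CQL type instead.
std::vector<std::string> deserialize_set_column(const std::string& serialized) {
    std::vector<std::string> native;
    std::istringstream in(serialized);
    for (std::string item; std::getline(in, item, ',');) {
        native.push_back(item);
    }
    return native;
}

// Call sites share the helper and differ only in element decoding.
std::unordered_set<int64_t> decode_tokens(const std::string& serialized) {
    std::unordered_set<int64_t> tokens;
    for (const auto& item : deserialize_set_column(serialized)) {
        tokens.insert(std::stoll(item));
    }
    return tokens;
}

std::unordered_set<std::string> decode_features(const std::string& serialized) {
    auto native = deserialize_set_column(serialized);
    return {native.begin(), native.end()};
}
```
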
Kamil Braun
8b152361f4 Merge 'raft topology: fixes after #13884' from Gusev Petr
This PR fixes some problems found after the PR was merged:
  * missed `node_to_work_on` assignment in `handle_topology_transition`;
  * changed error reporting in `update_fence_version` from `on_internal_error` to regular exceptions, since these exceptions can happen during normal operation;
  * `update_fence_version` has been moved after `group0_service.setup_group0_if_exist` in `main.cc`; otherwise we use an uninitialized `token_metadata::version` and get an error.

Fixes: #14303

Closes #14292

* github.com:scylladb/scylladb:
  main.cc: move update_fence_version after group0_service.setup_group0_if_exist
  shared_token_metadata: update_fence_version: on_internal_error -> throw
  storage_service: handle_topology_transition: fix missed node assignment
2023-06-20 13:02:17 +02:00
Tomasz Grabiec
87b4606cd6 Merge 'atomic_cell: compare value last' from Benny Halevy
Currently, when two cells have the same write timestamp
and both are alive or expiring, we compare their value first,
before checking if either of them is expiring
and if both are expiring, comparing their expiration time
and ttl value to determine which of them will expire
later or was written later.

This was based on an early version of Cassandra.
However, the Cassandra implementation rightfully changed in
e225c88a65 ([CASSANDRA-14592](https://issues.apache.org/jira/browse/CASSANDRA-14592)),
where the cell expiration is considered before the cell value.

To summarize, the motivation for this change is threefold:
1. Cassandra compatibility
2. Prevent an edge case where a null value is returned by a select query when an expired cell has a larger value than a cell with a later expiration.
3. A generalization of the above: value-based reconciliation may cause a select query to return a mixture of upserts if multiple upserts use the same timestamp but have different expiration times.  If the cell value is considered before expiration, the select result may contain cells from different inserts, while reconciling based on the expiration times will choose cells consistently from one of the upserts, as all cells in the respective upsert carry the same expiration time.

Fixes #14182

Also, this series:
- updates dml documentation
- updates internal documentation
- updates and adds unit tests and cql pytest reproducing #14182

Closes #14183

* github.com:scylladb/scylladb:
  docs: dml: add update ordering section
  cql-pytest: test_using_timestamp: add tests for rewrites using same timestamp
  mutation_partition: compare_row_marker_for_merge: consider ttl in case expiry is the same
  atomic_cell: compare_atomic_cell_for_merge: update and add documentation
  compare_atomic_cell_for_merge: compare value last for live cells
  mutation_test: test_cell_ordering: improve debuggability
2023-06-20 12:11:48 +02:00
Petr Gusev
41b950dd21 main.cc: move update_fence_version after group0_service.setup_group0_if_exist
Otherwise, the validation
new_fence_version <= token_metadata::version
inside update_fence_version will use an uninitialized
token_metadata::version == 0
and we will get an error.

The test_topology_ops test was improved to
catch this problem.

Fixes: #14303
2023-06-20 13:40:01 +04:00
Petr Gusev
246eaec14e shared_token_metadata: update_fence_version: on_internal_error -> throw
on_internal_error is wrong for a fence_version
condition violation: when the topology change
coordinator migrates to another node, the
raft_topology_cmd::command::fence command from the
old coordinator can run in parallel with the fence
command (or the topology-version-upgrading raft
command) from the new one.
The comment near the raft_topology_cmd::command::fence
handling describes this situation, assuming an exception
is thrown in this case.
2023-06-20 13:39:17 +04:00
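
A hedged sketch of the resulting shape (names and the exact condition approximate the commit text, not the real shared_token_metadata code): a condition that can legitimately occur while coordinators race should surface as an exception rather than abort the node.

```cpp
#include <stdexcept>
#include <string>

using version_t = long;

// Approximation of the check described above: the fence version must not get
// ahead of the current token_metadata version.
void update_fence_version(version_t current_version, version_t new_fence_version) {
    if (new_fence_version > current_version) {
        // was: on_internal_error(...), which aborts the node; two racing
        // coordinators can trigger this during normal operation, so throw.
        throw std::runtime_error(
            "fence version " + std::to_string(new_fence_version) +
            " is ahead of token_metadata version " + std::to_string(current_version));
    }
    // ... actually record the new fence version ...
}
```
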
Botond Dénes
8bfe3ca543 query: move max_result_size to query-request.hh
It is currently located in query_class_config.hh, which is named after a
now defunct struct. This arrangement is unintuitive and there is no
upside to it. The main user of max_result_size is query_command, so
colocate it next to the latter.

Closes #14268
2023-06-20 11:37:50 +02:00
Benny Halevy
26ff8f7bf7 docs: dml: add update ordering section
and add docs/dev/timestamp-conflict-resolution.md
to document the details of the conflict resolution algorithm.

Refs scylladb/scylladb#14063

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 11:55:54 +03:00
Benny Halevy
31a3152a59 cql-pytest: test_using_timestamp: add tests for rewrites using same timestamp
Add reproducers for #14182:

test_rewrite_different_values_using_same_timestamp verifies
expiration-based cell reconciliation.

test_rewrite_different_values_using_same_timestamp_and_expiration
is a scylla_only test, verifying that when
two cells with the same timestamp and the same expiration
are compared, the one with the lesser ttl prevails.

test_rewrite_using_same_timestamp_select_after_expiration
reproduces the specific issue hit in #14182
where a cell is selected after it expires since
it has a lexicographically larger value than
the other cell with later expiration.

test_rewrite_multiple_cells_using_same_timestamp verifies
atomicity of inserts of multiple columns, with a TTL.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 10:10:39 +03:00
Benny Halevy
0aa13f70eb mutation_partition: compare_row_marker_for_merge: consider ttl in case expiry is the same
As in compare_atomic_cell_for_merge, we want to consider
the row marker ttl for ordering, in case both are expiring
and have the same expiration time.

This was missed in a57c087c89
and a085ef74ff.

With that in mind, add documentation to compare_row_marker_for_merge
and a note to both functions about their
equivalence.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 10:10:39 +03:00
Benny Halevy
6717e45ff0 atomic_cell: compare_atomic_cell_for_merge: update and add documentation
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 10:10:39 +03:00
Benny Halevy
761d62cd82 compare_atomic_cell_for_merge: compare value last for live cells
Currently, when two cells have the same write timestamp
and both are alive or expiring, we compare their value first,
before checking if either of them is expiring
and if both are expiring, comparing their expiration time
and ttl value to determine which of them will expire
later or was written later.

This was changed in CASSANDRA-14592
for consistency with the preference for dead cells over live cells,
as expiring cells will become tombstones at a future time
and then they'd win over live cells with the same timestamp,
hence they should also win before expiration.

In addition, comparing the cell value before expiration
can lead to unintuitive corner cases where rewriting
a cell using the same timestamp but different TTL
may cause scylla to return the cell with a null value
if it expired in the meantime.

Also, when multiple columns are written using two upserts
with the same write timestamp but different expirations,
selecting cells by their value may return a mixed result
where each cell is selected individually from either upsert,
by picking the cell with the largest value for each column,
while using the expiration time to break the tie leads
to a more consistent result where the set of cells from
only one of the upserts is selected.

Fixes scylladb/scylladb#14182

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 10:10:39 +03:00
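
To make the new ordering concrete, here is an illustrative model of merging two live cells (not the actual atomic_cell code; the types and function below are simplified stand-ins): at equal write timestamps, expiring beats non-expiring, a later expiry beats an earlier one, a smaller ttl breaks an expiry tie (it implies a later write time), and only then is the value consulted.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Simplified model of a live (possibly expiring) cell.
struct live_cell {
    int64_t timestamp;
    std::optional<int32_t> expiry; // nullopt => not expiring
    int32_t ttl = 0;               // meaningful only when expiring
    std::string value;
};

// Classic three-way compare: >0 means `a` wins the merge.
int compare_for_merge(const live_cell& a, const live_cell& b) {
    if (a.timestamp != b.timestamp) {
        return a.timestamp < b.timestamp ? -1 : 1;
    }
    if (a.expiry.has_value() != b.expiry.has_value()) {
        // Expiring wins: it becomes a tombstone later, and tombstones win
        // over live cells at the same timestamp.
        return a.expiry.has_value() ? 1 : -1;
    }
    if (a.expiry && *a.expiry != *b.expiry) {
        return *a.expiry < *b.expiry ? -1 : 1; // later expiry wins
    }
    if (a.expiry && a.ttl != b.ttl) {
        return a.ttl < b.ttl ? 1 : -1; // same expiry, smaller ttl => later write
    }
    return a.value.compare(b.value); // value is only the final tie-breaker
}
```
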
Benny Halevy
ec034b92c0 mutation_test: test_cell_ordering: improve debuggability
Currently, it is hard to tell which of the many sub-cases
fails in this unit test when any of them fails.

This change uses debug- and trace-level logging
to help with that; the failure can be reproduced
with --logger-log-level testlog=trace.
(The cases are deterministic, so reproducing should not
be a problem.)

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-06-20 10:10:39 +03:00
Botond Dénes
63b395fe70 Merge 'docs: changes subdomain to opensource.docs.scylladb.com' from David Garcia
docs.scylladb.com will point to https://github.com/scylladb/scylladb-docs-homepage

This pull request changes the domain of this repo to opensource.docs.scylladb.com and moves all the redirects to https://github.com/scylladb/scylladb-docs-homepage/blob/main/docs/_utils/redirects.yaml

Closes #14221

* github.com:scylladb/scylladb:
  Update conf.py
  docs: separate homepage
2023-06-20 10:00:40 +03:00
Botond Dénes
9e9636ef15 Merge 'cql3: select_statement: coroutinize and simplify do_execute()' from Avi Kivity
Split off do_execute() into a fast path and slow(ish) path, and
coroutinize the latter.

perf-simple-query shows no change in performance (which is
unsurprising since it picks the fast path which is essentially unchanged).

Closes #14246

* github.com:scylladb/scylladb:
  cql3: select_statement: reindent execute_without_checking_exception_message_aggregate_or_paged()
  cql3: select_statement: coroutinize execute_without_checking_exception_message_aggregate_or_paged()
  cql3: select_statement: split do_execute into fast-path and slow/slower paths
  cql3: select_statement: disambiguate execute() overloads
2023-06-20 08:02:07 +03:00
Kamil Braun
732feca115 storage_proxy: query_partition_key_range_concurrent: don't access empty range
`query_partition_range_concurrent` implements an optimization when
querying a token range that intersects multiple vnodes. Instead of
sending a query for each vnode separately, it sometimes sends a single
query to cover multiple vnodes - if the intersection of replica sets for
those vnodes is large enough to satisfy the CL and good enough in terms
of the heat metric. To check the latter condition, the code would take
the smallest heat metric of the intersected replica set and compare it
to the smallest heat metrics of the replica sets calculated separately
for each vnode.

Unfortunately, there was an edge case that the code didn't handle: the
intersected replica set might be empty and the code would access an
empty range.

This was caught by an assertion added in
8db1d75c6c by the dtest
`test_query_dc_with_rf_0_does_not_crash_db`.

The fix is simple: check if the intersected set is empty - if so, don't
calculate the heat metrics because we can decide early that the
optimization doesn't apply.

Also change the `assert` to `on_internal_error`.

Fixes #14284

Closes #14300
2023-06-20 07:56:40 +03:00
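
A small sketch of the fix's shape with hypothetical names (heat_map and merged_query_applies are illustrative, not the storage_proxy API): check the intersection before taking its minimum, since min_element over an empty range is exactly the bad access described above.

```cpp
#include <algorithm>
#include <vector>

using host_id = int;

struct heat_map {
    double load_of(host_id) const { return 0.5; } // stand-in metric
};

// Decide whether one merged query can cover several vnodes.
bool merged_query_applies(const std::vector<host_id>& intersection,
                          const heat_map& heat, double worst_acceptable) {
    if (intersection.empty()) {
        // Previously the code fell through and dereferenced
        // min_element() == end(); with an empty intersection the
        // optimization simply doesn't apply.
        return false;
    }
    auto coldest = *std::min_element(intersection.begin(), intersection.end(),
        [&](host_id a, host_id b) { return heat.load_of(a) < heat.load_of(b); });
    return heat.load_of(coldest) <= worst_acceptable;
}
```
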
Botond Dénes
ddf8547f25 Merge 'Add concurrency control and workload isolation for S3 client' from Pavel Emelyanov
In its current state, the s3 client uses a single default-configured http client, thus making different sched classes' workloads compete with each other for sockets to make requests on. There's an attempt to handle that in the upload-sink implementation, which limits itself to a small number of concurrent PUT requests, but that doesn't help much as many sinks don't share this limit.

This PR makes the S3 client maintain a set of http clients, one per sched-group, configures the maximum number of TCP connections proportionally to a group's shares, and removes the artificial limit from sinks, thus making them share the group's http concurrency limit.

As a side effect, the upload-sink's no-writes-after-flush protection is fixed -- if it's violated, the write results in an exception, whereas currently it just hangs on a semaphore forever.

fixes: #13458
fixes: #13320
fixes: #13021

Closes #14187

* github.com:scylladb/scylladb:
  s3/client: Replace sink flush semaphore with gate
  s3/client: Configure different max-connections on http clients
  s3/client: Maintain several http clients on-board
  s3/client: Remove now unused http reference from sink and file
  s3/client: Add make_request() method
2023-06-20 07:09:21 +03:00
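
A hedged sketch of the arrangement in plain C++ with made-up types (the real code uses seastar scheduling groups and seastar's http client): one http client per scheduling group, with a connection budget proportional to the group's shares.

```cpp
#include <algorithm>
#include <memory>
#include <unordered_map>

// Made-up stand-ins for seastar's scheduling groups and http client.
struct http_client {
    explicit http_client(unsigned max_connections) : max_connections(max_connections) {}
    unsigned max_connections;
};

struct s3_client {
    unsigned total_connections = 100;
    std::unordered_map<unsigned /* sched group id */,
                       std::unique_ptr<http_client>> clients;

    // One client per group; its socket budget is proportional to the group's
    // shares, so workloads stop competing for a single shared pool.
    http_client& client_for(unsigned group_id, double group_shares, double total_shares) {
        auto it = clients.find(group_id);
        if (it == clients.end()) {
            auto budget = std::max(1u, static_cast<unsigned>(
                total_connections * group_shares / total_shares));
            it = clients.emplace(group_id,
                                 std::make_unique<http_client>(budget)).first;
        }
        return *it->second;
    }
};
```
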
Nadav Har'El
7deba4f4a5 test/cql-pytest: add tests reproducing bugs in compression configuration
This patch adds some minimal tests for the "with compression = {..}" table
configuration. These tests reproduce three known bugs:

Refs #6442: Always print all schema parameters (including default values)

  Scylla doesn't return the default chunk_length_in_kb, but Cassandra
  does.

Refs #8948: Cassandra 3.11.10 uses "class" instead of "sstable_compression"
            for compression settings by default

  Cassandra switched, long ago, the "sstable_compression" attribute's
  name to "class". This can break Cassandra applications that create
  tables (where we won't understand the "class" parameter) and applications
  that inquire about the configuration of existing tables. This patch adds
  tests for both problems.

Refs #9933: ALTER TABLE with "chunk_length_kb" (compression) of 1MB caused a
            core dump on all nodes

  Our test for this issue hangs Scylla (or crashes, depending on the test
  environment configuration) when a huge allocation is attempted during
  memtable flush. So this test is marked "skip" instead of xfail.

The tests included here also uncovered a new minor/insignificant bug,
where Scylla allows floating point numbers as chunk_length_in_kb - this
number is truncated to an integer, and allowed, unlike Cassandra or
common sense.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes #14261
2023-06-20 06:36:13 +03:00
Tomasz Grabiec
5fa08adc88 Merge 'cache_flat_mutation_reader: use the correct schema in prepare_hash' from Michał Chojnowski
Since `mvcc: make schema upgrades gentle` (51e3b9321b),
rows pointed to by the cursor can have different (older) schema
than the schema of the cursor's snapshot.

However, one place in the code wasn't updated accordingly,
causing a row to be processed with the wrong schema under certain
circumstances.

This passed through unit testing because it requires
a digest-computing cache read after a schema change,
and no test exercised this.

This series fixes the bug and adds a unit test which reproduces the issue.

Fixes #14110

Closes #14305

* github.com:scylladb/scylladb:
  test: boost/row_cache_test: add a reproducer for #14110
  cache_flat_mutation_reader: use the correct schema in prepare_hash
  mutation: mutation_cleaner: add pause()
2023-06-20 01:30:11 +02:00
Israel Fruchter
3889e9040c Update tools/cqlsh submodule
* tools/cqlsh 6e1000f1...2254e920 (2):
  > test: add support for testing cloud bundle option
  > Fix cloudconf handling

Closes #14259
2023-06-20 00:10:53 +03:00
Michał Chojnowski
02bcb5d539 test: boost/row_cache_test: add a reproducer for #14110 2023-06-19 22:50:46 +02:00
Michał Chojnowski
d56b0c20f4 cache_flat_mutation_reader: use the correct schema in prepare_hash
Since `mvcc: make schema upgrades gentle` (51e3b9321b),
rows pointed to by the cursor can have different (older) schema
than the schema of the cursor's snapshot.

However, one place in the code wasn't updated accordingly,
causing a row to be processed with the wrong schema under certain
circumstances.

This passed through unit testing because it requires
a digest-computing cache read after a schema change,
and no test exercised this.

Fixes #14110
2023-06-19 22:50:43 +02:00