Commit Graph

44501 Commits

Author SHA1 Message Date
Avi Kivity
1663fbe717 cql3: statement_restrictions: use functional style
Instead of a constructor, use a new function
analyze_statement_restrictions() as the entry point. It returns an
immutable statement_restrictions object.

This opens the door to returning a variant, with each arm of the variant
corresponding to a different query plan.
2024-09-17 17:13:27 +03:00
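The shape this commit describes (a free analysis function producing an immutable result, later a variant with one arm per query plan) can be sketched as follows; the plan types and fields here are hypothetical illustrations, not the real ScyllaDB classes:

```cpp
#include <string>
#include <variant>

// Hypothetical plan arms; the real code would carry restriction state.
struct full_scan_plan { std::string table; };
struct index_lookup_plan { std::string table; std::string index; };

using query_plan = std::variant<full_scan_plan, index_lookup_plan>;

// Entry point in the style of analyze_statement_restrictions(): a free
// function that does all analysis up front and returns an immutable value,
// here already as a variant with one arm per plan.
inline query_plan analyze(const std::string& table, bool has_usable_index) {
    if (has_usable_index) {
        return index_lookup_plan{table, table + "_idx"};
    }
    return full_scan_plan{table};
}
```

Callers then dispatch on the variant (e.g. with std::visit) instead of querying a mutable object.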
Avi Kivity
3169b8e0ec cql3: statement_restrictions: calculate the index only once
find_idx() is called several times. Rename it to do_find_idx(), call it
just once, store the results, and make find_idx() return the stored
results.

This simplifies control flow and reduces the risk that successive
calls of find_idx return different results.
2024-09-17 17:03:31 +03:00
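A minimal sketch of the compute-once pattern the commit describes, with a hypothetical counter added so the single computation is observable:

```cpp
#include <cstddef>

// Sketch: do_find_idx() runs exactly once, in the constructor, and
// find_idx() returns the stored result on every call.
class restrictions {
public:
    restrictions() : _idx(do_find_idx()) {}
    int find_idx() const { return _idx; }
    std::size_t computations() const { return _computations; }
private:
    // Stand-in for the real index search; counts invocations for the demo.
    int do_find_idx() { ++_computations; return 42; }
    std::size_t _computations = 0;
    int _idx;
};
```

Successive find_idx() calls cannot disagree, because the value is computed once and frozen.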
Avi Kivity
d5c8083b76 cql3: statement_restrictions: make it a const object
Make validate_secondary_index_selections() const (it trivially is),
and call prepare_indexed_local() / prepare_indexed_global() at the
end of the constructor.

By making statement_restrictions a const object, reasoning about it
can be local (looking at the source file) rather than global (looking
at all the interactions of the class with its environment). In fact,
we might make it a function one day.

Since prepare_indexed_global()/prepare_indexed_local() only mutate
_idx_tbl_ck_prefix, which isn't mutated by the rest of the code, the
transformation is safe.

The corresponding code is removed from select_statement. The removal
isn't complete since it still uses some computation, but later
deduplication is left for another day.
2024-09-17 17:03:27 +03:00
Avi Kivity
e4cab3a5e9 cql3: statement_restrictions: drop accessors for single-column key restrictions
No longer used.
2024-09-16 12:15:14 +03:00
Avi Kivity
626acf416e cql3: selection: adjust indentation 2024-09-16 12:15:14 +03:00
Avi Kivity
c443d922ea cql3: selection: delete empty loop
Our refactoring left a loop with no body, delete it.
2024-09-16 12:15:14 +03:00
Avi Kivity
56e8a4c931 cql3: statement_restrictions, selection: fold multi-column restrictions into row-level filter
When filtering, we apply single-column and multi-column filters separately.

This is completely unnecessary. Find the multi-column filters during prepare
time and append them to the row-level filter.

This slightly changes the original behavior: in the original, if we had a
multi-column filter, we applied all of the restrictions. But hopefully
checking for multi-column filters is what we need.
2024-09-16 12:15:14 +03:00
Avi Kivity
a6d81806c0 cql3: statement_restrictions, selection: merge clustering key filter and regular columns filter
The two filters are used in the same way: check the filter, return false if
it matches.

Unify the two filters into a clustering_row_level_filter.

Since one of the two filters wasn't std::optional, we take the liberty
of making the combined filter non-optional.
2024-09-16 12:15:03 +03:00
Avi Kivity
2933a2f118 cql3: statement_restrictions, selection: merge partition key filter and static columns filter
The two filters are used in the same way: check the filter, set a boolean
flag if it matches, return false. The two boolean flags are in turn checked
in the same way.

Unify the two filters into a partition_level_filter.

Since one of the two filters wasn't std::optional, we take the liberty
of making the combined filter non-optional.
2024-09-16 12:10:49 +03:00
Avi Kivity
807153a9ed cql3: selection: filter regular and static rows as a single expression each
Instead of filtering regular and static columns column by column, call
is_satisfied_by() for an expression containing all the static column
predicates, and one for all the regular columns.

We cannot have one expression, since the code sets
_current_static_row_does_not_match only for static columns.

Note the fix for #20485 is now implicit, since the evaluation machinery
will treat missing regular columns as NULL.
2024-09-15 14:33:57 +03:00
Avi Kivity
3c71096479 cql3: statement_restrictions: collect regular column and static column filters into single expressions
Similar to previous work with clustering and partition key, expose
static and regular column filters as single expressions.

Since we don't currently expose a boolean for whether those filters
exist, we expose them now as non-optionals. In any case evaluating
an empty conjunction is plenty fast.
2024-09-15 14:33:57 +03:00
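Evaluating a filter as one conjunction, where the empty conjunction trivially (and cheaply) matches, can be sketched like this; predicate and is_satisfied_by are illustrative stand-ins, not the cql3 types:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// A row-level filter as a single conjunction of predicates. Evaluating
// an empty conjunction is trivially true, which is why the filter can be
// exposed as a non-optional without a separate "filtering needed" boolean.
using predicate = std::function<bool(int)>;

inline bool is_satisfied_by(const std::vector<predicate>& conjunction, int row) {
    return std::all_of(conjunction.begin(), conjunction.end(),
                       [&](const predicate& p) { return p(row); });
}
```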
Avi Kivity
ec2898afe9 cql3: selection: filter clustering key as a single expression
Instead of filtering the clustering key column by column, call
is_satisfied_by() for an expression containing all the clustering key
predicates.

The check for clustering_key.empty() is removed; the evaluation machinery
is able to handle partial clustering keys. In fact, if we add IS NULL,
we have to evaluate anyway, as an empty clustering key should match.
2024-09-15 14:33:57 +03:00
Avi Kivity
318d653d80 cql3: statement_restrictions: expose filter for clustering key
cql3::selection performs filtering by consulting
ck_restrictions_need_filtering() and
get_single_column_clustering_key_restrictions() (which is a map of column
definition to expressions). Make them available in one
nice package as an optional<expression>. When the optional is engaged,
filtering is needed, and the expression is the equivalent of all of the
map.
2024-09-15 14:33:57 +03:00
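The optional&lt;expression&gt; packaging described above (a disengaged optional means no filtering, an engaged one holds the conjunction equivalent to the whole map) can be sketched as follows; expression and the map contents are simplified stand-ins for the real cql3 types:

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-in for cql3::expr::expression: a conjunction of
// per-column predicates, represented here just as named terms.
struct expression { std::vector<std::string> terms; };

// One nice package: nullopt == no filtering needed; an engaged value
// holds the conjunction equivalent to the column->restriction map.
inline std::optional<expression>
clustering_key_filter(const std::map<std::string, std::string>& restrictions) {
    if (restrictions.empty()) {
        return std::nullopt;  // no filtering needed
    }
    expression e;
    for (const auto& [column, restriction] : restrictions) {
        e.terms.push_back(column + " " + restriction);
    }
    return e;
}
```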
Avi Kivity
0bd2f12922 cql3: selection: filter partition key as a single expression
Instead of filtering the partition key column by column, call
is_satisfied_by() for an expression containing all the partition key
predicates.
2024-09-15 14:33:56 +03:00
Avi Kivity
21cb91077f cql3: statement_restrictions: expose filter for partition key
cql3::selection performs filtering by consulting
pk_restrictions_need_filtering() and
get_single_column_partition_key_restrictions() (which is a map of column
definition to expressions). Make them available in one
nice package as an optional<expression>. When the optional is engaged,
filtering is needed, and the expression is the equivalent of all of the
map.
2024-09-15 14:33:56 +03:00
Avi Kivity
a453221314 cql3: statement_restrictions: remove relations used for indexing from filtering
statement_restrictions does not name columns that were used for secondary
index selection as columns for filtering, since accessing the index
"pre-filters" these columns.

However, it keeps the relations that contain these columns. This makes
it impossible (besides being unnecessary) to evaluate the relations, as the
columns they reference aren't selected.

The reason this works now is that
result_set_builder::restrictions_filter::do_filter() iterates on selected
columns, matching them to relations, then executes the matched relation.
A relation that references an unselected column is invisible to do_filter().

We wish to filter using complete expressions, rather than fragments, so as
a first step remove these unnecessary and unusable relations while we
choose which columns are necessary for filtering.

calculate_column_defs_for_filtering is renamed to remind us of the extra
work done.
2024-09-15 14:33:56 +03:00
Avi Kivity
ba8c2014bf cql3: statement_restrictions: bail out of find_idx if !_uses_secondary_index
The condition seems trivial, but wasn't implemented; this had no ill
effects so far.

With the following patches, calculate_column_defs_for_filtering() becomes
confused as it selects an indexing code path even when !_uses_secondary_index,
triggered by the reproducer of #10300.
2024-09-15 14:33:56 +03:00
Avi Kivity
65ba19323c cql3: statement_restrictions, modification_statement: pass correct value of check_indexes
Our UPDATE/INSERT/DELETE statements require a full primary/partition key
and therefore never use indexes; fix the check_indexes parameter passed from
modification_statement.

So far the bug is benign as we did not take any action on the value.

Make the parameter non-default to avoid such confusion in the future.
2024-09-15 14:33:56 +03:00
Avi Kivity
71ea3200ba cql3: statement_restrictions: correct mismatched clustering/partition restrictions references
The second loop of calculate_column_defs_for_filtering() finds clustering
keys that are used for filtering, minus any clustering keys that happen
to be used for secondary indexing.

However, to check whether the clustering key is used for secondary indexing,
it looks up in _single_column_partition_key_restrictions, which contains
partition key restrictions.

The end result is that we select a column which ends up in the partition key
of the secondary index, and so is unnecessary. We do a little more work,
but the bug is benign. Nevertheless, fix it, as it interferes with following
work.
2024-09-15 14:33:56 +03:00
Avi Kivity
33db14e7d5 cql3: statement_restrictions: precalculate get_column_defs_for_filtering()
get_column_defs_for_filtering() names all the columns that are required
for filtering. While doing that, it skips over columns that participate
in indexing (primary or secondary), since the index "pre-filters" the
query.

We wish to make use of this skipping. As a first step, call the calculation
from the constructor, so we have control over when it is executed.
2024-09-15 14:33:56 +03:00
Avi Kivity
251ad4fcd0 cql3: selection: do_filter(): push static/regular row glue to higher level
Currently, for each column we call get_non_pk_values() to transform
the way we get the information (query::result_row_view) to the way
the expression evaluation machinery wants it (vector<managed_bytes_opt>).

Call it just once outside the loop.
2024-09-15 14:33:56 +03:00
Avi Kivity
b9bc783418 cql3: selection: don't ignore regular column restriction if a regular row is not present
If a regular row isn't present, no regular column restriction
(say, r=3) can pass since all regular columns are presented as NULL,
and we don't have an IS NULL predicate. Yet we just ignore it.

Handle the restriction on a missing column by returning false, signifying
that the row was filtered out.

We have to move the check after the conditional checking whether there's
any restriction at all, otherwise we exit early with a false failure.

Unit tests marked xfail on this issue are now unmarked.

A subtest of test_tombstone_limit is adjusted since it depended on this
bug. It tested a regular column which wasn't there, and this bug caused
the filter to be ignored. Change to test a static column that is there.

A test for a bug found while developing the patch is also added. It is
also tested by test_tombstone_limit, but better to have a dedicated test.

Fixes #10357

Closes scylladb/scylladb#20486
2024-09-15 13:44:16 +03:00
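The fixed semantics can be sketched with std::optional standing in for a possibly-missing column value: when the regular row is absent, every regular column evaluates to NULL, and without an IS NULL predicate no restriction like r = 3 can pass.

```cpp
#include <optional>

// Sketch of the fix: NULL (a disengaged optional) never satisfies an
// equality restriction, so a missing regular row is filtered out rather
// than having its restrictions silently ignored.
inline bool satisfies_eq(const std::optional<int>& column_value, int target) {
    return column_value.has_value() && *column_value == target;
}
```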
Botond Dénes
6d8e9645ce test/*/run: restore --vnodes into working order
This option was silently broken when the default of --enable-tablets changed
from false to true. The reason is that when --vnodes is passed, run only
removes --enable-tablets=true from scylla's command line. With the new
default this is not enough; we need to explicitly disable tablets to
override the default.

Closes scylladb/scylladb#20462
2024-09-13 17:10:09 +03:00
Nadav Har'El
f255391d52 cql-pytest: translate Cassandra's tests for arithmetic operators
This is a translation of Cassandra's CQL unit test source file
OperationFctsTest.java into our cql-pytest framework.

This is a massive test suite (over 800 lines of code) for Cassandra's
"arithmetic operators" CQL feature (CASSANDRA-11935), which was added
to Cassandra almost 8 years ago (and reached Cassandra 4.0), but we
never implemented it in Scylla.

All of the tests in the suite fail in ScyllaDB due to our lack of this
feature:

  Refs #2693: Support arithmetic operators

One test also discovered a new issue:

  Refs #20501: timestamp column doesn't allow "UTC" in string format

All the tests pass on Cassandra.

Some of the tests insist on specific error message strings and specific
precision for decimal arithmetic operations - where we may not necessarily
want to be 100% compatible with Cassandra in our eventual implementation.
But at least the test will allow us to make deliberate - and not accidental -
deviations from compatibility with Cassandra.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#20502
2024-09-13 14:52:59 +03:00
Botond Dénes
d3a9654fcc Merge 'Make use of async() context in sstable_mutation_test' from Pavel Emelyanov
This test runs all its cases in a seastar thread, but still uses .then() continuations in some of them.
This PR converts all continuations into plain .get()-s.

Closes scylladb/scylladb#20457

* github.com:scylladb/scylladb:
  test: Restore indentation after previous changes
  test: Threadify tombstone_in_tombstone2()
  test: Threadify range_tombstone_reading()
  test: Threadify tombstone_in_tombstone()
  test: Threadify broken_ranges_collection()
  test: Threadify compact_storage_dense_read()
  test: Threadify compact_storage_simple_dense_read()
  test: Threadify compact_storage_sparse_read()
  test: Simplify test_range_reads() counting
  test: Simplify test_range_reads() inner loop
  test: Threadify test_range_reads() itself
  test: Threadify test_range_reads() callers
  test: Threadify generate_clustered() itself
  test: Threadify generate_clustered() callers
  test: Threadify test_no_clustered test
  test: Threadify nonexistent_key test
2024-09-13 14:09:53 +03:00
Andrei Chekun
bad7407718 test.py: Add support for BOOST_DATA_TEST_CASE
Currently, test.py throws an error if a test uses
BOOST_DATA_TEST_CASE. As a first step, test.py collects all test
functions in the file, but when BOOST_DATA_TEST_CASE is used the
output has additional lines indicating parametrized tests that
test.py cannot handle. This commit adds handling for this case; as a
caveat, all test names should start with 'test' or they will be ignored.

Closes: #20530

Closes scylladb/scylladb#20556
2024-09-13 13:44:26 +03:00
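The "names must start with test" caveat amounts to a simple filter over the listed test functions; keep_test_names below is a hypothetical illustration (test.py itself is Python, this sketch just shows the rule):

```cpp
#include <string>
#include <string_view>
#include <vector>

// Keep only names starting with "test"; parametrized-case output lines
// and any other output are dropped (and thus ignored).
inline std::vector<std::string>
keep_test_names(const std::vector<std::string>& lines) {
    std::vector<std::string> out;
    for (const auto& line : lines) {
        if (std::string_view(line).substr(0, 4) == "test") {
            out.push_back(line);
        }
    }
    return out;
}
```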
Botond Dénes
7cb8cab2ae Merge 'Remove make_shared_schema() helper' from Pavel Emelyanov
This function was obsoleted by schema_builder some time ago. To avoid patching all its callers, the helper became a wrapper around it. The remaining users are all in tests, and patching them to use the builder directly makes the code shorter in many cases.

Closes scylladb/scylladb#20466

* github.com:scylladb/scylladb:
  schema: Ditch make_shared_schema() helper
  test: Tune up indentation in uncompressed_schema()
  test: Make tests use schema_builder instead of make_shared_schema
2024-09-13 12:25:10 +03:00
Pavel Emelyanov
730731da4a test: Remove unused table config from max_ongoing_compaction_test
The local config is unused since #15909, when the table creation was
changed to use env's facilities.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#20511
2024-09-13 12:21:56 +03:00
Pavel Emelyanov
4c77f474ed test: Remove unused upload_path local variable
Since #14152, creation of an sstable takes the table dir and its state. The
test in question wants to create an sstable in the upload/ subdir, and for
that it used to maintain the full "cf.dir/upload" path, which is not
required any more.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#20514
2024-09-13 12:21:00 +03:00
Pavel Emelyanov
e9a1c0716f test: Use sstables::test_env to make sstables for directory test
This is a continuation of #20431 in another test. After #20395 it is also
possible to remove unused local dir variables.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#20541
2024-09-13 12:19:59 +03:00
Botond Dénes
4fb194117e Merge 'Generalize multipart upload implementations in S3 client' from Pavel Emelyanov
There are currently two -- upload_sink_base and do_upload_file. This PR merges as much code as possible (spoiler: it's already mostly copy-n-pasted, so squashing is pretty straightforward)

Closes scylladb/scylladb#20568

* github.com:scylladb/scylladb:
  s3/client: Reuse class multipart_upload in do_upload_file
  s3/client: Split upload_sink_base class into two
2024-09-13 10:35:10 +03:00
Kefu Chai
cf1f90fe0c auth: remove unused #include
the `seastar/core/print.hh` header is no longer required by
`auth/resource.hh`. this was identified by clang-include-cleaner.
as the code is audited, we can safely remove the #include directive.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#20575
2024-09-13 09:49:05 +03:00
Botond Dénes
c7c5817808 Merge 'Improve timestamp heuristics for tombstone garbage collection' from Benny Halevy
When purging regular tombstones, consult the min_live_timestamp, if available.
This is safe since we don't need to protect dead data from resurrection, as it is already dead.

For shadowable tombstones, consult the min_memtable_live_row_marker_timestamp,
if available; otherwise fall back to the min_live_timestamp.

If we see in a view table a shadowable tombstone with time T, then in any row where the row marker's timestamp is higher than T the shadowable tombstone is completely ignored and it doesn't hide any data in any column, so the shadowable tombstone can be safely purged without any effect or risk resurrecting any deleted data.

In other words, rows which might cause problems for purging a shadowable tombstone with time T are rows with row markers older than or equal to T. So to know if a whole sstable can cause problems for a shadowable tombstone of time T, we need to check if the sstable's oldest row marker (and not oldest column) is older than or equal to T. And the same check applies similarly to the memtable.

If both extended timestamp statistics are missing, fall back to the legacy (and inaccurate) min_timestamp.

Fixes scylladb/scylladb#20423
Fixes scylladb/scylladb#20424

> [!NOTE]
> no backport needed at this time
> We may consider backport later on after given some soak time in master/enterprise
> since we do see tombstone accumulation in the field under some materialized views workloads

Closes scylladb/scylladb#20446

* github.com:scylladb/scylladb:
  cql-pytest: add test_compaction_tombstone_gc
  sstable_compaction_test: add mv_tombstone_purge_test
  sstable_compaction_test: tombstone_purge_test: test that old deleted data do not inhibit tombstone garbage collection
  sstable_compaction_test: tombstone_purge_test: add testlog debugging
  sstable_compaction_test: tombstone_purge_test: make_expiring: use next_timestamp
  sstable, compaction: add debug logging for extended min timestamp stats
  compaction: get_max_purgeable_timestamp: use memtable and sstable extended timestamp stats
  compaction: define max_purgeable_fn
  tombstone: can_gc_fn: move declaration to compaction_garbage_collector.hh
  sstables: scylla_metadata: add ext_timestamp_stats
  compaction_group, storage_group, table_state: add extended timestamp stats getters
  sstables, memtable: track live timestamps
  memtable_encoding_stats_collector: update row_marker: do nothing if missing
2024-09-13 08:56:51 +03:00
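The purge rule for shadowable tombstones described above can be sketched as follows; the parameter names are illustrative, and the real code consults memtable and sstable statistics separately before falling back:

```cpp
#include <cstdint>
#include <optional>

// A shadowable tombstone with timestamp t can be blocked from purging only
// by rows whose live row marker is older than or equal to t. If the
// extended stat is missing, fall back to the coarser min_timestamp.
inline bool can_purge_shadowable(std::int64_t t,
                                 std::optional<std::int64_t> min_live_row_marker_ts,
                                 std::int64_t min_timestamp) {
    std::int64_t oldest = min_live_row_marker_ts.value_or(min_timestamp);
    // If every row marker is strictly newer than t, the tombstone hides
    // nothing and is safe to purge.
    return oldest > t;
}
```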
Takuya ASADA
3cd2a61736 dist: drop scylla-jmx
Since JMX server is deprecated, drop them from submodule, build system
and package definition.

Related scylladb/scylla-tools-java#370
Related #14856

Signed-off-by: Takuya ASADA <syuu@scylladb.com>

Closes scylladb/scylladb#17969
2024-09-13 07:59:45 +03:00
Botond Dénes
fc9804ec31 Update tools/java submodule
* tools/java 0b4accdd...e505a6d3 (1):
  > [C-S] Make it use DCAwareRoundRobinPolicy unless rack is provided

Closes scylladb/scylladb#20562
2024-09-13 06:30:04 +03:00
Pavel Emelyanov
17e7d3145c s3/client: Reuse class multipart_upload in do_upload_file
Uploading a file is implemented by the do_upload_file class. This class
re-implements a big portion of what's currently in the multipart_upload one.
This patch makes the former class inherit from the latter and removes
all the duplication from it.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-09-12 18:38:16 +03:00
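The refactoring shape (inherit the shared protocol instead of re-implementing it) can be sketched like this; the class and step names are illustrative, not the real s3 client API:

```cpp
#include <string>
#include <vector>

// The multipart protocol lives in a base class; derived uploaders reuse
// its steps instead of duplicating them.
class multipart_upload {
public:
    void start() { _log.push_back("CreateMultipartUpload"); }
    void upload_part(int n) { _log.push_back("UploadPart " + std::to_string(n)); }
    void complete() { _log.push_back("CompleteMultipartUpload"); }
    const std::vector<std::string>& log() const { return _log; }
private:
    std::vector<std::string> _log;
};

// File upload inherits the protocol rather than re-implementing it.
class upload_file : public multipart_upload {
public:
    void upload(int parts) {
        start();
        for (int n = 1; n <= parts; ++n) {
            upload_part(n);  // reused base-class step
        }
        complete();
    }
};
```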
Pavel Emelyanov
14b741afc9 s3/client: Split upload_sink_base class into two
This class implements two facilities -- the multipart upload protocol itself
plus some common parts of upload_sink_impl (in fact -- only close() and
a plug for put(packet)).

This patch splits those two facilities into two classes. One of them
will be re-used later.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-09-12 18:00:19 +03:00
Sergey Zolotukhin
612a141660 raft: Fix race condition on override_snapshot_thresholds.
When the server_impl::applier_fiber is paused by a co_await at line raft/server.cc:1375:
```
co_await override_snapshot_thresholds();
```
a new snapshot may be applied, which updates the actual values of the log's last applied
and snapshot indexes. As a result, the new snapshot index could become higher than the
old value stored in _applied_idx at line raft/server.cc:1365, leading to an assertion
failure in log::last_conf_for().
Since error injection is disabled in release builds, this issue does not affect production releases.

This issue was introduced in the following commit
9dfa041fe1,
when error injection was added to override the log snapshot configuration parameters.

How to reproduce:

1. Build debug version of randomized_nemesis_test
```
ninja-build build/debug/test/raft/randomized_nemesis_test
```
2. Run
```
parallel --halt now,fail=1 -j20 'build/debug/test/raft/randomized_nemesis_test \
--run_test=test_frequent_snapshotting  -- -c2 -m2G --overprovisioned --unsafe-bypass-fsync 1 \
--kernel-page-cache 1 --blocked-reactor-notify-ms 2000000  --default-log-level \
trace > tmp/logs/eraseme_{}.log  2>&1 && rm tmp/logs/eraseme_{}.log' ::: {1..1000}
```

Fixes scylladb/scylladb#20363

Closes scylladb/scylladb#20555
2024-09-12 16:19:27 +02:00
Aleksandra Martyniuk
59fba9016f docs: operating-scylla: add task manager docs
Admin-facing documentation of task manager.

Closes scylladb/scylladb#20209
2024-09-12 16:42:28 +03:00
Nadav Har'El
d49dbb944c Merge 'doc: move Alternator in the page tree and remove it's redundant ToC' from Anna Stuchlik
This PR hides the ToC on the Alternator page, as we don't need it, especially at the end of the page.

The ToC must be hidden rather than removed because removing it would, in turn, remove the "Getting Started With ScyllaDB Alternator" and "ScyllaDB Alternator for DynamoDB users" from the page tree and make them inaccessible.

In addition, this PR moves Alternator higher in the page tree.

Fixes https://github.com/scylladb/scylladb/issues/19823

Closes scylladb/scylladb#20565

* github.com:scylladb/scylladb:
  doc: move Alternator higher in the page tree
  doc: hide the redundant ToC on the Alternator page
2024-09-12 15:58:34 +03:00
Nadav Har'El
930accad12 alternator: return error on unused AttributeDefinitions
A CreateTable request defines the KeySchema of the base table and each
of its GSIs and LSIs. It also needs to give an AttributeDefinition for
each attribute used in a KeySchema - which among other things specifies
this attribute's type (e.g., S, N, etc.). Other, non-key, attributes *do
not* have a specified type, and accordingly must not be mentioned in
AttributeDefinitions.

Before this patch, Alternator just ignored unused AttributeDefinitions
entries, whereas DynamoDB throws an error in this case. This patch fixes
Alternator's behavior to match DynamoDB's - and adds a test to verify this.

Besides being more error-path-compatible with DynamoDB, this extra check
can also help users: We already had one user complaining that an
AttributeDefinitions setting he was using was ignored, not realizing
that it wasn't used by any KeySchema. A clear error message would have
saved this user hours of investigation.

Fixes #19784.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#20378
2024-09-12 15:37:18 +03:00
Pavel Emelyanov
632a65bffa Merge 'repair: row_level: coroutinize more functions' from Avi Kivity
Coroutinize more functions in row-level repair to improve maintainability.

The functions all deal with repair buffers, so coroutinization does not affect performance.

Cleanup, no reason to backport

Closes scylladb/scylladb#20464

* github.com:scylladb/scylladb:
  repair: row_level: restore indentation
  repair: row_level: coroutinize repair_service::insert_repair_meta()
  repair: row_level: coroutinize repair_meta::get_full_row_hashes()
  repair: row_level: coroutinize repair_meta::apply_rows_on_follower()
  repair: row_level: coroutinize repair_meta::clear_working_row_buf()
  repair: row_level: coroutinize get_common_diff_detect_algorithm()
  repair: row_level: coroutinize repair_service::remove_repair_meta() (non-selective overload)
  repair: row_level: coroutinize repair_service::remove_repair_meta() (by-address overload)
  repair: row_level: coroutinize repair_service::remove_repair_meta() (by-id overload)
  repair: row_level: row_level_repair::run()
  repair: row_level: row_level_repair::send_missing_rows_to_follower_nodes()
  repair: row_level: row_level_repair::get_missing_rows_from_follower_nodes()
  repair: row_level: row_level_repair::negotiate_sync_boundary()
  repair: row_level: coroutinize repair_put_row_diff_with_rpc_stream_process_op()
  repair: row_level: coroutinize repair_meta::get_sync_boundary_handler()
  repair: row_level: coroutinize repair_meta::get_sync_boundary()
  repair: row_level: coroutinize repair_meta::repair_set_estimated_partitions_handler()
  repair: row_level: coroutinize repair_meta::repair_set_estimated_partitions()
  repair: row_level: coroutinize repair_meta::repair_get_estimated_partitions_handler()
  repair: row_level: coroutinize repair_meta::repair_get_estimated_partitions()
  repair: row_level: coroutinize repair_meta::repair_row_level_stop_handler()
  repair: row_level: coroutinize repair_meta::repair_row_level_stop()
  repair: row_level: coroutinize repair_meta::repair_row_level_start_handler()
  repair: row_level: coroutinize repair_meta::repair_row_level_start()
  repair: row_level: coroutinize repair_meta::get_combined_row_hash_handler()
  repair: row_level: coroutinize repair_meta::get_combined_row_hash()
  repair: row_level: coroutinize repair_meta::get_full_row_hashes_handler()
  repair: row_level: coroutinize repair_meta::get_full_row_hashes_with_rpc_stream()
  repair: row_level: coroutinize repair_meta::request_row_hashes()
2024-09-12 15:35:57 +03:00
Kefu Chai
197451f8c9 utils/rjson.cc: include the function name in exception message
recently, we are observing errors like:

```
stderr: error running operation: rjson::error (JSON SCYLLA_ASSERT failed on condition 'false', at: 0x60d6c8e 0x4d853fd 0x50d3ac8 0x518f5cd 0x51c4a4b 0x5fad446)
```

we only passed `false` to the `RAPIDJSON_ASSERT()` macro, so all we
have is the type of the error (rjson::error) and a backtrace. it
would be better to have more information without recompiling or
fetching the debug symbols to decipher the backtrace.

Refs scylladb/scylladb#20533
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#20539
2024-09-12 15:22:49 +03:00
Anna Stuchlik
851e903f46 doc: move Alternator higher in the page tree 2024-09-12 14:08:26 +02:00
Anna Stuchlik
a32ff55c66 doc: hide the redundant ToC on the Alternator page
This commit hides the ToC, as we don't need it, especially at the end of the page.
The ToC must be hidden rather than removed because removing it would, in turn,
remove the "Getting Started With ScyllaDB Alternator" and "ScyllaDB Alternator for DynamoDB users"
from the page tree and make them inaccessible.
2024-09-12 14:01:15 +02:00
Alexey Novikov
8b6e987a99 test: add test_pinned_cl_segment_doesnt_resurrect_data
add a test for the issue where writes in commitlog segments pinned by another
table could be resurrected. This test is based on the dtest code published in
#14870, adapted for the community version. It's a regression test for the
#15060 fix and should fail before that patch and succeed afterwards.

Refs #14870, #15060

Closes scylladb/scylladb#20331
2024-09-12 10:58:22 +03:00
Takuya ASADA
90ab2a24df toolchain: restore multiarch build
When we introduced the optimized clang at 6e487a4, we dropped the multiarch build from the frozen toolchain, because building clang under QEMU emulation is too heavy.

Actually, even after that patch merged, there are two modes which do not build clang: --clang-build-mode INSTALL_FROM and --clang-build-mode SKIP. So we should restore the multiarch build only in these modes, and keep skipping it in INSTALL mode since that builds clang.

Since we apply multiarch in INSTALL_FROM mode, --clang-archive is replaced
with --clang-archive-x86_64 and --clang-archive-aarch64.

Note that this breaks compatibility of existing clang archive, since it
changes clang root directory name from llvm-project to llvm-project-$ARCH.

Closes #20442

Closes scylladb/scylladb#20444
2024-09-12 10:44:45 +03:00
Kefu Chai
3e84d43f93 treewide: use seastar::format() or fmt::format() explicitly
before this change, we relied on `using namespace seastar` to use
`seastar::format()` without qualifying `format()` with its
namespace. this worked fine until we changed the parameter type
of the format string of `seastar::format()` from `const char*` to
`fmt::format_string<...>`. this change practically invited
`seastar::format()` to the club of `std::format()` and `fmt::format()`,
where all members accept a templated parameter as their `fmt`
parameter, and `seastar::format()` is not the best candidate anymore.
argument-dependent lookup (ADL for short) favors the
function which is in the same namespace as its parameter, but
`using namespace` makes `seastar::format()` more competitive,
so both `std::format()` and `seastar::format()` are considered
as candidates.

that is what is happening in scylladb at quite a few call sites of
`format()`, hence ADL is not able to tell which function is the winner
of the name lookup:

```
/__w/scylladb/scylladb/mutation/mutation_fragment_stream_validator.cc:265:12: error: call to 'format' is ambiguous
  265 |     return format("{} ({}.{} {})", _name_view, s.ks_name(), s.cf_name(), s.id());
      |            ^~~~~~
/usr/bin/../lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/format:4290:5: note: candidate function [with _Args = <const std::basic_string_view<char> &, const seastar::basic_sstring<char, unsigned int, 15> &, const seastar::basic_sstring<char, unsigned int, 15> &, const utils::tagged_uuid<table_id_tag> &>]
 4290 |     format(format_string<_Args...> __fmt, _Args&&... __args)
      |     ^
/__w/scylladb/scylladb/seastar/include/seastar/core/print.hh:143:1: note: candidate function [with A = <const std::basic_string_view<char> &, const seastar::basic_sstring<char, unsigned int, 15> &, const seastar::basic_sstring<char, unsigned int, 15> &, const utils::tagged_uuid<table_id_tag> &>]
  143 | format(fmt::format_string<A...> fmt, A&&... a) {
      | ^
```

in this change, we change all `format()` calls to either `fmt::format()` or
`seastar::format()` with the following rules:
- if the caller expects an `sstring` or `std::string_view`, change to
  `seastar::format()`
- if the caller expects an `std::string`, change to `fmt::format()`,
  because `sstring::operator std::basic_string` would incur a deep
  copy.

we will need another change to enable scylladb to compile with the
latest seastar, namely, to pass the format string as a templated
parameter down to helper functions which format their parameters.
to minimize the scope of this change, let's include that change when
bumping up the seastar submodule, as that change will depend on
the seastar change.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
2024-09-11 23:21:40 +03:00
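The ambiguity and its fix can be reproduced with a toy example: two namespaces each export a format() taking a format-string-like first parameter, both are pulled in by using namespace directives, and only an explicitly qualified call is unambiguous (lib_a/lib_b stand in for seastar/std here):

```cpp
#include <string>

namespace lib_a {
    template <typename... Args>
    std::string format(const char* fmt, Args&&...) { return std::string("a:") + fmt; }
}
namespace lib_b {
    template <typename... Args>
    std::string format(const char* fmt, Args&&...) { return std::string("b:") + fmt; }
}

// Both namespaces are in scope, mirroring `using namespace seastar`
// alongside the standard library.
using namespace lib_a;
using namespace lib_b;

inline std::string qualified_call() {
    // An unqualified format("x") here would fail to compile: the call is
    // ambiguous between lib_a::format and lib_b::format. Qualifying picks
    // the intended overload, which is exactly the rule the commit applies.
    return lib_a::format("x");
}
```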
Pavel Emelyanov
f227f4332c test: Remove unused path local variable
Left after #20499 :(

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#20540
2024-09-11 23:10:25 +03:00
Avi Kivity
ed7d352e7d Merge 'Validate checksums for uncompressed SSTables' from Nikos Dragazis
This PR introduces a new file data source implementation for uncompressed SSTables that validates the checksum of each chunk being read. Unlike for compressed SSTables, checksum validation for uncompressed SSTables will be active for scrub/validate reads but not for normal user reads, to ensure we do not introduce any performance regression.

It consists of:
* A new file data source for uncompressed SSTables.
* Integration of checksums into SSTable's shareable components. The validation code loads the component on demand and manages its lifecycle with shared pointers.
* A new `integrity_check` flag to enable the new file data source for uncompressed SSTables. The flag is currently enabled only through the validation path, i.e., it does not affect normal user reads.
* New scrub tests for both compressed and uncompressed SSTables, as well as improvements in the existing ones.
* A change in JSON response of `scylla validate-checksums` to report if an uncompressed SSTable cannot be validated due to lack of checksums (no `CRC.db` in `TOC.txt`).

Refs #19058.

New feature, no backport is needed.

Closes scylladb/scylladb#20207

* github.com:scylladb/scylladb:
  test: Add test to validate SSTables with no checksums
  tools: Fix typo in help message of scylla validate-checksums
  sstables: Allow validate_checksums() to report missing checksums
  test: Add test for concurrent scrub/validate operations
  test: Add scrub/validate tests for uncompressed SSTables
  test/lib: Add option to create uncompressed random schemas
  test: Add test for scrub/validate with file-level corruption
  test: Check validation errors in scrub tests
  sstables: Enable checksum validation for uncompressed SSTables
  sstables: Expose integrity option via crawling mutation readers
  sstables: Expose integrity option via data_consume_rows()
  sstables: Add option for integrity check in data streams
  sstables: Remove unused variable
  sstables: Add checksum in the SSTable components
  sstables: Introduce checksummed file data source implementation
  sstables: Replace assert with on_internal_error
2024-09-11 23:09:45 +03:00
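The chunked verification the new data source performs can be sketched as follows; a trivial additive checksum stands in for the real CRC32 values stored in the CRC.db component, and the chunking/verification loop is the point:

```cpp
#include <cstddef>
#include <cstdint>
#include <string_view>
#include <vector>

// Toy checksum standing in for CRC32 (illustration only).
inline std::uint32_t toy_checksum(std::string_view chunk) {
    std::uint32_t sum = 0;
    for (unsigned char c : chunk) {
        sum = sum * 31 + c;
    }
    return sum;
}

// Validate each fixed-size chunk of the data against a stored checksum,
// as a checksummed data source would do while the chunks are read.
inline bool validate_chunks(std::string_view data, std::size_t chunk_size,
                            const std::vector<std::uint32_t>& expected) {
    std::size_t i = 0;
    for (std::size_t off = 0; off < data.size(); off += chunk_size, ++i) {
        std::string_view chunk = data.substr(off, chunk_size);
        if (i >= expected.size() || toy_checksum(chunk) != expected[i]) {
            return false;  // corrupt chunk, or no checksum recorded for it
        }
    }
    return true;
}
```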