- Introduce a simpler substitute for `flat_mutation_reader`-resulting-from-a-downgrade that is adequate for the remaining uses but is _not_ a full-fledged reader (does not redirect all logic to an `::impl`, does not buffer, does not really have `::peek()`), so hopefully carries a smaller performance overhead. The name `mutation_fragment_v1_stream` is kind of a mouthful but it's the best I have
- (not tests) Use the above instead of `downgrade_to_v1()`
- Plug it in as another option in `mutation_source`, in and out
- (tests) Substitute deliberate uses of `downgrade_to_v1()` with `mutation_fragment_v1_stream()`
- (tests) Replace all the previously-overlooked occurrences of `mutation_source::make_reader()` with `mutation_source::make_reader_v2()`, or with `mutation_source::make_fragment_v1_stream()` where deliberate or still required (see below)
- (tests) This series still leaves some tests with `mutation_fragment_v1_stream` (i.e. at v1) where not called for by the test logic per se, because another missing piece of work is figuring out how to properly feed `mutation_fragment_v2` (i.e. range tombstone changes) to `mutation_partition`. While that is not done (and I think it's better to punt on it in this PR), we have to produce `mutation_fragment` instances in tests that `apply()` them to `mutation_partition`, thus we still use downgraded readers in those tests
- Remove the `flat_mutation_reader` class and things downstream of it
Fixes #10586
Closes #10654
* github.com:scylladb/scylla:
fix "ninja dev-headers"
flat_mutation_reader is dead
tests: downgrade_to_v1() -> mutation_fragment_v1_stream()
tests: flat_reader_assertions: refactor out match_compacted_mutation()
tests: ms.make_reader() -> ms.make_fragment_v1_stream()
repair/row_level: mutation_fragment_v1_stream() instead of downgrade_to_v1()
stream_transfer_task: mutation_fragment_v1_stream() instead of downgrade_to_v1()
sstables_loader: mutation_fragment_v1_stream() instead of downgrade_to_v1()
mutation_source: add ::make_fragment_v1_stream()
introduce mutation_fragment_v1_stream
tests: ms.make_reader() -> ms.make_reader_v2()
tests: remove test_downgrade_to_v1_clear_buffer()
mutation_source_test: fix indentation
tests: remove some redundant calls to downgrade_to_v1()
tests: remove some to-become-pointless ms.make_reader()-using tests
tests: remove some to-become-pointless reader downgrade tests
The test sometimes fails because the order of rows in the SELECT results
depends on how stream IDs for the different partition keys get generated.
In some runs the stream ID for pk=1 may go before the stream ID for
pk=4, in some runs the other way.
The fix is to use the same partition key but different clustering keys
for the different rows.
Refs: #10601
Closes #10718
Replace:
Compressed chunk checksum mismatch at chunk {}, offset {}, for chunk of size {}: expected={}, actual={}
With:
Compressed chunk checksum mismatch at offset {}, for chunk #{} of size {}: expected={}, actual={}
This is a follow-up for #10693. Also bring the uncompressed chunk
checksum check messages up to date with the compressed one (which #10693
forgot to do).
Another change included is merging the advancement of the chunk index
with the iteration over the chunks, so we don't maintain two counters
(one in the iterator and an explicit one).
Closes #10715
To avoid a discrepancy about the underlying generation type once something other than an integer is allowed for the sstable generation.
This also simplifies one generic writer interface for sealing sstable statistics.
Closes #10703
* github.com:scylladb/scylla:
sstables: Use generation_type for compaction ancestors
sstables: Make compaction ancestors optional when sealing statistics
At this point, none of the remaining uses of
`flat_mutation_reader` (all of which are results of calling
`downgrade_to_v1()` anyway) actually need a full-featured flat
mutation reader with its own separate buffer etc.
`mutation_fragment_v1_stream` can only be constructed by wrapping a
`flat_mutation_reader_v2`, contains enough functionality for the
remaining consumers of `mutation_fragment_v1` sources and unit tests
and no more, and does not buffer.
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
The projected limited replacement of the downgraded v1 mutation reader
will not do its own buffering, so this test will be pointless.
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
mutation_source objects are going to be created only from v2 readers, and
the ::make_reader() method family is scheduled for removal.
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
Let's also use generation_type for compaction ancestors, so once we
support something other than an integer for the SSTable generation, we
won't have a discrepancy about what the generation type is.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Compaction ancestors are only available in versions older than mx,
therefore we can make them optional in seal_statistics(). The motivation
is that the mx writer will no longer call sstable::compaction_ancestors(),
whose return type will soon be changed to generation_type, so the
returned value can be something other than an integer, e.g. a UUID.
We could remove compaction ancestors from the seal_statistics() interface
entirely, but given that most generic write functions still work for older
versions, in case there were still a writer for them, I decided not to do
it now.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Fixes #10489
Killing the CDC log table on CDC disable is unhelpful in many ways,
partly because it can cause random exceptions on nodes trying to
do a CDC-enabled write at the same time as log table is dropped,
but also because it makes it impossible to collect data generated
before CDC was turned off, but which is not yet consumed.
Since the data should be TTL'd anyway, retaining the table should not
really add any overhead beyond the compaction needed to eventually clear
it. And if the user set TTL=0 (disabled), then they are already
responsible for clearing out the data.
This also has the nice feature of meshing with the alternator streams
semantics.
Closes #10601
The change
- adds a test which exposes a problem of a peculiar setup of
tombstones that trigger a mutation fragment stream validation exception
- fixes the problem
Applying tombstones in the order:
range_tombstone_change pos(ck1), after_all_prefixed, tombstone_timestamp=1
range_tombstone_change pos(ck2), before_all_prefixed, tombstone=NONE
range_tombstone_change pos(NONE), after_all_prefixed, tombstone=NONE
Leads to swapping the order of mutations when written and read from
disk via sstable writer. This is caused by conversion of
range_tombstone_change (in memory representation) to range tombstone
marker (on disk representation) and back.
When this mutation stream is written to disk, the range tombstone
markers' type is calculated based on the relationship between
range_tombstone_changes. The RTC series as above produces markers
(start, end, start). When the last marker is loaded from disk, its kind
gets incorrectly loaded as before_all_prefixed instead of
after_all_prefixed. This leads to an incorrect order of mutations.
The solution is to skip writing a new range_tombstone_change with empty
tombstone if the last range_tombstone_change already has empty
tombstone. This is redundant information and can be safely removed,
while the logic of encoding RTCs as markers doesn't handle such
redundancy well.
Closes #10643
I noticed that `column_condition` (used in LWT `IF` clause) supports lists.
As part of the Grand Expression Unification we'll need to migrate that to
expressions, so we'll need to support list subscripts.
Use the opportunity to relax the normal filtering to allow filtering on
list subscripts: `WHERE my_list[:index] = :value`.
Closes #10645
* github.com:scylladb/scylla:
test: cql-pytest: add test for list subscript filtering
doc: document list subscripts usable in WHERE clause
cql3: expr: drop restrictions on list subscripts
cql3: expr: prepare_expr: support subscripted lists
cql3: expressions: reindent get_value()
cql3: expression: evaluate() support subscripting lists
coroutine::parallel_for_each avoids an allocation and is therefore preferred. The lifetime
of the function object is less ambiguous, and so it is safer. Replace all eligible
occurrences (i.e. where the caller is a coroutine).
One case (storage_service::node_ops_cmd_heartbeat_updater()) needed a little extra
attention since there was a handle_exception() continuation attached. It is converted
to a try/catch.
Closes #10699
This two-patch series makes two improvements to configure.py:
The first patch fixes, yet again, issue #4706 where interrupting ninja's rebuild of build.ninja can leave it without any build.ninja at all. The patch uses a different approach from the previous pull-request #10671 that aimed to solve the same problem.
The second patch makes the output of configure.py more reproducible, not resulting in a different random order every time. This is useful especially when debugging configure.py and wanting to check if anything changed in its output.
Closes #10696
* github.com:scylladb/scylla:
configure.py: make build.ninja the same every time
configure.py: don't delete build.ninja when rebuild is interrupted
* seastar 96bb3a1b8...2be9677d6 (37):
> Merge 'stream_range_as_array: always close output stream' from Benny Halevy
Fixes #10592
> net/api: add "server_socket::is_listening()"
> src/net/proxy: remove unused variable
> coroutine: parallel_for_each: relax contraints
> native-stack: do not use 0 as ip address if !_dhcp
> coroutine: fix a typo in comment
> std-coroutine: include for LLVM-14
> tutorial: use non-variadic version of when_all_succeed()
> scripts: Fix build.sh to use new --c++-standard config option
> core/thread: initialize work::pr and work::th explicitly
> util/log-impl: remove "const" qualifier in return type
> map_reduce: remove redundant move() in return statement
> util: mark unused parameter with [[maybe_unused]]
> drop unused parameters
> build: use "20" for the default CMAKE_CXX_STANDARD
> build: make CMAKE_CXX_STANDARD a string
> utils: log: don't crash on allocation failure while extending log buffer
> tests: unix_domain_test: fix thread/future confusion in client_round()
> compat: do not use std::source_location if it is broken
> build: use CMAKE_CXX_STANDARD instead of Seastar_CXX_DIALECT
> Merge 'Add hello-world demo from tutorial' from Pavel
> rpc_tester: Put client/server sides into correct sched groups
> reactor_backend: Use _r reference, not engine() method
> future.hh: #include std-compat.hh for SEASTAR_COROUTINES_ENABLED
> Merge "Add more CPU-hog facilities to RPC-tester" from Pavel E
> Merge "io: Enlighten queued_request" from Pavel E
> Correct swapped AIO detection/setup calls
> sharded: De-duplicate map-reduce overloads
> file: don't trample on xfs flags when setting xfs size hint
> Merge "Per-class IO bandwidth limits" from Pavel E
> Merge 'sstring: fix format and optimize the performance of sstring::find().' from Jianyong Chen
> reactor_backend: Mark reactor_backend_aio::poll() private
> scripts/build.sh: Mind if not running on a terminal
> test, rpc: Don't work with large buffers
> test, futures: Don't expect ready future to resolve immediately
> source_location compatibility: Fix an unused private field error when treat warning as errors
> file: Remove try-catch around noexcept calls
Currently it works, but the newer version of seastar's map_reduce()
is compiled in a way that triggers a use-after-free on accessing the
captured value.
tests: unit(dev), unit.alternator(debug on v1)
Fixes #10689
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20220523095409.6078-1-xemul@scylladb.com>
When logging a failed checksum on a compressed chunk.
Currently, only the offset is logged, but the index of the chunk whose
checksum failed to validate is also interesting.
Closes #10693
In several places, configure.py uses unsorted sets which results in
its output being in different order every time - both a different
order of targets, and a different order in dependencies of each
target.
This is both strange, and annoying when trying to debug configure.py
and trying to understand when, if at all, its output changes.
So in this patch, we use "sorted(...)" in the right places that
are needed to guarantee a fixed order. This fixed order is alphabetical,
but that's not the goal of this patch - the goal is to ensure a fixed
order.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
In commit 9cc9facbea, I fixed issue #4706.
That issue is about what happens when interrupting a rebuild of build.ninja
(which happens automatically when you run "ninja" after configure.py
changed). We don't want to leave behind a half-built build.ninja,
or leave it deleted.
The solution in that commit was for configure.py to build a temporary file
(build.ninja.tmp), and only as the very last step rename it to build.ninja.
Unfortunately, since that time, we added more last steps after what
used to be that very last step :-(
If this new code running after the rename takes a noticeable amount of
time, and if the user is unlucky enough to interrupt it during that
time, ninja will see a modified output file (build.ninja) and a failed
rule, and will delete the output file!
The solution is to move the rename out of configure.py. Instead, we
add a "--out=filename" option to configure.py which allows it to write
directly to a different file name, not build.ninja. When rebuilding
build.ninja, the rule will now call configure.py with "--out=build.ninja.new"
and then rename it back to build.ninja. Any failure or interrupt at any
stage of configure.py will leave build.ninja untouched, so ninja will
not delete it - it will just delete the temporary build.ninja.new.
Fixes #4706 (again)
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
We modify the `reconfigure` and `modify_config` APIs to take a vector of
<server_id, bool> pairs (instead of just a vector of server_ids), where
the bool indicates whether the server is a voter in the modified config.
The `reconfiguration` operation would previously shuffle the set of
servers and split it into two parts: members and non-members. Now it
partitions it into three parts: voters, non-voters, and non-members.
The PR also includes fixes for some liveness problems stumbled upon
during testing.
Closes #10640
* github.com:scylladb/scylla:
test: raft: randomized_nemesis_test: include non-voters during reconfigurations
raft: server: if `add_entry` with `wait_type::applied` successfully returns, ensure `state_machine::apply` is called for this entry
raft: tracker: fix the definition of `voters()`
raft: when printing `raft::server_address`, include `can_vote`
The message only dumps the last sealed fragment, but it should dump
all the output fragments that replace the exhausted input ones.
Let's print the origin of output fragments, so we can distinguish between
files with compaction and garbage-collection origin.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20220524232232.119520-1-raphaelsc@scylladb.com>
"
Azure snitch tries to replicate dc/rack info from all shards to all
other shards. This may lead to use-after-free when shard A gets "this"
from shard B, starts copying its _dc field, and then shard A destructs
its _dc from under B because it's receiving one from shard C.
Also polish the replication code a little bit while at it.
"
* 'br-azure-snitch-serialize' of https://github.com/xemul/scylla:
snitch: Use invoke_on_others() to replicate
snitch: Merge set_my_dc and set_my_rack into one
azure_snitch: Do nothing on non-io-cpu
Restriction validation forbids lists (somewhat oddly, it talks about
indexes; validation should make a soft check about indexes (since it
can fall back to filtering) and a hard check about supported filtering
expressions), and enforces a map in another place. Remove the first
restriction and relax the second to allow lists as well as maps as
subscript operands.
Some validation messages are adjusted to reflect that lists are supported.
Infer the type of a list index as int32_type.
The error message when a non-subscriptable type is provided is
changed, so the corresponding test is changed too.
We already support subscripting maps (for filtering WHERE m[3] = 6),
so adding list subscript support is easy. Most of the code is shared.
Differences are:
- internal list representation is a vector of values, not of key/values
- key type is int32_type, not defined by map
- need to check index bounds
is not supported' from Nadav Har'El
This small series implements the DescribeTimeToLive and
DescribeContinuousBackups operations in Alternator. Even though the
corresponding features aren't implemented, implementing just the
Describe operations can help applications, by letting them discover that
these features are in fact currently disabled.
Fixes #10660
Closes #10670
* github.com:scylladb/scylla:
alternator: remove dead code
alternator: implement DescribeContinuousBackups operation
alternator: allow DescribeTimeToLive even without TTL enabled
This patch contains five tests which reproduce three old bugs in
Scylla's handling of multi-column restrictions like (c1,c2)<(1,2).
These old bugs are:
Refs #64 (yes, a two-digit issue!)
Refs #4244
Refs #6200
The three github issues are closely intertwined, exposing the same
or similar bugs in our internal implementation, and I suspect that
eventually most of them could be fixed together.
In writing these tests, I carefully read all three issues and the
various failure scenarios described in them, tried to distill and
simplify the scenarios, and also consider various other broken
variants of the scenarios. The resulting tests are heavily commented,
explaining the motivation of each test and exactly which of the
above bugs it reproduces.
All five tests included in this patch pass on Cassandra and currently
fail on Scylla, so are marked "xfail".
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #10675
It's an `int64_t` that needs to be explicitly initialized, otherwise the
value is undefined.
This is probably the cause of #10639, although I'm not sure - I couldn't
reproduce it (the bug is dependent on how the binary is compiled, so
that's probably why). We'll see if it reproduces with this fix, and if
it does, we'll close the issue.
Closes #10681