When there is nothing to read, make_flat_reader() returns an empty (no-op)
reader. But it turns out that constructing, combining and destroying that
empty reader is quite costly.
As an optimization, add an alternative version which returns an empty optional
instead.
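The shape of the change can be sketched roughly as follows; the names here are illustrative stand-ins, not the actual Scylla reader API:

```cpp
#include <optional>
#include <string>

// Hedged sketch: instead of constructing a costly no-op reader, the
// alternative factory returns an empty optional when there is nothing
// to read. `flat_reader` here is a toy placeholder type.
struct flat_reader {
    std::string data;
};

// Old shape: always constructs a reader, even a no-op one.
inline flat_reader make_flat_reader(const std::string& src) {
    return flat_reader{src};
}

// New shape: callers check the optional and skip the construction,
// combination and destruction of an empty reader entirely.
inline std::optional<flat_reader> make_flat_reader_opt(const std::string& src) {
    if (src.empty()) {
        return std::nullopt;
    }
    return flat_reader{src};
}
```

Callers then branch on `has_value()` and only pay for a reader when there is actual data behind it.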
Commit 1c99ed6ced added tracing logs
about the index chosen for the query, but aggregate queries have
a separate code path, which wasn't taken into account.
After this patch, tracing for aggregate queries also includes
this additional information.
Closes #10195
interrupt() makes it sound like it's interrupting the compaction, but it's
actually called *on* interrupt, to handle the interrupt scenario.
Let's rename it to on_interrupt().
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20220311000128.189840-1-raphaelsc@scylladb.com>
Changing the capture list of a lambda in
forward_service::execute_on_this_shard from [&] to an explicit one
improves readability and prevents potential bugs.
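A minimal illustration of the point, with hypothetical names rather than the real forward_service code:

```cpp
#include <vector>

// Sketch: an explicit capture list documents exactly which locals the
// lambda uses, where [&] silently captures everything by reference and
// can hide accidental (possibly dangling) captures.
inline std::vector<int> collect_per_shard(int shard_count) {
    std::vector<int> results;
    // Explicit captures: shard_count by value, results by reference.
    auto on_shard = [shard_count, &results](int shard) {
        results.push_back(shard_count * 100 + shard);
    };
    for (int s = 0; s < shard_count; ++s) {
        on_shard(s);
    }
    return results;
}
```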
Closes #10191
The services' configuration should be performed with the help of
service-specific config that's filled by the service creator. This
is not the case for gossiper that grabs the db::config and keeps
reference on it throughout its lifetime.
This set brings the gossiper configuration to the described form
by putting the needed config bits onto gossip_config (that already
exists and is partially used for gossiper configuration). Two
live-updateable options need extra care.
tests: unit(dev), dtest.simple_boot_shutdown(dev)
* 'br-gossiper-no-db-config' of https://github.com/xemul/scylla:
gossiper: Remove db::config reference from gossiper
gossiper: Keep live-updateable options on gossiper
gossiper: Keep immutable options on gossip_config
Although its API was long since converted to v2, its implementation stayed v1
because the memtable and mutation API were still v1. Now that the
memtable flush returns a v2 reader we can have a second look at
converting this. While the mutation API still uses v1, this can easily
be worked around by going through `mutation_rebuilder_v2`.
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20220302145945.189607-1-bdenes@scylladb.com>
The series overhauls the compaction_manager::task design and implementation
by properly layering the functionality between the compaction_manager
that deals with generic task execution, and the per-task business logic that is defined
in a set of classes derived from the generic task class.
While at it, the series introduces `task::state` and a set of helper functions to manage it
to prevent leaks in the statistics, fixing #9974.
Two more stats counters were exposed: `completed_tasks` and the new `postponed_tasks`.
Test: sstable_compaction_test
Dtest: compaction_test.py compaction_additional_test.py
Fixes #9974
Closes #10122
* github.com:scylladb/scylla:
compaction_manager: use coroutine::switch_to
compaction_manager::task: drop _compaction_running
compaction_manager: move per-type logic to derived task
compaction_manager: task: add state enum
compaction_manager: task: add maybe_retry
compaction_manager: reevaluate_postponed_compactions: mark as noexcept
compaction_manager: define derived task types
compaction_manager: register_metrics: expose postponed_compactions
compaction_manager: register_metrics: expose failed_compactions
compaction_manager: register_metrics: expose _stats.completed_tasks
compaction: add documentation for compaction_type to string conversions
compaction: expose to_string(compaction_type)
compaction_manager: task: standardize task description in log messages
compaction_manager: refactor can_proceed
compaction_manager: pass compaction_manager& to task ctor
compaction_manager: use shared_ptr<task> rather than lw_shared_ptr
compaction_manager: rewrite_sstables: acquire _maintenance_ops_sem once
compaction_manager: use compaction_state::lock only to synchronize major and regular compaction
Saving an allocation for running the functor
as a task in the switched-to scheduling group.
Also, switch to the desired scheduling group at
the beginning of the task so that the higher-level logic,
like getting the list of sstables to compact,
is performed under the desired scheduling group,
not only the compaction code itself.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Replace the _compaction_running boolean member
with the calculation _state == state::active,
now that setup_new_compaction switches the state to
`active`.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Move the business logic into the task-specific classes,
separating initialization, performed during task construction,
from the compaction work, which moves into a do_run() method.
In some cases, a lambda function that was called per table
(as in rewrite_sstables) is moved into a private method of the
derived class.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Add an enum class representing the task state machine
and a switch_state function to transition between the states
and update the corresponding compaction_manager stats counters.
Refs #9974
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
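The pattern described here, a state enum plus a single transition helper that keeps the stats counters consistent, can be sketched as follows; the names and counters are illustrative, not the actual compaction_manager API:

```cpp
#include <cstdint>

// Hedged sketch of a task state machine whose transitions update
// stats counters in one place, so counters cannot leak even if a
// task is abandoned midway.
enum class task_state { none, pending, active, done };

struct stats {
    int64_t pending_tasks = 0;
    int64_t active_tasks = 0;
    int64_t completed_tasks = 0;
};

struct task {
    task_state _state = task_state::none;
    stats& _stats;

    explicit task(stats& s) : _stats(s) {}

    void switch_state(task_state new_state) {
        // Undo the accounting for the state we leave...
        switch (_state) {
        case task_state::pending: --_stats.pending_tasks; break;
        case task_state::active:  --_stats.active_tasks;  break;
        default: break;
        }
        // ...and apply it for the state we enter.
        switch (new_state) {
        case task_state::pending: ++_stats.pending_tasks;   break;
        case task_state::active:  ++_stats.active_tasks;    break;
        case task_state::done:    ++_stats.completed_tasks; break;
        default: break;
        }
        _state = new_state;
    }
};
```

Centralizing the counter updates in one transition function is what prevents the statistics leaks mentioned above.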
Reading data from sstables without compacting first puts
unnecessary pressure on the cache. The mutation streams
need to be resolved anyway before passing to subsequent
consumers, so it's better to do it as close to the
source as possible.
Fixes: #3568
Closes #10188
"
This patch-set converts the sstable writer to v2, then prepares the
ground for users actually being able to use the v2 variant. Finally it
converts all users to do so and then decommissions the v1 variant.
For users to be able to use the v2 writer API, we first have to add a v2
output to the compactor first, as some users write to sstables via the
compactor.
Tests: unit(dev, release)
"
* 'sstable-writer-v2/v2' of https://github.com/denesb/scylla:
sstables/sstable: remove now unused v1 write_components() variant
mutation_compactor: remove now unused compact_for_compaction
test/boost/mutation_test: migrate to compact_for_mutation_v2
streaming: migrate to v2 variant of sstable writer API
memtable-sstable: migrate to v2 variant of sstable writer API
test: migrate to the v2 variant of the sstable writer API
sstables/sstable: expose v2 variant of write_components()
sstables: convert mx writer to v2
sstables/metadata_collector: use position_in_partition for min/max keys
test/boost/mutation_test: test_compactor_range_tombstone_spanning_many_pages extend to check v2 output too
mutation_reader: convert compacting reader v2
mutation_compactor: add v2 output
mutation_compactor: make _last_clustering_pos track last input
range_tombstone_change: add set_tombstone()
test/lib/mutation_source_test: log name of each run_mutation_source()
To be used in the next patch to generate
a string description from the compaction_type.
In theory, we could use compaction_name(),
but the latter returns the compaction type
in all-uppercase, which is very different from
what we print to the log today. The all-uppercase
strings are used for the API layer, e.g. to
stop tasks of a particular compaction type.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Define task::describe and use it via operator<<
to print the task metadata to the log in a standard way.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Move the task-internal parts of can_proceed
to a respective compaction_manager::task method,
preparing for turning it into a class with
a proper hierarchy of access to private members.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
And use it to get the compaction state of the
table to compact.
It will be used in a later patch
to manage the task state from task
methods.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Like all other maintenance operations, acquire the _maintenance_ops_sem
once for the whole task, rather than for each sstable.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Maintenance operations like cleanup, upgrade, reshape, and reshard
are serialized with major compaction using the _maintenance_ops_sem,
so they need no further synchronization with regular compaction
by acquiring the per-table read lock.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The sstables::sstable class has two methods for writing sstables:
1) sstable_writer get_writer(...);
2) future<> write_components(flat_mutation_reader, ...);
(1) directly exposes the writer type, so we have to update all its users
(there are not that many) in this same patch. We defer updating
users of (2) to follow-up commits.
Instead of naked clustering keys. Working with the latter is dangerous
because it cannot accurately represent the entire clustering domain: it
cannot represent positions between (before/after) keys. For this reason
the metadata collector had a separate update_min_max_components()
overload for range tombstones because the positions of these cannot be
represented by clustering keys alone.
Moving to position_in_partition solves this problem and it is now enough
to have a single overload with position_in_partition_view. This is also
more future-proof as it will work with range tombstone changes without
any additional changes.
The output version is selected via compactor_output_format, which is a
template parameter of `compact_mutation_state` and all downstream types.
This is to ensure a compaction state created to emit a v2 stream will
not be accidentally used with a v1 consumer.
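The compile-time coupling described here can be sketched like this; the type and function names are illustrative, not the real mutation_compactor code:

```cpp
// Hedged sketch: making the output format a template parameter of the
// compaction state means wiring a v2 state to a v1 consumer fails to
// compile, instead of misbehaving at runtime.
enum class compactor_output_format { v1, v2 };

template <compactor_output_format F>
struct compact_mutation_state {
    static constexpr compactor_output_format format = F;
};

template <compactor_output_format F>
struct stream_consumer {
    // Only a state of the matching format can be attached; a
    // mismatched format is a type error.
    bool attach(const compact_mutation_state<F>&) { return true; }
};
```

Attempting `stream_consumer<compactor_output_format::v1>{}.attach(compact_mutation_state<compactor_output_format::v2>{})` would be rejected by the compiler, which is the safety property the paragraph describes.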
When using a v2 output, the current active tombstone has to be tracked
separately for the regular and for the gc consumer (if any), so that
each can be closed properly on EOS. The current effective tombstone is
tracked separately from these two. The reason is that purged tombstones
are still applied to data, but are not emitted to the regular consumer.
Instead of updating _last_clustering_pos whenever a clustering fragment
is pushed to the consumers, we now update it whenever a clustering
fragment enters the compactor. Not only is this much more robust, but it
also makes more sense. Just because a range tombstone is purged (and
therefore the consumer doesn't see it), it still moves the logical
clustering position in the stream. Also, tracking the input side avoids
any ambiguity related to cases where we have two consumers (regular + gc
consumer).
Although we have a log in run_mutation_reader_tests(), it is useful to
know where it was called from, when trying to find the test scenario
that failed.
"
Add per-action help content for each action. Main description now points
to these for more details.
"
* 'scylla-types-improvements/v1' of https://github.com/denesb/scylla:
tools/types: update main description
tools/scylla-types: per-action help content
tools/scylla-types: description: remove -- from action listing
tools/scylla-types: use fmt::print() instead of std::cout <<
"
Also convert the foreign_reader used by it in the process.
Tests: unit(dev)
"
* 'multishard-writer-v2/v1' of https://github.com/denesb/scylla:
mutation_writer/multishard_writer: remove now unused v1 factory overloads
test/boost/mutation_writer_test: test the v2 variant of distribute_reader_and_consume_on_shards()
flat_mutation_reader: add v2 variant of make_generating_reader()
mutation_reader: multishard_writer: migrate implementation to v2
mutation_reader: convert foreign_reader to v2
streaming/consumer: convert to v2
mutation_writer/multishard_writer: add v2 variant of distribute_reader_and_consume_on_shards()
It was a suggestion from @psarna, done to get more info about the abort from #10174.
Closes #10185
* github.com:scylladb/scylla:
query: do not assert in `operator<<(ostream&, const forward_result::printer&)`
query: transform asserts into on_internal_error in forward_result::merge
There are two issues with current implementation of remove/remove_if:
1) If it happens concurrently with get_ptr(), the latter may still
populate the cache using value obtained from before remove() was
called. remove() is used to invalidate caches, e.g. the prepared
statements cache, and the expected semantic is that values
calculated from before remove() should not be present in the cache
after invalidation.
2) As long as there is any active pointer to the cached value
(obtained by get_ptr()), the old value from before remove() will be
still accessible and returned by get_ptr(). This can make remove()
have no effect indefinitely if there is persistent use of the cache.
One of the user-perceived effects of this bug is that some prepared
statements may not get invalidated after a schema change and still use
the old schema (until the next invalidation). If the schema change
modified a UDT, this can cause statement execution failures: the CQL
coordinator will try to interpret bound values using the old set of
fields. If the driver uses the new schema, the coordinator will fail
to process the value with the following exception:
User Defined Type value contained too many fields (expected 5, got 6)
The patch fixes the problem by making remove()/remove_if() erase old
entries from _loading_values immediately.
The predicate-based remove_if() variant also has to invalidate values
which are concurrently loading, to be safe: the predicate cannot be
evaluated on values which are not ready. This may invalidate some
values unnecessarily, but I think it's fine.
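A simplified single-threaded sketch of the fixed semantics follows; the names are illustrative and the real cache also handles asynchronous loading and shared pointers to values:

```cpp
#include <functional>
#include <map>
#include <string>

// Hedged sketch: remove() erases the entry from _loading_values
// immediately, so later lookups miss and reload rather than seeing a
// value computed from before the invalidation.
struct loading_cache {
    std::map<std::string, std::string> _loading_values;

    void put(std::string key, std::string value) {
        _loading_values[std::move(key)] = std::move(value);
    }

    // Erase immediately: readers that already hold the old value keep
    // it, but the cache itself forgets it right away.
    void remove(const std::string& key) {
        _loading_values.erase(key);
    }

    // Predicate-based variant; in the real fix, entries that are still
    // loading are dropped too, since the predicate cannot be evaluated
    // on values which are not ready.
    void remove_if(const std::function<bool(const std::string&)>& pred) {
        for (auto it = _loading_values.begin(); it != _loading_values.end();) {
            if (pred(it->second)) {
                it = _loading_values.erase(it);
            } else {
                ++it;
            }
        }
    }

    const std::string* find(const std::string& key) const {
        auto it = _loading_values.find(key);
        return it == _loading_values.end() ? nullptr : &it->second;
    }
};
```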
Fixes #10117
Message-Id: <20220309135902.261734-1-tgrabiec@scylladb.com>
It was done to show more context in case of forward_result::merge
arguments size mismatch, and also to prevent aborts caused by other
nodes sending malformed data.