The do_with() means we have an unconditional allocation, so we can
justify the coroutine's allocation (which replaces it). Meanwhile,
coroutine::parallel_for_each() saves an allocation if mutate_locally()
blocks.
Closes #10387
gcc 12 checks some things that clang doesn't, resulting in compile errors.
This series fixes some of these issues, but still builds (and tests) with clang.
Unfortunately, we still don't have a clean gcc build due to an outstanding bug [1].
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98056
Closes #10386
* github.com:scylladb/scylla:
build: disable warnings that cause false-positive errors with gcc 12
utils: result_loop: remove invalid and incorrect constraint
service: forward_service: avoid using deprecated std::bind1st and std::not1
repair: explicitly ignore tombstone gc update response
treewide: abort() after switch in formatters
db: view: explicitly ignore unused result
compaction: leveled_compaction_strategy: avoid compares between signed and unsigned
compaction_manager: compaction_reenabler: disambiguate compaction_state
api: avoid function specialization in req_param
alternator: ttl: avoid specializing class templates in non-namespace scope
alternator: executor: fix signed/unsigned comparison in is_big()
Currently, rpc handlers are all lambdas inside
storage_proxy::init_messaging_service(). This means any stack trace
refers to storage_proxy::init_messaging_service::lambda#n instead of
a meaningful function name, and it makes init_messaging_service()
very intimidating.
Fix that by moving all such lambdas to regular member functions. The
first two patches remove unnecessary captures to make it easy; the
final patch converts the lambdas to member functions.
Closes #10388
* github.com:scylladb/scylla:
storage_proxy: convert rpc handlers from lambdas to member functions
storage_proxy: don't capture messaging_service in server callbacks
storage_proxy: don't capture migration_manager in server callbacks
We currently cannot get any error message from a subprocess when
capture_output=True is passed to subprocess.run().
This is because CalledProcessError does not print stdout/stderr when it is
raised, and we don't catch the exception; we just let Python print a traceback.
As a result, we can only see the exit status and the failed command, but
not stdout/stderr.
This is especially problematic when working on a perftune.py bug, since the
script's traceback is captured and we are never able to see it.
To resolve this, add a wrapper function "out()" that captures output and
prints stdout/stderr along with the error message.
Fixes #10390
Closes #10391
Checking a concept in a requires-expression requires an additional
requires keyword. Moreover, the constraint is incorrect (at least
all callers pass a T, not a result<T>), so remove it.
Found by gcc 12.
It is typical in switch statements to select on an enum type and
rely on the compiler to complain if an enum value was missed. But
gcc isn't satisfied, since the enum could have a value outside the
declared list. Call abort() in this impossible situation to pacify
it.
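A minimal sketch of the pattern, with a made-up enum (not any specific Scylla formatter):

```cpp
#include <cstdlib>
#include <string_view>

enum class op { read, write };

// The switch covers every declared enumerator, but gcc 12 still warns that
// control can reach the end of a non-void function, since an enum may hold
// an out-of-range value. abort() after the switch pacifies the warning.
std::string_view format_op(op o) {
    switch (o) {
    case op::read:  return "read";
    case op::write: return "write";
    }
    abort();  // unreachable for valid enum values
}
```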
Function specializations are not allowed (you're supposed to use
overloads), but clang appears to allow them.
Here, we can't use an overload since the type doesn't appear in the
parameter list. Use a constraint instead.
The C++ standard disallows class template specialization in non-namespace
scopes. Clang apparently allows it as an extension.
Fix by not using a template - there are just two specializations and
no generic implementation. Use regular classes and std::conditional_t
to choose between the two.
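A sketch of the approach with made-up classes (not the actual alternator TTL code):

```cpp
#include <string>
#include <type_traits>

// Specializing a member class template inside a class is ill-formed, so
// define two ordinary classes and select one with std::conditional_t.
struct seconds_repr { static std::string unit() { return "s"; } };
struct millis_repr  { static std::string unit() { return "ms"; } };

template <bool UseMillis>
using expiration_repr = std::conditional_t<UseMillis, millis_repr, seconds_repr>;
```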
Signed/unsigned comparisons are subject to C promotion rules. In this
case the comparison in is_big() is safe, but gcc warns. Use a cast to
silence the warning.
The signed/unsigned mix and int/size_t size differences still look bad; it
would be good to revisit this code, but that is left for another patch.
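A sketch of the pattern (the signature is hypothetical, not the actual is_big()):

```cpp
#include <cstddef>

// Comparing int with size_t triggers gcc's -Wsign-compare; when the signed
// value is known to be non-negative, a cast makes the intent explicit and
// silences the warning.
bool is_big_sketch(int item_count, size_t threshold) {
    return static_cast<size_t>(item_count) > threshold;
}
```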
Series 59d56a3fd7 introduced
an accidental backward-incompatible regression by adding
a column to system_schema.keyspaces and then not even using
it for anything. It's a leftover from the original hackathon
implementation and should never have reached master in the first place.
Fortunately, the series isn't part of any stable release yet.
Fixes #10376
Tests: manual, verifying that the system_schema.keyspaces table
no longer contains the extraneous column.
Closes #10377
Currently, rpc handlers are all lambdas inside
storage_proxy::init_messaging_service(). This means any stack trace
refers to storage_proxy::init_messaging_service::lambda#n instead of
a meaningful function name, and it makes init_messaging_service()
very intimidating.
Fix that by moving all such lambdas to regular member functions.
This is easy now that they don't capture anything except `this`,
which we provide during registration via std::bind_front().
A few #includes and forward declarations had to be added to
storage_proxy.hh. This is unfortunate, but can only be solved
by splitting storage_proxy into a client part and a server part.
We'd like to make the server callbacks member functions, rather
than lambdas, so we need to eliminate their captures. This patch
eliminates 'ms' by referring to the already existing member '_messaging'
instead.
We'd like to make the server callbacks member functions, rather
than lambdas, so we need to eliminate their captures. This patch
eliminates 'mm' by making it a member variable and capturing 'this'
instead. In one case 'mm' was used by a handle_write() intermediate
lambda so we have to make that non-static and capture it too.
uninit_messaging_service() clears the member variable to preserve
the same lifetime 'mm' had before, in case that's important.
* seastar acf7e3523b...5e86362704 (10):
> Merge "Respect taskset-configured cpumask" from Pavel E
Ref #9505.
> rpc_tester: Run CPU hogs on server side too
> std-coroutine: include <coroutine> for LLVM-15
> Revert "Merge "tests: perf: measure coroutines performance" from Benny"
> test: perf_tests: remove [[gnu::always_inline]] attribute from coroutine perf tests
> Merge "tests: perf: measure coroutines performance" from Benny
> Merge "Extend RPC tester" from Pavel E
> rpc: Mark connection trivial getters const noexcept
> seastar-addr2line: Allow use of llvm-addr2line as the command
> file: append_challenged_posix_file: Serialize allocate() to not block concurrent reads or writes
We start the memory threshold guard (that enables large memory allocation
warnings post-boot) but don't wait for it. I can't imagine it can hurt,
but it does carry a FIXME label.
Closes #10375
This patch series splits up parts of the repair pipeline to allow unit testing
various bits of code without having to run the full dtest suite. The reason the
repair pipeline has no unit tests is that, by definition, repair requires multiple
nodes, while the unit test environment works only with a single node.
However, it is possible to explicitly define interfaces between various parts of the
pipeline, inject dependencies and test them individually. This patch series is focused
on taking repair_rows_on_wire (frozen mutation representation of changes coming from
another node) and flushing them to an sstable.
The commits are split into the following parts:
- pulling out classes to separate headers so that they can be included (potentially indirectly) from the test,
- pulling out repair_meta::to_repair_rows_list and part of repair_meta::flush_rows_in_working_row_buf so that they can be tested,
- refactoring repair_writer so that the actual writing logic can be injected as dependency,
- creating the unit test.
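The injection point can be sketched roughly like this (class names are hypothetical, not the actual repair_writer::impl interface):

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// The writing logic hides behind an interface, so a unit test can swap the
// sstable writer for an in-memory sink and inspect what would hit disk.
struct mutation_sink {
    virtual ~mutation_sink() = default;
    virtual void write(std::string frozen_mutation) = 0;
};

class writer_sketch {
    std::unique_ptr<mutation_sink> _sink;
public:
    explicit writer_sketch(std::unique_ptr<mutation_sink> sink)
        : _sink(std::move(sink)) {}
    void flush_row(std::string row) { _sink->write(std::move(row)); }
};

// Test double used by the unit test.
struct memory_sink : mutation_sink {
    std::vector<std::string> written;
    void write(std::string m) override { written.push_back(std::move(m)); }
};
```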
tests: unit(dev), dtest(incremental_repair_test, read_repair_test, repair_additional_test, repair_test)
Closes #10345
* github.com:scylladb/scylla:
repair: Add unit test for flushing repair_rows_on_wire to disk.
repair: Extract mutation_fragment_queue and repair_writer::impl interfaces.
repair: Make parts of repair_writer interface private.
repair: Rename inputs to flush_rows.
repair: Make repair_meta::flush_rows a free function.
repair: Split flush_rows_in_working_row_buf to two functions and make one static.
repair: Rename inputs to to_repair_rows_list.
repair: Make to_repair_rows_list a free function.
repair: Make repair_meta::to_repair_rows_list a static function
repair: Fix indentation in repair_writer.
repair: Move repair_writer to separate header.
repair: Move repair_row to a separate header.
repair: Move repair_sync_boundary to a separate header.
repair: Move decorated_key_with_hash to separate header.
repair: Move row_repair hashing logic to separate class and file.
* 'raft_group0_early_startup_v3' of https://github.com/ManManson/scylla:
main: allow joining raft group0 before waiting for gossiper to settle
service: raft_group0: make `join_group0` re-entrant
service: storage_service: add `join_group0` method
raft_group_registry: update gossiper state only on shard 0
raft: don't update gossiper state if raft is enabled early or not enabled at all
gms: feature_service: add `cluster_uses_raft_mgmt` accessor method
db: system_keyspace: add `bootstrap_needed()` method
db: system_keyspace: mark getter methods for bootstrap state as "const"
"
Optimize consuming from a single partition.
This gives us significant improvement with single, small mutations,
as shown with perf_mutation_readers, compared to the vector-based
flat_mutation_reader_from_mutations_v2.
These are expected to be common on the write path,
and can be optimized for view building.
results from: perf_mutation_readers -c1 --random-seed=840478750
(userspace cpu-frequency governor, 2.2GHz)
test iterations median mad min max
Before:
combined.one_row 720118 825.668ns 1.020ns 824.648ns 827.750ns
After:
combined.one_mutation 881482 751.157ns 0.397ns 750.211ns 751.912ns
combined.one_row 843270 756.553ns 0.303ns 755.889ns 757.911ns
The grand plan is to follow up
with make_flat_mutation_reader_from_frozen_mutation_v2
so that we can read directly from either a mutation
or frozen_mutation without having to unfreeze it e.g. in
table::push_view_replica_updates.
Test: unit(dev)
Perf: perf_mutation_readers(release)
"
* tag 'flat_mutation_reader_from_mutation-v3' of https://github.com/bhalevy/scylla:
perf: perf_mutation_readers: add one_mutation case
test: mutation_query_test: make make_source static
mutation readers: refactor make_flat_mutation_reader_from_mutation*_v2
mutation readers: add make_flat_mutation_reader_from_mutation_v2
readers: delete slice_mutation.hh
test: flat_mutation_reader_test: mock_consumer: add debug logging
test: flat_mutation_reader_test: mock_consumer: make depth counter signed
"
There's a generic way to start-stop services in scylla, that includes
5 "actions" (some are optional and/or implicit though)
service_config cfg = ...
sharded<service>.start(cfg)
service.invoke_on_all(&service::start)
service.invoke_on_all(&service::shutdown)
service.invoke_on_all(&service::stop)
sharded<service>.stop()
and most of the services out there conform to that scheme. Not snitch
(spoiler: and not tracing), for which there are a couple of helpers that
do all that magic behind the scenes; "configuring" snitch is done with
the help of overloaded constructors. The latter is extra complicated
by the need to register snitch drivers in the class registry for each
constructor overload. Also there's an external shards synchronization
on stop.
This set brings snitch start/stop code to the described standard: the
create/stop helpers are removed, creation accepts the config structure,
and per-shard start/stop (snitch has no drain for now) happens in the
simple invoke-on-all manner.
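The five-action scheme above can be sketched without seastar roughly as follows (a toy stand-in for sharded<>, not the real API):

```cpp
#include <vector>

struct service_config { unsigned shards = 1; };

struct toy_service {
    bool running = false;
    void start() { running = true; }
    void shutdown() { running = false; }
};

// Toy stand-in for seastar::sharded<>: one instance per shard, with
// start/invoke_on_all/stop mirroring the lifecycle described above.
template <typename S>
struct toy_sharded {
    std::vector<S> instances;
    void start(const service_config& cfg) { instances.resize(cfg.shards); }
    void invoke_on_all(void (S::*fn)()) {
        for (auto& s : instances) {
            (s.*fn)();
        }
    }
    void stop() { instances.clear(); }
};
```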
The intended side effect of this change is the ability to add explicit
dependencies to snitch (in the future, not in this set).
tests: unit(dev)
"
* 'br-snitch-config' of https://github.com/xemul/scylla:
snitch: Remove create_snitch/stop_snitch
snitch: Simplify stop (and pause_io)
snitch: Move io_is_stopped to property-file driver
snitch: Remove init_snitch_obj()
snitch: Move instance creation into snitch_ptr constructor
snitch: Make config-based construction of all drivers
snitch: Declare snitch_ptr peering and rework container() method
snitch: Introduce container() method
A node can join group0 without waiting for gossiper if
it is either a fresh node, or it's an existing node, which
is already part of some group0 (i.e. has `group0_id` persisted
in system tables).
In that case the second `join_group0()` call inside the
`storage_service::join_token_ring` will be a no-op.
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Measure performance of the single-mutation reader:
make_flat_mutation_reader_from_mutation_v2.
Comparable to the `one_row` case that consumes the
single mutation using the multi-mutation reader:
make_flat_mutation_reader_from_mutations_v2
perf_mutation_readers shows ~20-30% improvement of
make_flat_mutation_reader_from_mutation_v2 over consuming
the same single mutation, just given as a single-item vector
to make_flat_mutation_reader_from_mutations_v2.
test iterations median mad min max
Before:
combined.one_row 720118 825.668ns 1.020ns 824.648ns 827.750ns
After:
combined.one_mutation 881482 751.157ns 0.397ns 750.211ns 751.912ns
combined.one_row 843270 756.553ns 0.303ns 755.889ns 757.911ns
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Extract the common parts of the single mutation reader
and the vector-based variant into mutation_reader_base
and reuse from both readers.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
slice_mutations() is currently used only by readers/mutation_readers.cc
so there's no need to expose it.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
We want to return stop_iteration::yes once we've crossed
the initial depth threshold; with an unsigned depth counter,
it might wrap around and look > 1.
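The hazard being fixed, in miniature:

```cpp
// Decrementing an unsigned counter below zero wraps around to a huge
// value, so a check like `depth > 1` stays true instead of turning false;
// a signed counter behaves as intended.
inline bool looks_deep_unsigned() {
    unsigned depth = 0;
    --depth;           // wraps to UINT_MAX
    return depth > 1;  // true, incorrectly
}

inline bool looks_deep_signed() {
    int depth = 0;
    --depth;           // -1
    return depth > 1;  // false, as intended
}
```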
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The test will now, with probability 1/2, enable forwarding of entries by
followers to leaders. This is possible thanks to the new abort_source&
APIs which we use to ensure that no operations are running on servers
before we destroy them.
Some adjustments were required to the server abort procedure in order to
prevent rare hangs (see first patch). We also translate some low-level
exceptions coming from seastar primitives to high-level Raft API
exceptions (second patch).
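The translation in the second patch follows a familiar boundary pattern; a minimal sketch with stand-in exception types (not the actual seastar/raft classes):

```cpp
#include <stdexcept>

// Low-level exception thrown by a runtime primitive.
struct abort_requested_exception : std::runtime_error {
    abort_requested_exception() : std::runtime_error("abort requested") {}
};

// High-level exception belonging to the public API.
struct request_aborted : std::runtime_error {
    request_aborted() : std::runtime_error("raft: request aborted") {}
};

// Catch the low-level exception at the server boundary and rethrow the
// API-level one, so callers only ever see API exceptions.
template <typename F>
auto translate_aborts(F f) {
    try {
        return f();
    } catch (const abort_requested_exception&) {
        throw request_aborted();
    }
}
```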
* kbr/nemesis-enable-fd-v1:
test: raft: randomized_nemesis_test: enable entry forwarding
test: raft: randomized_nemesis_test: increase logging level on some rare operations
raft: server: translate abort_requested_exception to raft::request_aborted
raft: fsm: when stopping, become follower to reject new requests
This pull request adds support for retrying failed forwarder calls
(currently used to parallelize `select count(*) from ...` queries).
Failed-to-forward sub-queries will be executed locally (on a
super-coordinator). This local execution is meant as a fallback for a
forward_request that could not be sent to its destined coordinator
(e.g. due to the gossiper not reacting fast enough). Local execution was chosen
as the safest one - it does not require sending data to another
coordinator.
Due to problems with miscompilations, some parts of the
`forward_service` were uncoroutinized.
Fixes: #10131
Closes #10329
* github.com:scylladb/scylla:
forward_service: uncoroutinize dispatch method
forward_service: uncoroutinize retrying_dispatcher
forward_service: retry a failed forwarder call
forward_service: copy arguments/captured vars to local variables
The restriction "WHERE m[NULL] = 2" should result in an invalid request
error, but currently results in an ugly internal server error.
This test reproduces it, and since the bug is still in the code - is
marked as xfail.
Refs #10361
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20220412134118.829671-1-nyh@scylladb.com>
This is a reproducer for issue #10359 that a "CONTAINS NULL" and
"CONTAINS KEY NULL" restrictions should not match any set, but currently
do match non-empty or all sets.
The tests currently fail on Scylla, so they are marked xfail. They also fail on
Cassandra, because Cassandra considers such a request an error, which
we consider a mistake (see #4776) - so the tests are also marked "cassandra_bug".
Refs #10359.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20220412130914.823646-1-nyh@scylladb.com>
We already have a test showing that WHERE v=NULL ALLOW FILTERING is
allowed in Scylla (unlike Cassandra), and matches nothing. Here
we add two further tests that confirm that:
1. Not only is v=NULL allowed - v<NULL, v<=NULL, and so on, is also
allowed and matches nothing.
2. The ALLOW FILTERING is required in those requests. Without it,
both Scylla and Cassandra generate the same "ALLOW FILTERING is
required" error.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20220411214503.770413-1-nyh@scylladb.com>
Protocol v4 added WRITE_FAILURE and READ_FAILURE. When running under v3
we downgrade these exceptions to WRITE_TIMEOUT and READ_TIMEOUT (since
the client won't understand the v4 errors), but we still send the new
error codes. This causes the client to become confused.
Fix by updating the error codes.
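A sketch of the downgrade, using the error codes from the CQL native protocol spec (the function name is hypothetical):

```cpp
enum class exception_code : int {
    write_timeout = 0x1100,  // v3 and v4
    read_timeout  = 0x1200,  // v3 and v4
    read_failure  = 0x1300,  // v4 only
    write_failure = 0x1500,  // v4 only
};

// For a v3 client, map the v4-only failure codes to their timeout
// counterparts; changing the exception kind alone is not enough, the
// wire code must change too.
inline exception_code downgrade_to_v3(exception_code c) {
    switch (c) {
    case exception_code::read_failure:  return exception_code::read_timeout;
    case exception_code::write_failure: return exception_code::write_timeout;
    default: return c;
    }
}
```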
A better fix is to move the error code from the constructor parameter
list and hard-code it in the constructor, but that is left for a follow-up
after this minimal fix.
Fixes #5610.
Closes #10362
"
Repair code keeps its history in the system keyspace and uses the qctx
global to update and query it. This set replaces the qctx with
an explicit reference to the system_keyspace object.
tests: unit(dev), dtest.repair_test(dev)
"
* 'br-repair-vs-qctx' of https://github.com/xemul/scylla:
repair, system_keyspace: Query repair_history with a helper
repair: Update loader code to use system_keyspace entry
repair, system_keyspace: Update repair_history with a helper
repair: Keep system keyspace reference
Fixes the case of make_room() invoked with last_chunk_capacity_deficit
but _size not in the last reserved chunk.
Found during code review, no user impact.
Fixes #10364.
Message-Id: <20220411224741.644113-1-tgrabiec@scylladb.com>
Fixes the case of make_room() invoked with last_chunk_capacity_deficit
but _size not in the last reserved chunk.
Found during code review, no known user impact.
Fixes #10363.
Message-Id: <20220411222605.641614-1-tgrabiec@scylladb.com>