Commit 8d6e575 introduced a new stat, instructions per fragment.
Computing this new stat can end with a division by zero when
the number of fragments read is 0. Here we fix it by reporting
0 ins/f when no fragments were read.
Fixes #9231
Closes #9232
Use a forward declaration of cql3::expr::oper_t to reduce the
number of translation units depending on expression.hh.
Before:
$ find build/dev -name '*.d' | xargs cat | grep -c expression.hh
272
After:
$ find build/dev -name '*.d' | xargs cat | grep -c expression.hh
154
Some translation units adjust their includes to restore access
to required headers.
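A minimal sketch of the pattern, assuming illustrative file and type names other
than oper_t itself:
```
// Hypothetical consumer header: it only names the type, so a forward
// declaration is enough and the heavy expression.hh include can move
// to the .cc file.
namespace cql3::expr {
enum class oper_t;  // forward declaration; the full definition stays in expression.hh
}

struct some_restriction {
    // Declarations that use the type by reference do not need its definition,
    // so this header no longer drags expression.hh into every includer.
    void set_operator(const cql3::expr::oper_t& op);
};
```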
Closes #9229
The iterator interface of loading_shared_values/loading_cache is dangerous and
fragile: an iterator doesn't "lock" the entry it points to, so if there is a
preemption point between acquiring a non-end() iterator and dereferencing it,
the corresponding cache entry may have already been evicted (for whatever
reason, e.g. cache size constraints or expiration). Dereferencing it then may
end up in a use-after-free, and we have no protection against that in the
value_extractor_fn today.
And this is in addition to #8920.
So, instead of trying to fix the iterator interface, this patch kills two
birds with one stone: we ditch the iterator interface completely and return
value_ptr from find(...) instead - the same one returned from the
loading_cache::get_ptr(...) asynchronous APIs.
A similar rework is done to loading_shared_values, which loading_cache is
based on: we drop its iterator interface and return
loading_shared_values::entry_ptr from find(...) instead.
loading_cache::value_ptr already takes care of "lock"ing the returned value so that it
would remain readable even if it's evicted from the cache by the time
one tries to read it. And of course it also takes care of updating the
last read time stamp and moving the corresponding item to the top of the
MRU list.
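A minimal usage sketch of the new interface, assuming a cache whose find()
returns a pointer-like value_ptr as described above (the key/value types here
are illustrative):
```
#include <optional>
#include <string>

template <typename Cache>
std::optional<std::string> peek(Cache& cache, const std::string& key) {
    auto vp = cache.find(key);   // value_ptr: pins the entry instead of handing out an iterator
    if (!vp) {
        return std::nullopt;     // not cached; no dangling iterator to misuse
    }
    // Safe to dereference even if the entry has been evicted in the meantime:
    // value_ptr keeps the value alive and also refreshes the last-read
    // timestamp and the entry's MRU position.
    return std::string(*vp);
}
```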
Fixes #8920
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <20210817222404.3097708-1-vladz@scylladb.com>
This is a side effect of allowing scylla_io_setup to run in nonroot mode:
the script is now able to run as a non-root user even when the installation
is not in nonroot mode.
As a result, the script eventually fails to write io_properties.yaml with a
permission denied error. Since the evaluation takes a long time, we should
run the permission check before starting it.
We need to add the root privilege check back, but skip it in nonroot mode.
Fixes #8915
Closes #8984
The parse() functions for high-level sstable metadata types are
trivial straight-line code and can be easily simplified by
conversion to coroutines.
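A rough sketch of what such a conversion looks like; the reader and field names
below are hypothetical stand-ins, not the actual sstables code:
```
#include <cstdint>
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
using namespace seastar;

struct reader {};
future<> read(reader&, uint32_t&) { return make_ready_future<>(); }

struct compression_stub { uint32_t chunk_len = 0, data_len = 0; };

// Before: continuation chaining.
future<> parse_old(reader& in, compression_stub& c) {
    return read(in, c.chunk_len).then([&in, &c] {
        return read(in, c.data_len);
    });
}

// After: a straight-line coroutine with the same behaviour.
future<> parse_new(reader& in, compression_stub& c) {
    co_await read(in, c.chunk_len);
    co_await read(in, c.data_len);
}
```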
Test: unit (dev)
Closes #9224
* github.com:scylladb/scylla:
sstables: parse(*): adjust indentation after coroutine conversion
sstables: parse(compression&): eliminate unnecessary indirection
sstables: convert parse(compression&) to a coroutine
sstables: convert parse(commitlog_interval&) to a coroutine
sstables: parse(streaming_histogram&): eliminate unnecessary indirection
sstables: convert parse(streaming_histogram&) to a coroutine
sstables: convert parse(estimated_histogram&) to a coroutine
sstables: convert parse(statistics&) to a coroutine
sstables: convert parse(summary&) to a coroutine
Workaround for Clang bug: https://bugs.llvm.org/show_bug.cgi?id=51515
When compiled on aarch64 with ASAN support and -Og/-Oz/-Os optimization
level, `raft_sys_table_storage::do_store_log_entries` crashes during the
tests. ASAN incorrectly reports `stack-use-after-return` on
`std::vector` list initialization after initial coroutine suspension
(initializer list's data pointer starts to point to garbage).
The workaround is simple: don't use initializer lists in such cases;
replace them with a series of `emplace_back` calls.
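A simplified sketch of the workaround (not the exact raft_sys_table_storage
code):
```
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> build_values(std::string a, std::string b) {
    // Before (triggers the false stack-use-after-return report when built
    // inside a coroutine on aarch64 with ASAN at -Og/-Os/-Oz):
    //   std::vector<std::string> values = { a, b };
    // After: build the vector element by element instead of from an
    // initializer list.
    std::vector<std::string> values;
    values.reserve(2);
    values.emplace_back(std::move(a));
    values.emplace_back(std::move(b));
    return values;
}
```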
Tests: unit(debug, aarch64)
Fixes #9178
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Message-Id: <20210818102038.92509-1-pa.solodovnikov@scylladb.com>
We need to pass the --supervisor option just for the scylla-server module,
and also pass the --packaging option to the scylla-jmx module to avoid running
the systemctl command, since it may run in a container, and a container
may not have systemd.
Fixes #9141
Closes #9142
Make the code tidier.
The conversion is not mechanical: the finally block is converted
to straight line code. stop()/close() must not fail anyway, and we
cannot recover from such failures. The when_all_succeed() for stopping
the semaphores is also converted to straight-line code - there is no
advantage to stopping them in parallel, as we're just waiting for
running tasks to complete and clean up.
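A hedged before/after sketch of this kind of conversion, with hypothetical
component names:
```
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
using namespace seastar;

struct component {
    future<> run() { return make_ready_future<>(); }
    future<> stop() { return make_ready_future<>(); }
};

struct service {
    component _a, _b;

    // Before: continuation style.
    //   return _a.run().finally([this] {
    //       return when_all_succeed(_a.stop(), _b.stop()).discard_result();
    //   });
    //
    // After: straight-line coroutine code. stop() must not fail anyway, and
    // stopping sequentially is fine since we only wait for running tasks to
    // complete and clean up.
    future<> run_and_stop() {
        co_await _a.run();
        co_await _a.stop();
        co_await _b.stop();
    }
};
```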
Test: unit (dev)
Closes #9218
"
The mutation_fragments hashing code sitting in row-level repair
upsets clang and makes it spend 20 minutes compiling it. This
set speeds this up greatly by moving the hashing code into the
mutation_fragment.cc and turning it into the appending_hash<>
specialisation. A simple sanity checking test makes sure this
doesn't change resulting hash values.
tests: unit.hashers_test(dev, release) // hash values matched, phew
dtest.repair_additional_test.repair_large_partition_existing_rows_test(release)
"
* 'br-row-level-comp-speedup-2.2' of https://github.com/xemul/scylla:
mutation_fragment: Specialize appending_hash for it
tests: Add sanity check for hashing mutation_fragments
Row-level repair hashes the mutation fragment and wraps this into a
private fragment_hasher class. For some reason it takes ~20 minutes
for clang to compile the row_level.o with -O3 level (release mode).
Putting the whole fragment_hasher into a dedicated file reduces the
compilation time by a factor of ~9.
However, it seems more natural not to move the fragment_hasher around
but to specialize the appending_hash<> for mutation_fragment and make
row_level.cc code just call feed_hash().
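A self-contained mini model of the pattern; the real types are Scylla's
appending_hash<>, mutation_fragment and its hashers, and these stand-ins only
show the structure of the change:
```
#include <cstddef>
#include <string>

template <typename T> struct appending_hash;   // primary template, specialized per type

struct fragment { std::string key; int kind; };

// The specialization lives next to the type (mutation_fragment.cc in the real
// code), so the heavy hashing code is compiled there once.
template <>
struct appending_hash<fragment> {
    template <typename Hasher>
    void operator()(Hasher& h, const fragment& f) const {
        h.update(f.key.data(), f.key.size());
        h.update(reinterpret_cast<const char*>(&f.kind), sizeof(f.kind));
    }
};

// Callers (row_level.cc in the real code) just feed values into their hasher.
template <typename Hasher, typename T>
void feed_hash(Hasher& h, const T& value) {
    appending_hash<T>{}(h, value);
}
```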
Compilation times (release mode):
                       before    after
  row_level.o          19m34s    2m4s
  mutation_fragment.o  13s       17s
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Next patch is going to change the way row-level repair code hashes
mutation_fragment objects. This patch prepares a sanity check, making sure
the hash values are not accidentally changed, by hashing some simple
fragments and comparing them against known expected values.
The hash_mutation_fragment_for_test helper is added for this patch
only and will be removed really soon.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Documentation was extracted from abstract_replication_strategy::get_ranges(),
which says:
// get_ranges() returns the list of ranges held by the given endpoint.
// The list is sorted, and its elements are non overlapping and non wrap-around.
That's important because users of get_keyspace_local_ranges() expect
that the returned list is both sorted and non overlapping, so let's
document it to prevent someone from removing any of these properties.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20210805140628.537368-1-raphaelsc@scylladb.com>
Currently, the view will not be updated because the streaming reason is set
to streaming::stream_reason::rebuild. On the receiver side, only
streaming with the reason streaming::stream_reason::repair will trigger
a view update.
Change the stream reason to repair to trigger view updates for load and
stream. This makes load_and_stream behave the same as nodetool refresh.
Note: this is not very efficient, though.
Consider RF = 3, sst1, sst2, sst3 from the older cluster. When sst1 is
loaded, it streams to 3 replica nodes, if we generate view updates, we
will have 3 view updates for this replica (each of the peer nodes finds
its peer and writes the view update to peer). After loading sst2 and
sst3, we will have 9 view updates in total for a single partition.
If we create the view after the load and stream process, we will only
have 3 view updates for a single partition.
Fixes #9205
Closes #9213
Operations and generators can be composed to create more complex
operations and generators. There are certain composition patterns useful
for many different test scenarios.
We implement a couple of such patterns. For example:
- Given multiple different operation types, we can create a new
operation type - `either_of` - which is a "union" of the original
operation types. Executing `either_of` operation means executing an
operation of one of the original types, but the specific type
can be chosen at runtime.
- Given a generator `g`, `op_limit(n, g)` is a new generator which
limits the number of operations produced by `g`.
- Given a generator `g` and a time duration of `d` ticks, `stagger(g, d)` is a
new generator which spreads the operations from `g` roughly every `d`
ticks. (The actual definition in code is more general and complex but
the idea is similar.)
Some of these patterns have corresponding notions in Jepsen, e.g. our
`stagger` has a corresponding `stagger` in Jepsen (although our
`stagger` is more general).
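As a toy illustration, here is one way a combinator like `op_limit` could look
under the purely functional generator model described above (illustrative only;
the real Generator concept and combinator library are richer):
```
#include <cstddef>
#include <cstdint>
#include <optional>
#include <utility>

struct op { int kind; };

// A generator maps a logical tick to an optional (operation, next generator)
// pair; generators are purely functional, so fetching returns the next state.
template <typename G>
struct limited {
    size_t remaining;
    G inner;

    std::optional<std::pair<op, limited>> fetch(uint64_t tick) const {
        if (remaining == 0) {
            return std::nullopt;                          // budget exhausted
        }
        if (auto next = inner.fetch(tick)) {
            return std::pair{next->first, limited{remaining - 1, std::move(next->second)}};
        }
        return std::nullopt;                              // inner generator is done
    }
};

// op_limit(n, g): a generator that produces at most n operations from g.
template <typename G>
limited<G> op_limit(size_t n, G g) {
    return limited<G>{n, std::move(g)};
}
```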
Finally, we implement a test that uses this new infrastructure.
Two `Executable` operations are implemented:
- `raft_call` is for calling to a Raft cluster with a given state
machine command,
- `network_majority_grudge` partitions the network in half,
putting the leader in the minority.
We run a workload of these operations against a cluster of 5 nodes with
6 threads for executing the operations: one "nemesis thread" for
`network_majority_grudge` and 5 "client threads" for `raft_call`.
Each client thread randomly chooses a contact point which it tries first
when executing a `raft_call`, but it can also "bounce" - call a
different server when the previous returned "not_a_leader" (we use the
generic "bouncing" wrapper to do this).
For now we only print the resulting history. In a follow-up patchset
we will analyze it for consistency anomalies.
* kbr/raft-test-generator-v4:
test: raft: randomized_nemesis_test: a basic generator test
test: raft: generator: a library of basic generators
test: raft: introduce generators
test: raft: introduce `future_set`
test: raft: randomized_nemesis_test: handle `raft::stopped_error` in timeout futures
The previous commits introduced the basic generator concept and a
library of the most common composition patterns.
In this commit we implement a test that uses this new infrastructure.
Two `Executable` operations are implemented:
- `raft_call` is for calling to a Raft cluster with a given state
machine command,
- `network_majority_grudge` partitions the network in half,
putting the leader in the minority.
We run a workload of these operations against a cluster of 5 nodes with
6 threads for executing the operations: one "nemesis thread" for
`network_majority_grudge` and 5 "client threads" for `raft_call`.
Each client thread randomly chooses a contact point which it tries first
when executing a `raft_call`, but it can also "bounce" - call a
different server when the previous returned "not_a_leader" (we use the
generic "bouncing" wrapper to do this).
For now we only print the resulting history. In a follow-up patchset
we will analyze it for consistency anomalies.
Operations and generators can be composed to create more complex
operations and generators. There are certain composition patterns useful
for many different test scenarios.
This commit introduces a couple of such patterns. For example:
- Given multiple different operation types, we can create a new
operation type - `either_of` - which is a "union" of the original
operation types. Executing `either_of` operation means executing an
operation of one of the original types, but the specific type
can be chosen at runtime.
- Given a generator `g`, `op_limit(n, g)` is a new generator which
limits the number of operations produced by `g`.
- Given a generator `g` and a time duration of `d` ticks, `stagger(g, d)` is a
new generator which spreads the operations from `g` roughly every `d`
ticks. (The actual definition in code is more general and complex but
the idea is similar.)
And so on.
Some of these patterns have corresponding notions in Jepsen, e.g. our
`stagger` has a corresponding `stagger` in Jepsen (although our
`stagger` is more general).
We introduce the concepts of "operations" and "generators", basic
building blocks that will allow us to declaratively write randomized
tests for torturing simulated Raft clusters.
An "operation" is a data structure representing a computation which
may cause side effects such as calling a Raft cluster or partitioning
the network, represented in the code with the `Executable` concept.
It has an `execute` function which performs the computation and returns
a result of type `result_type`. Different computations of the same type
share state of type `state_type`. The state can, for example, contain
database handles.
Each execution is performed on an abstract `thread` (represented by a `thread_id`)
and has a logical starting time point. The thread and start point together form
the execution's `context` which is passed as a reference to `execute`.
Two operations may be called in parallel only if they are on different threads.
A generator, represented through the `Generator` concept, produces a
sequence of operations. An operation can be fetched from a generator
using the `op` function, which also returns the next state of the
generator (generators are purely functional data structures).
The generator concept is inspired by the generators in the Jepsen
testing library for distributed systems.
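A hedged sketch of what these two concepts could look like; the actual
definitions differ in details (e.g. `execute` may return a future), and all
names here are illustrative:
```
#include <concepts>
#include <cstdint>
#include <utility>

struct execution_context {
    uint64_t thread_id;    // the abstract thread the operation runs on
    uint64_t start_tick;   // logical starting time point
};

template <typename Op>
concept ExecutableSketch = requires(Op op, typename Op::state_type& s,
                                    const execution_context& ctx) {
    typename Op::result_type;                          // result of the computation
    typename Op::state_type;                           // shared by operations of one type
    { op.execute(s, ctx) } -> std::same_as<typename Op::result_type>;
};

template <typename G>
concept GeneratorSketch = requires(const G g) {
    // Fetching an operation also yields the generator's next state,
    // since generators are purely functional.
    { g.op() } -> std::same_as<std::pair<typename G::operation_type, G>>;
};
```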
We also implement `interpreter` which "interprets", or "runs", a given
generator, by fetching operations from the generator and executing them
with concurrency controlled by the abstract threads.
The algorithm used in the interpreter is also similar to the interpreter
algorithm in Jepsen, although there are differences. Most notably we don't
have a "worker" concept - everything runs on a single shard; but we use
"abstract threads" combined with futures for concurrency.
There is also no notion of "process". Finally, the interpreter doesn't
keep an explicit history, but instead uses a callback `Recorder` to notify
the user about operation invocations and completions. The user can
decide to save these events in a history, or perhaps they can analyze
them on the fly using constant memory.
A set of futures that can be polled.
Polling the set (`poll` function) returns the value of one of
the futures which became available or `std::nullopt` if the given
logical duration passes (according to the given timer), whichever
event happens first. The current implementation assumes sequential
polling.
New futures can be added to the set with `add`.
All futures can be removed from the set with `release`.
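A usage sketch only; the duration argument and exact signatures are assumptions
for illustration:
```
#include <optional>
#include <utility>
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
using namespace seastar;

template <typename T, typename FutureSet>
future<> drain(FutureSet& fs, future<T> f1, future<T> f2) {
    fs.add(std::move(f1));
    fs.add(std::move(f2));
    for (;;) {
        // poll() resolves with one available value, or nullopt once the given
        // logical duration elapses, whichever happens first.
        std::optional<T> v = co_await fs.poll(/*logical duration=*/100);
        if (!v) {
            break;       // timed out according to the logical timer
        }
        // ... consume *v ...
    }
    fs.release();        // drop whatever futures are still in the set
}
```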
The timeout futures in `call` and `reconfigure` may be discarded after
Raft servers are `abort()`ed, which results in `raft::stopped_error`,
and the test complained about discarded exceptional futures. Discard
these errors explicitly.
The supervisor scripts for Docker and for the offline installer are almost
the same, so drop the Docker one and share the same code to deduplicate
them.
Closes #9143
Fixes #9194
In order to decouple the service level controller from the systems logic, we introduce an API for subscribing to configuration changes. The timing of the calls was determined with resource creation and destruction in mind. An API subscriber can create
resources that will be available from the very start of the service level's existence; it can also destroy them, since the service level
is guaranteed not to exist anymore at the time of the call to the deletion notification callback.
Testing:
unit tests - all + a newly added one.
dtests - next-gating (dev mode)
Closes #9097
* github.com:scylladb/scylla:
service level controller: Subscriber API unit test
Service Level Controller: Add a listener API for service level config changes
This change adds an API for registering a listener for service_level
configuration changes. It notifies about removal, addition, and change of
a service level.
The hidden assumption is that some listeners are going to create and/or
manage service-level-specific resources, and this is what guided the
timing of the call to the subscriber.
Notifications for addition and change of a service level are delivered before
the actual change takes place; this guarantees that resource creation can
happen before the service level or the new config starts to be used.
The deletion notification is called only after the deletion has taken place;
this guarantees that the service level can't be active anymore and the
resources created can be safely destroyed.
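A hypothetical shape of the subscriber interface, reflecting the ordering
guarantees described above (the names are illustrative, not the exact Scylla
API):
```
#include <seastar/core/future.hh>
#include <seastar/core/sstring.hh>
using namespace seastar;

struct service_level_options_stub {};

class service_level_change_subscriber_sketch {
public:
    virtual ~service_level_change_subscriber_sketch() = default;

    // Called before the service level starts being used, so resources can be
    // created up front.
    virtual future<> on_before_service_level_add(sstring name,
                                                 const service_level_options_stub& opts) = 0;

    // Called before the new configuration takes effect.
    virtual future<> on_before_service_level_change(sstring name,
                                                    const service_level_options_stub& old_opts,
                                                    const service_level_options_stub& new_opts) = 0;

    // Called only after the deletion happened: the service level is guaranteed
    // not to be active anymore, so its resources can be safely destroyed.
    virtual future<> on_after_service_level_remove(sstring name) = 0;
};
```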
Refs #9053
Flips the default for commitlog disk footprint hard limit enforcement to off, due
to observed latency stalls in stress runs. Instead, adds an optional flag
"commitlog_use_hard_size_limit" which can be turned on to enforce it.
A sort of tape-and-string fix until we can properly tweak the balance between
commitlog and sstable flush rates.
Closes #9195
We decided to enable repair based node operations by default for replace
node operations.
To do that, a new option --allowed-repair-based-node-ops is added. It
lists the node operations that are allowed to enable repair based node
operations.
The operations can be bootstrap, replace, removenode, decommission and rebuild.
By default, --allowed-repair-based-node-ops is set to contain "replace".
Note, the existing option --enable-repair-based-node-ops is still in
play. It is the global switch to enable or disable the feature.
Examples:
- To enable bootstrap and replace node ops:
```
scylla --enable-repair-based-node-ops true --allowed-repair-based-node-ops replace,bootstrap
```
- To disable any repair based node ops:
```
scylla --enable-repair-based-node-ops false
```
Closes #9197
It turns out that user-defined aggregates did not need any elaborate coding in order to be exposed to users. The whole infrastructure is already there, including system schema tables and support for running aggregate queries, so this series simply adds lots and lots of boilerplate glue code to make UDA usable.
It also comes with a simple test which shows that it's possible to define and use such an aggregate.
Performance not tested, since user-defined functions are still experimental, so nothing really changes in this matter.
Tests: unit(release)
Fixes #7201
Closes #9165
* github.com:scylladb/scylla:
cql-pytest: add a test suite for user-defined aggregates
cql-pytest: add context managers for functions and aggregates
cql3: enable user-defined aggregates in CQL grammar
cql3: add statements for user-defined aggregates
cql3,functions: add checking if a function is used in UDA
gms: add UDA feature
migration_manager: add migrating user-defined aggregates
db,schema_tables: add handling user-defined aggregates
pagers: make a lambda mutable in fetch_page
cql3: wrap handling paging result with with_thread_if_needed
cql3: correctly mark function selectors as needing threads
cql3: add user-defined aggregate representation
The test suite now consists of a single user-defined aggregate:
a custom implementation of the existing avg() built-in function,
as well as a couple of cases for catching incorrect operations,
like using wrong function signatures or dropping functions that are in use.