Clang warns when "return std::move(x)" is needed to avoid a copy,
but the call to std::move() is missing. We disabled the warning during
the migration to clang. This patch re-enables the warning and fixes
the places it points out, usually by adding std::move() and in one
place by converting the returned variable from a reference to a local,
so normal copy elision can take place.
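As an illustration (hypothetical code, not taken from the patch), the two kinds of fixes look roughly like this:

```cpp
#include <string>
#include <utility>
#include <vector>

// Case 1: an lvalue of rvalue-reference type is returned. Without
// std::move() the return would copy; clang's -Wreturn-std-move suggests
// adding the missing std::move().
std::string shout(std::string&& s) {
    s += "!";
    return std::move(s);
}

// Case 2: returning a reference-typed variable always copies. Converting
// the variable to a local object lets normal copy elision (NRVO) apply.
std::string first_of(const std::vector<std::string>& v) {
    std::string result = v.front();  // a local, not "const std::string&"
    return result;                   // eligible for copy elision
}
```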
Closes #8739
This warning triggers when a range-for loop ("for (auto x : range)") causes
non-trivial copies, prompting the developer to iterate by reference
instead. A few minor violations in the test suite are corrected.
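A minimal sketch of the pattern (hypothetical example):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// "for (auto s : names)" would copy each std::string once per iteration,
// which the range-loop analysis flags; taking the element by (const)
// reference avoids the copies.
std::size_t total_length(const std::vector<std::string>& names) {
    std::size_t n = 0;
    for (const auto& s : names) {  // by reference: no per-element copy
        n += s.size();
    }
    return n;
}
```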
Closes #8699
config_file.cc instantiates std::istream& std::operator>>(std::istream&,
std::unordered_map<seastar::sstring, seastar::sstring>&), but that
instantiation is ignored since config_file_impl.hh specializes
that signature. -Winstantiation-after-specialization warns about it,
so re-enable it now that the code base is clean.
Also remove the matching "extern template" declaration, which has no
definition any more.
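The pattern behind the warning, sketched with hypothetical names (the real code involves std::operator>> for unordered_map):

```cpp
#include <cstring>

// Generic primary template (imagine it lives in config_file.hh).
template <typename T>
struct parser {
    static const char* kind() { return "generic"; }
};

// Explicit specialization (imagine it lives in config_file_impl.hh).
template <>
struct parser<int> {
    static const char* kind() { return "int"; }
};

// An explicit instantiation of parser<int> appearing after the
// specialization above is ignored, and clang's
// -Winstantiation-after-specialization warns about it; the fix is to drop
// the instantiation together with its matching "extern template"
// declaration.
```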
Closes #8696
The -Wunused-private-field was squelched when we switched to
clang to make the change easier. But it is a useful warning, so
re-enable it.
It found a serious bug (#8682) and a few minor instances of waste.
A minimal alternator test env, a younger cousin
of cql_test_env, is implemented. Note that using this environment
for unit tests is strongly discouraged in favor of the official
test/alternator pytest suite. Still, alternator_test_env has its uses
for microbenchmarks.
The current scrub compaction has a serious drawback: while it is
very effective at removing any corruption it recognizes, it is very
heavy-handed in its way of repairing such corruption: it simply drops
all data that is suspected to be corrupt. While this *is* the safest way
to cleanse data, it might not be the best way from the point of view of
a user who doesn't want to lose data, even at the risk of retaining
some business-logic-level corruption. Mind you, no database-level scrub
can ever fully repair data from the business-logic point of view; it
can only do so on the database level. So in certain cases it might be
desirable to have a less heavy-handed approach to cleansing the data,
one that tries as hard as it can not to lose any data.
This series introduces a new scrub mode, with the goal of addressing
this use-case: when the user doesn't want to lose any data. The new
mode is called "segregate" and it works by segregating its input into
multiple outputs such that each output contains a valid stream. This
approach can fix any out-of-order data, be that on the partition or
fragment level. Out-of-order partitions are simply written into a
separate output. Out-of-order fragments are handled by injecting a
partition-end/partition-start pair right before them, so that they are
now in a separate (duplicate) partition, that will just be written into
a separate output, just like a regular out-of-order partition.
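The segregation idea can be sketched on a simplified stream of ordered keys (the real code works on mutation fragments; this toy model uses integers):

```cpp
#include <vector>

// Toy model of segregate-mode scrub: split an input sequence, which may
// contain duplicate or out-of-order keys, into output streams such that
// each stream is strictly increasing (mirroring "each output contains a
// valid stream" at the partition level). The real scrub additionally
// closes/reopens partitions to handle fragment-level disorder.
std::vector<std::vector<int>> segregate(const std::vector<int>& input) {
    std::vector<std::vector<int>> outputs;
    for (int key : input) {
        bool placed = false;
        for (auto& out : outputs) {
            if (out.empty() || out.back() < key) {  // keeps stream monotonic
                out.push_back(key);
                placed = true;
                break;
            }
        }
        if (!placed) {
            outputs.push_back({key});  // open a new output, like a new sstable
        }
    }
    return outputs;
}
```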
The reason this series is posted as an RFC is that although I consider
the code stable and tested, there are some questions related to the UX.
* First and foremost, every scrub that does more than just discard data
suspected to be corrupt (and even those, to a certain degree) has to
consider the possibility that it is rehabilitating corruption,
leaving it in the system without a warning, in the sense that the
user won't see any more problems due to low-level corruption and
hence might think everything is alright, while the data is still corrupt
from the business-logic point of view. It is very hard to draw a line
between what scrub should and shouldn't do, yet there is demand from
users for a scrub that can restore data without losing any of it. Note
that anybody executing such a scrub is already in bad shape; even if
they can read their data (they often can't), it is already corrupt, so
scrub is not making anything worse here.
* This series converts the previous `skip_corrupted` boolean into an
enum, which now selects the scrub mode. This means that
`skip_corrupted` cannot be combined with segregate to throw out what
the latter can't fix. This was chosen for simplicity: a bunch of
flags all interacting with each other is very hard to see through, in
my opinion, while a linear mode selector is much easier to reason about.
* The new segregate mode goes all-in, trying to fix even
fragment-level disorder. Maybe it should only do it on the partition
level, or maybe this should be made configurable, allowing the user to
select what happens with data that cannot be fixed.
Tests: unit(dev), unit(sstable_datafile_test:debug)
* 'sstable-scrub-segregate-by-partition/v1' of https://github.com/denesb/scylla:
test: boost/sstable_datafile_test: add tests for segregate mode scrub
api: storage_service/keyspace_scrub: expose new segregate mode
sstables: compaction/scrub: add segregate mode
mutation_fragment_stream_validator: add reset methods
mutation_writer: add segregate_by_partition
api: /storage_service/keyspace_scrub: add scrub mode param
sstables: compaction/scrub: replace skip_corrupted with mode enum
sstables: compaction/scrub: prevent infinite loop when last partition end is missing
tests: boost/sstable_datafile_test: use the same permit for all fragments in scrub tests
Use a hardware counter to report instructions per fragment. Results
vary from ~4k insns/f when reading sequentially to more than 1M insns/f.
Instructions per fragment can be a more stable metric than frags/sec.
It would probably be even more stable with a fake file implementation
that works in-memory to eliminate seastar polling instruction variation.
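A minimal sketch of reading the instructions-retired counter on Linux via perf_event_open (an assumed simplification; the actual RAII wrapper and reporting in the patch differ):

```cpp
#include <cstdint>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

// Counts userspace instructions retired by the calling thread between
// start() and stop(). If perf_event_open() is unavailable (e.g. due to
// perf_event_paranoid restrictions), stop() returns 0.
class instruction_counter {
    int _fd;
public:
    instruction_counter() {
        perf_event_attr attr{};
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;
        _fd = syscall(SYS_perf_event_open, &attr, 0 /* this thread */, -1, -1, 0);
    }
    ~instruction_counter() { if (_fd >= 0) { close(_fd); } }
    void start() {
        ioctl(_fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(_fd, PERF_EVENT_IOC_ENABLE, 0);
    }
    std::uint64_t stop() {
        ioctl(_fd, PERF_EVENT_IOC_DISABLE, 0);
        std::uint64_t count = 0;
        if (_fd >= 0 && read(_fd, &count, sizeof(count)) != sizeof(count)) {
            count = 0;
        }
        return count;
    }
};
```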
Closes #8660
This is the first PR in a series with the goal of finishing the hackathon project authored by @tgrabiec, @kostja, @amnonh and @mmatczuk (improved virtual tables + function call syntax in CQL). Virtual tables created within this framework are "materialized" in memtables, so the current solution is for small tables only. As an example, system.status was added. It was checked that DISTINCT and reverse ORDER BY do work.
This PR was created by @jul-stas and @StarostaGit
Fixes #8343
This is the same as #8364, but with a compilation fix (newly added `close()` method was not implemented by the reader)
Closes #8634
* github.com:scylladb/scylla:
boost/tests: Add virtual_table_test for basic infrastructure
boost/tests: Test memtable_filling_virtual_table as mutation_source
db/system_keyspace: Add system.status virtual table
db/virtual_table: Add a way to specify a range of partitions for virtual table queries.
db/virtual_table: Introduce memtable_filling_virtual_table
db: Add virtual tables interface
db: Introduce chained_delegating_reader
The commit introduces `PureStateMachine`, which is the most direct translation
of the mathematical definition of a state machine to C++ that I could come up with.
Represented by a C++ concept, it consists of: a set of inputs
(represented by the `input_t` type), outputs (`output_t` type), states (`state_t`),
an initial state (`init`) and a transition function (`delta`) which
given a state and an input returns a new state and an output.
The rest of the testing infrastructure is going to be
generic w.r.t. `PureStateMachine`. This will allow easily implementing
tests using both simple and complex state machines by substituting the
proper definition for this concept.
One possibility of modifying this definition would be to have `delta`
return `future<pair<state_t, output_t>>` instead of
`pair<state_t, output_t>`. This would lose some "purity" but allow
long computations without reactor stalls in the tests. Such modification,
if we decide to do it, is trivial.
Uses the infrastructure for testing mutation_sources, but only a
subset of it which does not do fast forwarding (since virtual_table
does not support it).
This change introduces the basic interface we expect each virtual
table to implement. More specific implementations will then expand
upon it if needed.
A separate build mode is a much better fit for coverage generation.
Generating coverage requires certain flags and optimization modes,
which is much better expressed with a separate build mode than by
bolting it on top of an existing one, possibly conflicting with its own
requirements.
This patch therefore converts the current `--coverage` flag to a build
mode of its own. The build mode is based on debug mode, in fact seastar
is built in plain debug mode, with some extra cflags.
The new build mode is called "coverage" and it is a non-default mode (by
default configure.py doesn't generate build files for it).
The huge number of choices for the --with argument obscures the help
output, making it hard to read. This patch removes the choices list and
instead manually checks the passed-in artifacts. Unknown artifacts are
removed from the list, and if it ends up empty the script aborts.
Available artifacts can be listed by a new --list-artifacts flag.
Currently modes[0] is used as the fallback when 'dev' is not available.
But modes is a dict with mode names as keys, so this won't work. Replace
it with modes.keys()[0] to select the first key instead.
Currently the declaration of build modes is scattered throughout the
script. We have several places where build modes are mentioned
hardcoded, and related configuration is also scattered in several data
structures. This commit centralizes all this into a single data
structure; all other code uses it to iterate over modes and to mutate
their configuration.
This patch was motivated by the wish to make it easier to add a new
build mode, which is what the next patch does. This is not something we
do often, but I believe these changes also serve to make the code easier
to understand and modify.
The mutation_reader_test is already one of our largest test files.
Move the reader concurrency semaphore related tests to a new file,
making them easier to find and making the mutation reader test a little
bit smaller too.
Add a new segregator which segregates a stream, potentially containing
duplicate or even out-of-order partitions, into multiple output streams,
such that each output stream has strictly monotonic partitions.
This segregator will be used by a new scrub compaction mode which is
meant to fix sstables containing duplicate or out-of-order data.
Instructions retired per op is a much more stable metric than time per
op (inverse throughput), since it isn't much affected by changes in
CPU frequency or other load on the test system (it's still somewhat
affected, since a slower system will run more reactor polls per op).
It's also less indicative of real performance, since it's possible for
fewer instructions to execute in more time than more instructions,
but that isn't an issue for comparative tests.
This allows incremental changes to the code base to be compared with
more confidence.
Current results are around 55k instructions per read, and 52k for writes.
Closes #8563
* github.com:scylladb/scylla:
test: perf: tidy up executor_stats snapshot computation
test: perf: report instructions retired per operations
test: perf: add RAII wrapper around Linux perf_event_open()
test: perf: make executor_stats_snapshot() a member function of executor
Both hinted handoff and repair are meant to improve the consistency of the cluster's data. HH does this by storing records of failed replica writes and replaying them later, while repair goes through all data on all participating replicas and makes sure the same data is stored on all nodes. The former is generally cheaper and sometimes (but not always) can bring back full consistency on its own; repair, while being more costly, is a sure way to bring back current data to full consistency.
When hinted handoff and repair are running at the same time, some of the work can be unnecessarily duplicated. For example, if a row is repaired first, then hints towards it become unnecessary. However, repair needs to do less work if data already has good consistency, so if hints finish first, then the repair will be shorter.
This PR introduces a possibility to wait for hints to be replayed before continuing with a user-issued repair. The coordinator of the repair operation asks all nodes participating in it (including itself) to mark a point at the end of all hint queues pointing towards other nodes participating in the repair. Then, it waits until hint replay in all those queues reaches the marked point, or a configured timeout is reached.
This operation is currently opt-in and can be turned on by setting the `wait_for_hint_replay_before_repair_in_ms` config option to a positive value.
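A toy model of the sync-point mechanism described above (an assumed simplification; the real implementation is distributed and per-endpoint):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Mark the current end of a hint queue and wait, with a timeout, until
// replay progresses past the mark.
class hint_queue {
    std::mutex _m;
    std::condition_variable _cv;
    std::uint64_t _appended = 0;  // hints written so far
    std::uint64_t _replayed = 0;  // hints successfully sent so far
public:
    void append() { std::lock_guard<std::mutex> g(_m); ++_appended; }
    void replay_one() {
        std::lock_guard<std::mutex> g(_m);
        ++_replayed;
        _cv.notify_all();
    }
    // "Mark a point at the end of the hint queue".
    std::uint64_t mark_sync_point() {
        std::lock_guard<std::mutex> g(_m);
        return _appended;
    }
    // Returns true if replay reached the sync point before the timeout.
    bool await_sync_point(std::uint64_t point, std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(_m);
        return _cv.wait_for(lk, timeout, [&] { return _replayed >= point; });
    }
};
```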
Fixes #8102
Tests:
- unit(dev)
- some manual tests:
- shutting down the repair coordinator during hints replay,
- shutting down a node participating in repair during hints replay.
Closes #8452
* github.com:scylladb/scylla:
repair: introduce abort_source for repair abort
repair: introduce abort_source for shutdown
storage_proxy: add abort_source to wait_for_hints_to_be_replayed
storage_proxy: stop waiting for hints replay when node goes down
hints: dismiss segment waiters when hint queue can't send
repair: plug in waiting for hints to be sent before repair
repair: add get_hosts_participating_in_repair
storage_proxy: coordinate waiting for hints to be sent
config: add wait_for_hint_replay_before_repair option
storage_proxy: implement verbs for hint sync points
messaging_service: add verbs for hint sync points
storage_proxy: add functions for syncing with hints queue
db/hints: make it possible to wait until current hints are sent
db/hints: add a metric for counting processed files
db/hints: allow to forcefully update segment list on flush
Adds two verbs: HINT_SYNC_POINT_CREATE and HINT_SYNC_POINT_CHECK.
Those will make it possible to create a sync point and regularly poll
to check its existence.
This patch introduces the most basic, bare infrastructure to generate
coverage reports, as well as a guide on how to manually generate them.
Although this barely qualifies as "support", it already allows one to
generate a coverage report with the help of this guide.
One immediate limitation of this patch is that it only supports clang,
which is not a terrible problem, given that it's our main compiler
currently.
Future patches will build on this to incrementally improve and
automate this:
* Provide script to automatically merge profraw files and generate html
report from it.
* Integrate into test.py, adding a flag which causes it to generate
a coverage report after a run.
* Support GCC too, but at least auto-detect whether clang is used or
not.
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20210423140100.659452-1-bdenes@scylladb.com>
migration_task is a class with a single static method that's called
from a single place in the migration manager, and this method
immediately calls back into the migration manager. There's not much
sense in keeping this abstraction, so merge it into the migration
manager.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The Redis server started as a copy of the CQL server, but did not
receive all the fixes of the CQL server over time. For example, commit
1a8630e ("transport: silence "broken pipe" and "connection reset by
peer" errors") was only done on the CQL server.
To remedy the situation, this pull request unifies code between the CQL
and Redis servers by introducing a "generic_server" component, and
switching CQL and Redis to use it.
Test: dtest(dev)
Closes #8388
* github.com:scylladb/scylla:
generic_server: Rename "maybe_idle" to "maybe_stop"
generic_server: API documentation for connection and server classes
transport, redis: Use generic server::listen()
transport/server: Remove "redis_server" prefix from logging
transport/server: Remove "cql_server" prefix from logging
generic_server: Remove unneeded static_pointer_cast<>
transport, redis: Use generic server::do_accepts()
transport, redis: Use generic server::process()
redis: Move Redis specific code to handle_error()
transport: Move CQL specific error handling to handle_error()
transport, redis: Move connection tracking to generic_server::server class
transport, redis: Move _stopped and _connections_list to generic_server::server class
transport, redis: Move total_connections to generic_server::server class
transport, redis: Use generic server::maybe_idle()
transport, redis: Move list_base_hook<> inheritance to generic_server::connection
transport, redis: Use generic connection::shutdown()
Support native building & unit testing in the Nix ecosystem under
nix-shell.
Actual dist packaging for Nixpkgs/NixOS is not there (yet?), because:
* Does not exactly seem like a huge priority.
* I don't even have a firm idea of how much work it would entail (it
certainly does not need the ld.so trickery, so there's that. But at
least some work would be needed, seeing how ScyllaDB needs to
integrate with its environment and NixOS is a little unorthodox).
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
Message-Id: <20210413110508.5901-4-michael.livshin@scylladb.com>
This patch moves the duplicated connection::shutdown() method to a
new generic_server::connection base class that is now inherited by
cql_server and redis_server.
This series introduces service level syntax borrowed from https://docs.scylladb.com/using-scylla/workload-prioritization/ , but without workload prioritization itself - just for the sake of using identical syntax to provide different parameters later. The new parameters may include:
* per-service-level timeouts
* oltp/olap declaration, which may change the way Scylla treats long requests - e.g. time them out (the oltp way) or keep them sustained with empty pages (the olap way)
Refs #7617
Closes #7867
* github.com:scylladb/scylla:
transport: initialize query state with service level controller
main: add initializing service level data accessor
service: make enable_shared_from_this inheritance public
cql3: add SERVICE LEVEL syntax (without an underscore)
unit test: Add unit test for per user sla syntax
cql: Add support for service level cql queries
auth: Add service_level resource for supporting in authorization of cql service_level
cql: Support accessing service_level_controller from query state
instantiate and initialize the service_level_controller
qos: Add a standard implementation for service level data accessor
qos: add waiting for the updater future
service/qos: adding service level controller
service_levels: Add documentation for distributed tables
service/qos: adding service level table to the distributed keyspace
service/qos: add common definitions
auth: add support for role attributes
This patch adds support for new service level cql queries.
The queries implemented are:
CREATE SERVICE_LEVEL [IF NOT EXISTS] <service_level_name>
ALTER SERVICE_LEVEL <service_level_name> WITH param = <something>
DROP SERVICE_LEVEL [IF EXISTS] <service_level_name>
ATTACH SERVICE_LEVEL <service_level_name> TO <role_name>
DETACH SERVICE_LEVEL FROM <role_name>
LIST SERVICE_LEVEL <service_level_name>
LIST ALL SERVICE_LEVELS
LIST ATTACHED SERVICE_LEVEL OF <role_name>
LIST ALL ATTACHED SERVICE_LEVELS
service_level_controller defines an interface for accessing the service
level distributed data; this patch implements a standard implementation
of the interface that delegates to the system distributed keyspace.
Message-Id: <25e68302f6f4d4fe5fcb66ea19159ad68506ba64.1609175314.git.sarna@scylladb.com>
This pull request adds a "ninja help" build target in hopes of making
the different build targets more discoverable to developers.
Closes #8454
* github.com:scylladb/scylla:
building.md: Document "ninja help" target
configure.py: "ninja help" target
building.md: Document "ninja <mode>-dist" target
configure.py: Add <mode>-dist target as alias for dist-<mode>
This adds a "help" build target, which prints out important build
targets. The printing is done in a separate shell script, because "ninja"
insists on printing out the "command" before executing it, which makes
the help text unreadable.
Those functions were defined in a header, but not marked inline.
This made including the header from two source files impossible,
as the linker would complain about duplicate symbols.
Rather than making them inline, put them in a new source file
perf.cc as they don't need to be inline.
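A sketch of the issue (hypothetical function name; the real ones live in the perf test helpers):

```cpp
#include <cstdint>

// Defined in a header WITHOUT `inline`, a function like this gets an
// external-linkage definition in every translation unit that includes the
// header, so linking two such TUs fails with duplicate-symbol errors.
// Marking it `inline` (as below) is one fix; the patch instead moves the
// definition into a single source file, perf.cc, leaving only a
// declaration in the header.
inline double ops_per_second(std::uint64_t ops, double seconds) {
    return static_cast<double>(ops) / seconds;
}
```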
caching_options is by no means performance sensitive, but it is
included in many places (via schema.hh), and in turn it pulls in
other includes. Reduce include load by de-inlining it.
Ref #1.
Closes #8408
We will need them to replace bytes with managed_bytes in some places in an
upcoming patch.
The change to configure.py is necessary because operator<< links to to_hex
in bytes.cc.
Until clang figures things out with the now infamous
`-mllvm -inline-threshold=X` parameter, let's allow customizing
it to make the compilation of release builds less tiresome.
For instance, scylla's row_level.o object file currently does not compile
for me until I decrease the inline threshold to a low value (e.g. 50).
Message-Id: <54113db9438e3c3371410996f49b7fbe9a1b7257.1616422536.git.sarna@scylladb.com>
Since we switched the scylla-python3 build directory to tools/python3/build
on Jenkins, we no longer need the compat-python3 targets, so drop them.
Related scylladb/scylla-pkg#1554
Closes #8328