"
This patch set fixes stalls in repair that are caused by std::list merge and clear operations during the test_latency_read_with_nemesis test.
Fixes #6940
Fixes #6975
Fixes #6976
"
* 'fix_repair_list_stall_merge_clear_v2' of github.com:asias/scylla:
repair: Fix stall in apply_rows_on_master_in_thread and apply_rows_on_follower
repair: Use clear_gently in get_sync_boundary to avoid stall
utils: Add clear_gently
repair: Use merge_to_gently to merge two lists
utils: Add merge_to_gently
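The idea behind merge_to_gently (and clear_gently) can be sketched as bounded-chunk processing with periodic yields to the reactor, so that a huge list operation cannot stall the shard. This is an illustrative sketch only: in Scylla the yield would be seastar::thread::maybe_yield(), and the real helpers live in utils/; the names and signatures below are assumptions.

```cpp
#include <cassert>
#include <functional>
#include <list>

// Sketch: merge two sorted std::lists while yielding every `chunk` operations.
// In Scylla the yield would be seastar::thread::maybe_yield(); here it is a
// no-op stand-in so the sketch stays self-contained.
static void maybe_yield(size_t& ops) {
    constexpr size_t chunk = 1024;
    if (++ops % chunk == 0) {
        // seastar::thread::maybe_yield() would go here
    }
}

template <typename T, typename Less>
void merge_to_gently(std::list<T>& dst, std::list<T>& src, Less less) {
    size_t ops = 0;
    auto it = dst.begin();
    while (!src.empty()) {
        // advance to the insertion point for the current front of src
        while (it != dst.end() && !less(src.front(), *it)) {
            ++it;
            maybe_yield(ops);
        }
        dst.splice(it, src, src.begin()); // O(1) move of a single node
        maybe_yield(ops);
    }
}
```

Because splice moves nodes rather than copying elements, the merge itself is allocation-free; only the periodic yields distinguish it from std::list::merge.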
It is used only for updating the metadata_collector {min,max}_column_names.
Implement metadata_collector::do_update_min_max_components in
sstables/metadata_collector.cc, which will be used to host some other
metadata_collector methods in following patches that need not be
implemented in the header file.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Move the classes representing CQL expressions (and utility functions
on them) from the `restrictions` namespace to a new namespace `expr`.
Most of the restriction.hh content was moved verbatim to
expression.hh. Similarly, all expression-related code was moved from
statement_restrictions.cc verbatim to expression.cc.
As suggested in #5763 feedback
https://github.com/scylladb/scylla/pull/5763#discussion_r443210498
Tests: dev (unit)
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
Add a unit test checking that the top 32 bits of the number of remaining rows in paging state
are used correctly, and a manual test checking that it's possible to select over 2^32 rows from a table
and a virtual reader for this table.
Fixes #6341
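The arithmetic of carrying a row count above 2^32 in two 32-bit halves can be illustrated as follows. The helper names are hypothetical; the real paging_state serialization in Scylla differs, this only shows the split/recombine step.

```cpp
#include <cassert>
#include <cstdint>

// Split a 64-bit remaining-rows counter into a legacy low-32-bit field plus a
// separate high-32-bit field, and recombine them. Names are illustrative.
constexpr uint32_t low32(uint64_t remaining) {
    return static_cast<uint32_t>(remaining & 0xffffffffULL);
}

constexpr uint32_t high32(uint64_t remaining) {
    return static_cast<uint32_t>(remaining >> 32);
}

constexpr uint64_t combine(uint32_t hi, uint32_t lo) {
    return (static_cast<uint64_t>(hi) << 32) | lo;
}
```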
Since scylla no longer supports upgrading from a version without the
"new" (dedicated) truncation record table, we can remove support for these
and the migration thereof.
Make sure the above holds wherever this is committed.
Note that this does not remove the "truncated_at" field in
system.local.
The "outdir" variable in configure.py and "$builddir" in build.ninja
file specify the build directory. Let's use them to eliminate
hard-coded "build" paths from configure.py.
Message-Id: <20200731105113.388073-1-penberg@scylladb.com>
This is a 4.2% reduction in the scylla text size, from 38975956 to
37404404 bytes.
When benchmarking perf_simple_query without --shuffle-sections, there
is no performance difference.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200724032504.3004-1-espindola@scylladb.com>
In another patch I noticed gcc producing dead functions. I am not sure
why gcc is doing that. Some of those functions are already placed in
independent sections, and so can be garbage collected by the linker.
This is a 1% text section reduction in scylla, from 39363380 to
38974324 bytes. There is no difference in the tps reported by
perf_simple_query.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200723152511.8214-1-espindola@scylladb.com>
In the patch "Add exception overloads for Dynamo types", Alternator's single
api_error exception type was replaced by a more complex hierarchy of types.
The implementation was not only longer and more complex to understand -
I believe it also negated an important observation:
The "api_error" exception type is special. It is not an exception created
by code for other code. It is not meant to be caught in Alternator code.
Instead, it is supposed to contain an error message created for the *user*,
containing one of the few supported exception "names" described
in the DynamoDB documentation, and a user-readable text message. Throwing
such an exception in Alternator code means the thrower wants the request
to abort immediately, and this message to reach the user. These exceptions
are not designed to be caught in Alternator code. Code should use other
exceptions - or alternatives to exceptions (e.g., std::optional) for
problems that should be handled before returning a different error to the
user. Moreover, "api_error" isn't just thrown as an exception - it can
also be returned-by-value in an executor::request_return_type - which is
another reason why it should not be subclassed.
For these reasons, I believe we should have a single api_error type, and
it's wrong to subclass it. So in this patch I am reverting the subclasses
and template added in the aforementioned patch.
Still, one correct observation made in that patch was that it is
inconvenient to type in DynamoDB exception names (no help from the editor
in completing those strings) and also error-prone. In this patch we
propose a different - simpler - solution to the same problem:
We add trivial factory functions, e.g., api_error::validation(std::string)
as a shortcut to api_error("ValidationException"). The new implementation
is easy to understand, and also more self-explanatory to readers:
It is now clear that "api_error::validation()" is actually a user-visible
"api_error", something which was obscured by the name validation_exception()
used before this patch.
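A minimal sketch of what the described single api_error type with trivial factory functions could look like. The field names and the set of factories shown are illustrative assumptions; the real definition is in alternator/error.hh.

```cpp
#include <cassert>
#include <string>

// Sketch of a single, non-subclassable api_error carrying a DynamoDB error
// name and a user-readable message, with factory shortcuts per error name.
struct api_error final {            // "final": not meant to be subclassed
    std::string type;               // DynamoDB error name, e.g. "ValidationException"
    std::string message;            // user-readable text

    api_error(std::string t, std::string msg)
        : type(std::move(t)), message(std::move(msg)) {}

    static api_error validation(std::string msg) {
        return api_error("ValidationException", std::move(msg));
    }
    static api_error resource_not_found(std::string msg) {
        return api_error("ResourceNotFoundException", std::move(msg));
    }
};
```

The caller writes api_error::validation("...") instead of spelling out the error-name string, so the editor can complete the factory name and typos in the name become compile errors.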
Finally, this patch also improves the comment in error.hh explaining the
purpose of api_error and the fact it can be returned or thrown. The fact
it should not be subclassed is legislated with a "final". There is also
no point in this class inheriting from std::exception or having virtual
functions, or an empty constructor - so all these are dropped as well.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Ninja has a special pool called console that causes programs in that
pool to output directly to the console instead of being logged. By
putting test.py in it, it is now possible to run just
$ ninja dev-test
And see the test.py output while it is running.
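The build.ninja change might look like this (the rule name and command here are illustrative; only `pool = console` is the essential part — it is ninja's built-in special pool that streams the command's output directly to the terminal instead of buffering it into the log):

```ninja
# the special "console" pool gives the command direct terminal access
rule run_tests
  command = ./test.py --mode=$mode
  pool = console
```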
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200716204048.452082-1-espindola@scylladb.com>
The targets {dev|debug|release}-test run all unit tests, including
alternator/run. But this test requires the Scylla executable, which
wasn't among the dependencies. Fix it by adding build/$mode/scylla to
the dependency list.
Fixes#6855.
Tests: `ninja dev-test` after removing build/dev/scylla
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
Merged pull request https://github.com/scylladb/scylla/pull/6694
by Calle Wilund:
Implementation of DynamoDB streams using Scylla CDC.
Fixes#5065
Initial, naive implementation insofar as it uses a 1:1 mapping of CDC stream to
DynamoDB shard. I.e. there are a lot of shards.
Includes tests verified against both local DynamoDB server and actual AWS
remote one.
Note:
Because of how data put is implemented in alternator, currently we do not
get "proper" INSERT labels for the first write of data, because to CDC it looks
like an update. The test compensates for this, but actual users might not
like it.
The data model is now
bplus::tree<Key = int64_t, T = array<entry>>
where entry can be cache_entry or memtable_entry.
The whole thing is encapsulated into a collection called "double_decker"
from patch #3. The array<T> is an array of T-s with 0-bytes overhead used
to resolve hash conflicts (patch #2).
branch:
tests: unit(debug)
tests before v7:
unit(debug) for new collections, memtable and row_cache
unit(dev) for the rest
perf(dev)
* https://github.com/xemul/scylla/commits/row-cache-over-bptree-9:
test: Print more sizes in memory_footprint_test
memtable: Switch onto B+ rails
row_cache: Switch partition tree onto B+ rails
memtable: Count partitions separately
token: Introduce raw() helper and raw comparator
row-cache: Use ring_position_comparator in some places
dht: Detach ring_position_comparator_for_sstables
double-decker: A combination of B+tree with array
intrusive-array: Array with trusted bounds
utils: B+ tree implementation
test: Move perf measurement helpers into header
Add typed exception overloads for ValidationException, ResourceNotFoundException, etc.,
to avoid writing the explicit error type as a string everywhere (with the potential for
spelling errors ever present).
Also allows IntelliSense etc. to complete the exception when coding.
The collection is a K:V store
bplus::tree<Key = K, Value = array_trusted_bounds<V>>
It will be used as partitions cache. The outer tree is used to
quickly map token to cache_entry, the inner array -- to resolve
(expected to be rare) hash collisions.
It also must be equipped with two comparators -- a less-only one
for keys and a full one for values. The latter is not kept
on-board, but is required on all calls.
The core API consists of just 2 calls
- Heterogeneous lower_bound(search_key) -> iterator : finds the
element that's greater than or equal to the provided search key.
Other than the iterator the call returns a "hint" object
that helps the next call.
- emplace_before(iterator, key, hint, ...) : the call constructs
the element right before the given iterator. The key and hint
are needed for a more optimal algorithm, but strictly speaking are
not required.
Adding an entry to the double_decker may result in growing the
node's array. Here the B+ iterator's .reconstruct() method
comes into play. A new array is created, the old elements are
moved onto it, then the fresh node replaces the old one.
// TODO: Ideally this should be turned into the
// template <typename OuterCollection, typename InnerCollection>
// but for now the double_decker still has some intimate knowledge
// about what outer and inner collections are.
Insertion into this collection _may_ invalidate iterators, but
may leave them intact. Invalidation only happens in case of a hashing
conflict, which can be clearly seen from the hint object, so
there's good room for improvement.
The main usage by row_cache (the find_or_create_entry) looks like
cache_entry find_or_create_entry() {
    bound_hint hint;
    it = lower_bound(decorated_key, &hint);
    if (!hint.found) {
        it = emplace_before(it, decorated_key.token(), hint,
                            <constructor args>);
    }
    return *it;
}
Now the hint. It contains 3 booleans:
- match: set to true when the "greater or equal" condition
evaluated to "equal". This frees the caller from the need
to manually check whether the entry returned matches the
search key or the new one should be inserted.
This is the "!found" check from the above snippet.
To explain the next 2 bools, here's a small example. Consider
the tree containing two elements {token, partition key}:
{ 3, "a" }, { 5, "z" }
As the collection is sorted they go in the order shown. Next,
this is what the lower_bound would return for some cases:
{ 3, "z" } -> { 5, "z" }
{ 4, "a" } -> { 5, "z" }
{ 5, "a" } -> { 5, "z" }
Apparently, the lower bound for those 3 elements is the same,
but the code-flows of emplacing them before it differ drastically.
{ 3, "z" } : need to get previous element from the tree and
push the element to its vector's back
{ 4, "a" } : need to create new element in the tree and populate
its empty vector with the single element
{ 5, "a" } : need to put the new element in the found tree
element right before the found vector position
To make one of the above decisions the .emplace_before would need
to perform another set of comparisons of keys and elements.
Fortunately, the needed information was already known inside the
lower_bound call and can be reported via the hint.
That said,
- key_match: set to true if tree.lower_bound() found the element
for the Key (which is token). For above examples this will be
true for cases 3z and 5a.
- key_tail: set to true if the tree element was found, but when
comparing values from the array the bounding element turned out
to belong to the next tree element and the iterator was ++-ed.
For above examples this would be true for case 3z only.
And last, but not least -- the "erase self" feature, which,
given only the cache_entry pointer at hand, removes it from the
collection. To make this happen we need to take two steps:
1. get the array the entry sits in
2. get the b+ tree node the array sits in
Both methods are provided by array_trusted_bounds and bplus::tree.
So, when we need to get an iterator from the given T pointer, the
algorithm looks like
- Walk back the T array until hitting the head element
- Call array_trusted_bounds::from_element() getting the array
- Construct b+ iterator from obtained array
- Construct the double_decker iterator from b+ iterator and from
the number of "steps back" from above
- Call double_decker::iterator.erase()
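The "walk back until hitting the head element" step can be sketched like this. The names (`entry`, `is_head`, `from_element`) are hypothetical stand-ins for the intrusive_array flag machinery; the point is that a bare element pointer is enough to recover both the array base and the element's index, because elements are contiguous and the 0-th one is flagged.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>

// Each element carries a flag marking the 0-th slot of its intrusive array,
// so no back-pointer or index needs to be stored per element.
struct entry {
    int value;
    bool is_head;   // set only on the 0-th element of the array
};

// Returns {array base, number of steps walked back} for a given element.
static std::pair<entry*, size_t> from_element(entry* e) {
    size_t steps = 0;
    while (!e->is_head) {
        --e;        // elements are contiguous, so pointer arithmetic suffices
        ++steps;
    }
    return {e, steps};
}
```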
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
A plain array of elements that grows and shrinks by
constructing a new instance from an existing one and
moving the elements from it.
Behaves similarly to vector's external array, but has
0-bytes overhead. The array bounds (0-th and N-th
elements) are determined by checking the flags on the
elements themselves. For this the type must support
getters and setters for the flags.
To remove an element from the array there's also a nothrow
option that drops the requested element from the array,
shifts the elements to its right leftwards and keeps the
trailing unused memory (the so-called "train") until
reconstruction or destruction.
Also comes with a lower_bound() helper that helps keep
the elements sorted, and a from_element() one that
returns a reference to the array in which the element
sits.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
// The story is at
// https://groups.google.com/forum/#!msg/scylladb-dev/sxqTHM9rSDQ/WqwF1AQDAQAJ
This is the B+ version which satisfies several specific requirements
to be suitable for row-cache usage.
1. Insert/Remove doesn't invalidate iterators
2. Elements should be LSA-compactable
3. Low overhead of data nodes (1 pointer)
4. External less-only comparator
5. As little actions on insert/delete as possible
6. Iterator walks the sorted keys
The design, briefly is:
There are 3 types of nodes: inner, leaf and data; inner and leaf
keep a built-in array of N keys and N(+1) nodes. Leaf nodes sit in
a doubly linked list. Data nodes live separately from the leaf ones
and keep pointers on them. Tree handler keeps pointers on root and
left-most and right-most leaves. Nodes do _not_ keep pointers or
references on the tree (except 3 of them, see below).
changes in v9:
- explicitly marked keys/kids indices with type aliases
- marked the whole erase/clear stuff noexcept
- disposers now accept object pointer instead of reference
- clear tree in destructor
- added more comments
- style/readability review comments fixed
Prior changes
**
- Add noexcepts where possible
- Restrict Less-comparator constraint -- it must be noexcept
- Generalized node_id
- Packed code for begin()/cbegin()
**
- Unsigned indices everywhere
- Cosmetics changes
**
- Const iterators
- C++20 concepts
**
- The index_for() implementation is templatized the other way
to make it possible for AVX key search specialization (further
patching)
**
- Insertion tries to push kids to siblings before split
Before this change, insertion into a full node resulted in this
node being split into two equal parts. Under a random-keys
stress this behaviour gives a tree with ~2/3 of nodes half-filled.
With this change, before splitting, the full node tries to push one
element to each of the siblings (if they exist and are not full).
This slows the insertion a bit (but it's still way faster than
the std::set), but gives 15% fewer nodes in total.
- Iterator method to reconstruct the data at the given position
The helper creates a new data node, emplaces data into it and
replaces the iterator's one with it. Needed to keep arrays of
data in tree.
- Milli-optimize erase()
- Return back an iterator that will likely be not re-validated
- Do not try to update ancestors separation key for leftmost kid
This caused clear()-like workloads to perform poorly as compared to
std::set. In particular the row_cache::invalidate() method does
exactly this and this change improves its timing.
- Perf test to measure drain speed
- Helper call to collect tree counters
**
- Fix corner case of iterator.emplace_before()
- Clean heterogeneous lookup API
- Handle exceptions from nodes allocations
- Explicitly mark places where the key is copied (for future)
- Extend the tree.lower_bound() API to report back whether
the bound hit the key or not
- Addressed style/cleanness review comments
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The default ninja build target now builds artifacts and packages. Let's
add a 'build' target that only builds the artifacts.
Message-Id: <20200714105042.416698-1-penberg@scylladb.com>
This adds a new "dist-<mode>" target, which builds the server package in
selected build mode together with the other packages, and wires it to
the "<mode>" target, which is built as part of default "ninja"
invocation.
This allows us to perform a full build, package, and test cycle across
all build modes with:
./configure.py && ninja && ./test.py
Message-Id: <20200713101918.117692-1-penberg@scylladb.com>
"
This is the first stage of replacing the existing restrictions code with a new representation. It adds a new class `expression` to replace the existing class `restriction`. Lots of the old code is deleted, though not all -- that will come in subsequent stages.
Tests: unit (dev, debug restrictions_test), dtest (next-gating)
"
* dekimir-restrictions-rewrite:
cql3/restrictions: Drop dead code
cql3/restrictions: Use free functions instead of methods
cql3/restrictions: Create expression objects
cql3/restrictions: Add free functions over new classes
cql3/restrictions: Add new representation
Instead of `restriction` class methods, use the new free functions.
Specific replacement actions are listed below.
Note that class `restrictions` (plural) remains intact -- both its
methods and its type hierarchy remain intact for now.
Ensure full test coverage of the replacement code with new file
test/boost/restrictions_test.cc and some extra testcases in
test/cql/*.
Drop some existing tests because they codify buggy behaviour
(reference #6369, #6382). Drop others because they forbid relation
combinations that are now allowed (eg, mixing equality and
inequality, comparing to NULL, etc.).
Here are some specific categories of what was replaced:
- restriction::is_foo predicates are replaced by using the free
function find_if; sometimes it is used transitively (see, eg,
has_slice)
- restriction::is_multi_column is replaced by dynamic casts (recall
that the `restrictions` class hierarchy still exists)
- utility methods is_satisfied_by, is_supported_by, to_string, and
uses_function are replaced by eponymous free functions; note that
restrictions::uses_function still exists
- restriction::apply_to is replaced by free function
replace_column_def
- when checking infinite_bound_range_deletions, the has_bound is
replaced by local free function bounded_ck
- restriction::bounds and restriction::value are replaced by the more
general free function possible_lhs_values
- using free functions allows us to simplify the
multi_column_restriction and token_restriction hierarchies; their
methods merge_with and uses_function became identical in all
subclasses, so they were moved to the base class
- single_column_primary_key_restrictions<clustering_key>::needs_filtering
was changed to reuse num_prefix_columns_that_need_not_be_filtered,
which uses free functions
Fixes#5799.
Fixes#6369.
Fixes#6371.
Fixes#6372.
Fixes#6382.
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
Merged patch series by Piotr Sarna:
The alternator project was in need of a more optimized
JSON library, which resulted in creating "rjson" helper functions.
Scylla generally used libjsoncpp for its JSON handling, but in order
to reduce the dependency hell, the usage is now migrated
to rjson, which is faster and offers the same functionality.
The original plan was to be able to drop the dependency
on libjsoncpp-lib altogether and remove it from install-dependencies.sh,
but one last usage of it remains in our test suite,
namely cql_repl. The tool compares its output JSON textually,
so it depends on how a library presents JSON - what are the delimiters,
indentation, etc. It's possible to provide a layer of translation
to force rjson to print in an identical format, but the other issue
is that libjsoncpp keeps subobjects sorted by their name,
while rjson uses an unordered structure.
There are two possible solutions for the last remaining usage
of libjsoncpp:
1. change our test suite to compare JSON documents with a JSON parser,
so that we don't rely on internal library details
2. provide a layer of translation which forces rjson to print
its objects in a format identical to libjsoncpp.
(1.) would be preferred, since currently we're also vulnerable to changes
inside libjsoncpp itself - if they change anything in their output
format, tests would start failing. The issue is not critical however,
so it's left for later.
Tests: unit(dev), manual(json_test),
dtest(partitioner_tests.TestPartitioner.murmur3_partitioner_test)
Piotr Sarna (8):
alternator,utils: move rjson.hh to utils/
alternator: remove ambiguous string overloads in rjson
rjson: add parse_to_map helper function
rjson: add from_string_map function
rjson: add non-throwing parsing
rjson: move quote_json_string to rjson
treewide: replace libjsoncpp usage with rjson
configure: drop json.cc and json.hh helpers
alternator/base64.hh | 2 +-
alternator/conditions.cc | 2 +-
alternator/executor.hh | 2 +-
alternator/expressions.hh | 2 +-
alternator/expressions_types.hh | 2 +-
alternator/rmw_operation.hh | 2 +-
alternator/serialization.cc | 2 +-
alternator/serialization.hh | 2 +-
alternator/server.cc | 2 +-
caching_options.hh | 9 +-
cdc/log.cc | 4 +-
column_computation.hh | 5 +-
configure.py | 3 +-
cql3/functions/functions.cc | 4 +-
cql3/statements/update_statement.cc | 24 ++--
cql3/type_json.cc | 212 ++++++++++++++++++----------
cql3/type_json.hh | 7 +-
db/legacy_schema_migrator.cc | 12 +-
db/schema_tables.cc | 1 -
flat_mutation_reader.cc | 1 +
index/secondary_index.cc | 80 +++++------
json.cc | 80 -----------
json.hh | 113 ---------------
schema.cc | 25 ++--
test/boost/cql_query_test.cc | 9 +-
test/manual/json_test.cc | 4 +-
test/tools/cql_repl.cc | 1 +
{alternator => utils}/rjson.cc | 75 +++++++++-
{alternator => utils}/rjson.hh | 40 +++++-
29 files changed, 344 insertions(+), 383 deletions(-)
delete mode 100644 json.cc
delete mode 100644 json.hh
rename {alternator => utils}/rjson.cc (86%)
rename {alternator => utils}/rjson.hh (81%)
Thrift 0.12 includes a change [1] that avoids writing the generated output
if it has not changed. As a result, if you touch cassandra.thrift
(but do not change it), the generated files will not update, and as
a result ninja will try to rebuild them every time. The compilation
of thrift files will be fast due to ccache, but still we will re-link
everything.
This touching of cassandra.thrift can happen naturally when switching
to a different git branch and then switching back. The net result
is that cassandra.thrift's contents have not changed, but its timestamp
has.
Fix by adding the "restat" option to the thrift rule. This instructs
ninja to check whether the output has actually changed, and to
avoid unneeded rebuilds if it has not.
[1] https://issues.apache.org/jira/browse/THRIFT-4532
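A sketch of such a rule with restat enabled (the command line is illustrative; the actual thrift rule emitted by configure.py differs):

```ninja
# "restat = 1" makes ninja re-stat the outputs after the command runs;
# if thrift left them untouched, dependent edges are not rebuilt.
rule thrift
  command = thrift --gen cpp -out $builddir/gen $in
  restat = 1
```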
This patch set adds a few new features in order to fix issue
The list of changes is briefly as follows:
- Add a new `LWT` flag to `cql3::prepared_metadata`,
which allows clients to clearly distinguish between lwt and
non-lwt statements without the need to execute some custom parsing
logic (e.g. parsing the prepared query with regular expressions),
which is obviously quite fragile.
- Introduce the negotiation procedure for cql protocol extensions.
This is done via `cql_protocol_extension` enum and is expected
to have an appropriate mirroring implementation on the client
driver side in order to work properly.
- Implement a `LWT_ADD_METADATA_MARK` cql feature on top of the
aforementioned algorithm to make the feature negotiable and use
it conditionally (iff both server and client agree with each
other on the set of cql extensions).
The feature is meant to be further utilized by client drivers
to use primary replicas consistently when dealing with conditional
statements.
* git@github.com:ManManson/scylla feature/lwt_prepared_meta_flag_2:
lwt: introduce "LWT" flag in prepared statement metadata
transport: introduce `cql_protocol_extension` enum and cql protocol extensions negotiation
"
The snapshotting code is already well isolated from the rest of
the storage_service, so it's relatively easy to move it into
an independent component, thus de-bloating the storage_service.
As a side effect this allows painless removal of calls to global
get_storage_service() from schema::describe code.
Test: unit(debug), dtest.snapshot_test(dev), manual start-stop
"
* 'br-snapshot-controller-4' of https://github.com/xemul/scylla:
snap: Get rid of storage_service reference in schema.cc
main: Stop http server
snapshot: Make check_snapshot_not_exist a method
snapshots: Move ops gate from storage_service
snapshot: Move lock from storage_service
snapshot: Move all code into db::snapshot_ctl class
storage_service: Move all snapshot code into snapshot-ctl.cc
snapshots: Initial skeleton
snapshots: Properly shutdown API endpoints
api: Rewrap set_server_snapshot lambda
As HACKING.md suggests, we now require gcc version >= 10. Set the
minimum at 10.1.1, as that is the first official 10 release:
https://gcc.gnu.org/releases.html
Tests: manually run configure.py and ensure it passes/fails
appropriately.
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
A placeholder for snapshotting code that will be moved into it
from the storage_service.
Also -- pass it through the API for future use.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
"
This patch series attempts to decouple package build and release
infrastructure, which is internal to Scylla (the company). The goal of
this series is to make it easy for humans and machines to build the full
Scylla distribution package artifacts, and make it easy to quickly
verify them.
The improvements to build system are done in the following steps.
1. Make scylla.git a super-module, which has git submodules for
scylla-jmx and scylla-tools. A clone of scylla.git is now all that
is needed to access all source code of all the different components
that make up a Scylla distribution, which is a preparatory step to
adding "dist" ninja build target. A scripts/sync-submodules.sh helper
script is included, which allows easy updating of the submodules to the
latest head of the respective git repositories.
2. Make builds reproducible by moving the remaining relocatable package
specific build options from reloc/build_reloc.sh to the build system.
After this step, you can build the exact same binaries from the git
repository by using the dbuild version from scylla.git.
3. Add a "dist" target to ninja build, which builds all .rpm and .deb
packages with one command. To build a release, run:
$ ./tools/toolchain/dbuild ./configure.py --mode release
$ ./tools/toolchain/dbuild ninja-build dist
and you will now have .rpm and .deb packages to all the components of
a Scylla distribution.
4. Add a "dist-check" target to ninja build for verification of .rpm and
.deb packages in one command. To verify all the built packages, run:
$ ninja-build dist-check
Please note that you must run this step on the host, because the
target uses Docker under the hood to verify packages by installing
them on different Linux distributions.
Currently only CentOS 7 verification is supported.
All these improvements are done so that backward compatibility is
retained. That is, any existing release infrastructure or other build
scripts are completely unaffected.
Future improvements to consider:
- Package repository generation: add a "ninja repo" command to generate
.rpm and .deb repositories, which can be uploaded to a web site.
This makes it possible to build a downloadable Scylla distribution
from scylla.git. The target requires some configuration, which user
has to provide. For example, download URL locations and package
signing keys.
- Amazon Machine Image (AMI) support: add a "ninja ami" command to
simplify the steps needed to generate a Scylla distribution AMI.
- Docker image support: add a "ninja docker" command to simplify the
steps needed to generate a Scylla distribution Docker image.
- Simplify and unify package build: simplify and unify the various shell
scripts needed to build packages in different git repositories. This
step will break backward compatibility and can be done only after
relevant build scripts and release infrastructure is updated.
"
* 'penberg/packaging/v5' of github.com:penberg/scylla:
docs: Update packaging documentation
build: Add "dist-check" target
scripts/testing: Add "dist-check" for package verification
build: Add "dist" target
reloc: Add '--builddir' option to build_deb.sh
build: Add "-ffile-prefix-map" to cxxflags
docs: Document sync-submodules.sh script in maintainer.md
sync-submodules.sh: Add script for syncing submodules
Add scylla-tools submodule
Add scylla-jmx submodule