Compilation is not enabled by default as it requires coroutines support
and may require a special compiler (until the distributed one fixes all
the bugs related to coroutines). To enable raft test compilation, a new
configure.py option is added (--build-raft).
Today, whenever we build scylla in a single mode, we still
build jmx, tools, and python3 for all of dev, release, and debug.
Let's make sure we build only the relevant build mode.
Also add unified-tar to the ninja build.
Closes #7260
The "mode" variable name is used everywhere, usually in a loop.
Therefore, rename the global "mode" to "checkheaders_mode" so that if
your code block happens to be outside of a loop, you don't accidentally
use the globally visible "mode" and spend hours debugging why it's
always "dev".
Spotted by Yaron Kaikov.
Message-Id: <20200924112237.315817-1-penberg@scylladb.com>
Let's build scylla-unified-package.tar.gz in build/<mode>/dist/tar for
symmetry. The old location is still kept for backward compatibility for
now. Also document the new official artifact location.
Message-Id: <20200917071131.126098-1-penberg@scylladb.com>
The build target for scylla-unified-package.tar.gz incorrectly depends
on "build/<mode>/scylla-python3-package.tar.gz", which is never
generated. Instead, the package is either generated in
"build/release/scylla-python3-package.tar.gz" (for legacy reasons) or
"build/<mode>/dist/tar/scylla-python3-package.tar.gz". This issues
causes building unified package in other modes to fail.
To solve the problem, let's switch to using the "build/<mode>/dist/tar"
locations for unified tarball dependencies, which is the correct place
to use anyway.
We don't want to update the scylla-python3 submodule for every python3
dependency update, so bring the python3 package list into
python3-dependencies.txt and pass it at package build time.
See #6702
See scylladb/scylla-python3#6
[avi: add
* tools/python3 19a9cd3...b4e52ee (1):
> Allow specify package dependency list by --packages
to maintain bisectability]
Fix the default number of test repeats to 1, which it was before
(spotted by Nadav). Also, prefix the options so that they become
"--test-repeat" and "--test-timeout" (spotted by Avi).
Message-Id: <20200825081456.197210-1-penberg@scylladb.com>
It is useful to distinguish whether the repair is a regular repair or one
used for node operations.
In addition, log the keyspace and tables being repaired.
Fixes #7086
When the user defines a build mode with configure.py, the build, check,
and test targets fail as follows:
./configure.py --mode=dev && ninja build
ninja: error: 'debug-build', needed by 'build', missing and no known rule to make it
Fix the issue by making the targets depend on build targets for
specified build modes, not all available modes.
Message-Id: <20200813105639.1641090-1-penberg@scylladb.com>
"
operator_type is awkward because it's not copyable or assignable. Replace it with a new enum class.
Tests: unit(dev)
"
* dekimir-operator-type:
cql3: Drop operator_type entirely
cql3: Drop operator_type from the parser
cql3/expr: Replace operator_type with an enum
As suggested by Avi, let's move the tarballs from
"build/dist/<mode>/tar" to "build/<mode>/dist/tar" to retain the
symmetry of different build modes, and make the tarballs easier to
discover. While at it, let's document the new tarball locations.
Message-Id: <20200818100427.1876968-1-penberg@scylladb.com>
Except for scylla-python3, each scylla package has its own git repository, the same package script filename, and the same build directory structure.
To keep the python3 bits in the scylla repo, we created 'python3' directories in multiple locations, made '-python3'-suffixed files, and dug a deeper build directory so as not to conflict with the scylla-server package build.
We should move all scylla-python3 related files to new repository, scylla-python3.
To keep compatibility with current Jenkins script, provide packages on
build/ directory for now.
Fixes #6751
"
This patch series changes the build system to build all tarballs to
build/dist/<mode>/tar directory. For example, running:
./tools/toolchain/dbuild ./configure.py --mode=dev && ./tools/toolchain/dbuild ninja-build dist-tar
produces the following tarballs in build/dist/dev/tar:
$ ls -1 build/dist/dev/tar/
scylla-jmx-package.tar.gz
scylla-package.tar.gz
scylla-python3-package.tar.gz
scylla-tools-package.tar.gz
This makes it easy to locate release tarballs for humans and scripts. To
preserve backward compatibility, the tarballs are also retained in their
original locations. Once release engineering infrastructure has been
adjusted to use the new locations, we can drop the duplicate copies.
"
* 'penberg/build-dist-tar/v1' of github.com:penberg/scylla:
configure.py: Copy tarballs to build/dist/<mode>/tar directory
configure.py: Add "dist-<component>-tar" targets
reloc/python3: Add "--builddir" to build_deb.sh
configure.py: Use copy-on-write copies when possible
"
This patch set fixes stalls in repair that are caused by std::list merge and clear operations during the test_latency_read_with_nemesis test.
Fixes #6940 Fixes #6975 Fixes #6976
"
* 'fix_repair_list_stall_merge_clear_v2' of github.com:asias/scylla:
repair: Fix stall in apply_rows_on_master_in_thread and apply_rows_on_follower
repair: Use clear_gently in get_sync_boundary to avoid stall
utils: Add clear_gently
repair: Use merge_to_gently to merge two lists
utils: Add merge_to_gently
It is used only for updating the metadata_collector {min,max}_column_names.
Implement metadata_collector::do_update_min_max_components in
sstables/metadata_collector.cc that will be used to host some other
metadata_collector methods in following patches that need not be
implemented in the header file.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Move the classes representing CQL expressions (and utility functions
on them) from the `restrictions` namespace to a new namespace `expr`.
Most of the restriction.hh content was moved verbatim to
expression.hh. Similarly, all expression-related code was moved from
statement_restrictions.cc verbatim to expression.cc.
As suggested in #5763 feedback
https://github.com/scylladb/scylla/pull/5763#discussion_r443210498
Tests: dev (unit)
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
Add a unit test checking that the top 32 bits of the number of remaining
rows in the paging state are used correctly, and a manual test checking
that it's possible to select over 2^32 rows from a table, along with a
virtual reader for this table.
Fixes #6341
Since scylla no longer supports upgrading from a version without the
"new" (dedicated) truncation record table, we can remove support for these
and the migration thereof.
Make sure the above holds wherever this is committed.
Note that this does not remove the "truncated_at" field in
system.local.
The "outdir" variable in configure.py and "$builddir" in build.ninja
file specifies the build directory. Let's use them to eliminate
hard-coded "build" paths from configure.py.
Message-Id: <20200731105113.388073-1-penberg@scylladb.com>
This is a 4.2% reduction in the scylla text size, from 38975956 to
37404404 bytes.
When benchmarking perf_simple_query without --shuffle-sections, there
is no performance difference.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200724032504.3004-1-espindola@scylladb.com>
In another patch I noticed gcc producing dead functions. I am not sure
why gcc is doing that. Some of those functions are already placed in
independent sections, and so can be garbage collected by the linker.
This is a 1% text section reduction in scylla, from 39363380 to
38974324 bytes. There is no difference in the tps reported by
perf_simple_query.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200723152511.8214-1-espindola@scylladb.com>
In the patch "Add exception overloads for Dynamo types", Alternator's single
api_error exception type was replaced by a more complex hierarchy of types.
The implementation was not only longer and more complex to understand -
I believe it also negated an important observation:
The "api_error" exception type is special. It is not an exception created
by code for other code. It is not meant to be caught in Alternator code.
Instead, it is supposed to contain an error message created for the *user*,
containing one of the few supported exception "names" described
in the DynamoDB documentation, and a user-readable text message. Throwing
such an exception in Alternator code means the thrower wants the request
to abort immediately, and this message to reach the user. These exceptions
are not designed to be caught in Alternator code. Code should use other
exceptions - or alternatives to exceptions (e.g., std::optional) for
problems that should be handled before returning a different error to the
user. Moreover, "api_error" isn't just thrown as an exception - it can
also be returned by value (in an executor::request_return_type) - which is
another reason why it should not be subclassed.
For these reasons, I believe we should have a single api_error type, and
it's wrong to subclass it. So in this patch I am reverting the subclasses
and template added in the aforementioned patch.
Still, one correct observation made in that patch was that it is
inconvenient to type in DynamoDB exception names (no help from the editor
in completing those strings) and also error-prone. In this patch we
propose a different - simpler - solution to the same problem:
We add trivial factory functions, e.g., api_error::validation(std::string)
as a shortcut to api_error("ValidationException"). The new implementation
is easy to understand, and also more self explanatory to readers:
It is now clear that "api_error::validation()" is actually a user-visible
"api_error", something which was obscured by the name validation_exception()
used before this patch.
Finally, this patch also improves the comment in error.hh explaining the
purpose of api_error and the fact it can be returned or thrown. The fact
it should not be subclassed is legislated with a "final". There is also
no point in this class inheriting from std::exception or having virtual
functions, or an empty constructor - so all these are dropped as well.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Ninja has a special pool called "console" that causes programs in that
pool to output directly to the console instead of being logged. By
putting test.py in it, it is now possible to run just
$ ninja dev-test
and see the test.py output while it is running.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200716204048.452082-1-espindola@scylladb.com>
The targets {dev|debug|release}-test run all unit tests, including
alternator/run. But this test requires the Scylla executable, which
wasn't among the dependencies. Fix it by adding build/$mode/scylla to
the dependency list.
Fixes #6855.
Tests: `ninja dev-test` after removing build/dev/scylla
Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
Merged pull request https://github.com/scylladb/scylla/pull/6694
by Calle Wilund:
Implementation of DynamoDB streams using Scylla CDC.
Fixes #5065
Initial, naive implementation insofar as it uses a 1:1 mapping of CDC
stream to DynamoDB shard, i.e. there are a lot of shards.
Includes tests verified against both local DynamoDB server and actual AWS
remote one.
Note:
Because of how data puts are implemented in alternator, we currently do
not get "proper" INSERT labels for the first write of data, because to
CDC it looks like an update. The test compensates for this, but actual
users might not like it.
The data model is now
bplus::tree<Key = int64_t, T = array<entry>>
where entry can be cache_entry or memtable_entry.
The whole thing is encapsulated into a collection called "double_decker"
from patch #3. The array<T> is an array of T-s with 0-bytes overhead used
to resolve hash conflicts (patch #2).
branch:
tests: unit(debug)
tests before v7:
unit(debug) for new collections, memtable and row_cache
unit(dev) for the rest
perf(dev)
* https://github.com/xemul/scylla/commits/row-cache-over-bptree-9:
test: Print more sizes in memory_footprint_test
memtable: Switch onto B+ rails
row_cache: Switch partition tree onto B+ rails
memtable: Count partitions separately
token: Introduce raw() helper and raw comparator
row-cache: Use ring_position_comparator in some places
dht: Detach ring_position_comparator_for_sstables
double-decker: A combination of B+tree with array
intrusive-array: Array with trusted bounds
utils: B+ tree implementation
test: Move perf measurement helpers into header
Add typed exception overloads for ValidationException,
ResourceNotFoundException, etc., to avoid writing the explicit error type
as a string everywhere (with the potential for spelling errors ever
present).
Also allows IntelliSense etc. to complete the exception when coding.
The collection is a K:V store
bplus::tree<Key = K, Value = array_trusted_bounds<V>>
It will be used as the partition cache. The outer tree is used to
quickly map a token to a cache_entry, the inner array -- to resolve
(expected to be rare) hash collisions.
It also must be equipped with two comparators -- a "less" one for
keys and a full one for values. The latter is not kept on board,
but is required on all calls.
The core API consists of just 2 calls:
- Heterogeneous lower_bound(search_key) -> iterator : finds the
element that's greater than or equal to the provided search key.
Besides the iterator, the call returns a "hint" object
that helps the next call.
- emplace_before(iterator, key, hint, ...) : constructs the
element right before the given iterator. The key and hint
are needed for a more optimal algorithm, but are, strictly
speaking, not required.
Adding an entry to the double_decker may result in growing the
node's array. Here the B+ iterator's .reconstruct() method
comes into play: a new array is created, old elements are
moved onto it, then the fresh node replaces the old one.
// TODO: Ideally this should be turned into the
// template <typename OuterCollection, typename InnerCollection>
// but for now the double_decker still has some intimate knowledge
// about what outer and inner collections are.
Insertion into this collection _may_ invalidate iterators, but
may also leave them intact. Invalidation only happens in case of a
hashing conflict, which can be clearly seen from the hint object, so
there's good room for improvement.
The main usage by row_cache (the find_or_create_entry) looks like
cache_entry find_or_create_entry() {
    bound_hint hint;
    it = lower_bound(decorated_key, &hint);
    if (!hint.match) {
        it = emplace_before(it, decorated_key.token(), hint,
                            <constructor args>);
    }
    return *it;
}
Now the hint. It contains 3 booleans:
- match: set to true when the "greater or equal" condition
evaluated to "equal". This frees the caller from the need
to manually check whether the entry returned matches the
search key or whether a new one should be inserted.
This is the "!match" check in the above snippet.
To explain the next 2 bools, here's a small example. Consider
the tree containing two elements {token, partition key}:
{ 3, "a" }, { 5, "z" }
As the collection is sorted, they go in the order shown. Next,
this is what lower_bound would return in some cases:
{ 3, "z" } -> { 5, "z" }
{ 4, "a" } -> { 5, "z" }
{ 5, "a" } -> { 5, "z" }
Apparently, the lower bound for those 3 elements is the same,
but the code flows for emplacing each of them before it differ drastically:
{ 3, "z" } : need to get previous element from the tree and
push the element to it's vector's back
{ 4, "a" } : need to create new element in the tree and populate
its empty vector with the single element
{ 5, "a" } : need to put the new element in the found tree
element right before the found vector position
To make one of the above decisions, .emplace_before would need
to perform another set of comparisons of keys and elements.
Fortunately, the needed information is already known inside the
lower_bound call and can be reported via the hint.
That said,
- key_match: set to true if tree.lower_bound() found the element
for the Key (which is the token). For the above examples this
will be true for cases 3z and 5a.
- key_tail: set to true if the tree element was found, but when
comparing values from the array the bounding element turned out
to belong to the next tree element and the iterator was ++-ed.
For the above examples this would be true for case 3z only.
And last, but not least -- the "erase self" feature, which,
given only the cache_entry pointer, removes it from the
collection. To make this happen we need to take two steps:
1. get the array the entry sits in
2. get the B+ tree node the array sits in
Both methods are provided by array_trusted_bounds and bplus::tree.
So, when we need to get an iterator from the given T pointer, the
algorithm looks like:
- Walk back the T array until hitting the head element
- Call array_trusted_bounds::from_element() to get the array
- Construct a B+ iterator from the obtained array
- Construct the double_decker iterator from the B+ iterator and from
the number of "steps back" from above
- Call double_decker::iterator.erase()
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>