The `result_callback` was a callback returned by `augment_mutation_call`
that was supposed to be used in the CDC postimage implementation.
Because CDC postimage was implemented without using this callback, and
currently a no-op function is always returned, this callback can safely
be removed.
It returns a future, so converting an exception to an exceptional
future simplifies error handling in the caller.
Without this, code like the one in
standard_role_manager::create_metadata_tables_if_missing has surprising
behavior:
    return when_all_succeed(
        create_metadata_table_if_missing(...),
        create_metadata_table_if_missing(...));
since it might not wait for both futures. We could use the lambda
version of when_all_succeed, but changing
create_metadata_table_if_missing seems like a nice API improvement.
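For illustration, a minimal sketch (not the actual Scylla code; plain
seastar APIs assumed) of the difference between throwing and returning
an exceptional future when the result is passed to when_all_succeed:

    #include <seastar/core/future.hh>
    #include <seastar/core/future-util.hh>
    #include <stdexcept>

    seastar::future<> g();   // hypothetical second operation

    seastar::future<> f_throwing() {
        // Throws before any future exists: the exception escapes the
        // argument list of when_all_succeed() below, and g()'s future
        // may never be waited for.
        throw std::runtime_error("oops");
    }

    seastar::future<> f_returning() {
        // Returns the error as an exceptional future instead, so it is
        // safe to use directly as an argument of when_all_succeed().
        return seastar::make_exception_future<>(std::runtime_error("oops"));
    }

    auto caller() {
        return seastar::when_all_succeed(f_returning(), g());
    }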
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200317002051.117832-4-espindola@scylladb.com>
View updates sent as part of the view building process should never
be ignored, but fd49fd7 introduced a bug which may cause exactly that:
the updates are mistakenly sent to the background, so the view builder
will not receive negative feedback if an update fails, which in
turn will not cause a retry. Consequently, view building may report
that it "finished" building a view while some of the updates were
lost. A simple fix is to restore the previous behaviour - all updates
triggered by view building are now waited for.
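For context, a hypothetical sketch (names are illustrative, not the
actual view builder code) of the difference between backgrounding the
update and waiting for it:

    #include <seastar/core/future.hh>

    seastar::future<> push_view_update();   // hypothetical helper

    seastar::future<> build_step_buggy() {
        // Fire-and-forget: a failed update is never noticed, so no retry.
        (void)push_view_update();
        return seastar::make_ready_future<>();
    }

    seastar::future<> build_step_fixed() {
        // The failure propagates to the caller, which can retry the step.
        return push_view_update();
    }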
Fixes #6038
Tests: unit(dev),
dtest: interrupt_build_process_with_resharding_low_to_half_test
When qualifying columns to be fetched for filtering, we also check
whether the target column is already handled by the index - in which
case there's no need to fetch it. However, the check was incorrectly
assuming that any restriction is eligible for indexing, while that's
currently only true for EQ. The fix makes a more specific check and
contains many dynamic casts, but these will hopefully be gone once our
long-planned "restrictions rewrite" is done.
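As a rough illustration (hypothetical types, not Scylla's real
restriction hierarchy), the check now asks specifically whether the
restriction is an equality one instead of assuming any restriction can
be served by the index:

    struct restriction { virtual ~restriction() = default; };
    struct eq_restriction : restriction {};   // stand-in for the EQ case

    // Only an EQ restriction lets the index serve the column, and only
    // then can fetching the column for filtering be skipped.
    bool supported_by_index(const restriction& r) {
        return dynamic_cast<const eq_restriction*>(&r) != nullptr;
    }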
This commit comes with a test.
Fixes #5708
Tests: unit(dev)
The intention of the code was to clear the sharding metadata
chunked_vector so that it doesn't bloat memory.
The type of c is `chunked_vector*`. Assigning `{}`
resets the pointer, while the intended behavior was to reset the
`chunked_vector` instance it points to. The original instance is left
unmodified with all its reserved space.
Because of this, the previous fix had no effect: token ranges
are stored entirely inline and popping them doesn't release memory.
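A minimal sketch of the bug, with std::vector standing in for
utils::chunked_vector:

    #include <vector>

    void clear_ranges(std::vector<int>* c) {
        // Buggy: `c = {};` only nulls the local pointer; the pointed-to
        // vector keeps all of its reserved memory.
        // Intended: reset the instance itself, releasing the space.
        *c = {};
    }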
Fixes #4951
Tests:
- sstable_mutation_test (dev)
- manual using scylla binary on customer data on top of 2019.1.5
Reviewed-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <1584559892-27653-1-git-send-email-tgrabiec@scylladb.com>
An exception thrown after the start of auth_service and before
init_server_without_the_messaging_service_part returns would cause the
sharded<auth_service> destructor to assert.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200317002051.117832-2-espindola@scylladb.com>
When investigating OOM-related cores, a common thing to do is to try to
identify the objects in a particularly heavily populated size-class.
This command is meant to help with that, providing a way to list the
objects in any size-class, in a paginated way.
Traversing the objects of a pool is done through a
`small_object_iterator` object which is also exposed to python code, to
be used in custom scripts wanting to scan all objects belonging to a
pool.
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200318085437.452906-1-bdenes@scylladb.com>
If delete_atomically() was called with an empty set for any reason,
it would fail to work because it relies on one of the sstables in
the set for getting the sstable directory.
This will be needed in the future, when the sstable replacement
function is used with only new sstables.
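A hypothetical sketch of the guard (real types and the rest of the
function differ): with an empty set there is no sstable whose directory
can be used, so return early instead of relying on a non-existent
element:

    #include <seastar/core/future.hh>
    #include <vector>

    struct sstable_ref {};   // stand-in for sstables::shared_sstable

    seastar::future<> delete_atomically(std::vector<sstable_ref> ssts) {
        if (ssts.empty()) {
            // Nothing to delete, and no sstable to take the directory from.
            return seastar::make_ready_future<>();
        }
        // ... use the sstables' directory to write the deletion log entry ...
        return seastar::make_ready_future<>();
    }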
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Reviewed-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20200305144657.9440-1-raphaelsc@scylladb.com>
There's such an option, and it's not taken into account
on scylla start. There's a symmetrical start_rpc option, which
is, so make both act similarly.
The default value for the option is true, so default set-ups
will not get broken.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20200310140518.29410-1-xemul@scylladb.com>
"
Since b783d40aa storage-proxy maintains separate coordinator stats per
scheduling group. This broke scylla_memory, which was still trying to
access the old global stats. This mini-series updates it to be able to
handle per-sg coordinator stats, while preserving backward compatibility
with older versions still using global stats.
"
* 'scylla-memory-per-sg-coordinator-stats/v1' of https://github.com/denesb/scylla:
scylla-gdb.py: scylla_memory: update w.r.t. per-sg coordinator stats
scylla-gdb.py: scylla_memory: move coordinator code to print_coordinator_stats()
"
The debug mode unit tests take ~half an hour to complete. Here's
the top of the per-test run-time list:
Test: Time (seconds):
... steady tail goes here ...
test/boost/user_function_test 496
test/boost/row_cache_test 502
test/boost/view_schema_test 932
test/boost/cql_query_test 997
test/boost/mutation_reader_test 1048
test/boost/sstable_mutation_test 1417
test/boost/secondary_index_test 1468
Splitting the spike (top-5) is the primary goal. However, the
distribution of test-cases in 3 of those tests is also _very_
non-uniform, so just cutting them into equal parts doesn't work.
For example, test_index_with_paging from the slowest one
takes ~14 minutes on its own and is the slowest test-case out
there.
So this set does the following:
- moves the champion test_index_with_paging into a separate file
- detaches the heaviest parts of sstable_mutation_test and
mutation_reader_test into their own tests too. The resulting split
is still non-uniform, but each of the 4 resulting tests runs notably
less than the 14-minute record
- splits cql_query_test and view_schema_test into several
parts in a wildcard manner to get under the 14 min threshold
- moves some shared code into lib/
As a result, the debug mode test run takes 14.5 minutes =)
which is almost 2 times faster than it was. The dev mode run
time is not noticeably affected.
Test: well, unit(debug) and unit(dev)
"
* 'br-split-unit-tests-3-next' of https://github.com/xemul/scylla:
test: Split view_schema_test
test: Split cql_query_test
test: Split mutation_reader_test
test: Split sstable_mutation_test
test: Split secondary_index test
Consider
1. Start n1, n2 in the cluster
2. Stop n2 and delete all data for n2
3. Start n2 to replace itself with replace_address_first_boot: n2
4. Kill n2 before n2 finishes the replace operation
5. Remove replace_address_first_boot: n2 from scylla.yaml of n2
6. Delete all data for n2
7. Start n2
At step 7, n2 will be allowed to bootstrap as a new node, because the
application state of n2 in the cluster is HIBERNATE, which is not
rejected by the is_safe_for_bootstrap check. As a result, n2 will
replace n2 with different tokens and a different host_id, as if the
old n2 node had been silently removed from the cluster.
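A rough sketch of the intended check (status names are illustrative,
not the actual gossip application-state handling code):

    enum class node_status { NORMAL, HIBERNATE, UNKNOWN };

    bool is_safe_for_bootstrap(node_status s) {
        switch (s) {
        case node_status::NORMAL:
        case node_status::HIBERNATE:   // now also rejected: a node mid-replace
            return false;              // must not bootstrap as a new node
        default:
            return true;
        }
    }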
Fixes #5172
Check that the SELECT statement has a partition key restriction before
accessing it when determining the shard to execute the query on.
Essentially move the check for a properly restricted partition key
from storage_proxy.cc to select_statement.cc, now that we access
it earlier in the call stack.
Keep the check in storage_proxy.cc since storage_proxy::query()
has other call sites (views), which today should never use
serial consistency for its queries, but this can change in the future.
Please note that Cassandra only partially enforces SERIAL consistency
and can silently downgrade SERIAL consistency to the default
non-serial one when doing unbounded SELECTs
(https://issues.apache.org/jira/browse/CASSANDRA-15641).
Fixes #6016
Detach the *partition_key* and *clustering_key* cases into their own files.
The resulting 2 tests run ~4 minutes each, the leftover ones
complete within 11 minutes. As before, the goal of getting under the
14-minute mark is reached; further splitting needs more thinking
than just wildcarding.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This detaches the *like_operator*, *group_by*, *functions*
and *large* cases into their own files. The split is not
uniform -- the resulting 4 tests run less than 3 minutes
each, while what's left in the original runs ~11 minutes. But
since the goal was to get under the 14-minute threshold
and this file contains 126 cases (the champion), I
just did a "wildcard" selection that worked.
It also required moving the require_rows() helpers into a
local header.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Detach test_multishard_combining_reader_as_mutation_source into
an individual file.
This particular test runs ~13 minutes. What's left in the original
completes a bit faster.
The split also requires moving the reader_lifecycle_policy and
the dummy_partitioner into lib/
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Detach test_schema_changes and test_sstable_conforms_to_mutation_source
into individual files. These two take ~10 minutes each, while what's left
in the original finishes within 4 minutes altogether.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Detach test_index_with_paging into an individual file.
This particular test-case is the longest one in the suite;
it takes ~14 minutes to run, so further splitting of this
test is pointless (for now), and all subsequent splits in
this set just bring the resulting times below this mark.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The call to merge_schema_from is in some cases run in the
background and thus is not aborted/waited for on shutdown. This
may result in a use-after-free, one example being:
merge_schema_from
-> read_schema_for_keyspace
-> db::system_keyspace::query
-> storage_proxy::query
-> query_partition_key_range_concurrent
In the latter function the proxy._token_metadata is accessed,
while the respective object may already be freed (unlike the
storage_proxy itself, which is still leaked on shutdown).
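A generic sketch of the usual seastar remedy - tracking background work
in a gate so shutdown waits for it (illustrative only; the actual fix
here may differ):

    #include <seastar/core/gate.hh>
    #include <seastar/core/future.hh>

    seastar::future<> merge_schema_from();   // hypothetical stand-in

    seastar::gate _background_work;

    void trigger_merge_in_background() {
        // The returned future is intentionally discarded, but the gate
        // tracks it, so stop() can wait for the merge to complete.
        (void)seastar::with_gate(_background_work, [] {
            return merge_schema_from();
        });
    }

    seastar::future<> stop() {
        return _background_work.close();
    }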
Related bug: #5903, #5999 (cannot reproduce though)
Tests: unit(dev), manual start-stop
dtest(consistency.TestConsistency, dev)
dtest(schema_management, dev)
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Reviewed-by: Pekka Enberg <penberg@scylladb.com>
Message-Id: <20200316150348.31118-1-xemul@scylladb.com>
"
Allow adding compaction to any reader pipeline. The intended users are
streaming and repair, with the goal of not wasting transfer
bandwidth on data that is purgeable.
No current user in the tree.
Tests: unit(dev), mutation_reader_test.compacting_reader_*(debug)
"
* 'compacting-reader/v3' of https://github.com/denesb/scylla:
test: boost/mutation_reader_test: add unit test for compacting_reader
test: lib/flat_mutation_reader_assertions: be more lenient about empty mutations
test: lib/mutation_source_test: make data compaction friendly
test: random_mutation_generator: add generate_uncompactable mode
mutation_reader: introduce compacting_reader
When expecting a mutation that compacts to an empty one, allow it not
to be produced at all. After all, compaction normally doesn't even emit
empty partitions.
Currently the mutation source test suite may generate data that is
compactable. This poses a problem for the next patch, where we want to
use it to test `compacting_reader`, a reader which compacts data as it
reads it. When the input is compactable, this will introduce artificial
differences, failing the tests.
To allow testing such readers too, make sure the data is not compactable,
i.e. compacting it will not change it.
The goal of the mutation source test suite is not to exercise compaction
logic, so this will not take anything away from its value.
The random mutation generator currently generates data and tombstones
with random timestamps selected from a pre-determined range. This
results in mutations where tombstones often cover each other and data.
There is nothing wrong with this, as this is how real data looks too.
However, for certain tests this is problematic, as compacting the
mutations will result in different mutations. To cater for these users
too, introduce a `generate_uncompactable` option. When set to `yes`, the
generated mutations will be uncompactable, i.e. no tombstone will cover
lower-level tombstones and no tombstone will cover data. The mutations
will not change after being compacted.
The compacting reader compacts the output of another reader on the fly.
It performs compaction-type compaction (`compact_for_sstables::yes`).
It will be used in streaming and repair to eliminate purgeable data from
the stream, thus preventing wasted transfer bandwidth.
Merged pull request https://github.com/scylladb/scylla/pull/5996 from
Calle Wilund:
Fixes #4992
Implements post-image support by synthesizing it from
pre-image + delta.
Post-image data differs from the delta data in two ways:
1.) It merges non-atomics into an actual result value
2.) It contains all columns of the row, not just
those affected by the update.
For a non-atomic field, the post-image value of a column
is either the pre-image or the delta (maybe null)
Tested by adding post-image checks to pre-image test
and collection/udt tests
Fixes #4992
Implements post-image support by synthesizing it from
pre-image + delta.
Post-image data differs from the delta data in two ways:
1.) It merges non-atomics into an actual result value
2.) It contains _all_ columns of the row, not just
those affected by the update.
For a non-atomic field, the post-image value of a column
is either the pre-image or the delta (maybe null)
Tested by adding post-image checks to pre-image test
and collection/udt tests
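A minimal illustration (hypothetical types, not the actual CDC code) of
the per-column synthesis: a column touched by the update takes its value
from the delta (possibly null), while an untouched column carries over
its pre-image value, so the post-image covers every column of the row:

    #include <optional>
    #include <string>

    using cell = std::optional<std::string>;   // stand-in for a CDC cell value

    cell postimage_cell(const cell& preimage, const cell& delta, bool column_changed) {
        // The post-image row contains all columns, so a value is produced
        // for untouched columns as well.
        return column_changed ? delta : preimage;
    }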
"
This PR makes it possible to use a different partitioner for each table. If no table-specific partitioner is set for a given table, the default partitioner is used.
The PR is composed of the following parts:
- Introduction of schema::get_partitioner that still returns dht::global_partitioner
- Replacement of all the usage of dht::global_partitioner with schema::get_partitioner
- Making it possible to set table-specific partitioner in a schema_builder
- Remove all the places that were setting default partitioner except for main.cc (mostly tests)
- Move default partitioner from i_partitioner to schema.cc and hide it from the rest of the codebase
- Remove dht::global_partitioner
After this PR there's no such thing as a global partitioner at all. There is only a default partitioner, but it still has to be accessed through schema::get_partitioner.
There are some intermediate states in which the i_partitioner is stored as a shared_ptr in the schema, but the final version keeps it by const&.
The PR does not enable per-table partitioners end-to-end. Only the internals of a single node are covered. I still have to deal with:
- Making sure a table has the same partitioner on each node
- Allowing the user to set up a table-specific partitioner on a table
- Signaling the driver about what partitioner is used by a given table
- Persisting partitioner info for each table that does not use the default partitioner.
Fixes #5493
Tests: unit(dev, release, debug), dtest(byo)
"
* 'per_table_partitioner' of https://github.com/haaawk/scylla:
schema: drop optional from _partitioner field
make_multishard_combining_reader: stop taking partitioner
split_range_to_single_shard: stop taking partitioner as argument
tests: remove unused murmur3 includes
partitioner: move default_partitioner to schema.cc
partitioner: hide dht::default_partitioner
schema: include partitioner name in scylla tables mutation
schema: make it possible to set custom partitioner
scylla_tables: add partitioner column
schema_features: add PER_TABLE_PARTITIONERS feature
features: add PER_TABLE_PARTITIONERS feature
* seastar 47d929dd1...3c498abca (5):
> reactor: Use do_with to save stack space
> reactor: Extract code into a schedule_retry helper
> reactor: Move an io_event buffer out of the stack
> temporary_buffer: fix typo in argument type in comparison operators
> tests: tls_test: add missing include <iostream>
This adds a warning with a different limit in each mode. The limit is
picked as 1KiB lower than the value at which no warning would be printed.
This makes it easy to spot the worst offender. With that we can either
fix it or silence the warning once we are sure we can handle large
frames in that context.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200311205300.324383-1-espindola@scylladb.com>
The bug is that we failed to implement this part of the formula:
(T - C) * log4(T)
We were incorrectly implementing it as:
(T - C) * log4(T - C)
So it could result in a backlog being calculated as negative when it
should actually be positive, or backlog being lower than expected.
BTW, we do protect against negative backlog after commit 3e08bd17f0.
Given that STCS backlog tracker is inherited by TWCS and LCS trackers,
all compaction strategies are affected.
The formula to calculate the aggregate backlog is:
A = (T - C) * log4(T) - Sum(i = 0...N) { (Si - Ci)* log4(Si) }.
For example, negative backlog is calculated on a tested scenario where T
was 3129, C was 2337 and Sum(i = 0...N) { (Si - Ci)* log4(Si) } resulted
in 4222.53.
(T - C) * log4(T - C) = (3129 - 2337) * log4(3129 - 2337) = 3813.23
So backlog is negative because A = 3813.23 - 4222.53 = -409.302
But it should actually be calculated as follows:
(T - C) * log4(T) = (3129 - 2337) * log4(3129) = 4598.15
And the correct backlog is positive, as A = 4598.15 - 4222.53 = 375.62
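The same arithmetic as a tiny self-contained check (numbers taken from
the example above):

    #include <cmath>
    #include <cstdio>

    static double log4(double x) { return std::log(x) / std::log(4.0); }

    int main() {
        double T = 3129, C = 2337, sum = 4222.53;
        double wrong   = (T - C) * log4(T - C) - sum;   // ~ -409.3 (negative backlog)
        double correct = (T - C) * log4(T)     - sum;   // ~  375.6 (positive backlog)
        std::printf("wrong: %.1f  correct: %.1f\n", wrong, correct);
    }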
Fixes #6021.
tests: unit(dev)
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200315153711.23302-1-raphaelsc@scylladb.com>
When the user performed
    alter ks.t with compaction = {...}
the values of most other options that were not specified in the
statement, e.g. compression, were left unchanged. That wasn't true for
extension options, however: for example, the "cdc" option was removed.
This commit fixes the behavior to keep the old values of extension
options not specified in the alter statement.
The function already takes a schema, so there's no need
for it to also take a partitioner. It can be obtained using
schema::get_partitioner.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Remove the last usage of this global outside i_partitioner.cc
and hide it inside the compilation unit.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>