This patch makes major compaction aware of time buckets
for TWCS. That means that calling a major compaction with TWCS
will not bundle all SSTables together, but rather split them
based on their timestamps.
There are two motivations for this work:
1. Telling users to never major compact is easier said than
done: in practice, due to a variety of circumstances, it might
end up being done anyway, in which case data will have a hard
time expiring later.
2. We are about to start working with offstrategy compactions,
which are compactions that work in parallel with the main
compactions. In those cases we may be converting SSTables from
one format to another and it might be necessary to split a single
big STCS SSTable into something that TWCS expects.
With the motivation out of the way, let's talk about the implementation:
The implementation is quite simple and builds upon the previous patches.
It simply specializes the interposer implementation for regular compaction
with a table-specific interposer.
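As an illustration of the direction (the names below are hypothetical, not
the actual Scylla classes), the interposer can be thought of as a consumer
that buckets each partition by its TWCS window and forwards it to a
per-window writer:

    #include <chrono>
    #include <cstdint>
    #include <map>
    #include <vector>

    using timestamp_type = int64_t; // microseconds since the epoch

    // Lower bound of the TWCS window a timestamp falls into (floor to window size).
    timestamp_type window_for(timestamp_type ts, std::chrono::seconds window_size) {
        auto size = std::chrono::duration_cast<std::chrono::microseconds>(window_size).count();
        return (ts / size) * size;
    }

    struct partition { timestamp_type max_timestamp; /* mutation data elided */ };
    struct writer { std::vector<partition> out; void consume(partition p) { out.push_back(std::move(p)); } };

    // One writer per time window: a major compaction driven through this
    // consumer produces one output per bucket instead of a single SSTable.
    struct window_splitting_interposer {
        std::chrono::seconds window_size;
        std::map<timestamp_type, writer> writers;
        void consume(partition p) {
            writers[window_for(p.max_timestamp, window_size)].consume(std::move(p));
        }
    };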
Fixes #1431
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Our resharding code is complex, since the compaction object has to keep
track of many output SSTables and of the current shard being written.
When implementing TWCS streaming writers, we avoided such
write-side complexity by implementing an interposer: the interposer
consumes the flat_mutation_reader stream, creating many different writer
streams. We can do a similar thing for resharding SSTables and have each
writer be guaranteed to contain keys for only a specific source shard.
As we do that, we can move the SSTable and sstable_writer information
to the compacting_sstable_writer object. The compaction object will no
longer be responsible for it and can be simplified, paving the way for
TWCS-major, which will go through an interposer as well.
Note that the compaction_writer, which now holds both the SSTable
pointer and the sstable_writer, still needs to be optional. This is
because LCS (and potentially others) still want to create more than
one SSTable per source stream. That is done to guarantee that each
SSTable complies with the max_sstable_size parameter, which is
information available in the sstable_writer but not present at
the level of the flat_mutation_reader. We want to keep it on the
writer side.
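Roughly, and with placeholder types standing in for the real Scylla
classes, the ownership after this patch looks like this:

    #include <memory>
    #include <optional>

    struct sstable {};          // output table being produced
    struct sstable_writer {};   // low-level writer for that table

    // The per-output state now lives on the writer side of the stream.
    struct compaction_writer {
        std::shared_ptr<sstable> sst;
        std::unique_ptr<sstable_writer> writer;
    };

    struct compacting_sstable_writer {
        // optional: strategies such as LCS close one output and open another
        // mid-stream once max_sstable_size is reached, so between outputs
        // there is nothing to hold.
        std::optional<compaction_writer> _current;
    };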
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This is similar to the timestamp based splitting writer, except
that it splits data based on the shard where the partition key
is supposed to be placed.
It is similar to the multishard_writer, in the sense that it
creates n streams for n shards, but it does not process the
streams on their owner shards. We want to use it in processes
like resharding, where it is fine for a foreign shard to deal
with a mutation.
One option would be to augment the multishard_writer to optionally
achieve these properties, but having a separate splitter is both
simpler and faster.
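A minimal sketch of the idea, using a hypothetical type name and a
stand-in token-to-shard mapping (the real code consumes a
flat_mutation_reader stream and uses the table's sharding logic):

    #include <cstdint>
    #include <functional>
    #include <utility>
    #include <vector>

    struct partition { uint64_t token; /* mutation data elided */ };

    struct shard_splitting_writer {
        unsigned shard_count;
        // one output stream per destination shard, all driven from the local
        // shard; nothing is shipped to the owning shards, unlike multishard_writer
        std::vector<std::function<void(partition)>> sinks;

        unsigned shard_of(uint64_t token) const {
            return token % shard_count; // stand-in for the real token-to-shard mapping
        }
        void consume(partition p) {
            sinks[shard_of(p.token)](std::move(p));
        }
    };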
Signed-off-by: Glauber Costa <glauber@scylladb.com>
I am about to introduce a new splitter. Therefore, move the parts
that are common to both into their own file.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
I am about to get rid of the _shard attribute in the compaction object,
as I will create different streams of writers for different shards.
In preparation for that, remove the arbitrary _shard reference. Raphael
confirms that resharding should never be calling this, as this method is
used exclusively by the garbage collection component of run-based
compaction. Therefore we'll just throw in this case and remove the shard
reference.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The shard parameter is ignored for SSTable creation on regular
compaction. It is still good practice and good future-proofing
to pass something meaningful here instead of zero. This patch
passes the id of the current shard.
Thanks Botond for pointing that out.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20200402122212.12218-1-glauber@scylladb.com>
"
This patch series is part of my effort to make resharding less special -
and hopefully less problematic. The next steps are a bit heavy, so I'd
like to, if possible, get this out of the way.
After these two patches, there is no more need to ever call
reshard_sstables: compact_sstables will do, and it will be able to
recognize resharding compactions.
To do that we need to unify the creator function, which is trivially
done by adding a shard parameter to regular compactions as well: they
can just ignore it. I have considered just making the
compaction_descriptor have a virtual create() function and specializing
it, but because we have to store the creator in the compaction object I
decided to keep the virtual function for now.
In a later cleanup step, if we can for instance store the entire
compaction_descriptor object in the compaction object we could do that.
Reviewed-by: Benny Halevy <bhalevy@scylladb.com>
Reviewed-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Reviewed-by: Botond Dénes <bdenes@scylladb.com>
Tests: unit tests (dev), dtest (resharding.py)
"
* 'resharding-through-compact-sstables' of github.com:glommer/scylla:
resharding: get rid of special reshard_sstables
compaction: enhance compaction_descriptor with creator and replace function
This reverts commit fdd2d9de3d because it
breaks one heat-weighted load balancing dtest:
FAIL: heat_weighted_load_balancing_cl_QUORUM_test (heat_weighted_load_balancing_test.HeatWeightedLB)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/penberg/src/scylla/scylla-dtest/heat_weighted_load_balancing_test.py", line 182, in heat_weighted_load_balancing_cl_QUORUM_test
self.run_heat_weighted_load_balancing('QUORUM')
File "/home/penberg/src/scylla/scylla-dtest/heat_weighted_load_balancing_test.py", line 165, in run_heat_weighted_load_balancing
self.verify_metrics(metrics, cached=False)
File "/home/penberg/src/scylla/scylla-dtest/heat_weighted_load_balancing_test.py", line 73, in verify_metrics
mean_avg, node_mean_avg, key))
AssertionError: 19.0 not found in range(3, 13) : Cache difference between nodes is less then expected: 6469.6/328.2, metric scylla_storage_proxy_coordinator_reads_local_node
I am reverting because it's a test issue, and we should bring this
commit back once the test is fixed.
Gleb Natapov explains:
"dtest result directly depends on replicas we contact. Glauber's patch
make us contacts less replicas, so numbers differ."
The "run" script for the Alternator tests needs to set a system table for
authentication credentials, so we can test this feature.
So far we did this with cqlsh, but cqlsh isn't always installed on build
machines. However, install-dependencies.sh already installs the Cassandra
driver for Python, so it makes more sense to use that; this patch switches
to it.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200331131522.28056-1-nyh@scylladb.com>
To run Alternator tests, only two additional dependencies need to be added to
install-dependencies.sh: pytest and python3-boto3. We also need
python3-cassandra-driver, but this dependency is already listed.
This patch only updates the dependencies for Fedora, which is what we need
for dbuild and our Jenkins setups.
Tested by building a new dbuild docker image and verifying that the
Alternator tests pass.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
[avi: update toolchain image; note this upgrades gcc to 9.3.1]
Message-Id: <20200330181128.18582-1-nyh@scylladb.com>
In order to be extra sure that we always generate proper
base partition/clustering keys from paging info when executing
an indexed query, additional checks are added - if any of them
triggers, an exception will be thrown.
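As a sketch of the shape such a defensive check can take (the names here
are illustrative, not the actual code):

    #include <optional>
    #include <stdexcept>

    struct partition_key {};

    // When rebuilding the base key from an indexed query's paging state,
    // throw instead of silently continuing with a missing key.
    partition_key base_key_from_paging_state(const std::optional<partition_key>& pk) {
        if (!pk) {
            throw std::runtime_error("paging state is missing the base partition key for an indexed query");
        }
        return *pk;
    }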
Created in order to help debug an existing issue:
Refs #5856
Tests: unit(dev)
Due to c&p error cas_now_pruning counter is increased instead of
decreased after an operation completes. Fix it.
Fixes #6116
Message-Id: <20200401142859.GA16953@scylladb.com>
Always enable lightweight transactions. Remove the check for the command
line switch from the feature service, assuming LWT is always enabled.
Remove the check for LWT from Alternator.
Note that in order for the cluster to work with LWT, all nodes need
to support it.
Rename LWT to UNUSED in db/config.hh, to keep accepting the lwt keyword in
the --experimental-features command line option, but do nothing with it.
Changes in v2:
* remove enable_lwt feature flag, it's always there
Closes#6102
test: unit (dev, debug)
Message-Id: <20200401071149.41921-1-kostja@scylladb.com>
During bootstrap and replace operations the node can't take reads, and
we'd like to see the process end ASAP. This is because until the
process ends, we keep having to duplicate writes to an extended set. Not
to mention, in the case of a cluster expansion, users want to use the
added capacity sooner rather than later.
Streaming generates a lot of compaction activity that competes with the
bootstrap itself, slowing it down.
Long term, we are moving to treat those compactions differently and
maybe postpone them altogether. However, for now we can reduce the number
of compactions by increasing the minimum threshold of SSTables that have
to accumulate before they are selected for compaction. The default is
2, meaning we will trigger a compaction every time 2 SSTables of about
the same size are found (for STCS; other strategies follow a similar pattern).
Until we have the offstrategy infrastructure, we don't want compactions
to stop happening altogether, so that reads, when they start, don't
suffer. This patch sets the minimum threshold to 16 (for the default
max_threshold of 32), meaning we will generate a lot less compaction
activity during streaming. Once streaming is done we revert it to its
original value.
Unfortunately there isn't much we can do at the moment about decommission.
During decommission the nodes receiving data are also taking reads and
we don't want SSTables to accumulate.
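The mechanism can be pictured with a small sketch; the names are
illustrative and not the actual Scylla configuration API:

    #include <algorithm>

    struct compaction_thresholds {
        unsigned min_threshold = 2;   // SSTables that must accumulate before compacting
        unsigned max_threshold = 32;
    };

    // Raise the minimum threshold while streaming runs, then restore it.
    template <typename Func>
    void stream_with_relaxed_compaction(compaction_thresholds& t, Func&& do_streaming) {
        auto saved = t.min_threshold;
        t.min_threshold = std::max(saved, t.max_threshold / 2); // 16 for the default 32
        do_streaming();
        t.min_threshold = saved;  // revert to the original once streaming is done
    }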
Fixes#5109
Signed-off-by: Glauber Costa <glauber@scylladb.com>
dc_local_read_repair_chance is a legacy of old times: Cassandra itself
now defaults to zero, and we should look into that too.
Most serious production clusters are either repaired through our
asynchronous repair, or don't need repair at all.
Synchronous read repair can help things converge, but it implies an
impact at query time. For clusters that are on an asynchronous repair
schedule this should not be needed.
Fixes#6109
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20200331183418.21452-1-glauber@scylladb.com>
There is a method, reshard_sstables(), whose sole purpose is to call a
resharding compaction. There is nothing special about this method: all
the information it needs is now present in the compaction_descriptor.
This patch extends the compaction_options class to recognize resharding
compactions as well, and uses that so that make_compaction() can also
create resharding compactions.
To make that happen we have to create a compaction_descriptor object in
the resharding method. Note, however, that resharding works by passing
around an object very close to the compaction_descriptor. Once this patch
is merged, a logical next step is to reuse it, and avoid creating the
descriptor right before calling compact_sstables().
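Schematically, and with simplified names (the real compaction_options
carries more kinds and parameters), the options type just gains a
resharding alternative for make_compaction() to dispatch on:

    #include <utility>
    #include <variant>

    struct regular_compaction {};
    struct cleanup_compaction {};
    struct reshard_compaction {};

    class compaction_options {
        using options_variant = std::variant<regular_compaction, cleanup_compaction, reshard_compaction>;
        options_variant _options;
        explicit compaction_options(options_variant o) : _options(std::move(o)) {}
    public:
        static compaction_options make_regular() { return compaction_options(regular_compaction{}); }
        static compaction_options make_reshard() { return compaction_options(reshard_compaction{}); }
        // make_compaction() would visit _options to pick the right implementation
    };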
Signed-off-by: Glauber Costa <glauber@scylladb.com>
There are many differences between resharding and compaction that are
artificial, arising more from the way we ended up implementing it than
from necessity. This patch attempts to pass the creator and replacer functions
through the compaction_descriptor.
There is a difference between the creator function for resharding and
regular compaction: resharding has to pass the shard number on behalf
of which the SSTable is created. However, regular compactions can just
ignore this, so there is no need to have a special path just for it.
After this is done, the constructor for the compaction object can be
greatly simplified. In further patches I intend to simplify it a bit
further, but some more cleanup has to happen first.
To make that happen we have to construct a compaction_descriptor object
inside the resharding function. This is temporary: resharding currently
works with a descriptor, but at some point that descriptor is lost and
broken into pieces to be passed to this function. The overarching goal
of this work is exactly to be able to keep that descriptor for as long
as possible, which should simplify things a lot.
Callers are patched, but there are plenty in sstable_datafile_test.cc.
For their benefit, a helper function is provided to keep the previous
signature (test only).
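A simplified sketch of what the descriptor ends up carrying (placeholder
types; the actual signatures differ):

    #include <functional>
    #include <memory>
    #include <vector>

    using shard_id = unsigned;
    struct sstable {};
    using sstable_list = std::vector<std::shared_ptr<sstable>>;

    struct compaction_descriptor {
        sstable_list sstables;  // input set selected for this compaction
        // creator takes the shard on whose behalf the output is created;
        // regular compaction passes the current shard and ignores it.
        std::function<std::shared_ptr<sstable>(shard_id)> creator;
        // replacer swaps the inputs for the outputs in the table's SSTable set.
        std::function<void(sstable_list old_ssts, sstable_list new_ssts)> replacer;
    };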
Signed-off-by: Glauber Costa <glauber@scylladb.com>
"
Currently, both the sharding and the partitioning logic are encapsulated in the partitioners. This is not desirable because these two concepts are totally independent and shouldn't be coupled together in such a way.
This PR separates sharding and partitioning. Partitioning will still live in the i_partitioner class and its subclasses. Sharding is extracted to a new class called sharding_info. Both partitioners and sharding_info are still managed by the schema class. The partitioner can be accessed with schema::get_partitioner, while sharding_info can be accessed with schema::get_sharding_info.
The transition is done in steps:
1. The sharding_info class is defined and all the sharding logic is extracted from the partitioner to the new class. Temporarily, sharding_info is still embedded into i_partitioner and all sharding-related functions in i_partitioner delegate to the embedded sharding_info object.
2. All calls to i_partitioner functions that are related to sharding are gradually switched to calls to their sharding_info equivalents.
3. Once everything uses sharding_info, all sharding logic is dropped from i_partitioner.
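The end state can be sketched roughly as follows; the classes are
simplified and the shard mapping shown is only a stand-in for the real one:

    #include <cstdint>

    using token = uint64_t;

    class i_partitioner {                 // partitioning concerns only
    public:
        virtual token get_token(/* const partition_key& */) const = 0;
        virtual ~i_partitioner() = default;
    };

    class sharding_info {                 // sharding concerns only
        unsigned _shard_count;
        unsigned _sharding_ignore_msb;
    public:
        sharding_info(unsigned shard_count, unsigned ignore_msb)
            : _shard_count(shard_count), _sharding_ignore_msb(ignore_msb) {}
        unsigned shard_count() const { return _shard_count; }
        unsigned shard_of(token t) const {
            return (t << _sharding_ignore_msb) % _shard_count; // stand-in mapping
        }
    };

    // schema exposes both independently:
    //   const i_partitioner& schema::get_partitioner() const;
    //   const sharding_info& schema::get_sharding_info() const;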
Tests: unit(dev, release)
"
* haaawk-sharding_info: (32 commits)
dummy_sharder: rename dummy_sharding_info.* to dummy_sharder.*
sharding_info: rename the class to sharder
i_partitioner:remove embeded sharding_info
i_partitioner: remove unused get_sharding_info
schema: remove incorrect comment
schema: make it possible to set sharding_info per schema
i_partitioner: remove unused shard_count
multishard_writer: stop calling i_partitioner::shard_count
i_partitioner: remove sharding_ignore_msb
partitioner_test: test ranges and sharding_infos
i_partitioner: remove unused split_ranges_to_shards
i_partitioner: remove unused shard_of function
sstable-utils: use sharding_info::shard_of
create_token_range_from_keys: use sharding info for shard_of
multishard_mutation_query_test: use sharding info for shard_of
distribute_reader_and_consume_on_shards: use sharding_info::shard_of
multishard_mutation_query: use sharding_info::shard_of
dht::shard_of: use schema::get_sharding_info
i_partitioner: remove unused token_for_next_shard
split_range_to_single_shard: use sharding info instead of partitioner
...
Unfortunately, the boto3 library doesn't allow us to check some of the
input error cases because it unnecessarily tests its input instead of
just passing it to Alternator and allowing Alternator to report the error.
In this patch we comment out a test case which used to work fine - i.e.,
the error was reported by Alternator - until recent changes to boto3
made it catch the problem without passing it to Alternator :-(
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200330190521.19526-2-nyh@scylladb.com>
One of the Alternator tests in test_tag.py checks the feature of creating
a table with a set of tags (as opposed to adding tags to an existing table).
This is a relatively new DynamoDB feature, only added in April 2019, so if
the botocore library is too old, it cannot test this feature, and we have to
skip the test.
Alternator developers should make an effort to keep the botocore library
up-to-date and test the latest DynamoDB features, but it is less important
if some test environments (like Jenkins) cannot verify this specific test
until its distro gets updated - it is more important that the vast majority
of the tests, which do not rely on very new features, get tested.
After this patch, if running on Fedora 30 with
python3-botocore-1.12.101-2.fc30.noarch installed, we get the following
skip message:
$ pytest-3 -rs test_tag.py
...
test_tag.py ..s..x [100%]
=================================================== short test summary info ===================================================
SKIP [1] /home/nyh/scylla/test/alternator/test_tag.py:114: Botocore version 1.12.136 or above required to run this test
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200330190521.19526-1-nyh@scylladb.com>
The learning stage of the PAXOS protocol leaves behind an entry in
the system.paxos table with the last learned value (which can be large). In
case not all participants learned it successfully, the next round on the
same key may complete the learning using this info. But if all nodes learned
the value, the entry no longer serves a useful purpose.
The patch adds another round, "prune", which is executed in the background
(limited to 1000 simultaneous instances) and removes the entry in
case all nodes replied successfully to the "learn" round. It uses the
ballot's timestamp to do the deletion, so as not to interfere with the
next round. Since deletion happens very close to the previous writes, it
will likely happen in the memtable and never reach an sstable, which reduces
memtable flush and compaction overhead.
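The control flow can be sketched as follows (illustrative only; the real
code uses Seastar futures and the storage proxy rather than plain threads):

    #include <atomic>
    #include <cstdint>
    #include <thread>

    constexpr int max_concurrent_prunes = 1000;
    std::atomic<int> prunes_in_flight{0};

    // Stub: delete the system.paxos entry for the key, using the ballot's
    // timestamp so the tombstone cannot clobber a newer round on the same key.
    void delete_paxos_entry(/* key, */ int64_t ballot_timestamp) { (void)ballot_timestamp; }

    void maybe_prune(int64_t ballot_timestamp, bool all_replicas_learned) {
        if (!all_replicas_learned) {
            return;  // keep the entry: a later round may still need it to finish learning
        }
        if (prunes_in_flight.fetch_add(1) >= max_concurrent_prunes) {
            prunes_in_flight.fetch_sub(1);
            return;  // cap the number of simultaneous background prunes
        }
        std::thread([ballot_timestamp] {
            delete_paxos_entry(ballot_timestamp);
            prunes_in_flight.fetch_sub(1);
        }).detach();
    }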
Fixes#5779
Message-Id: <20200330154853.GA31074@scylladb.com>
Previously schema::get_sharding_info was obtaining
sharding_info from the partitioner, but we want to remove
sharding_info from the partitioner, so we need a place
in the schema to store it instead.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Previous patches have switched all the calls to
i_partitioner::shard_count to sharding_info::shard_count
and this function can now be removed.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Every place that previously called this method now uses
sharding_info::sharding_ignore_msb, so this function
can be removed.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Turn test_something_with_some_interesting_ranges_and_partitioners
into test_something_with_some_interesting_ranges_and_sharding_info.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Previous patches switched all the places that called
i_partitioner::shard_of to use sharding_info::shard_of
so i_partitioner::shard_of is no longer used and can
be removed.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Create sharding_info with the same parameters as
the partitioner and use it instead of the partitioner.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
This patch replaces all the uses of i_partitioner::shard_of
with sharding_info::shard_of in read_context.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Previous patches have switched all the places that were
using i_partitioner::token_for_next_shard to
sharding_info::token_for_next_shard. Now the function
can be removed from i_partitioner.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
The function relies only on i_partitioner::shard_count
and i_partitioner::token_for_next_shard. Both are really implemented
in sharding_info, so the method can use them directly.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
After previous patches that switched some tests to use sharding_info
instead of i_partitioner, we now don't need with_partitioner_for_tests_only
and the function can be removed.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>