Compare commits

...

87 Commits

Author SHA1 Message Date
Hagit Segev
5fcc1f205c release: prepare for 4.2.rc5 2020-09-30 20:40:44 +03:00
Avi Kivity
08c35c1aad Revert "Revert "config: Do not enable repair based node operations by default""
This reverts commit 71d0d58f8c. Repair-based
node operations still have a significant regression (see #7249).
2020-09-30 14:18:37 +03:00
Tomasz Grabiec
54a913d452 Merge "evictable_reader: validate buffer on reader recreation" from Botond
The reader recreation mechanism is a very delicate and error-prone one,
as proven by the countless bugs it had. Most of these bugs were related
to the recreated reader not continuing the read from the expected
position, inserting out-of-order fragments into the stream.
This patch adds a defense mechanism against such bugs by validating the
start position of the recreated reader.
The intent is to prevent corrupt data from getting into the system as
well as to help catch these bugs as close to the source as possible.
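
A minimal illustration of the validation idea, using a plain integer as a
stand-in for position_in_partition (hypothetical names, not the actual
Scylla code):

    #include <stdexcept>

    // After the underlying reader is recreated, require that its first
    // fragment is at or after the position the evicted reader stopped at;
    // anything earlier would inject out-of-order fragments into the stream.
    void validate_resume_position(int first_fragment_pos, int expected_pos) {
        if (first_fragment_pos < expected_pos) {
            throw std::runtime_error("recreated reader resumed before the expected position");
        }
    }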

Fixes: #7208

Tests: unit(dev), mutation_reader_test:debug (v4)

* botond/evictable-reader-validate-buffer/v5:
  mutation_reader_test: add unit test for evictable reader self-validation
  evictable_reader: validate buffer after recreating the underlying reader
  evictable_reader: update_next_position(): only use peek'd position on partition boundary
  mutation_reader_test: add unit test for evictable reader range tombstone trimming
  evictable_reader: trim range tombstones to the read clustering range
  position_in_partition_view: add position_in_partition_view before_key() overload
  flat_mutation_reader: add buffer() accessor

(cherry picked from commit 97c99ea9f3)
2020-09-30 13:13:09 +02:00
Piotr Dulikowski
8f9cd98c45 hinted handoff: fix race - decommission vs. endpoint mgr init
This patch fixes a race between two methods in hints manager: drain_for
and store_hint.

The first method is called when a node leaves the cluster, and it
'drains' the endpoint hints manager for that node (sends out all hints
for that node). If this method is called when the local node is being
decommissioned or removed, it instead drains the hints managers for all
endpoints.

In the case of decommission/remove, drain_for first calls
parallel_for_each on all current ep managers and tells them to drain
their hints. Then, after all of them complete, _ep_managers.clear() is
called.

Endpoint hints managers are created lazily and inserted into the
_ep_managers map the first time a hint is stored for that node. If
this happens between parallel_for_each and _ep_managers.clear()
described above, the clear operation will destroy the new ep manager
without draining it first. This is a bug and will trigger an assert in
ep manager's destructor.

To solve this, a new flag for the hints manager is added which is set
when it drains all ep managers on removenode/decommission, and prevents
further hints from being written.
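
A simplified, synchronous sketch of the flag-based fix (names are
illustrative; the real manager is asynchronous and per-endpoint):

    #include <map>
    #include <string>
    #include <vector>

    struct hints_manager_sketch {
        std::map<std::string, std::vector<std::string>> ep_managers; // endpoint -> pending hints
        bool draining_all = false; // hypothetical flag set on decommission/removenode

        void store_hint(const std::string& ep, std::string hint) {
            if (draining_all) {
                return; // refuse new hints instead of racing with the clear below
            }
            ep_managers[ep].push_back(std::move(hint)); // may lazily create a manager
        }

        void drain_all() {
            draining_all = true; // set before the first yield point in the real code
            // ... send out all pending hints (parallel_for_each in the real code) ...
            ep_managers.clear(); // safe: no new managers can appear in between
        }
    };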

Fixes #7257

Closes #7278

(cherry picked from commit 39771967bb)
2020-09-29 14:18:48 +03:00
Avi Kivity
2893f6e43b Update seastar submodule
* seastar 0c289412a9...61b88d1da4 (1):
  > lz4_fragmented_compressor: Fix buffer requirements

Fixes #6925.
2020-09-23 11:04:22 +03:00
Avi Kivity
18d6c27b05 Merge 'storage_proxy: add a separate smp_group for hints' from Eliran
Hint writes are handled by storage_proxy in exactly the same way
regular writes are, which in turn means that the same smp service
group is used for both. The problem is that this can lead to a priority
inversion where writes of the lower-priority kind occupy a lot of
the semaphore's units, making the higher-priority writes wait for an
empty slot.
This series adds a separate smp group for hints as well as a field
to pass the correct smp group to mutate_locally functions, and
then uses this field to properly classify the writes.
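
As a rough sketch, creating such a dedicated group with Seastar's API could
look like this (the configuration value is illustrative, not Scylla's):

    #include <seastar/core/smp.hh>

    // A dedicated smp_service_group gives hint writes their own cross-shard
    // request semaphores, so they no longer compete with regular writes.
    seastar::future<seastar::smp_service_group> make_hints_smp_group() {
        seastar::smp_service_group_config cfg;
        cfg.max_nonlocal_requests = 5000; // illustrative limit
        return seastar::create_smp_service_group(cfg);
    }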

Fixes #7177

* eliransin-hint_priority_inversion:
  Storage proxy: use hints smp group in mutate locally
  Storage proxy: add a dedicated smp group for hints

(cherry picked from commit c075539fea)
2020-09-22 14:06:14 +03:00
Pavel Solodovnikov
97d7f6990c storage_proxy: un-hardcode force sync flag for mutate_locally(mutation) overload
The corresponding overload of `storage_proxy::mutate_locally`
was hardcoded to pass `db::commitlog::force_sync::no` to
`database::apply`. Unhardcode it and substitute `force_sync::no`
at all existing call sites (as it was before).

`force_sync::yes` will be used later for paxos learn writes
when trying to apply mutations upgraded from an obsolete
schema version (similar to the current case when applying
locally a `frozen_mutation` stored in accepted proposal).

Tests: unit(dev)

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Message-Id: <20200716124915.464789-1-pa.solodovnikov@scylladb.com>
(cherry picked from commit 5ff5df1afd)

Prerequisite for #7177.
2020-09-22 14:05:39 +03:00
Nadav Har'El
9855e18c0d alternator: fix corruption of PutItem operation in case of contention
This patch fixes a bug noted in issue #7218 - where PutItem operations
sometimes lose part of the item's data - some attributes were lost,
and the name of other attributes replaced by empty strings. The problem
happened when the write-isolation policy was LWT and there was contention
of writes to the same partition (not necessarily the same item).

To use CAS (a.k.a. LWT), Alternator builds an alternator::rmw_operation
object with an apply() function which takes the old contents of the item
(if needed) and a timestamp, and builds a mutation that the CAS should
apply. In the case of the PutItem operation, we wrongly assumed that apply()
would be called only once - so as an optimization, the strings saved in the
put_item_operation were moved into the returned mutation. But this
optimization is wrong - when there is contention, apply() may be called
again when the change proposed by the previous call was not accepted by
the Paxos protocol.

The fix is to change the one place where put_item_operation *moved* strings
out of the saved operations into the mutations, to be a copy. But to prevent
this sort of bug from recurring in future code, this patch enlists the
compiler to help us verify that it can't happen: The apply() function is
marked "const" - it can use the information in the operation to build the
mutation, but it can never modify this information or move things out of it,
so it will be fine to call this function twice.

The single output field that apply() does write (_return_attributes) is
marked "mutable" to allow the const apply() to write to it anyway. Because
apply() might be called twice, it is important that if some apply()
implementation sometimes sets _return_attributes, then it must always
set it (even if to the default, empty, value) on every call to apply().
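
A self-contained sketch of this const/mutable pattern (stand-in names, not
the Alternator code):

    #include <optional>
    #include <string>

    struct put_item_operation_sketch {
        std::string saved_attrs; // must survive repeated apply() calls
        mutable std::optional<std::string> return_attributes; // the one legal output

        std::string apply() const {
            return_attributes = saved_attrs; // set on every call, even on a Paxos retry
            // std::move(saved_attrs) would silently degrade to a copy here (and
            // fail to compile for move-only members) because *this is const, so
            // the saved state can no longer be stolen by a single call.
            return saved_attrs;
        }
    };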

The const apply() means that the compiler verifies for us that I didn't
forget to fix additional wrong std::move()s. Additionally, a test I wrote
to easily reproduce issue #7218 (which I will submit as a dtest later)
passes after this fix.

Fixes #7218.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200916064906.333420-1-nyh@scylladb.com>
(cherry picked from commit 5e8bdf6877)
2020-09-16 19:17:52 +03:00
Avi Kivity
5d5ddd3539 Merge "materialized views: Fix undefined behavior on base table schema changes" from Tomasz
"
The view_info object, which is attached to the schema object of the
view, contains a data structure called
"base_non_pk_columns_in_view_pk". This data structure contains column
ids of the base table so is valid only for a particular version of the
base table schema. This data structure is used by materialized view
code to interpret mutations of the base table, those coming from base
table writes, or reads of the base table done as part of view updates
or view building.

The base table schema version of that data structure must match the
schema version of the mutation fragments, otherwise we hit undefined
behavior. This may include aborts, exceptions, segfaults, or data
corruption (e.g. writes landing in the wrong column in the view).

Before this patch, we could get schema version mismatch here after the
base table was altered. That's because the view schema did not change
when the base table was altered.

Another problem was that view building was using the current table's schema
to interpret the fragments and invoke view building. That's incorrect for two
reasons. First, fragments generated by a reader must be accessed only using
the reader's schema. Second, base_non_pk_columns_in_view_pk of the recorded
view ptrs may no longer match the current base table schema, which is used
to generate the view updates.

Part of the fix is to extract base_non_pk_columns_in_view_pk into a
third entity called base_dependent_view_info, which changes both on
base table schema changes and view schema changes.

It is managed by a shared pointer so that we can take immutable
snapshots of it, just like with schema_ptr. When starting the view
update, the base table schema_ptr and the corresponding
base_dependent_view_info have to match. So we must obtain them
atomically, and base_dependent_view_info cannot change during update.

Also, whenever the base table schema changes, we must update
base_dependent_view_infos of all attached views (atomically) so that
it matches the base table schema.

Fixes #7061.

Tests:

  - unit (dev)
  - [v1] manual (reproduced using scylla binary and cqlsh)
"

* tag 'mv-schema-mismatch-fix-v2' of github.com:tgrabiec/scylla:
  db: view: Refactor view_info::initialize_base_dependent_fields()
  tests: mv: Test dropping columns from base table
  db: view: Fix incorrect schema access during view building after base table schema changes
  schema: Call on_internal_error() when out of range id is passed to column_at()
  db: views: Fix undefined behavior on base table schema changes
  db: views: Introduce has_base_non_pk_columns_in_view_pk()

(cherry picked from commit 3daa49f098)
2020-09-16 16:42:02 +03:00
Benny Halevy
0a72893fef test: cql_query_test: test_cache_bypass: use table stats
The test is currently flaky since system reads can happen
in the background and disturb the global row cache stats.

Use the table's row_cache stats instead.

Fixes #6773

Test: cql_query_test.test_cache_bypass(dev, debug)

Credit-to: Botond Dénes <bdenes@scylladb.com>
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20200811140521.421813-1-bhalevy@scylladb.com>
(cherry picked from commit 6deba1d0b4)
2020-09-16 16:05:53 +03:00
Dejan Mircevski
1e45557d2a cql3: Fix NULL reference in get_column_defs_for_filtering
There was a typo in get_column_defs_for_filtering(): it checked the
wrong pointer before dereferencing.  Add a test exposing the NULL
dereference and fix the typo.

Tests: unit (dev)

Fixes #7198.

Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
(cherry picked from commit 9d02f10c71)
2020-09-16 15:46:58 +03:00
Avi Kivity
96e1e95c1d reconcilable_result_builder: don't aggravate out-of-memory condition during recovery
Consider an unpaged query that consumes all available memory, despite
fea5067dfa which limits them (perhaps the
user raised the limit, or this is a system query). Eventually we will see a
bad_alloc which will abort the query and destroy this reconcilable_result_builder.

During destruction, we first destroy _memory_accounter, and then _result.
Destroying _memory_accounter resumes some continuations which can then
allocate memory synchronously when increasing the task queue to accommodate
them. We will then crash. Had we not crashed, we would immediately afterwards
release _result, freeing all the memory we would ever need.

Fix by making _result the last member, so it is freed first.
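
The fix relies on the C++ rule that members are destroyed in reverse
declaration order; a minimal illustration:

    #include <cstdio>

    struct memory_accounter { ~memory_accounter() { std::puts("accounter released (second)"); } };
    struct result_holder    { ~result_holder()    { std::puts("result freed (first)"); } };

    // Declaring the big allocation (_result) last means it is destroyed
    // first, returning its memory before ~memory_accounter resumes
    // continuations that may allocate.
    struct builder_sketch {
        memory_accounter _memory_accounter;
        result_holder _result; // last member => destroyed first
    };

    int main() { builder_sketch b; }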

Fixes #7240.

(cherry picked from commit 9421cfded4)
2020-09-16 15:40:40 +03:00
Raphael S. Carvalho
338196eab6 storage_service: Fix use-after-free when calculating effective ownership
The use-after-free happens because we take a reference to keyspace_name,
which is stack-allocated and ceases to exist after the next deferring
action.

Fixes #7209.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200909210741.104397-1-raphaelsc@scylladb.com>
(cherry picked from commit 86b9ea6fb2)
2020-09-12 13:58:45 +03:00
Asias He
71cbec966b storage_service: Fix a TOKENS update race for replace operation
In commit 7d86a3b208 (storage_service:
Make replacing node take writes), the TOKENS application state of the
replacing node is added into gossip and propagated to the cluster after
the initial start of the gossip service. This can cause the race below:

1. The replacing node replaces the old dead node with the same IP address
2. The replacing node starts gossip without the TOKENS application state
3. Other nodes in the cluster replace the old dead node's application state
   version with the new replacing node's version
4. The replacing node dies
5. The replace operation is performed again; the TOKENS application state is
   not present and the replace operation fails.

To fix, we can always add TOKENS application state when the
gossip service starts.

Fixes: #7166
Backports: 4.1 and 4.2
(cherry picked from commit 3ba6e3d264)
2020-09-10 13:12:56 +03:00
Avi Kivity
067a065553 Merge "Fix repair stalls in get_sync_boundary and apply_rows_on_master_in_thread" from Asias
"
This patch set fixes stalls in repair that are caused by std::list merge and clear operations during the test_latency_read_with_nemesis test.

Fixes #6940
Fixes #6975
Fixes #6976
"

* 'fix_repair_list_stall_merge_clear_v2' of github.com:asias/scylla:
  repair: Fix stall in apply_rows_on_master_in_thread and apply_rows_on_follower
  repair: Use clear_gently in get_sync_boundary to avoid stall
  utils: Add clear_gently
  repair: Use merge_to_gently to merge two lists
  utils: Add merge_to_gently
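
A coroutine sketch of the gently-clearing idea (illustrative, not Scylla's
utils implementation): destroy elements incrementally, yielding to the
reactor between steps instead of in one long stall-inducing burst.

    #include <seastar/core/coroutine.hh>
    #include <seastar/util/later.hh>
    #include <list>

    template <typename T>
    seastar::future<> clear_gently_sketch(std::list<T>& l) {
        while (!l.empty()) {
            l.pop_front();                   // destroy one element at a time
            co_await seastar::maybe_yield(); // let other reactor tasks run
        }
    }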

(cherry picked from commit 4547949420)
2020-09-10 13:12:53 +03:00
Avi Kivity
e00bdc4f57 repair: apply_rows_on_follower(): remove copy of repair_rows list
We copy a list, which was reported to generate a 15ms stall.

This is easily fixed by moving it instead, which is safe since this is
the last use of the variable.

Fixes #7115.

(cherry picked from commit 6ff12b7f79)
2020-09-10 11:53:05 +03:00
Juliusz Stasiewicz
ad40f9222c cdc: Retry generation fetching after read_failure_exception
While fetching CDC generations, various exceptions can occur. They
are divided into "fatal" and "nonfatal", where "fatal" ones prevent
retrying of the fetch operation.

This patch makes `read_failure_exception` "non-fatal", because such an
error may appear during restart. In general, this type of error can
mean a few different things (e.g. an error code in a response from a
replica, but also a broken connection), so retrying seems reasonable.

Fixes #6804

(cherry picked from commit d1dec3fcd7)
2020-09-09 15:10:50 +03:00
Kamil Braun
5d90fa17d6 cdc: fix deadlock inside check_and_repair_cdc_streams
check_and_repair_cdc_streams, in case it decides to create a new CDC
generation, updates the STATUS application state so that other nodes
gossiped with pick up the generation change.

The node which runs check_and_repair_cdc_streams also learns about the
generation change: the STATUS update triggers a change notification.
This happens during the add_local_application_state call
that performed the STATUS update; the notification leads to calling
handle_cdc_generation, which detects a generation change and calls
add_local_application_state with the new generation's timestamp.

Thus, we get a recursive add_local_application_state call. Unfortunately,
the function takes a lock before doing on_change notifications, so we
get a deadlock.

This commit prevents the deadlock.
We update the local variable which stores the generation timestamp
before updating STATUS, so handle_cdc_generation won't consider
the observed generation to be new, hence it won't perform the recursive
add_local_application_state call.

(cherry picked from commit 42fb4fe37c)
2020-09-09 10:14:18 +03:00
Yaron Kaikov
bf0c493c28 release: prepare for 4.2.rc4 2020-09-07 14:56:32 +03:00
Raphael S. Carvalho
26cb0935f0 sstables/LCS: increase per-level overlapping tolerance in reshape
LCS can have its overlapping invariant broken after operations that can
proceed in parallel to regular compaction, such as cleanup. That's because
two compactions running in parallel could be placing data in overlapping
token ranges of a given level > 0.
On restart, reshape will rewrite the whole table if a given level has
more than (fan_out*2)=20 overlaps.
That may sound like enough, but it doesn't take into account the
exponential growth in the number of SSTables per level: 20 overlaps may
sound like a lot for level 2, which can afford 100 sstables, but it's
only 2% of level 3 and 0.2% of level 4. So let's change the overlapping
tolerance from the constant fan_out*2 to 10% of the level's limit on the
number of SSTables, or fan_out, whichever is higher.
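
The new tolerance, sketched as a formula (assuming the usual LCS per-level
limit of fan_out^N sstables; names are illustrative):

    #include <algorithm>
    #include <cstdint>

    // Tolerate overlaps up to 10% of the level's sstable limit, but never
    // fewer than the fan-out itself.
    uint64_t overlap_tolerance(unsigned level, uint64_t fan_out = 10) {
        uint64_t level_limit = 1;
        for (unsigned i = 0; i < level; ++i) {
            level_limit *= fan_out; // level N holds roughly fan_out^N sstables
        }
        return std::max(level_limit / 10, fan_out);
    }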

Refs #6938.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200810154510.32794-1-raphaelsc@scylladb.com>
(cherry picked from commit 7d7f9e1c54)
2020-09-06 18:28:55 +03:00
Raphael S. Carvalho
4e97d562eb compaction: Prevent non-regular compaction from picking compacting SSTables
After 8014c7124, cleanup can potentially pick a compacting SSTable.
Upgrade and scrub can also pick a compacting SSTable.
The problem is that table::candidates_for_compaction() was badly named.
It misleads the user into thinking that the SSTables returned are perfect
candidates for compaction, but the manager still needs to filter out the
compacting SSTables from the returned set. So it's being renamed.

When the same SSTable is compacted in parallel, strategy invariants can
be broken (like overlap being introduced in LCS), and deletions can fail
because more than one compaction process tries to delete the same files.

Let's fix scrub, cleanup and upgrade by calling the manager function
which gets the correct candidates for compaction.

Fixes #6938.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200811200135.25421-1-raphaelsc@scylladb.com>
(cherry picked from commit 11df96718a)
2020-09-06 18:26:43 +03:00
Takuya ASADA
3f1b932c04 aws: update enhanced networking supported instance list
Sync the enhanced-networking supported instance list with the latest one.

Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html

Fixes #6991

(cherry picked from commit 7cccb018b8)
2020-09-06 18:21:12 +03:00
Avi Kivity
b9498ab947 Update seastar submodule
* seastar 7816796dd1...0c289412a9 (1):
  > TLS: Use "known" (precalculated) DH parameters if available

Fixes #6191.
2020-09-06 17:38:05 +03:00
Avi Kivity
e1b3d6d0a2 Update seastar submodule
* seastar adaabdfbc...7816796dd (1):
  > core/reactor: complete_timers(): restore previous scheduling group

Fixes #7117.
2020-09-03 23:47:22 +03:00
Avi Kivity
67378cda03 Merge "Fix TWCS compaction aggressiveness due to data segregation" from Raphael
"
After data segregation feature, anything that cause out-of-order writes,
like read repair, can result in small updates to past time windows.
This causes compaction to be very aggressive because whenever a past time
window is updated like that, that time window is recompacted into a
single SSTable.
Users expect that once a window is closed, it will no longer be written
to, but that has changed since the introduction of the data segregation
future. We didn't anticipate the write amplification issues that the
feature would cause. To fix this problem, let's perform size-tiered
compaction on the windows that are no longer active and were updated
because data was segregated. The current behavior where the last active
window is merged into one file is kept. But thereafter, that same
window will only be compacted using STCS.

Fixes #6928.
"

* 'fix_twcs_agressiveness_after_data_segregation_v2' of github.com:raphaelsc/scylla:
  compaction/twcs: improve further debug messages
  compaction/twcs: Improve debug log which shows all windows
  test: Check that TWCS properly performs size-tiered compaction on past windows
  compaction/twcs: Make task estimation take into account the size-tiered behavior
  compaction/stcs: Export static function that estimates pending tasks
  compaction/stcs: Make get_buckets() static
  compact/twcs: Perform size-tiered compaction on past time windows
  compaction/twcs: Make strategy easier to extend by removing duplicated knowledge
  compaction/twcs: Make newest_bucket() non-static
  compaction/twcs: Move TWCS implementation into source file

(cherry picked from commit 6f986df458)
2020-09-02 12:53:45 +03:00
Nadav Har'El
6ab3965465 redis: fix another use-after-free crash in "exists" command
Never trust Occam's Razor - it turns out that the use-after-free bug in the
"exists" command was caused by two separate bugs. We fixed one in commit
9636a33993, but there is a second one fixed in
this patch.

The problem fixed here was that a "service_permit" object, which is designed to
be copied around from place to place (it contains a shared pointer, so is cheap
to copy), was saved by reference - and the reference was to a function
argument, which was destroyed prematurely.

This time I tested *many times* that test_strings.py passes on both dev and
debug builds.

Note that test/run/redis still fails in a debug build, but due to a different
problem.

Fixes #6469

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Reviewed-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20200825183313.120331-1-nyh@scylladb.com>
(cherry picked from commit 868194cd17)
2020-08-27 12:16:19 +03:00
Nadav Har'El
ca22461a9b redis: fix use-after-free crash in "exists" command
A missing "&" caused the key stored in a long-living command to be copied
and the copy quickly freed - and then used after freed.
This caused the test test_strings.py::test_exists_multiple_existent_key for
this feature to frequently crash.

Fixes #6469

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200823190141.88816-1-nyh@scylladb.com>
(cherry picked from commit 9636a33993)
2020-08-27 12:16:19 +03:00
Asias He
b3d83ad073 compaction_manager: Avoid stall in perform_cleanup
The following stall was seen during a cleanup operation:

scylla: Reactor stalled for 16262 ms on shard 4.

| std::_MakeUniq<locator::tokens_iterator_impl>::__single_object std::make_unique<locator::tokens_iterator_impl, locator::tokens_iterator_impl&>(locator::tokens_iterator_impl&) at /usr/include/fmt/format.h:1158
|  (inlined by) locator::token_metadata::tokens_iterator::tokens_iterator(locator::token_metadata::tokens_iterator const&) at ./locator/token_metadata.cc:1602
| locator::simple_strategy::calculate_natural_endpoints(dht::token const&, locator::token_metadata&) const at simple_strategy.cc:?
|  (inlined by) locator::simple_strategy::calculate_natural_endpoints(dht::token const&, locator::token_metadata&) const at ./locator/simple_strategy.cc:56
| locator::abstract_replication_strategy::get_ranges(gms::inet_address, locator::token_metadata&) const at /usr/include/fmt/format.h:1158
| locator::abstract_replication_strategy::get_ranges(gms::inet_address) const at /usr/include/fmt/format.h:1158
| service::storage_service::get_ranges_for_endpoint(seastar::basic_sstring<char, unsigned int, 15u, true> const&, gms::inet_address const&) const at /usr/include/fmt/format.h:1158
| service::storage_service::get_local_ranges(seastar::basic_sstring<char, unsigned int, 15u, true> const&) const at /usr/include/fmt/format.h:1158
|  (inlined by) operator() at ./sstables/compaction_manager.cc:691
|  (inlined by) _M_invoke at /usr/include/c++/9/bits/std_function.h:286
| std::function<std::vector<seastar::lw_shared_ptr<sstables::sstable>, std::allocator<seastar::lw_shared_ptr<sstables::sstable> > > (table const&)>::operator()(table const&) const at /usr/include/fmt/format.h:1158
|  (inlined by) compaction_manager::rewrite_sstables(table*, sstables::compaction_options, std::function<std::vector<seastar::lw_shared_ptr<sstables::sstable>, std::allocator<seastar::lw_shared_ptr<sstables::sstable> > > (table const&)>) at ./sstables/compaction_manager.cc:604
| compaction_manager::perform_cleanup(table*) at /usr/include/fmt/format.h:1158

To fix, we futurize the function that gets local ranges and sstables.

In addition, this patch removes the dependency on the global storage_service object.

Fixes #6662

(cherry picked from commit 07e253542d)
2020-08-27 12:16:19 +03:00
Raphael S. Carvalho
7e6f47fbce sstables: optimize procedure that checks if an sstable needs cleanup
needs_cleanup() returns true if an sstable needs cleanup.

Turns out it's very slow because it iterates through all the local
ranges for all sstables in the set, making its complexity:
	O(num_sstables * local_ranges)

We can optimize it by taking into account that abstract_replication_strategy
documents that get_ranges() will return a list of ranges that is sorted
and non-overlapping. Compaction for cleanup already takes advantage of that
when checking if a given partition can be actually purged.

So needs_cleanup() can be optimized into O(num_sstables * log(local_ranges)).

With num_sstables=1000, RF=3, then local_ranges=256(num_tokens)*3, it means
the max # of checks performed will go from 768000 to ~9584.
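
A standard-C++ sketch of the optimization with a plain integer token type:
because the local ranges are sorted and non-overlapping, a single
lower_bound call decides ownership.

    #include <algorithm>
    #include <utility>
    #include <vector>

    using token_range = std::pair<long, long>; // [start, end]; illustrative token type

    // True if the sstable's [first, last] token span lies entirely inside one
    // local range - O(log n) instead of scanning every range per sstable.
    bool fully_owned(const std::vector<token_range>& sorted_local_ranges,
                     long first, long last) {
        auto it = std::lower_bound(sorted_local_ranges.begin(), sorted_local_ranges.end(),
                                   first,
                                   [](const token_range& r, long t) { return r.second < t; });
        return it != sorted_local_ranges.end() && it->first <= first && last <= it->second;
    }

    // needs_cleanup(sstable) is then simply !fully_owned(...).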

Fixes #6730.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200629171355.45118-2-raphaelsc@scylladb.com>
(cherry picked from commit cf352e7c14)
2020-08-27 12:16:16 +03:00
Asias He
9ca49cba6b abstract_replication_strategy: Add get_ranges_in_thread
Add a version that runs inside a seastar thread. The benefit is that
get_ranges can yield to avoid stalls.

Refs #6662

(cherry picked from commit 94995acedb)
2020-08-27 12:15:33 +03:00
Raphael S. Carvalho
6da8ba2d3f sstables: export needs_cleanup()
It may be needed elsewhere, like in a unit test.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200629171355.45118-1-raphaelsc@scylladb.com>
(cherry picked from commit a9eebdc778)
2020-08-27 12:15:29 +03:00
Asias He
366c1c2c59 gossip: Fix race between shutdown message handler and apply_state_locally
1. Node1 is shut down
2. Node1 sends a shutdown message to node2
3. Node2 receives the gossip shutdown message but the handler yields
4. Node1 is restarted
5. Node1 sends a new gossip endpoint_state to node2; node2 applies the state
   in apply_state_locally and calls gossiper::handle_major_state_change
   and then gossiper::mark_alive
6. The shutdown message handler from step 3 resumes and sets the status of node1 to SHUTDOWN
7. The gossiper::mark_alive fiber from step 5 resumes and calls gossiper::real_mark_alive;
   node2 skips marking node1 as alive because the status of node1 is
   SHUTDOWN. As a result, node1 is alive but is not marked as UP by node2.

To fix, we serialize the two operations.
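
A sketch of the serialization with a one-unit Seastar semaphore (member and
method names are stand-ins for the real gossiper code):

    #include <seastar/core/coroutine.hh>
    #include <seastar/core/semaphore.hh>

    struct gossiper_sketch {
        seastar::semaphore _state_lock{1}; // serializes endpoint state updates

        seastar::future<> handle_shutdown_msg() {
            auto units = co_await seastar::get_units(_state_lock, 1);
            // ... set the peer's status to SHUTDOWN ...
        }

        seastar::future<> apply_state_locally() {
            auto units = co_await seastar::get_units(_state_lock, 1);
            // ... handle_major_state_change() / mark_alive() ...
        }
    };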

Fixes #7032

(cherry picked from commit e6ceec1685)
2020-08-27 11:15:48 +03:00
Nadav Har'El
05cdb173f3 Alternator: allow CreateTable with SSESpecification explicitly disabled
While Alternator doesn't yet support creating a table with different
"server-side encryption" (a.k.a. encryption-at-rest) parameters, the
SSESpecification option with Enabled=false should still be allowed, as
it is just the default and means exactly the same as a missing
SSESpecification.

This patch also adds a test for this case, which failed on Alternator
before this patch.

Fixes #7031.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200812205853.173846-1-nyh@scylladb.com>
(cherry picked from commit 4c73d43153)
2020-08-26 20:15:19 +03:00
Nadav Har'El
8c929a96cf alternator: CreateTable with bad Tags shouldn't create a table
Currently, if a user tries to CreateTable with a forbidden set of tags,
e.g., the Tags list is too long or contains an invalid value for
system:write_isolation, then the CreateTable request fails but the table
is still created - without the tags, of course.

This patch fixes this bug, and adds two test cases for it that fail
before this patch, and succeed with it. One of the test cases is
scylla_only because it checks the Scylla-specific system:write_isolation
tag, but the second test case works on DynamoDB as well.

What this patch does is to split the update_tags() function into two
parts - the first part just parses the Tags, validates them, and builds
a map. Only the second part actually writes the tags to the schema.
CreateTable now does the first part early, before creating the table,
so failure in parsing or validating the Tags will not leave a created
table behind.

Fixes #6809.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200713120611.767736-1-nyh@scylladb.com>
(cherry picked from commit 35f7048228)
2020-08-26 19:52:49 +03:00
Avi Kivity
a9aa10e8de Merge "Unregister RPC verbs on stop" from Pavel E
"
There are 5 services that register their RPC handlers in the messaging
service, but not all of them unregister those handlers on stop.

Unregistering is somewhat critical, not just because it makes the
code look clean, but also because unregistration does wait for the
message processing to complete, thus avoiding use-after-frees in
the handlers.

In particular, several handlers call service::get_schema_for_write()
which, in turn, may end up in service::maybe_sync() calling for
the local migration manager instance. All those handlers' processing
must be waited for before stopping the migration manager.

This set brings the RPC handler unregistration in sync with the
registration part.

tests: unit (dev)
       dtest (dev: simple_boot_shutdown, repair)
       start-stop by hands (dev)
fixes: #6904
"

* 'br-rpc-unregister-verbs' of https://github.com/xemul/scylla:
  main: Add missing calls to unregister RPC handlers
  messaging: Add missing per-service unregistering methods
  messaging: Add missing handlers unregistration helpers
  streaming: Do not use db->invoke_on_all in vain
  storage_proxy: Detach rpc unregistration from stop
  main: Shorten call to storage_proxy::init_messaging_service

(cherry picked from commit 01b838e291)
2020-08-26 14:41:04 +03:00
Raphael S. Carvalho
989d8fe636 cql3/statements: verify that counter column cannot be added into non-counter table
A check to validate that a counter column cannot be added to a non-counter
table is missing for the ALTER TABLE statement. Validation is performed when
building the new schema, but it's limited to checking that a schema will not
contain both counter and non-counter columns.

Due to lack of validation, the added counter column could be incorrectly
persisted to the schema, but this results in a crash when setting the new
schema to its table. On restart, it can be confirmed that the schema change
was indeed persisted when describing the table.
This problem is fixed by doing proper validation for the alter table statement,
which consists of making sure a new counter column cannot be added to a
non-counter table.

The test cdc_disallow_cdc_for_counters_test is adjusted because one of its tests
was built on the assumption that a counter column can be added to a non-counter
table.

Fixes #7065.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200824155709.34743-1-raphaelsc@scylladb.com>
(cherry picked from commit 1c29f0a43d)
2020-08-25 18:44:42 +03:00
Takuya ASADA
48d79a1d9f dist/debian: disable debuginfo compression on .deb
Since the older binutils on some distributions is not able to handle the
compressed debuginfo generated on Fedora, we need to disable compression.
However, the Debian packager forces debuginfo compression since
debian/compat = 9, so we have to uncompress the files after they are
compressed automatically.

Fixes #6982

(cherry picked from commit 75c2362c95)
2020-08-23 19:01:00 +03:00
Botond Dénes
4c65413413 scylla-gdb.py: find_db(): don't return current shard's database for shard=0
The `shard` parameter of `find_db()` is optional and is defaulted to
`None`. When missing, the current shard's database instance is returned.
The problem is that the if condition checking this uses `not shard`,
which also evaluates to `True` if `shard == 0`, resulting in returning
the current shard's database instance for shard 0. Change the condition
to `shard is None` to avoid this.

Fixes: #7016
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200812091546.1704016-1-bdenes@scylladb.com>
(cherry picked from commit 4cfab59eb1)
2020-08-23 18:56:19 +03:00
Hagit Segev
e931d28673 release: prepare for 4.2.rc3 2020-08-19 14:39:08 +03:00
Botond Dénes
ec71688ff2 view_update_generator: fix race between registering and processing sstables
fea83f6 introduced a race between processing (and hence removing)
sstables from `_sstables_with_tables` and registering new ones. This
manifested in sstables that were added concurrently with processing a
batch for the same sstables being dropped and the semaphore units
associated with them not returned. This resulted in repairs being
blocked indefinitely as the units of the semaphore were effectively
leaked.

This patch fixes this by moving the contents of `_sstables_with_tables`
to a local variable before starting the processing. A unit test
reproducing the problem is also added.

Fixes: #6892

Tests: unit(dev)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200817160913.2296444-1-bdenes@scylladb.com>
(cherry picked from commit 22a6493716)
2020-08-19 00:11:48 +03:00
Botond Dénes
9710a91100 table: get_sstables_by_partition_key(): don't make a copy of selected sstables
Currently we assign the reference to the vector of selected sstables to
`auto sst`. This makes a copy and we pass this local variable to
`do_for_each()`, which will result in a use-after-free if the latter
defers.
Fix by not making a copy and instead keeping the reference.
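
A minimal illustration of the lifetime difference (generic types, not the
table code):

    #include <vector>

    static std::vector<int> storage{1, 2, 3};
    const std::vector<int>& selected() { return storage; }

    void process() {
        auto  copy = selected(); // copy with local lifetime: a deferred task
                                 // holding a reference to it would dangle
        auto& ref  = selected(); // reference to the long-lived vector: safe
        (void)copy; (void)ref;
    }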

Fixes: #7060

Tests: unit(dev)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200818091241.2341332-1-bdenes@scylladb.com>
(cherry picked from commit 78f94ba36a)
2020-08-19 00:01:36 +03:00
Nadav Har'El
b052f3f5ce Update Seastar submodule
> http: add "Expect: 100-continue" handling

Refs #6844.
2020-08-11 13:06:03 +03:00
Calle Wilund
d70cab0444 database: Do not assert on replay positions if truncate does not flush
Fixes #6995

In c2c6c71 the assert on replay positions in flushed sstables discarded by
truncate was broken by the fact that we no longer flush all sstables
unless auto snapshot is enabled.

This means the low_mark assertion does not hold, because we maybe/probably
never got around to creating the sstables that would hold said mark.

Note that the (old) change to not create sstables and then just delete
them is in itself good. But in that case we should not try to verify
the rp mark.

(cherry picked from commit 9620755c7f)
2020-08-11 00:00:43 +03:00
Avi Kivity
0ce3799187 Update seastar submodule
* seastar 4641f4f2d3...2775a54dcb (1):
  > memory: fix small aligned free memory corruption

Fixes #6831
2020-08-09 18:35:44 +03:00
Avi Kivity
ee113eca52 Merge 'hinted handoff: fix commitlog memory leak' from Piotr D
"
When the commitlog is recreated in the hints manager, only the shutdown()
method is called, but not release(). Because of that, some internal
commitlog objects (`segment_manager` and `segment`s) may be left pointing
to each other through shared_ptr reference cycles, which may result in a
memory leak when the parent commitlog object is destroyed.

This PR prevents memory leaks that may happen this way by calling
release() after shutdown() from the hints manager.

Fixes: #6409, Fixes #6776
"

* piodul-fix-commitlog-memory-leak-in-hinted-handoff:
  hinted handoff: disable warnings about segments left on disk
  hinted handoff: release memory on commitlog termination

(cherry picked from commit 4c221855a1)
2020-08-09 17:25:20 +03:00
Tomasz Grabiec
be11514985 thrift: Fix crash on unsorted column names in SlicePredicate
The column names in SlicePredicate can be passed in arbitrary order.
We converted them to clustering ranges in read_command, preserving the
original order. As a result, the clustering ranges in the read command may
appear out of order. This violates the storage engine's assumptions and
leads to undefined behavior.

It was seen manifesting as a SIGSEGV or an abort in sstable reader
when executing a get_slice() thrift verb:

scylla: sstables/consumer.hh:476: seastar::future<> data_consumer::continuous_data_consumer<StateProcessor>::fast_forward_to(size_t, size_t) [with StateProcessor = sstables::data_consume_rows_context_m; size_t = long unsigned int]: Assertion `end >= _stream_position.position' failed.

Fixes #6486.

Tests:

   - added a new dtest to thrift_tests.py which reproduces the problem

Message-Id: <1596725657-15802-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit bfd129cffe)
2020-08-08 19:47:57 +03:00
Rafael Ávila de Espíndola
ec874bdc31 alternator: Fix use after return
Avoid a copy of timeout so that we don't end up with a reference to a
stack-allocated variable.

Fixes #6897

Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200721184939.111665-1-espindola@scylladb.com>
(cherry picked from commit e83e91e352)
2020-08-03 22:24:12 +03:00
Nadav Har'El
43169ffa2c alternator: fix Expected's "NULL" operator with missing AttributeValueList
The "NULL" operator in Expected (old-style conditional operations) doesn't
have any parameters, so we insisted that the AttributeValueList be empty.
However, we forgot to allow it to also be missing - a possibility which
DynamoDB allows.

This patch adds a test to reproduce this case (the test passes on DynamoDB,
fails on Alternator before this patch, and succeeds after this patch), and
a fix.

Fixes #6816.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200709161254.618755-1-nyh@scylladb.com>
(cherry picked from commit f549d147ea)
2020-08-03 20:39:01 +03:00
Yaron Kaikov
c5ed14bff6 release: prepare for 4.2.rc2 2020-08-03 16:50:38 +03:00
Takuya ASADA
8366eda943 scylla_util.py: always use relocatable CLI tools
On some CLI tools, command options may differ between the latest version
and older versions.
To maximize compatibility of the setup scripts, we should always use the
relocatable CLI tools instead of the distribution's version of the tool.

Related #6954

(cherry picked from commit a19a62e6f6)
2020-08-03 10:39:26 +03:00
Takuya ASADA
5d0b0dd4c4 create-relocatable-package.py: add lsblk for relocatable CLI tools
We need the latest version of lsblk, which supports partition type UUIDs.

Fixes #6954

(cherry picked from commit 6ba2a6c42e)
2020-08-03 10:39:07 +03:00
Juliusz Stasiewicz
6f259be5f1 aggregate_fcts: Use per-type comparators for dynamic types
For collections and UDTs the `MIN()` and `MAX()` functions are
generated on the fly. Until now they worked by comparing just the
byte representations of arguments.

This patch uses specific per-type comparators to provide semantically
sensible, dynamically created aggregates.

Fixes #6768

(cherry picked from commit 5b438e79be)
2020-08-03 10:26:02 +03:00
Calle Wilund
16e512e21c cql3::lists: Fix setter_by_uuid not handling null values
Fixes #6828

When using the Scylla list index-from-UUID extension,
null values were not handled properly, causing throws
from the underlying layer.

(cherry picked from commit 3b74b9585f)
2020-08-03 10:19:13 +03:00
Avi Kivity
c61dc4e87d tools: toolchain: regenerate for gcc 10.2
Fixes #6813.

As a side effect, this also brings in xxhash 0.7.4.

(matches commit 66c2b4c8bf)
2020-07-31 08:48:12 +03:00
Takuya ASADA
af76a3ba79 scylla_post_install.sh: generate memory.conf for CentOS7
On CentOS 7, systemd does not support percentage-based parameters.
To apply the memory parameter on CentOS 7, we need to override the
parameter in bytes instead of a percentage.

Fixes #6783

(cherry picked from commit 3a25e7285b)
2020-07-30 16:41:10 +03:00
Tomasz Grabiec
8fb5ebb2c6 commitlog: Fix use-after-free on mutation object during replay
The mutation object may be freed prematurely during commitlog replay
in the schema upgrading path. We will hit the problem if the memtable
is full and apply_in_memory() needs to defer.

This will typically manifest as a segfault.

Fixes #6953

Introduced in 79935df

Tests:
  - manual using scylla binary. Reproduced the problem then verified the fix makes it go away

Message-Id: <1596044010-27296-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 3486eba1ce)
2020-07-30 16:36:42 +03:00
Takuya ASADA
bfb11defdd scylla_setup: skip boot partition
On GCE, /dev/sda14 is reported as an unused disk, but it is the BIOS boot
partition; it should not be used for the Scylla data partition, and in any
case cannot be, since it is too small.

It's better to exclude such partitions from the unused-disk list.

Fixes #6636

(cherry picked from commit d7de9518fe)
2020-07-29 09:48:10 +03:00
Asias He
2d1ddcbb6a repair: Fix race between create_writer and wait_for_writer_done
We saw Scylla hit a use-after-free in repair with the following procedure during tests:

- n1 and n2 in the cluster

- n2 ran decommission

- n2 sent data to n1 using repair

- n2 was killed forcibly

- n1 tried to remove repair_meta for n1

- n1 hit use after free on repair_meta object

This was what happened on n1:

1) data was received -> do_apply_rows was called -> yield before create_writer() was called

2) repair_meta::stop() was called -> wait_for_writer_done() / do_wait_for_writer_done was called
   with _writer_done[node_idx] not engaged

3) step 1 resumed, create_writer() was called and _repair_writer object was referenced

4) repair_meta::stop() finished, repair_meta object and its member _repair_writer was destroyed

5) The fiber created by create_writer() at step 3 hit use after free on _repair_writer object

To fix, we should call wait_for_writer_done() after any pending
operations protected by repair_meta::_gate have completed. This
prevents wait_for_writer_done() from finishing before the writer is in
the process of being created.
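
A sketch of the ordering fix using seastar::gate (illustrative member
names):

    #include <seastar/core/coroutine.hh>
    #include <seastar/core/gate.hh>

    struct repair_meta_sketch {
        seastar::gate _gate; // entered by the data path before any yield

        seastar::future<> stop() {
            // Closing the gate waits for in-flight fibers (e.g. do_apply_rows),
            // so by the time we wait on the writer it either exists or never will.
            co_await _gate.close();
            co_await wait_for_writer_done();
        }

        seastar::future<> wait_for_writer_done(); // stand-in for the real method
    };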

Fixes: #6853
Fixes: #6868
Backports: 4.0, 4.1, 4.2
(cherry picked from commit e6f640441a)
2020-07-29 09:48:10 +03:00
Raphael S. Carvalho
4c560b63f0 sstable: index_reader: Make sure streams are all properly closed on failure
It turns out the fix f591c9c710 wasn't enough to make sure all input streams
are properly closed on failure.
It only closes the main input stream that belongs to the context, but misses
all the input streams that can be opened in the consumer for promoted-index
reading. The consumer stores a list of indexes, each of which has its own
input stream. On failure, we need to make sure that every single one of
them is properly closed before destroying the indexes, as not doing so
could cause memory corruption due to read-ahead.

Fixes #6924.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200727182214.377140-1-raphaelsc@scylladb.com>
(cherry picked from commit 0d70efa58e)
2020-07-29 09:48:10 +03:00
Nadav Har'El
00155e32b1 merge: db/view: view_update_generator: make staging reader evictable
Merged patch set by Botond Dénes:

The view update generation process creates two readers. One is used to
read the staging sstables, the data which needs view updates to be
generated for, and another reader for each processed mutation, which
reads the current value (pre-image) of each row in said mutation.

The staging reader is created first and is kept alive until all staging data
is processed. The pre-image reader is created separately for each
processed mutation. The staging reader is not restricted, meaning it
does not wait for admission on the relevant reader concurrency
semaphore, but it does register its resource usage on it. The pre-image
reader however *is* restricted. This creates a situation, where the
staging reader possibly consumes all resources from the semaphore,
leaving none for the later created pre-image reader, which will not be
able to start reading. This will block the view building process meaning
that the staging reader will not be destroyed, causing a deadlock.

This patch solves this by making the staging reader restricted and
making it evictable. To prevent thrashing -- evicting the staging reader
after reading only a really small partition -- we only make the staging
reader evictable after we have read at least 1MB worth of data from it.

  test/boost: view_build_test: add test_view_update_generator_buffering
  test/boost: view_build_test: add test test_view_update_generator_deadlock
  reader_permit: reader_resources: add operator- and operator+
  reader_concurrency_semaphore: add initial_resources()
  test: cql_test_env: allow overriding database_config
  mutation_reader: expose new_reader_base_cost
  db/view: view_updating_consumer: allow passing custom update pusher
  db/view: view_update_generator: make staging reader evictable
  db/view: view_updating_consumer: move implementation from table.cc to view.cc
  database: add make_restricted_range_sstable_reader()

Signed-off-by: Botond Dénes <bdenes@scylladb.com>

(cherry picked from commit f488eaebaf)

Fixes #6892.
2020-07-28 17:02:09 +03:00
Avi Kivity
b06dffcc19 Merge "messaging: make verb handler registering independent of current scheduling group" from Botond
"
0c6bbc8 refactored `get_rpc_client_idx()` to select different clients
for statement verbs depending on the current scheduling group.
The goal was to allow statement verbs to be sent on different
connections depending on the current scheduling group. The new
connections use per-connection isolation. For backward compatibility, the
already existing connections fall back to the per-handler isolation used
previously. The old statement connection, called the default statement
connection, also used this. `get_rpc_client_idx()` was changed to select
the default statement connection when the current scheduling group is
the statement group, and a non-default connection otherwise.

This inadvertently broke `scheduling_group_for_verb()`, which also used
this method to get the scheduling group to be used to isolate a verb at
handler registration time. This method needs the default client idx for
each verb, but if verb registration is run under the system group it
instead got the non-default one. As a result, per-handler isolation was
not set up for the default statement connection, and default statement
verb handlers ran in whatever scheduling group the rpc's processing loop
runs in, which is the system scheduling group.

This caused all sorts of problems, even beyond user queries running in
the system group. Also as of 0c6bbc8 queries on the replicas are
classified based on the scheduling group they are running on, so user
reads also ended up using the system concurrency semaphore.

In particular this caused severe problems with ranges scans, which in
some cases ended up using different semaphores per page resulting in a
crash. This could happen because when the page was read locally the code
would run in the statement scheduling group, but when the request
arrived from a remote coordinator via rpc, it was read in a system
scheduling group. This caused a mismatch between the semaphore the saved
reader was created with and the one the new page was read with. The
result was that in some cases, when looking up a paused reader from the
wrong semaphore, a reader belonging to another read was returned,
creating a disconnect between the lifecycle of readers and that of
the slice and range they were referencing.

This series fixes the underlying problem of the scheduling group
influencing the verb handler registration, as well as adding some
additional defenses if this semaphore mismatch ever happens in the
future. Inactive read handles are now unique across all semaphores,
meaning that it is not possible anymore that a handle succeeds in
looking up a reader when used with the wrong semaphore. The range scan
algorithm now also makes sure there is no semaphore mismatch between the
one used for the current page and that of the saved reader from the
previous page.

I manually checked that each individual defense added is already
preventing the crash from happening.

Fixes: #6613
Fixes: #6907
Fixes: #6908

Tests: unit(dev), manual(run the crash reproducer, observe no crash)
"

* 'query-classification-regressions/v1' of https://github.com/denesb/scylla:
  multishard_mutation_query: use cached semaphore
  messaging: make verb handler registering independent of current scheduling group
  multishard_mutation_query: validate the semaphore of the looked-up reader
  reader_concurrency_semaphore: make inactive read handles unique across semaphores
  reader_concurrency_semaphore: add name() accessor
  reader_concurrency_semaphore: allow passing name to no-limit constructor

(cherry picked from commit 3f84d41880)
2020-07-27 17:41:51 +03:00
Botond Dénes
508e58ef9e sstables: clamp estimated_partitions to [1, +inf) in writers
In some cases the estimated number of partitions can be 0, which, although
a legitimate estimation result, breaks a lot of low-level sstable writer
code, so some of it has assertions to ensure the estimated partition count
is > 0. To avoid hitting these asserts, all users of the sstable writers do
the clamping to ensure the estimate is at least 1. However, leaving this to
the callers is error prone, as #6913 has shown. As this clamping is
standard practice, it is better to do it in the writers themselves,
avoiding the problem altogether. This is exactly what this patch does. It
also adds two unit tests: one that reproduces the crash in #6913, and
another that ensures all sstable writers are fine with an estimate of 0.
Call sites previously doing the clamping are changed not to, as it is now
unnecessary: the writer does it itself.
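
The clamp itself is one line, now living inside the writers rather than at
every call site (sketch):

    #include <algorithm>
    #include <cstdint>

    // 0 is a legal estimate, but low-level writer code asserts > 0,
    // so round it up once, centrally.
    uint64_t clamp_estimated_partitions(uint64_t estimated) {
        return std::max<uint64_t>(estimated, 1);
    }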

Fixes #6913

Tests: unit(dev)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200724120227.267184-1-bdenes@scylladb.com>
(cherry picked from commit fe127a2155)
2020-07-27 15:00:00 +03:00
Piotr Sarna
776faa809f Merge 'view_update_generator: use partitioned sstable set'
from Botond.

Recently it was observed (#6603) that since 4e6400293ea, the staging
reader is reading from a lot of sstables (200+). This consumes a lot of
memory, and after this reaches a certain threshold -- the entire memory
amount of the streaming reader concurrency semaphore -- it can cause a
deadlock within the view update generation. To reduce this memory usage,
we exploit the fact that the staging sstables are usually disjoint, and
use the partitioned sstable set to create the staging reader. This
should ensure that only the minimum number of sstable readers will be
opened at any time.

Refs: #6603
Fixes: #6707

Tests: unit(dev)

* 'view-update-generator-use-partitioned-set/v1' of https://github.com/denesb/scylla:
  db/view: view_update_generator: use partitioned sstable set
  sstables: make_partitioned_sstable_set(): return an sstable_set

(cherry picked from commit e4b74356bb)
2020-07-21 15:40:02 +03:00
Raphael S. Carvalho
7037f43a17 table: Fix Staging SSTables being incorrectly added or removed from the backlog tracker
Staging SSTables can be incorrectly added to or removed from the backlog
tracker after an ALTER TABLE or TRUNCATE, because the addition and removal
don't take into account whether the SSTable requires view building. A
staging SSTable can therefore be added to the tracker after an ALTER TABLE,
or removed after a TRUNCATE even though it was never added, potentially
causing the backlog to become negative.

Fixes #6798.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200716180737.944269-1-raphaelsc@scylladb.com>
(cherry picked from commit b67066cae2)
2020-07-21 12:57:09 +03:00
Avi Kivity
bd713959ce Update seastar submodule
* seastar 8aad24a5f8...4641f4f2d3 (4):
  > httpd: Don't warn on ECONNABORTED
  > httpd: Avoid calling future::then twice on the same future
Fixes #6709.
  > httpd: Use handle_exception instead of then_wrapped
  > httpd: Use std::unique_ptr instead of a raw pointer
2020-07-19 11:49:02 +03:00
Rafael Ávila de Espíndola
b7c5a918cb mutation_reader_test: Wait for a future
Nothing was waiting for this future. Found while testing another
patch.

Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200630183929.1704908-1-espindola@scylladb.com>
(cherry picked from commit 6fe7706fce)

Fixes #6858.
2020-07-16 14:44:31 +03:00
Asias He
fb2ae9e66b repair: Relax node selection in bootstrap when nodes are less than RF
Consider a cluster with two nodes:

 - n1 (dc1)
 - n2 (dc2)

A third node is bootstrapped:

 - n3 (dc2)

n3 fails to bootstrap as follows:

 [shard 0] init - Startup failed: std::runtime_error
 (bootstrap_with_repair: keyspace=system_distributed,
 range=(9183073555191895134, 9196226903124807343], no existing node in
 local dc)

The system_distributed keyspace uses SimpleStrategy with RF 3. For
keyspaces that do not use NetworkTopologyStrategy, we should not
require the source node to be in the same DC.

Fixes: #6744
Backports: 4.0 4.1, 4.2
(cherry picked from commit 38d964352d)
2020-07-16 12:02:38 +03:00
Asias He
7a7ed8c65d repair: Relax size check of get_row_diff and set_diff
In case of a row hash conflict, a hash in set_diff will map to more than
one row from get_row_diff.

For example,

Node1 (Repair master):
row1  -> hash1
row2  -> hash2
row3  -> hash3
row3' -> hash3

Node2 (Repair follower):
row1  -> hash1
row2  -> hash2

We will have set_diff = {hash3} between node1 and node2, while
get_row_diff({hash3}) will return two rows: row3 and row3'. And the
error below was observed:

   repair - Got error in row level repair: std::runtime_error
   (row_diff.size() != set_diff.size())

In this case, node1 should send both row3 and row3' to the peer node
instead of failing the whole repair, because node2 has neither row3 nor
row3' - otherwise node1 wouldn't have sent rows with hash3 in the first
place.

Refs: #6252
(cherry picked from commit a00ab8688f)
2020-07-15 14:48:49 +03:00
Nadav Har'El
7b9be752ec alternator test: configurable temporary directory
The test/alternator/run script creates a temporary directory for the Scylla
database in /tmp. The assumption was that this is the fastest disk (usually
even a ramdisk) on the test machine, and we didn't need anything else from
it.

But it turns out that on some systems, /tmp is actually a slow disk, so
this patch adds a way to configure the temporary directory - if the TMPDIR
environment variable exists, it is used instead of /tmp. As before this
patch, a temporary subdirectry is created in $TMPDIR, and this subdirectory
is automatically deleted when the test ends.

The test.py script already passes an appropriate TMPDIR (testlog/$mode),
which after this patch the Alternator test will use instead of /tmp.

Fixes #6750

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200713193023.788634-1-nyh@scylladb.com>
(cherry picked from commit 8e3be5e7d6)
2020-07-14 12:34:26 +03:00
Konstantin Osipov
903e967a16 Export TMPDIR pointing at subdir of testlog/
Export TMPDIR environment variable pointing at a subdir of testlog.
This variable is used by seastar/scylla tests to create a
subdirectory with temporary test data. Normally a test cleans
up the temporary directory, but if it crashes or is killed the
directory remains.

By resetting the default location from /tmp to testlog/{mode}
we allow test.py to consolidate all test artifacts in a single
place.

Fixes #6062, "test.py uses tmpfs"

(cherry picked from commit e628da863d)
2020-07-14 12:34:06 +03:00
Avi Kivity
b84946895c Update seastar submodule
* seastar 1e762652c4...8aad24a5f8 (2):
  > futures: Add a test for a broken promise in a parallel_for_each
  > future: Call set_to_broken_promise earlier

Fixes #6749 (probably).
2020-07-13 20:08:16 +03:00
Asias He
a27188886a repair: Switch to btree_set for repair_hash.
In one of the longevity tests, we observed a 1.3s reactor stall which came
from repair_meta::get_full_row_hashes_source_op. It traced back to a call
to std::unordered_set::insert() which triggered a big memory allocation
and reclaim.

I measured std::unordered_set, absl::flat_hash_set, absl::node_hash_set
and absl::btree_set. The absl::btree_set was the only one that seastar's
oversized-allocation checker did not warn about in my tests, where around
300K repair hashes were inserted into the container.

- unordered_set:
hash_sets=295634, time=333029199 ns

- flat_hash_set:
hash_sets=295634, time=312484711 ns

- node_hash_set:
hash_sets=295634, time=346195835 ns

- btree_set:
hash_sets=295634, time=341379801 ns

The btree_set is a bit slower than unordered_set, but it does not make
huge memory allocations. I did not measure a real difference in the total
time to finish repair of the same dataset between unordered_set and
btree_set.

To fix, switch to absl btree_set container.
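
The container swap, sketched (the real repair_hash has more to it; the
point is the allocation pattern):

    #include <absl/container/btree_set.h>
    #include <cstdint>

    struct repair_hash_sketch {
        uint64_t hash = 0;
        bool operator<(const repair_hash_sketch& o) const { return hash < o.hash; }
    };

    // btree_set allocates many small fixed-size nodes, so growing to ~300K
    // entries never needs the single huge rehash allocation that
    // std::unordered_set eventually makes.
    using repair_hash_set = absl::btree_set<repair_hash_sketch>;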

Fixes #6190

(cherry picked from commit 67f6da6466)
2020-07-13 10:09:23 +03:00
Dmitry Kropachev
51d4efc321 dist/common/scripts/scylla-housekeeping: wrap urllib.request with try ... except
We could hit "cannot serialize '_io.BufferedReader' object" when request get 404 error from the server
	Now you will get legit error message in the case.

	Fixes #6690

(cherry picked from commit de82b3efae)
2020-07-09 18:24:55 +03:00
Avi Kivity
0847eea8d6 Update seastar submodule
* seastar 11e86172ba...1e762652c4 (1):
  > sharded: Do not hang on never set freed promise

Fixes #6606.
2020-07-09 15:52:26 +03:00
Avi Kivity
35ad57cb9c Point seastar submodule at scylla-seastar.git
This allows us to backport seastar patches to 4.2.
2020-07-09 15:50:25 +03:00
Hagit Segev
42b0b9ad08 release: prepare for 4.2.rc1 2020-07-08 23:01:10 +03:00
Dejan Mircevski
68b95bf2ac cql/restrictions: Handle WHERE a>0 AND a<0
WHERE clauses with start point above the end point were handled
incorrectly.  When the slice bounds are transformed to interval
bounds, the resulting interval is interpreted as wrap-around (because
start > end), so it contains all values above 0 and all values below
0.  This is clearly incorrect, as the user's intent was to filter out
all possible values of a.

Fix it by explicitly short-circuiting to false when start > end.  Add
a test case.

Fixes #5799.

Tests: unit (dev)

Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
(cherry picked from commit 921dbd0978)
2020-07-08 13:20:10 +03:00
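
A sketch of the short-circuit, with simplified integer bounds: when the
slice's start exceeds its end, the restriction is unsatisfiable and must
yield no interval at all, rather than letting start > end be read as a
wrap-around interval that matches everything outside the requested slice.

    #include <optional>

    struct interval { int start, end; };  // simplified stand-in for bound types

    // An unsatisfiable slice (start above end) must produce the empty
    // result, not be reinterpreted as wrap-around. Bound exclusivity is
    // elided in this sketch.
    std::optional<interval> slice_to_interval(int start, int end) {
        if (start > end) {
            return std::nullopt;  // explicit short-circuit to false
        }
        return interval{start, end};
    }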
Botond Dénes
fea83f6ae0 db/view: view_update_generator: re-balance wait/signal on the register semaphore
The view update generator has a semaphore to limit concurrency. This
semaphore is waited on in `register_staging_sstable()` and later the
unit is returned after the sstable is processed in the loop inside
`start()`.
This was broken by 4e64002, which changed the loop inside `start()` to
process sstables in per-table batches, but didn't change the `signal()`
call to return units according to the number of sstables processed. This
can cause the semaphore units to dry up, as the loop can process multiple
sstables per table but return just a single unit. It can also block
callers of `register_staging_sstable()` indefinitely, as some waiters
will never be released: under the right circumstances the units on the
semaphore can permanently go below 0.
In addition to this, 4e64002 introduced another bug: table entries are
never removed from `_sstables_with_tables`, so they are processed
every turn. If the sstable list is empty, there won't be any update
generated, but due to the unconditional `signal()` described above, this
can cause the units on the semaphore to grow to infinity, allowing
future staging sstable producers to register a huge number of sstables,
causing memory problems due to the number of sstable readers that have
to be opened (#6603, #6707).
Both outcomes are equally bad. This patch fixes both issues and modifies
the `test_view_update_generator` unit test to reproduce them, and hence
to verify that they don't happen in the future.

Fixes: #6774
Refs: #6707
Refs: #6603

Tests: unit(dev)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200706135108.116134-1-bdenes@scylladb.com>
(cherry picked from commit 5ebe2c28d1)
2020-07-08 11:13:24 +03:00
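
A minimal sketch of the restored invariant, using std::counting_semaphore
as a stand-in for seastar's semaphore: whatever the batch loop consumes
per sstable must be returned per sstable, not once per table.

    #include <cstddef>
    #include <semaphore>
    #include <vector>

    struct sstable {};

    std::counting_semaphore<> registration_sem(5);  // stand-in for _registration_sem

    // Producer side: one unit is consumed per registered sstable.
    void register_staging_sstable(std::vector<sstable>& batch, sstable sst) {
        registration_sem.acquire();
        batch.push_back(sst);
    }

    // Consumer side: a whole per-table batch is processed in one turn, so
    // the release must match the number of sstables processed; releasing 1
    // here is the imbalance that let units drift below zero (or, with the
    // stale-entry bug, grow without bound).
    void process_batch(std::vector<sstable> batch) {
        const auto n = static_cast<std::ptrdiff_t>(batch.size());
        // ... generate view updates for each sstable in the batch ...
        registration_sem.release(n);
    }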
Takuya ASADA
76618a7e06 scylla_setup: don't add same disk device twice
We shouldn't accept adding the same disk twice at the RAID prompt.

Fixes #6711

(cherry picked from commit 835e76fdfc)
2020-07-07 13:07:59 +03:00
Takuya ASADA
189a08ac72 scylla_setup: follow hugepages package name change on Ubuntu 20.04LTS
The hugepages package has been renamed to libhugetlbfs-bin; we need to
follow the change.

Fixes #6673

(cherry picked from commit 03ce19d53a)
2020-07-05 14:41:33 +03:00
Takuya ASADA
a3e9915a83 dist/debian: apply generated package version for .orig.tar.gz file
We are currently unable to apply the version number fixup to the
.orig.tar.gz file, even though we applied the correct fixup to
debian/changelog, because it just reads SCYLLA-VERSION-FILE.
We should parse debian/{changelog,control} instead.

Fixes #6736

(cherry picked from commit a107f086bc)
2020-07-05 14:08:37 +03:00
Asias He
e4bc14ec1a boot_strapper: Ignore node to be replaced explicitly as stream source
After commit 7d86a3b208 (storage_service:
Make replacing node take writes), during a replace operation, tokens in
_token_metadata for the node being replaced are updated only after the
replace operation is finished. As a result, in range_streamer::add_ranges,
the node being replaced will be considered as a source to stream data from.

Before commit 7d86a3b208, the node being
replaced would not be considered as a source node, because it had already
been replaced by the replacing node before the replace operation finished.
This is the reason why it worked in the past.

To fix, filter out the node being replaced as a source node explicitly.

Tests: replace_first_boot_test and replace_stopped_node_test
Backports: 4.1
Fixes: #6728
(cherry picked from commit e338028b7e22b0a80be7f80c337c52f958bfe1d7)
2020-07-01 14:36:43 +03:00
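
A sketch of the explicit filter, with hypothetical types: candidate stream
sources are pruned of the address being replaced before ranges are
assigned.

    #include <algorithm>
    #include <optional>
    #include <string>
    #include <vector>

    using inet_address = std::string;  // hypothetical stand-in

    // The replaced node's tokens are still in token metadata until the
    // replace operation finishes, so it must be dropped from the candidate
    // stream sources explicitly.
    std::vector<inet_address> filter_stream_sources(std::vector<inet_address> sources,
                                                    const std::optional<inet_address>& replaced) {
        if (replaced) {
            sources.erase(std::remove(sources.begin(), sources.end(), *replaced),
                          sources.end());
        }
        return sources;
    }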
Takuya ASADA
972acb6d56 scylla_swap_setup: handle <1GB environment
Show a better error message and exit with a non-zero status when the memory size is <1GB.

Fixes #6659

(cherry picked from commit a9de438b1f)
2020-07-01 12:40:25 +03:00
Yaron Kaikov
7fbfedf025 dist/docker/redhat/Dockerfile: update 4.2 params
Set SCYLLA_REPO and VERSION values for scylla-4.2
2020-06-30 13:09:06 +03:00
Avi Kivity
5f175f8103 Merge "Fix handling of decimals with negative scales" from Rafael
"
Before this series scylla would effectively loop forever when, for
example, casting a decimal with a negative scale to float.

Fixes #6720
"

* 'espindola/fix-decimal-issue' of https://github.com/espindola/scylla:
  big_decimal: Add a test for a corner case
  big_decimal: Correctly handle negative scales
  big_decimal: Add a as_rational member function
  big_decimal: Move constructors out of line

(cherry picked from commit 3e2eeec83a)
2020-06-29 12:05:17 +03:00
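
A sketch of the rational view of a decimal that the series introduces
(boost::multiprecision types as in the diff further below; the function
mirrors the as_rational member mentioned above): the value is
unscaled * 10^(-scale), and since pow's exponent parameter is unsigned, a
negative scale must be negated and moved to the numerator instead of
being passed through (a plausible source of the reported effectively
infinite loop).

    #include <boost/multiprecision/cpp_int.hpp>

    using boost::multiprecision::cpp_int;
    using boost::multiprecision::cpp_rational;

    cpp_rational as_rational(const cpp_int& unscaled, int scale) {
        const cpp_int ten(10);
        cpp_rational r = unscaled;
        if (scale >= 0) {
            r /= boost::multiprecision::pow(ten, static_cast<unsigned>(scale));
        } else {
            // Negative scale: multiply by 10^(-scale) explicitly; pow's
            // exponent parameter is unsigned.
            r *= boost::multiprecision::pow(ten, static_cast<unsigned>(-scale));
        }
        return r;
    }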
Benny Halevy
674ad6656a compaction: restore % in compaction completion message
The % sign fell off in c4841fa735

Fixes #6727.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20200625151352.736561-1-bhalevy@scylladb.com>
(cherry picked from commit a843945115)
2020-06-28 12:10:21 +03:00
Hagit Segev
58498b4b6c release: prepare for 4.2.rc0 2020-06-26 13:06:07 +03:00
109 changed files with 2868 additions and 540 deletions

.gitmodules
View File

@@ -1,6 +1,6 @@
[submodule "seastar"]
path = seastar
url = ../seastar
url = ../scylla-seastar
ignore = dirty
[submodule "swagger-ui"]
path = swagger-ui

View File

@@ -1,7 +1,7 @@
#!/bin/sh
PRODUCT=scylla
VERSION=666.development
VERSION=4.2.rc5
if test -f version
then

View File

@@ -129,7 +129,7 @@ future<std::string> get_key_from_roles(cql3::query_processor& qp, std::string us
auth::meta::roles_table::qualified_name(), auth::meta::roles_table::role_col_name);
auto cl = auth::password_authenticator::consistency_for_user(username);
auto timeout = auth::internal_distributed_timeout_config();
auto& timeout = auth::internal_distributed_timeout_config();
return qp.execute_internal(query, cl, timeout, {sstring(username)}, true).then_wrapped([username = std::move(username)] (future<::shared_ptr<cql3::untyped_result_set>> f) {
auto res = f.get0();
auto salted_hash = std::optional<sstring>();

View File

@@ -98,6 +98,11 @@ struct nonempty : public size_check {
// Check that array has the expected number of elements
static void verify_operand_count(const rjson::value* array, const size_check& expected, const rjson::value& op) {
if (!array && expected(0)) {
// If expected() allows an empty AttributeValueList, it is also fine
// that it is missing.
return;
}
if (!array || !array->IsArray()) {
throw api_error("ValidationException", "With ComparisonOperator, AttributeValueList must be given and an array");
}

View File

@@ -626,11 +626,8 @@ void rmw_operation::set_default_write_isolation(std::string_view value) {
default_write_isolation = parse_write_isolation(value);
}
// FIXME: Updating tags currently relies on updating schema, which may be subject
// to races during concurrent updates of the same table. Once Scylla schema updates
// are fixed, this issue will automatically get fixed as well.
enum class update_tags_action { add_tags, delete_tags };
static future<> update_tags(service::migration_manager& mm, const rjson::value& tags, schema_ptr schema, std::map<sstring, sstring>&& tags_map, update_tags_action action) {
static void update_tags_map(const rjson::value& tags, std::map<sstring, sstring>& tags_map, update_tags_action action) {
if (action == update_tags_action::add_tags) {
for (auto it = tags.Begin(); it != tags.End(); ++it) {
const rjson::value& key = (*it)["Key"];
@@ -652,28 +649,20 @@ static future<> update_tags(service::migration_manager& mm, const rjson::value&
}
if (tags_map.size() > 50) {
return make_exception_future<>(api_error("ValidationException", "Number of Tags exceed the current limit for the provided ResourceArn"));
throw api_error("ValidationException", "Number of Tags exceed the current limit for the provided ResourceArn");
}
validate_tags(tags_map);
}
// FIXME: Updating tags currently relies on updating schema, which may be subject
// to races during concurrent updates of the same table. Once Scylla schema updates
// are fixed, this issue will automatically get fixed as well.
static future<> update_tags(service::migration_manager& mm, schema_ptr schema, std::map<sstring, sstring>&& tags_map) {
schema_builder builder(schema);
builder.set_extensions(schema::extensions_map{{sstring(tags_extension::NAME), ::make_shared<tags_extension>(std::move(tags_map))}});
return mm.announce_column_family_update(builder.build(), false, std::vector<view_ptr>(), false);
}
static future<> add_tags(service::migration_manager& mm, service::storage_proxy& proxy, schema_ptr schema, rjson::value& request_info) {
const rjson::value* tags = rjson::find(request_info, "Tags");
if (!tags || !tags->IsArray()) {
return make_exception_future<>(api_error("ValidationException", format("Cannot parse tags")));
}
if (tags->Size() < 1) {
return make_exception_future<>(api_error("ValidationException", "The number of tags must be at least 1"));
}
std::map<sstring, sstring> tags_map = get_tags_of_table(schema);
return update_tags(mm, rjson::copy(*tags), schema, std::move(tags_map), update_tags_action::add_tags);
}
future<executor::request_return_type> executor::tag_resource(client_state& client_state, service_permit permit, rjson::value request) {
_stats.api_operations.tag_resource++;
@@ -683,7 +672,16 @@ future<executor::request_return_type> executor::tag_resource(client_state& clien
return api_error("AccessDeniedException", "Incorrect resource identifier");
}
schema_ptr schema = get_table_from_arn(_proxy, rjson::to_string_view(*arn));
add_tags(_mm, _proxy, schema, request).get();
std::map<sstring, sstring> tags_map = get_tags_of_table(schema);
const rjson::value* tags = rjson::find(request, "Tags");
if (!tags || !tags->IsArray()) {
return api_error("ValidationException", format("Cannot parse tags"));
}
if (tags->Size() < 1) {
return api_error("ValidationException", "The number of tags must be at least 1") ;
}
update_tags_map(*tags, tags_map, update_tags_action::add_tags);
update_tags(_mm, schema, std::move(tags_map)).get();
return json_string("");
});
}
@@ -704,7 +702,8 @@ future<executor::request_return_type> executor::untag_resource(client_state& cli
schema_ptr schema = get_table_from_arn(_proxy, rjson::to_string_view(*arn));
std::map<sstring, sstring> tags_map = get_tags_of_table(schema);
update_tags(_mm, *tags, schema, std::move(tags_map), update_tags_action::delete_tags).get();
update_tags_map(*tags, tags_map, update_tags_action::delete_tags);
update_tags(_mm, schema, std::move(tags_map)).get();
return json_string("");
});
}
@@ -891,9 +890,22 @@ future<executor::request_return_type> executor::create_table(client_state& clien
view_builders.emplace_back(std::move(view_builder));
}
}
if (rjson::find(request, "SSESpecification")) {
return make_ready_future<request_return_type>(api_error("ValidationException", "SSESpecification: configuring encryption-at-rest is not yet supported."));
// We don't yet support configuring server-side encryption (SSE) via the
// SSESpecification attribute, but an SSESpecification with Enabled=false
// is simply the default, and should be accepted:
rjson::value* sse_specification = rjson::find(request, "SSESpecification");
if (sse_specification && sse_specification->IsObject()) {
rjson::value* enabled = rjson::find(*sse_specification, "Enabled");
if (!enabled || !enabled->IsBool()) {
return make_ready_future<request_return_type>(api_error("ValidationException", "SSESpecification needs boolean Enabled"));
}
if (enabled->GetBool()) {
// TODO: full support for SSESpecification
return make_ready_future<request_return_type>(api_error("ValidationException", "SSESpecification: configuring encryption-at-rest is not yet supported."));
}
}
// We don't yet support streams (CDC), but a StreamSpecification asking
// *not* to use streams should be accepted:
rjson::value* stream_specification = rjson::find(request, "StreamSpecification");
@@ -908,6 +920,14 @@ future<executor::request_return_type> executor::create_table(client_state& clien
}
}
// Parse the "Tags" parameter early, so we can avoid creating the table
// at all if this parsing failed.
const rjson::value* tags = rjson::find(request, "Tags");
std::map<sstring, sstring> tags_map;
if (tags && tags->IsArray()) {
update_tags_map(*tags, tags_map, update_tags_action::add_tags);
}
builder.set_extensions(schema::extensions_map{{sstring(tags_extension::NAME), ::make_shared<tags_extension>()}});
schema_ptr schema = builder.build();
auto where_clause_it = where_clauses.begin();
@@ -928,14 +948,14 @@ future<executor::request_return_type> executor::create_table(client_state& clien
return create_keyspace(keyspace_name).handle_exception_type([] (exceptions::already_exists_exception&) {
// Ignore the fact that the keyspace may already exist. See discussion in #6340
}).then([this, table_name, request = std::move(request), schema, view_builders = std::move(view_builders)] () mutable {
return futurize_invoke([&] { return _mm.announce_new_column_family(schema, false); }).then([this, table_info = std::move(request), schema, view_builders = std::move(view_builders)] () mutable {
}).then([this, table_name, request = std::move(request), schema, view_builders = std::move(view_builders), tags_map = std::move(tags_map)] () mutable {
return futurize_invoke([&] { return _mm.announce_new_column_family(schema, false); }).then([this, table_info = std::move(request), schema, view_builders = std::move(view_builders), tags_map = std::move(tags_map)] () mutable {
return parallel_for_each(std::move(view_builders), [this, schema] (schema_builder builder) {
return _mm.announce_new_view(view_ptr(builder.build()));
}).then([this, table_info = std::move(table_info), schema] () mutable {
}).then([this, table_info = std::move(table_info), schema, tags_map = std::move(tags_map)] () mutable {
future<> f = make_ready_future<>();
if (rjson::find(table_info, "Tags")) {
f = add_tags(_mm, _proxy, schema, table_info);
if (!tags_map.empty()) {
f = update_tags(_mm, schema, std::move(tags_map));
}
return f.then([this] {
return wait_for_schema_agreement(_mm, db::timeout_clock::now() + 10s);
@@ -963,15 +983,24 @@ class attribute_collector {
void add(bytes&& name, atomic_cell&& cell) {
collected.emplace(std::move(name), std::move(cell));
}
void add(const bytes& name, atomic_cell&& cell) {
collected.emplace(name, std::move(cell));
}
public:
attribute_collector() : collected(attrs_type()->get_keys_type()->as_less_comparator()) { }
void put(bytes&& name, bytes&& val, api::timestamp_type ts) {
add(std::move(name), atomic_cell::make_live(*bytes_type, ts, std::move(val), atomic_cell::collection_member::yes));
void put(bytes&& name, const bytes& val, api::timestamp_type ts) {
add(std::move(name), atomic_cell::make_live(*bytes_type, ts, val, atomic_cell::collection_member::yes));
}
void put(const bytes& name, const bytes& val, api::timestamp_type ts) {
add(name, atomic_cell::make_live(*bytes_type, ts, val, atomic_cell::collection_member::yes));
}
void del(bytes&& name, api::timestamp_type ts) {
add(std::move(name), atomic_cell::make_dead(ts, gc_clock::now()));
}
void del(const bytes& name, api::timestamp_type ts) {
add(name, atomic_cell::make_dead(ts, gc_clock::now()));
}
collection_mutation_description to_mut() {
collection_mutation_description ret;
for (auto&& e : collected) {
@@ -1048,7 +1077,7 @@ public:
put_or_delete_item(const rjson::value& item, schema_ptr schema, put_item);
// put_or_delete_item doesn't keep a reference to schema (so it can be
// moved between shards for LWT) so it needs to be given again to build():
mutation build(schema_ptr schema, api::timestamp_type ts);
mutation build(schema_ptr schema, api::timestamp_type ts) const;
const partition_key& pk() const { return _pk; }
const clustering_key& ck() const { return _ck; }
};
@@ -1077,7 +1106,7 @@ put_or_delete_item::put_or_delete_item(const rjson::value& item, schema_ptr sche
}
}
mutation put_or_delete_item::build(schema_ptr schema, api::timestamp_type ts) {
mutation put_or_delete_item::build(schema_ptr schema, api::timestamp_type ts) const {
mutation m(schema, _pk);
// If there's no clustering key, a tombstone should be created directly
// on a partition, not on a clustering row - otherwise it will look like
@@ -1099,7 +1128,7 @@ mutation put_or_delete_item::build(schema_ptr schema, api::timestamp_type ts) {
for (auto& c : *_cells) {
const column_definition* cdef = schema->get_column_definition(c.column_name);
if (!cdef) {
attrs_collector.put(std::move(c.column_name), std::move(c.value), ts);
attrs_collector.put(c.column_name, c.value, ts);
} else {
row.cells().apply(*cdef, atomic_cell::make_live(*cdef->type, ts, std::move(c.value)));
}
@@ -1410,7 +1439,7 @@ public:
check_needs_read_before_write(_condition_expression) ||
_returnvalues == returnvalues::ALL_OLD;
}
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) override {
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) const override {
if (!verify_expected(_request, previous_item.get()) ||
!verify_condition_expression(_condition_expression, previous_item.get())) {
// If the update is to be cancelled because of an unfulfilled Expected
@@ -1420,6 +1449,8 @@ public:
}
if (_returnvalues == returnvalues::ALL_OLD && previous_item) {
_return_attributes = std::move(*previous_item);
} else {
_return_attributes = {};
}
return _mutation_builder.build(_schema, ts);
}
@@ -1493,7 +1524,7 @@ public:
check_needs_read_before_write(_condition_expression) ||
_returnvalues == returnvalues::ALL_OLD;
}
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) override {
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) const override {
if (!verify_expected(_request, previous_item.get()) ||
!verify_condition_expression(_condition_expression, previous_item.get())) {
// If the update is to be cancelled because of an unfulfilled Expected
@@ -1503,6 +1534,8 @@ public:
}
if (_returnvalues == returnvalues::ALL_OLD && previous_item) {
_return_attributes = std::move(*previous_item);
} else {
_return_attributes = {};
}
return _mutation_builder.build(_schema, ts);
}
@@ -1577,7 +1610,7 @@ public:
virtual ~put_or_delete_item_cas_request() = default;
virtual std::optional<mutation> apply(foreign_ptr<lw_shared_ptr<query::result>> qr, const query::partition_slice& slice, api::timestamp_type ts) override {
std::optional<mutation> ret;
for (put_or_delete_item& mutation_builder : _mutation_builders) {
for (const put_or_delete_item& mutation_builder : _mutation_builders) {
// We assume all these builders have the same partition.
if (ret) {
ret->apply(mutation_builder.build(schema, ts));
@@ -1906,7 +1939,7 @@ public:
update_item_operation(service::storage_proxy& proxy, rjson::value&& request);
virtual ~update_item_operation() = default;
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) override;
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) const override;
bool needs_read_before_write() const;
};
@@ -1984,7 +2017,7 @@ update_item_operation::needs_read_before_write() const {
}
std::optional<mutation>
update_item_operation::apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) {
update_item_operation::apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) const {
if (!verify_expected(_request, previous_item.get()) ||
!verify_condition_expression(_condition_expression, previous_item.get())) {
// If the update is to be cancelled because of an unfulfilled

View File

@@ -87,7 +87,11 @@ protected:
// When _returnvalues != NONE, apply() should store here, in JSON form,
// the values which are to be returned in the "Attributes" field.
// The default null JSON means do not return an Attributes field at all.
rjson::value _return_attributes;
// This field is marked "mutable" so that the const apply() can modify
// it (see explanation below), but note that because apply() may be
// called more than once, if apply() will sometimes set this field it
// must set it (even if just to the default empty value) every time.
mutable rjson::value _return_attributes;
public:
// The constructor of a rmw_operation subclass should parse the request
// and try to discover as many input errors as it can before really
@@ -100,7 +104,12 @@ public:
// conditional expression, apply() should return an empty optional.
// apply() may throw if it encounters input errors not discovered during
// the constructor.
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) = 0;
// apply() may be called more than once in case of contention, so it must
// not change the state saved in the object (issue #7218 was caused by
// violating this). We mark apply() "const" to let the compiler validate
// this for us. The output-only field _return_attributes is marked
// "mutable" above so that apply() can still write to it.
virtual std::optional<mutation> apply(std::unique_ptr<rjson::value> previous_item, api::timestamp_type ts) const = 0;
// Convert the above apply() into the signature needed by cas_request:
virtual std::optional<mutation> apply(foreign_ptr<lw_shared_ptr<query::result>> qr, const query::partition_slice& slice, api::timestamp_type ts) override;
virtual ~rmw_operation() = default;
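
A standalone sketch of the const-apply pattern described in the comments
above, with hypothetical names: all real state is read-only during
apply(), and the single output-only field is mutable and is rewritten (or
reset) on every call, since apply() may run repeatedly under contention.

    #include <optional>
    #include <string>

    struct rmw_op {
        std::string request;                       // read-only during apply()
        // Output-only: const apply() may write it, but must set it on every
        // call (even just back to empty), since apply() can run repeatedly
        // under contention.
        mutable std::optional<std::string> return_attributes;

        std::optional<std::string> apply(const std::optional<std::string>& previous) const {
            if (previous) {
                return_attributes = *previous;     // ALL_OLD-style return value
            } else {
                return_attributes.reset();         // reset, never carry state over
            }
            return previous;                       // stand-in for the mutation
        }
    };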

View File

@@ -322,8 +322,8 @@ void set_storage_service(http_context& ctx, routes& r) {
for (auto cf : column_families) {
column_families_vec.push_back(&db.find_column_family(keyspace, cf));
}
return parallel_for_each(column_families_vec, [&cm] (column_family* cf) {
return cm.perform_cleanup(cf);
return parallel_for_each(column_families_vec, [&cm, &db] (column_family* cf) {
return cm.perform_cleanup(db, cf);
});
}).then([]{
return make_ready_future<json::json_return_type>(0);

View File

@@ -386,6 +386,7 @@ scylla_tests = set([
'test/boost/view_schema_ckey_test',
'test/boost/vint_serialization_test',
'test/boost/virtual_reader_test',
'test/boost/stall_free_test',
'test/manual/ec2_snitch_test',
'test/manual/gce_snitch_test',
'test/manual/gossip',

View File

@@ -267,10 +267,13 @@ public:
}
};
/// The same as `impl_max_function_for' but without knowledge of `Type'.
/// The same as `impl_max_function_for' but without compile-time dependency on `Type'.
class impl_max_dynamic_function final : public aggregate_function::aggregate {
data_type _io_type;
opt_bytes _max;
public:
impl_max_dynamic_function(data_type io_type) : _io_type(std::move(io_type)) {}
virtual void reset() override {
_max = {};
}
@@ -278,12 +281,11 @@ public:
return _max.value_or(bytes{});
}
virtual void add_input(cql_serialization_format sf, const std::vector<opt_bytes>& values) override {
if (!values[0]) {
if (values.empty() || !values[0]) {
return;
}
const auto val = *values[0];
if (!_max || *_max < val) {
_max = val;
if (!_max || _io_type->less(*_max, *values[0])) {
_max = values[0];
}
}
};
@@ -298,10 +300,13 @@ public:
};
class max_dynamic_function final : public native_aggregate_function {
data_type _io_type;
public:
max_dynamic_function(data_type io_type) : native_aggregate_function("max", io_type, { io_type }) {}
max_dynamic_function(data_type io_type)
: native_aggregate_function("max", io_type, { io_type })
, _io_type(std::move(io_type)) {}
virtual std::unique_ptr<aggregate> new_aggregate() override {
return std::make_unique<impl_max_dynamic_function>();
return std::make_unique<impl_max_dynamic_function>(_io_type);
}
};
@@ -358,10 +363,13 @@ public:
}
};
/// The same as `impl_min_function_for' but without knowledge of `Type'.
/// The same as `impl_min_function_for' but without compile-time dependency on `Type'.
class impl_min_dynamic_function final : public aggregate_function::aggregate {
data_type _io_type;
opt_bytes _min;
public:
impl_min_dynamic_function(data_type io_type) : _io_type(std::move(io_type)) {}
virtual void reset() override {
_min = {};
}
@@ -369,12 +377,11 @@ public:
return _min.value_or(bytes{});
}
virtual void add_input(cql_serialization_format sf, const std::vector<opt_bytes>& values) override {
if (!values[0]) {
if (values.empty() || !values[0]) {
return;
}
const auto val = *values[0];
if (!_min || val < *_min) {
_min = val;
if (!_min || _io_type->less(*values[0], *_min)) {
_min = values[0];
}
}
};
@@ -389,10 +396,13 @@ public:
};
class min_dynamic_function final : public native_aggregate_function {
data_type _io_type;
public:
min_dynamic_function(data_type io_type) : native_aggregate_function("min", io_type, { io_type }) {}
min_dynamic_function(data_type io_type)
: native_aggregate_function("min", io_type, { io_type })
, _io_type(std::move(io_type)) {}
virtual std::unique_ptr<aggregate> new_aggregate() override {
return std::make_unique<impl_min_dynamic_function>();
return std::make_unique<impl_min_dynamic_function>(_io_type);
}
};

View File

@@ -88,16 +88,13 @@ static data_value castas_fctn_simple(data_value from) {
template<typename ToType>
static data_value castas_fctn_from_decimal_to_float(data_value from) {
auto val_from = value_cast<big_decimal>(from);
boost::multiprecision::cpp_int ten(10);
boost::multiprecision::cpp_rational r = val_from.unscaled_value();
r /= boost::multiprecision::pow(ten, val_from.scale());
return static_cast<ToType>(r);
return static_cast<ToType>(val_from.as_rational());
}
static utils::multiprecision_int from_decimal_to_cppint(const data_value& from) {
const auto& val_from = value_cast<big_decimal>(from);
boost::multiprecision::cpp_int ten(10);
return boost::multiprecision::cpp_int(val_from.unscaled_value() / boost::multiprecision::pow(ten, val_from.scale()));
auto r = val_from.as_rational();
return utils::multiprecision_int(numerator(r)/denominator(r));
}
template<typename ToType>

View File

@@ -357,7 +357,12 @@ lists::setter_by_uuid::execute(mutation& m, const clustering_key_prefix& prefix,
collection_mutation_description mut;
mut.cells.reserve(1);
mut.cells.emplace_back(to_bytes(*index), params.make_cell(*ltype->value_comparator(), *value, atomic_cell::collection_member::yes));
if (!value) {
mut.cells.emplace_back(to_bytes(*index), params.make_dead_cell());
} else {
mut.cells.emplace_back(to_bytes(*index), params.make_cell(*ltype->value_comparator(), *value, atomic_cell::collection_member::yes));
}
m.set_cell(prefix, column, mut.serialize(*ltype));
}

View File

@@ -417,7 +417,7 @@ std::vector<const column_definition*> statement_restrictions::get_column_defs_fo
_clustering_columns_restrictions->num_prefix_columns_that_need_not_be_filtered();
for (auto&& cdef : _clustering_columns_restrictions->get_column_defs()) {
::shared_ptr<single_column_restriction> restr;
if (single_pk_restrs) {
if (single_ck_restrs) {
auto it = single_ck_restrs->restrictions().find(cdef);
if (it != single_ck_restrs->restrictions().end()) {
restr = dynamic_pointer_cast<single_column_restriction>(it->second);
@@ -688,6 +688,11 @@ static query::range<bytes_view> to_range(const term_slice& slice, const query_op
extract_bound(statements::bound::END));
}
static bool contains_without_wraparound(
const query::range<bytes_view>& range, bytes_view value, const serialized_tri_compare& cmp) {
return !range.is_wrap_around(cmp) && range.contains(value, cmp);
}
bool single_column_restriction::slice::is_satisfied_by(const schema& schema,
const partition_key& key,
const clustering_key_prefix& ckey,
@@ -702,13 +707,13 @@ bool single_column_restriction::slice::is_satisfied_by(const schema& schema,
return false;
}
return cell_value->with_linearized([&] (bytes_view cell_value_bv) {
return to_range(_slice, options, _column_def.name_as_text()).contains(
return contains_without_wraparound(to_range(_slice, options, _column_def.name_as_text()),
cell_value_bv, _column_def.type->as_tri_comparator());
});
}
bool single_column_restriction::slice::is_satisfied_by(bytes_view data, const query_options& options) const {
return to_range(_slice, options, _column_def.name_as_text()).contains(
return contains_without_wraparound(to_range(_slice, options, _column_def.name_as_text()),
data, _column_def.type->underlying_type()->as_tri_comparator());
}

View File

@@ -207,6 +207,9 @@ void alter_table_statement::add_column(const schema& schema, const table& cf, sc
"because a collection with the same name and a different type has already been used in the past", column_name));
}
}
if (type->is_counter() && !schema.is_counter()) {
throw exceptions::configuration_exception(format("Cannot add a counter column ({}) in a non counter column family", column_name));
}
cfm.with_column(column_name.name(), type, is_static ? column_kind::static_column : column_kind::regular_column);
@@ -222,7 +225,7 @@ void alter_table_statement::add_column(const schema& schema, const table& cf, sc
schema_builder builder(view);
if (view->view_info()->include_all_columns()) {
builder.with_column(column_name.name(), type);
} else if (view->view_info()->base_non_pk_columns_in_view_pk().empty()) {
} else if (!view->view_info()->has_base_non_pk_columns_in_view_pk()) {
db::view::create_virtual_column(builder, column_name.name(), type);
}
view_updates.push_back(view_ptr(builder.build()));

View File

@@ -1851,7 +1851,11 @@ future<> database::truncate(const keyspace& ks, column_family& cf, timestamp_fun
// TODO: indexes.
// Note: since discard_sstables was changed to only count tables owned by this shard,
// we can get zero rp back. Changed assert, and ensure we save at least low_mark.
assert(low_mark <= rp || rp == db::replay_position());
// #6995 - the assert below was broken in c2c6c71 and remained so for many years.
// We nowadays do not flush tables with sstables but autosnapshot=false. This means
// the low_mark assertion does not hold, because we maybe/probably never got around to
// creating the sstables that would create them.
assert(!should_flush || low_mark <= rp || rp == db::replay_position());
rp = std::max(low_mark, rp);
return truncate_views(cf, truncated_at, should_flush).then([&cf, truncated_at, rp] {
// save_truncation_record() may actually fail after we cached the truncation time

View File

@@ -55,6 +55,7 @@
#include <limits>
#include <cstddef>
#include "schema_fwd.hh"
#include "db/view/view.hh"
#include "db/schema_features.hh"
#include "gms/feature.hh"
#include "timestamp.hh"
@@ -901,7 +902,7 @@ public:
lw_shared_ptr<const sstable_list> get_sstables_including_compacted_undeleted() const;
const std::vector<sstables::shared_sstable>& compacted_undeleted_sstables() const;
std::vector<sstables::shared_sstable> select_sstables(const dht::partition_range& range) const;
std::vector<sstables::shared_sstable> candidates_for_compaction() const;
std::vector<sstables::shared_sstable> non_staging_sstables() const;
std::vector<sstables::shared_sstable> sstables_need_rewrite() const;
size_t sstables_count() const;
std::vector<uint64_t> sstable_count_per_level() const;
@@ -1008,8 +1009,9 @@ public:
return *_config.sstables_manager;
}
// Reader's schema must be the same as the base schema of each of the views.
future<> populate_views(
std::vector<view_ptr>,
std::vector<db::view::view_and_base>,
dht::token base_token,
flat_mutation_reader&&,
gc_clock::time_point);
@@ -1027,7 +1029,7 @@ private:
tracing::trace_state_ptr tr_state, reader_concurrency_semaphore& sem, const io_priority_class& io_priority, query::partition_slice::option_set custom_opts) const;
std::vector<view_ptr> affected_views(const schema_ptr& base, const mutation& update, gc_clock::time_point now) const;
future<> generate_and_propagate_view_updates(const schema_ptr& base,
std::vector<view_ptr>&& views,
std::vector<db::view::view_and_base>&& views,
mutation&& m,
flat_mutation_reader_opt existings,
tracing::trace_state_ptr tr_state,
@@ -1099,6 +1101,10 @@ flat_mutation_reader make_local_shard_sstable_reader(schema_ptr s,
mutation_reader::forwarding fwd_mr,
sstables::read_monitor_generator& monitor_generator = sstables::default_read_monitor_generator());
/// Read a range from the passed-in sstables.
///
/// The reader is unrestricted, but will account its resource usage on the
/// semaphore belonging to the passed-in permit.
flat_mutation_reader make_range_sstable_reader(schema_ptr s,
reader_permit permit,
lw_shared_ptr<sstables::sstable_set> sstables,
@@ -1110,6 +1116,21 @@ flat_mutation_reader make_range_sstable_reader(schema_ptr s,
mutation_reader::forwarding fwd_mr,
sstables::read_monitor_generator& monitor_generator = sstables::default_read_monitor_generator());
/// Read a range from the passed-in sstables.
///
/// The reader is restricted, that is it will wait for admission on the semaphore
/// belonging to the passed-in permit, before starting to read.
flat_mutation_reader make_restricted_range_sstable_reader(schema_ptr s,
reader_permit permit,
lw_shared_ptr<sstables::sstable_set> sstables,
const dht::partition_range& pr,
const query::partition_slice& slice,
const io_priority_class& pc,
tracing::trace_state_ptr trace_state,
streamed_mutation::forwarding fwd,
mutation_reader::forwarding fwd_mr,
sstables::read_monitor_generator& monitor_generator = sstables::default_read_monitor_generator());
class user_types_metadata;
class keyspace_metadata final {

View File

@@ -290,7 +290,7 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
mutation m(schema, key);
auto now = service::client_state(service::client_state::internal_tag()).get_timestamp();
m.partition().apply_delete(*schema, clustering_key_prefix::make_empty(), tombstone(now, gc_clock::now()));
return _qp.proxy().mutate_locally(m, tracing::trace_state_ptr());
return _qp.proxy().mutate_locally(m, tracing::trace_state_ptr(), db::commitlog::force_sync::no);
});
};

View File

@@ -521,7 +521,7 @@ public:
_segment_manager->totals.total_size_on_disk -= size_on_disk();
_segment_manager->totals.total_size -= (size_on_disk() + _buffer.size_bytes());
_segment_manager->add_file_to_delete(_file_name, _desc);
} else {
} else if (_segment_manager->cfg.warn_about_segments_left_on_disk_after_shutdown) {
clogger.warn("Segment {} is dirty and is left on disk.", *this);
}
}

View File

@@ -137,6 +137,7 @@ public:
bool reuse_segments = true;
bool use_o_dsync = false;
bool warn_about_segments_left_on_disk_after_shutdown = true;
const db::extensions * extensions = nullptr;
};

View File

@@ -304,7 +304,7 @@ future<> db::commitlog_replayer::impl::process(stats* s, commitlog::buffer_and_r
mutation m(cf.schema(), fm.decorated_key(*cf.schema()));
converting_mutation_partition_applier v(cm, *cf.schema(), m.partition());
fm.partition().accept(cm, v);
return do_with(std::move(m), [&db, &cf] (mutation m) {
return do_with(std::move(m), [&db, &cf] (const mutation& m) {
return db.apply_in_memory(m, cf, db::rp_handle(), db::no_timeout);
});
} else {

View File

@@ -681,7 +681,7 @@ db::config::config(std::shared_ptr<db::extensions> exts)
, replace_address(this, "replace_address", value_status::Used, "", "The listen_address or broadcast_address of the dead node to replace. Same as -Dcassandra.replace_address.")
, replace_address_first_boot(this, "replace_address_first_boot", value_status::Used, "", "Like replace_address option, but if the node has been bootstrapped successfully it will be ignored. Same as -Dcassandra.replace_address_first_boot.")
, override_decommission(this, "override_decommission", value_status::Used, false, "Set true to force a decommissioned node to join the cluster")
, enable_repair_based_node_ops(this, "enable_repair_based_node_ops", liveness::LiveUpdate, value_status::Used, true, "Set true to use enable repair based node operations instead of streaming based")
, enable_repair_based_node_ops(this, "enable_repair_based_node_ops", liveness::LiveUpdate, value_status::Used, false, "Set true to use enable repair based node operations instead of streaming based")
, ring_delay_ms(this, "ring_delay_ms", value_status::Used, 30 * 1000, "Time a node waits to hear from other nodes before joining the ring in milliseconds. Same as -Dcassandra.ring_delay_ms in cassandra.")
, shadow_round_ms(this, "shadow_round_ms", value_status::Used, 300 * 1000, "The maximum gossip shadow round time. Can be used to reduce the gossip feature check time during node boot up.")
, fd_max_interval_ms(this, "fd_max_interval_ms", value_status::Used, 2 * 1000, "The maximum failure_detector interval time in milliseconds. Interval larger than the maximum will be ignored. Larger cluster may need to increase the default.")

View File

@@ -224,7 +224,9 @@ future<> manager::end_point_hints_manager::stop(drain should_drain) noexcept {
with_lock(file_update_mutex(), [this] {
if (_hints_store_anchor) {
hints_store_ptr tmp = std::exchange(_hints_store_anchor, nullptr);
return tmp->shutdown().finally([tmp] {});
return tmp->shutdown().finally([tmp] {
return tmp->release();
}).finally([tmp] {});
}
return make_ready_future<>();
}).handle_exception([&eptr] (auto e) { eptr = std::move(e); }).get();
@@ -290,7 +292,7 @@ inline bool manager::have_ep_manager(ep_key_type ep) const noexcept {
}
bool manager::store_hint(ep_key_type ep, schema_ptr s, lw_shared_ptr<const frozen_mutation> fm, tracing::trace_state_ptr tr_state) noexcept {
if (stopping() || !started() || !can_hint_for(ep)) {
if (stopping() || draining_all() || !started() || !can_hint_for(ep)) {
manager_logger.trace("Can't store a hint to {}", ep);
++_stats.dropped;
return false;
@@ -326,6 +328,10 @@ future<db::commitlog> manager::end_point_hints_manager::add_store() noexcept {
// HH doesn't utilize the flow that benefits from reusing segments.
// Therefore let's simply disable it to avoid any possible confusion.
cfg.reuse_segments = false;
// HH leaves segments on disk after commitlog shutdown, and later reads
// them when commitlog is re-created. This is expected to happen regularly
// during standard HH workload, so no need to print a warning about it.
cfg.warn_about_segments_left_on_disk_after_shutdown = false;
return commitlog::create_commitlog(std::move(cfg)).then([this] (commitlog l) {
// add_store() is triggered every time hint files are forcefully flushed to I/O (every hints_flush_period).
@@ -352,7 +358,9 @@ future<> manager::end_point_hints_manager::flush_current_hints() noexcept {
return futurize_invoke([this] {
return with_lock(file_update_mutex(), [this]() -> future<> {
return get_or_load().then([] (hints_store_ptr cptr) {
return cptr->shutdown();
return cptr->shutdown().finally([cptr] {
return cptr->release();
}).finally([cptr] {});
}).then([this] {
// Un-hold the commitlog object. Since we are under the exclusive _file_update_mutex lock there are no
// other hints_store_ptr copies and this would destroy the commitlog shared value.
@@ -529,7 +537,7 @@ bool manager::check_dc_for(ep_key_type ep) const noexcept {
}
void manager::drain_for(gms::inet_address endpoint) {
if (stopping()) {
if (stopping() || draining_all()) {
return;
}
@@ -540,6 +548,7 @@ void manager::drain_for(gms::inet_address endpoint) {
return with_semaphore(drain_lock(), 1, [this, endpoint] {
return futurize_invoke([this, endpoint] () {
if (utils::fb_utilities::is_me(endpoint)) {
set_draining_all();
return parallel_for_each(_ep_managers, [] (auto& pair) {
return pair.second.stop(drain::yes).finally([&pair] {
return with_file_update_mutex(pair.second, [&pair] {

View File

@@ -424,12 +424,14 @@ public:
enum class state {
started, // hinting is currently allowed (start() call is complete)
replay_allowed, // replaying (hints sending) is allowed
draining_all, // hinting is not allowed - all ep managers are being stopped because this node is leaving the cluster
stopping // hinting is not allowed - stopping is in progress (stop() method has been called)
};
using state_set = enum_set<super_enum<state,
state::started,
state::replay_allowed,
state::draining_all,
state::stopping>>;
private:
@@ -690,6 +692,14 @@ private:
return _state.contains(state::replay_allowed);
}
void set_draining_all() noexcept {
_state.set(state::draining_all);
}
bool draining_all() noexcept {
return _state.contains(state::draining_all);
}
public:
ep_managers_map_type::iterator find_ep_manager(ep_key_type ep_key) noexcept {
return _ep_managers.find(ep_key);

View File

@@ -58,6 +58,7 @@
#include "cql3/util.hh"
#include "db/view/view.hh"
#include "db/view/view_builder.hh"
#include "db/view/view_updating_consumer.hh"
#include "db/system_keyspace_view_types.hh"
#include "db/system_keyspace.hh"
#include "frozen_mutation.hh"
@@ -136,17 +137,26 @@ const column_definition* view_info::view_column(const column_definition& base_de
return _schema.get_column_definition(base_def.name());
}
const std::vector<column_id>& view_info::base_non_pk_columns_in_view_pk() const {
return _base_non_pk_columns_in_view_pk;
void view_info::set_base_info(db::view::base_info_ptr base_info) {
_base_info = std::move(base_info);
}
void view_info::initialize_base_dependent_fields(const schema& base) {
db::view::base_info_ptr view_info::make_base_dependent_view_info(const schema& base) const {
std::vector<column_id> base_non_pk_columns_in_view_pk;
for (auto&& view_col : boost::range::join(_schema.partition_key_columns(), _schema.clustering_key_columns())) {
auto* base_col = base.get_column_definition(view_col.name());
if (base_col && !base_col->is_primary_key()) {
_base_non_pk_columns_in_view_pk.push_back(base_col->id);
base_non_pk_columns_in_view_pk.push_back(base_col->id);
}
}
return make_lw_shared<db::view::base_dependent_view_info>({
.base_schema = base.shared_from_this(),
.base_non_pk_columns_in_view_pk = std::move(base_non_pk_columns_in_view_pk)
});
}
bool view_info::has_base_non_pk_columns_in_view_pk() const {
return !_base_info->base_non_pk_columns_in_view_pk.empty();
}
namespace db {
@@ -194,12 +204,12 @@ bool may_be_affected_by(const schema& base, const view_info& view, const dht::de
}
static bool update_requires_read_before_write(const schema& base,
const std::vector<view_ptr>& views,
const std::vector<view_and_base>& views,
const dht::decorated_key& key,
const rows_entry& update,
gc_clock::time_point now) {
for (auto&& v : views) {
view_info& vf = *v->view_info();
view_info& vf = *v.view->view_info();
if (may_be_affected_by(base, vf, key, update, now)) {
return true;
}
@@ -246,12 +256,14 @@ class view_updates final {
view_ptr _view;
const view_info& _view_info;
schema_ptr _base;
base_info_ptr _base_info;
std::unordered_map<partition_key, mutation_partition, partition_key::hashing, partition_key::equality> _updates;
public:
explicit view_updates(view_ptr view, schema_ptr base)
: _view(std::move(view))
explicit view_updates(view_and_base vab)
: _view(std::move(vab.view))
, _view_info(*_view->view_info())
, _base(std::move(base))
, _base(vab.base->base_schema)
, _base_info(vab.base)
, _updates(8, partition_key::hashing(*_view), partition_key::equality(*_view)) {
}
@@ -313,7 +325,7 @@ row_marker view_updates::compute_row_marker(const clustering_row& base_row) cons
// they share liveness information. It's true especially in the only case currently allowed by CQL,
// which assumes there's up to one non-pk column in the view key. It's also true in alternator,
// which does not carry TTL information.
const auto& col_ids = _view_info.base_non_pk_columns_in_view_pk();
const auto& col_ids = _base_info->base_non_pk_columns_in_view_pk;
if (!col_ids.empty()) {
auto& def = _base->regular_column_at(col_ids[0]);
// Note: multi-cell columns can't be part of the primary key.
@@ -544,7 +556,7 @@ void view_updates::delete_old_entry(const partition_key& base_key, const cluster
void view_updates::do_delete_old_entry(const partition_key& base_key, const clustering_row& existing, const clustering_row& update, gc_clock::time_point now) {
auto& r = get_view_row(base_key, existing);
const auto& col_ids = _view_info.base_non_pk_columns_in_view_pk();
const auto& col_ids = _base_info->base_non_pk_columns_in_view_pk;
if (!col_ids.empty()) {
// We delete the old row using a shadowable row tombstone, making sure that
// the tombstone deletes everything in the row (or it might still show up).
@@ -685,7 +697,7 @@ void view_updates::generate_update(
return;
}
const auto& col_ids = _view_info.base_non_pk_columns_in_view_pk();
const auto& col_ids = _base_info->base_non_pk_columns_in_view_pk;
if (col_ids.empty()) {
// The view key is necessarily the same pre and post update.
if (existing && existing->is_live(*_base)) {
@@ -940,12 +952,17 @@ future<stop_iteration> view_update_builder::on_results() {
future<std::vector<frozen_mutation_and_schema>> generate_view_updates(
const schema_ptr& base,
std::vector<view_ptr>&& views_to_update,
std::vector<view_and_base>&& views_to_update,
flat_mutation_reader&& updates,
flat_mutation_reader_opt&& existings,
gc_clock::time_point now) {
auto vs = boost::copy_range<std::vector<view_updates>>(views_to_update | boost::adaptors::transformed([&] (auto&& v) {
return view_updates(std::move(v), base);
auto vs = boost::copy_range<std::vector<view_updates>>(views_to_update | boost::adaptors::transformed([&] (view_and_base v) {
if (base->version() != v.base->base_schema->version()) {
on_internal_error(vlogger, format("Schema version used for view updates ({}) does not match the current"
" base schema version of the view ({}) for view {}.{} of {}.{}",
base->version(), v.base->base_schema->version(), v.view->ks_name(), v.view->cf_name(), base->ks_name(), base->cf_name()));
}
return view_updates(std::move(v));
}));
auto builder = std::make_unique<view_update_builder>(base, std::move(vs), std::move(updates), std::move(existings), now);
auto f = builder->build();
@@ -955,7 +972,7 @@ future<std::vector<frozen_mutation_and_schema>> generate_view_updates(
query::clustering_row_ranges calculate_affected_clustering_ranges(const schema& base,
const dht::decorated_key& key,
const mutation_partition& mp,
const std::vector<view_ptr>& views,
const std::vector<view_and_base>& views,
gc_clock::time_point now) {
std::vector<nonwrapping_range<clustering_key_prefix_view>> row_ranges;
std::vector<nonwrapping_range<clustering_key_prefix_view>> view_row_ranges;
@@ -963,11 +980,11 @@ query::clustering_row_ranges calculate_affected_clustering_ranges(const schema&
if (mp.partition_tombstone() || !mp.row_tombstones().empty()) {
for (auto&& v : views) {
// FIXME: #2371
if (v->view_info()->select_statement().get_restrictions()->has_unrestricted_clustering_columns()) {
if (v.view->view_info()->select_statement().get_restrictions()->has_unrestricted_clustering_columns()) {
view_row_ranges.push_back(nonwrapping_range<clustering_key_prefix_view>::make_open_ended_both_sides());
break;
}
for (auto&& r : v->view_info()->partition_slice().default_row_ranges()) {
for (auto&& r : v.view->view_info()->partition_slice().default_row_ranges()) {
view_row_ranges.push_back(r.transform(std::mem_fn(&clustering_key_prefix::view)));
}
}
@@ -1732,7 +1749,7 @@ public:
return stop_iteration::yes;
}
_fragments_memory_usage += cr.memory_usage(*_step.base->schema());
_fragments_memory_usage += cr.memory_usage(*_step.reader.schema());
_fragments.push_back(std::move(cr));
if (_fragments_memory_usage > batch_memory_max) {
// Although we have not yet completed the batch of base rows that
@@ -1754,10 +1771,14 @@ public:
_builder._as.check();
if (!_fragments.empty()) {
_fragments.push_front(partition_start(_step.current_key, tombstone()));
auto base_schema = _step.base->schema();
auto views = with_base_info_snapshot(_views_to_build);
auto reader = make_flat_mutation_reader_from_fragments(_step.reader.schema(), std::move(_fragments));
reader.upgrade_schema(base_schema);
_step.base->populate_views(
_views_to_build,
std::move(views),
_step.current_token(),
make_flat_mutation_reader_from_fragments(_step.base->schema(), std::move(_fragments)),
std::move(reader),
_now).get();
_fragments.clear();
_fragments_memory_usage = 0;
@@ -1909,5 +1930,54 @@ future<bool> check_needs_view_update_path(db::system_distributed_keyspace& sys_d
});
}
const size_t view_updating_consumer::buffer_size_soft_limit{1 * 1024 * 1024};
const size_t view_updating_consumer::buffer_size_hard_limit{2 * 1024 * 1024};
void view_updating_consumer::do_flush_buffer() {
_staging_reader_handle.pause();
if (_buffer.front().partition().empty()) {
// If we flushed mid-partition we can have an empty mutation if we
// flushed right before getting the end-of-partition fragment.
_buffer.pop_front();
}
while (!_buffer.empty()) {
try {
auto lock_holder = _view_update_pusher(std::move(_buffer.front())).get();
} catch (...) {
vlogger.warn("Failed to push replica updates for table {}.{}: {}", _schema->ks_name(), _schema->cf_name(), std::current_exception());
}
_buffer.pop_front();
}
_buffer_size = 0;
_m = nullptr;
}
void view_updating_consumer::maybe_flush_buffer_mid_partition() {
if (_buffer_size >= buffer_size_hard_limit) {
auto m = mutation(_schema, _m->decorated_key(), mutation_partition(_schema));
do_flush_buffer();
_buffer.emplace_back(std::move(m));
_m = &_buffer.back();
}
}
view_updating_consumer::view_updating_consumer(schema_ptr schema, table& table, std::vector<sstables::shared_sstable> excluded_sstables, const seastar::abort_source& as,
evictable_reader_handle& staging_reader_handle)
: view_updating_consumer(std::move(schema), as, staging_reader_handle,
[table = table.shared_from_this(), excluded_sstables = std::move(excluded_sstables)] (mutation m) mutable {
auto s = m.schema();
return table->stream_view_replica_updates(std::move(s), std::move(m), db::no_timeout, excluded_sstables);
})
{ }
std::vector<db::view::view_and_base> with_base_info_snapshot(std::vector<view_ptr> vs) {
return boost::copy_range<std::vector<db::view::view_and_base>>(vs | boost::adaptors::transformed([] (const view_ptr& v) {
return db::view::view_and_base{v, v->view_info()->base_info()};
}));
}
} // namespace view
} // namespace db

View File

@@ -43,6 +43,27 @@ namespace db {
namespace view {
// Part of the view description which depends on the base schema version.
//
// This structure may change even though the view schema doesn't change, so
// it needs to live outside view_ptr.
struct base_dependent_view_info {
schema_ptr base_schema;
// Id of a regular base table column included in the view's PK, if any.
// Scylla views only allow one such column, alternator can have up to two.
std::vector<column_id> base_non_pk_columns_in_view_pk;
};
// Immutable snapshot of view's base-schema-dependent part.
using base_info_ptr = lw_shared_ptr<const base_dependent_view_info>;
// Snapshot of the view schema and its base-schema-dependent part.
struct view_and_base {
view_ptr view;
base_info_ptr base;
};
/**
* Whether the view filter considers the specified partition key.
*
@@ -94,7 +115,7 @@ bool clustering_prefix_matches(const schema& base, const partition_key& key, con
future<std::vector<frozen_mutation_and_schema>> generate_view_updates(
const schema_ptr& base,
std::vector<view_ptr>&& views_to_update,
std::vector<view_and_base>&& views_to_update,
flat_mutation_reader&& updates,
flat_mutation_reader_opt&& existings,
gc_clock::time_point now);
@@ -103,7 +124,7 @@ query::clustering_row_ranges calculate_affected_clustering_ranges(
const schema& base,
const dht::decorated_key& key,
const mutation_partition& mp,
const std::vector<view_ptr>& views,
const std::vector<view_and_base>& views,
gc_clock::time_point now);
struct wait_for_all_updates_tag {};
@@ -133,6 +154,13 @@ future<> mutate_MV(
*/
void create_virtual_column(schema_builder& builder, const bytes& name, const data_type& type);
/**
* Converts a collection of view schema snapshots into a collection of
* view_and_base objects, which are snapshots of both the view schema
* and the base-schema-dependent part of view description.
*/
std::vector<view_and_base> with_base_info_snapshot(std::vector<view_ptr>);
}
}

View File

@@ -42,35 +42,52 @@ future<> view_update_generator::start() {
_pending_sstables.wait().get();
}
// To ensure we don't race with updates, move the entire content
// into a local variable.
auto sstables_with_tables = std::exchange(_sstables_with_tables, {});
// If we got here, we will process all tables we know about so far eventually so there
// is no starvation
for (auto& t : _sstables_with_tables | boost::adaptors::map_keys) {
for (auto table_it = sstables_with_tables.begin(); table_it != sstables_with_tables.end(); table_it = sstables_with_tables.erase(table_it)) {
auto& [t, sstables] = *table_it;
schema_ptr s = t->schema();
// Copy what we have so far so we don't miss new updates
auto sstables = std::exchange(_sstables_with_tables[t], {});
vug_logger.trace("Processing {}.{}: {} sstables", s->ks_name(), s->cf_name(), sstables.size());
const auto num_sstables = sstables.size();
try {
// temporary: need an sstable set for the flat mutation reader, but the
// compaction_descriptor takes a vector. Soon this will become a compaction
// so the transformation to the SSTable set will not be needed.
auto ssts = make_lw_shared(t->get_compaction_strategy().make_sstable_set(s));
// Exploit the fact that sstables in the staging directory
// are usually non-overlapping and use a partitioned set for
// the read.
auto ssts = make_lw_shared(sstables::make_partitioned_sstable_set(s, make_lw_shared<sstable_list>(sstable_list{}), false));
for (auto& sst : sstables) {
ssts->insert(sst);
}
flat_mutation_reader staging_sstable_reader = ::make_range_sstable_reader(s,
auto ms = mutation_source([this, ssts] (
schema_ptr s,
reader_permit permit,
const dht::partition_range& pr,
const query::partition_slice& ps,
const io_priority_class& pc,
tracing::trace_state_ptr ts,
streamed_mutation::forwarding fwd_ms,
mutation_reader::forwarding fwd_mr) {
return ::make_restricted_range_sstable_reader(s, std::move(permit), std::move(ssts), pr, ps, pc, std::move(ts), fwd_ms, fwd_mr);
});
auto [staging_sstable_reader, staging_sstable_reader_handle] = make_manually_paused_evictable_reader(
std::move(ms),
s,
_db.make_query_class_config().semaphore.make_permit(),
std::move(ssts),
query::full_partition_range,
s->full_slice(),
service::get_local_streaming_priority(),
nullptr,
::streamed_mutation::forwarding::no,
::mutation_reader::forwarding::no);
inject_failure("view_update_generator_consume_staging_sstable");
auto result = staging_sstable_reader.consume_in_thread(view_updating_consumer(s, *t, sstables, _as), db::no_timeout);
auto result = staging_sstable_reader.consume_in_thread(view_updating_consumer(s, *t, sstables, _as, staging_sstable_reader_handle), db::no_timeout);
if (result == stop_iteration::yes) {
break;
}
@@ -89,7 +106,7 @@ future<> view_update_generator::start() {
// Move from staging will be retried upon restart.
vug_logger.warn("Moving {} from staging failed: {}:{}. Ignoring...", s->ks_name(), s->cf_name(), std::current_exception());
}
_registration_sem.signal();
_registration_sem.signal(num_sstables);
}
// For each table, move the processed staging sstables into the table's base dir.
for (auto it = _sstables_to_move.begin(); it != _sstables_to_move.end(); ) {

View File

@@ -32,7 +32,10 @@
namespace db::view {
class view_update_generator {
public:
static constexpr size_t registration_queue_size = 5;
private:
database& _db;
seastar::abort_source _as;
future<> _started = make_ready_future<>();
@@ -51,6 +54,8 @@ public:
future<> start();
future<> stop();
future<> register_staging_sstable(sstables::shared_sstable sst, lw_shared_ptr<table> table);
ssize_t available_register_units() const { return _registration_sem.available_units(); }
private:
bool should_throttle() const;
};

View File

@@ -27,6 +27,8 @@
#include "sstables/shared_sstable.hh"
#include "database.hh"
class evictable_reader_handle;
namespace db::view {
/*
@@ -34,22 +36,46 @@ namespace db::view {
* It is expected to be run in seastar::async threaded context through consume_in_thread()
*/
class view_updating_consumer {
schema_ptr _schema;
lw_shared_ptr<table> _table;
std::vector<sstables::shared_sstable> _excluded_sstables;
const seastar::abort_source* _as;
std::optional<mutation> _m;
public:
view_updating_consumer(schema_ptr schema, table& table, std::vector<sstables::shared_sstable> excluded_sstables, const seastar::abort_source& as)
// We prefer flushing on partition boundaries, so at the end of a partition,
// we flush on reaching the soft limit. Otherwise we continue accumulating
// data. We flush mid-partition if we reach the hard limit.
static const size_t buffer_size_soft_limit;
static const size_t buffer_size_hard_limit;
private:
schema_ptr _schema;
const seastar::abort_source* _as;
evictable_reader_handle& _staging_reader_handle;
circular_buffer<mutation> _buffer;
mutation* _m{nullptr};
size_t _buffer_size{0};
noncopyable_function<future<row_locker::lock_holder>(mutation)> _view_update_pusher;
private:
void do_flush_buffer();
void maybe_flush_buffer_mid_partition();
public:
// Push updates with a custom pusher. Mainly for tests.
view_updating_consumer(schema_ptr schema, const seastar::abort_source& as, evictable_reader_handle& staging_reader_handle,
noncopyable_function<future<row_locker::lock_holder>(mutation)> view_update_pusher)
: _schema(std::move(schema))
, _table(table.shared_from_this())
, _excluded_sstables(std::move(excluded_sstables))
, _as(&as)
, _m()
, _staging_reader_handle(staging_reader_handle)
, _view_update_pusher(std::move(view_update_pusher))
{ }
view_updating_consumer(schema_ptr schema, table& table, std::vector<sstables::shared_sstable> excluded_sstables, const seastar::abort_source& as,
evictable_reader_handle& staging_reader_handle);
view_updating_consumer(view_updating_consumer&&) = default;
view_updating_consumer& operator=(view_updating_consumer&&) = delete;
void consume_new_partition(const dht::decorated_key& dk) {
_m = mutation(_schema, dk, mutation_partition(_schema));
_buffer.emplace_back(_schema, dk, mutation_partition(_schema));
_m = &_buffer.back();
}
void consume(tombstone t) {
@@ -60,7 +86,9 @@ public:
if (_as->abort_requested()) {
return stop_iteration::yes;
}
_buffer_size += sr.memory_usage(*_schema);
_m->partition().apply(*_schema, std::move(sr));
maybe_flush_buffer_mid_partition();
return stop_iteration::no;
}
@@ -68,7 +96,9 @@ public:
if (_as->abort_requested()) {
return stop_iteration::yes;
}
_buffer_size += cr.memory_usage(*_schema);
_m->partition().apply(*_schema, std::move(cr));
maybe_flush_buffer_mid_partition();
return stop_iteration::no;
}
@@ -76,14 +106,27 @@ public:
if (_as->abort_requested()) {
return stop_iteration::yes;
}
_buffer_size += rt.memory_usage(*_schema);
_m->partition().apply(*_schema, std::move(rt));
maybe_flush_buffer_mid_partition();
return stop_iteration::no;
}
// Expected to be run in seastar::async threaded context (consume_in_thread())
stop_iteration consume_end_of_partition();
stop_iteration consume_end_of_partition() {
if (_as->abort_requested()) {
return stop_iteration::yes;
}
if (_buffer_size >= buffer_size_soft_limit) {
do_flush_buffer();
}
return stop_iteration::no;
}
stop_iteration consume_end_of_stream() {
if (!_buffer.empty()) {
do_flush_buffer();
}
return stop_iteration(_as->abort_requested());
}
};
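
The buffer_size_soft_limit/buffer_size_hard_limit pair above implements a two-threshold policy: accumulate data across fragments, flush at a partition boundary once the soft limit is reached, and flush mid-partition only when the hard limit forces it. A self-contained sketch of that policy; the limit values and the flush() body are illustrative, not Scylla's:

#include <cstddef>

struct buffered_consumer {
    static constexpr std::size_t soft_limit = 1 << 20; // illustrative value
    static constexpr std::size_t hard_limit = 4 << 20; // illustrative value
    std::size_t size = 0;

    void flush() { size = 0; } // placeholder for pushing the buffered updates

    void on_fragment(std::size_t bytes) {
        size += bytes;
        if (size >= hard_limit) {
            flush(); // hard limit: flush even mid-partition
        }
    }

    void on_end_of_partition() {
        if (size >= soft_limit) {
            flush(); // soft limit: prefer flushing on partition boundaries
        }
    }
};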


@@ -59,7 +59,12 @@ future<> boot_strapper::bootstrap(streaming::stream_reason reason) {
return make_exception_future<>(std::runtime_error("Wrong stream_reason provided: it can only be replace or bootstrap"));
}
auto streamer = make_lw_shared<range_streamer>(_db, _token_metadata, _abort_source, _tokens, _address, description, reason);
streamer->add_source_filter(std::make_unique<range_streamer::failure_detector_source_filter>(gms::get_local_gossiper().get_unreachable_members()));
auto nodes_to_filter = gms::get_local_gossiper().get_unreachable_members();
if (reason == streaming::stream_reason::replace && _db.local().get_replace_address()) {
nodes_to_filter.insert(_db.local().get_replace_address().value());
}
blogger.debug("nodes_to_filter={}", nodes_to_filter);
streamer->add_source_filter(std::make_unique<range_streamer::failure_detector_source_filter>(nodes_to_filter));
auto keyspaces = make_lw_shared<std::vector<sstring>>(_db.local().get_non_system_keyspaces());
return do_for_each(*keyspaces, [this, keyspaces, streamer] (sstring& keyspace_name) {
auto& ks = _db.local().find_keyspace(keyspace_name);


@@ -61,7 +61,15 @@ def sh_command(*args):
return out
def get_url(path):
return urllib.request.urlopen(path).read().decode('utf-8')
# If the server returns any error, like 403 or 500, urllib.request throws an exception, which is not serializable.
# When multiprocessing routines fail to serialize it, an ambiguous serialization exception is thrown
# from get_json_from_url.
# In order to see the legitimate error we catch it inside the process, convert it to a string and
# pass it as part of the return value
try:
return 0, urllib.request.urlopen(path).read().decode('utf-8')
except Exception as exc:
return 1, str(exc)
def get_json_from_url(path):
pool = mp.Pool(processes=1)
@@ -71,13 +79,16 @@ def get_json_from_url(path):
# to enforce a wallclock timeout.
result = pool.apply_async(get_url, args=(path,))
try:
retval = result.get(timeout=5)
status, retval = result.get(timeout=5)
except mp.TimeoutError as err:
pool.terminate()
pool.join()
raise
if status == 1:
raise RuntimeError(f'Failed to get "{path}" due to the following error: {retval}')
return json.loads(retval)
def get_api(path):
return get_json_from_url("http://" + api_address + path)


@@ -27,6 +27,7 @@ import glob
import shutil
import io
import stat
import distro
from scylla_util import *
interactive = False
@@ -385,6 +386,9 @@ if __name__ == '__main__':
if not stat.S_ISBLK(os.stat(dsk).st_mode):
print('{} is not block device'.format(dsk))
continue
if dsk in selected:
print(f'{dsk} is already added')
continue
selected.append(dsk)
devices.remove(dsk)
disks = ','.join(selected)
@@ -468,5 +472,10 @@ if __name__ == '__main__':
print('Please restart your machine before using ScyllaDB, as you have disabled')
print(' SELinux.')
if dist_name() == 'Ubuntu':
run('apt-get install -y hugepages')
if distro.id() == 'ubuntu':
# Ubuntu version is 20.04 or later
if int(distro.major_version()) >= 20:
hugepkg = 'libhugetlbfs-bin'
else:
hugepkg = 'hugepages'
run(f'apt-get install -y {hugepkg}')


@@ -40,6 +40,10 @@ if __name__ == '__main__':
sys.exit(1)
memtotal = get_memtotal_gb()
if memtotal == 0:
print('memory too small: {} KB'.format(get_memtotal()))
sys.exit(1)
# The Scylla documentation says 'swap size should be set to either total_mem/3 or
# 16GB - lower of the two', so we need to compare 16GB with memtotal/3 and
# choose the lower one
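
The quoted rule reduces to a single min(): swap = min(total_mem / 3, 16 GB). A small C++ sketch of that computation (swap_size_bytes is a hypothetical helper; the script itself works in GB):

#include <algorithm>
#include <cstdint>

// Sketch: lower of total_mem/3 and 16 GB, per the documentation quoted above.
std::uint64_t swap_size_bytes(std::uint64_t memtotal_bytes) {
    constexpr std::uint64_t sixteen_gb = 16ull << 30;
    return std::min(memtotal_bytes / 3, sixteen_gb);
}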


@@ -184,7 +184,7 @@ class aws_instance:
instance_size = self.instance_size()
if instance_class in ['c3', 'c4', 'd2', 'i2', 'r3']:
return 'ixgbevf'
if instance_class in ['a1', 'c5', 'c5d', 'f1', 'g3', 'g4', 'h1', 'i3', 'i3en', 'inf1', 'm5', 'm5a', 'm5ad', 'm5d', 'm5dn', 'm5n', 'm6g', 'p2', 'p3', 'r4', 'r5', 'r5a', 'r5ad', 'r5d', 'r5dn', 'r5n', 't3', 't3a', 'u-6tb1', 'u-9tb1', 'u-12tb1', 'u-18tn1', 'u-24tb1', 'x1', 'x1e', 'z1d']:
if instance_class in ['a1', 'c5', 'c5a', 'c5d', 'c5n', 'c6g', 'c6gd', 'f1', 'g3', 'g4', 'h1', 'i3', 'i3en', 'inf1', 'm5', 'm5a', 'm5ad', 'm5d', 'm5dn', 'm5n', 'm6g', 'm6gd', 'p2', 'p3', 'r4', 'r5', 'r5a', 'r5ad', 'r5d', 'r5dn', 'r5n', 't3', 't3a', 'u-6tb1', 'u-9tb1', 'u-12tb1', 'u-18tn1', 'u-24tb1', 'x1', 'x1e', 'z1d']:
return 'ena'
if instance_class == 'm4':
if instance_size == '16xlarge':
@@ -331,7 +331,7 @@ class scylla_cpuinfo:
# When a CLI tool is not installed, use the relocatable CLI tool provided by Scylla
scylla_env = os.environ.copy()
scylla_env['PATH'] = '{}:{}'.format(scylla_env['PATH'], scyllabindir())
scylla_env['PATH'] = '{}:{}'.format(scyllabindir(), scylla_env['PATH'])
def run(cmd, shell=False, silent=False, exception=True):
stdout = subprocess.DEVNULL if silent else None
@@ -446,6 +446,19 @@ def dist_ver():
return distro.version()
SYSTEM_PARTITION_UUIDS = [
'21686148-6449-6e6f-744e-656564454649', # BIOS boot partition
'c12a7328-f81f-11d2-ba4b-00a0c93ec93b', # EFI system partition
'024dee41-33e7-11d3-9d69-0008c781f39f' # MBR partition scheme
]
def get_partition_uuid(dev):
return out(f'lsblk -n -oPARTTYPE {dev}')
def is_system_partition(dev):
uuid = get_partition_uuid(dev)
return (uuid in SYSTEM_PARTITION_UUIDS)
def is_unused_disk(dev):
# dev is not in /sys/class/block/, like /dev/nvme[0-9]+
if not os.path.isdir('/sys/class/block/{dev}'.format(dev=dev.replace('/dev/', ''))):
@@ -453,7 +466,8 @@ def is_unused_disk(dev):
try:
fd = os.open(dev, os.O_EXCL)
os.close(fd)
return True
# dev is not reserved for system
return not is_system_partition(dev)
except OSError:
return False
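
is_unused_disk() now rejects system partitions in addition to the O_EXCL probe. The probe itself works because on Linux, opening a block device with O_EXCL (and no O_CREAT) fails with EBUSY while the device is in use, e.g. mounted. The same probe as a C++ sketch; openable_exclusively is a hypothetical helper:

#include <fcntl.h>
#include <unistd.h>

// Sketch: O_EXCL on a block device fails while the device is held by the
// system (mounted, used by device mapper, etc.).
bool openable_exclusively(const char* dev) {
    int fd = ::open(dev, O_RDONLY | O_EXCL);
    if (fd < 0) {
        return false;   // busy or inaccessible
    }
    ::close(fd);
    return true;        // free; caller must still exclude system partitions
}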


@@ -39,6 +39,7 @@ override_dh_strip:
# The binaries (ethtool...patchelf) don't pass dh_strip after going through patchelf. Since they are
# already stripped, nothing is lost if we exclude them, so that's what we do.
dh_strip -Xlibprotobuf.so.15 -Xld.so -Xethtool -Xgawk -Xgzip -Xhwloc-calc -Xhwloc-distrib -Xifconfig -Xlscpu -Xnetstat -Xpatchelf --dbg-package=$(product)-server-dbg
find $(CURDIR)/debian/$(product)-server-dbg/usr/lib/debug/.build-id/ -name "*.debug" -exec objcopy --decompress-debug-sections {} \;
override_dh_makeshlibs:


@@ -5,8 +5,8 @@ MAINTAINER Avi Kivity <avi@cloudius-systems.com>
ENV container docker
# The SCYLLA_REPO_URL argument specifies the URL of the RPM repository this Docker image uses to install Scylla. The default value is Scylla's unstable RPM repository, which contains the daily build.
ARG SCYLLA_REPO_URL=http://downloads.scylladb.com/rpm/unstable/centos/master/latest/scylla.repo
ARG VERSION=666.development
ARG SCYLLA_REPO_URL=http://downloads.scylladb.com/rpm/unstable/centos/scylla-4.2/latest/scylla.repo
ARG VERSION=4.2
ADD scylla_bashrc /scylla_bashrc


@@ -468,6 +468,9 @@ public:
size_t buffer_size() const {
return _impl->buffer_size();
}
const circular_buffer<mutation_fragment>& buffer() const {
return _impl->buffer();
}
// Detach the internal buffer of the reader.
// Roughly equivalent to depleting it by calling pop_mutation_fragment()
// until is_buffer_empty() returns true.


@@ -428,6 +428,7 @@ future<> gossiper::handle_shutdown_msg(inet_address from) {
return make_ready_future<>();
}
return seastar::async([this, from] {
auto permit = this->lock_endpoint(from).get0();
this->mark_as_shutdown(from);
});
}


@@ -98,6 +98,7 @@ fedora_packages=(
debhelper
fakeroot
file
dpkg-dev
)
centos_packages=(


@@ -168,15 +168,33 @@ insert_token_range_to_sorted_container_while_unwrapping(
dht::token_range_vector
abstract_replication_strategy::get_ranges(inet_address ep) const {
return get_ranges(ep, _token_metadata);
return do_get_ranges(ep, _token_metadata, false);
}
dht::token_range_vector
abstract_replication_strategy::get_ranges_in_thread(inet_address ep) const {
return do_get_ranges(ep, _token_metadata, true);
}
dht::token_range_vector
abstract_replication_strategy::get_ranges(inet_address ep, token_metadata& tm) const {
return do_get_ranges(ep, tm, false);
}
dht::token_range_vector
abstract_replication_strategy::get_ranges_in_thread(inet_address ep, token_metadata& tm) const {
return do_get_ranges(ep, tm, true);
}
dht::token_range_vector
abstract_replication_strategy::do_get_ranges(inet_address ep, token_metadata& tm, bool can_yield) const {
dht::token_range_vector ret;
auto prev_tok = tm.sorted_tokens().back();
for (auto tok : tm.sorted_tokens()) {
for (inet_address a : calculate_natural_endpoints(tok, tm)) {
if (can_yield) {
seastar::thread::maybe_yield();
}
if (a == ep) {
insert_token_range_to_sorted_container_while_unwrapping(prev_tok, tok, ret);
break;


@@ -113,10 +113,15 @@ public:
// It is the analogue of Origin's getAddressRanges().get(endpoint).
// This function is not efficient, and not meant for the fast path.
dht::token_range_vector get_ranges(inet_address ep) const;
dht::token_range_vector get_ranges_in_thread(inet_address ep) const;
// Use the token_metadata provided by the caller instead of _token_metadata
dht::token_range_vector get_ranges(inet_address ep, token_metadata& tm) const;
dht::token_range_vector get_ranges_in_thread(inet_address ep, token_metadata& tm) const;
private:
dht::token_range_vector do_get_ranges(inet_address ep, token_metadata& tm, bool can_yield) const;
public:
// get_primary_ranges() returns the list of "primary ranges" for the given
// endpoint. "Primary ranges" are the ranges that the node is responsible
// for storing replica primarily, which means this is the first node
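
get_ranges_in_thread() exists because scanning every token of every endpoint can take a while; the can_yield flag lets the shared do_get_ranges() loop call seastar::thread::maybe_yield() when the caller runs inside a seastar::thread, keeping the reactor responsive. A generic sketch of the flag-gated yield pattern; for_each_maybe_yield is a hypothetical helper:

#include <seastar/core/thread.hh>

// Sketch: one loop body serves both callers; only thread-context callers
// may yield.
template <typename Range, typename Fn>
void for_each_maybe_yield(const Range& r, Fn fn, bool can_yield) {
    for (const auto& x : r) {
        if (can_yield) {
            seastar::thread::maybe_yield(); // legal only inside a seastar::thread
        }
        fn(x);
    }
}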

lua.cc

@@ -262,14 +262,12 @@ static auto visit_lua_raw_value(lua_State* l, int index, Func&& f) {
template <typename Func>
static auto visit_decimal(const big_decimal &v, Func&& f) {
boost::multiprecision::cpp_int ten(10);
const auto& dividend = v.unscaled_value();
auto divisor = boost::multiprecision::pow(ten, v.scale());
boost::multiprecision::cpp_rational r = v.as_rational();
const boost::multiprecision::cpp_int& dividend = numerator(r);
const boost::multiprecision::cpp_int& divisor = denominator(r);
if (dividend % divisor == 0) {
return f(utils::multiprecision_int(boost::multiprecision::cpp_int(dividend/divisor)));
return f(utils::multiprecision_int(dividend/divisor));
}
boost::multiprecision::cpp_rational r = dividend;
r /= divisor;
return f(r.convert_to<double>());
}
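
The new code lets big_decimal::as_rational() normalize the value instead of hand-dividing unscaled_value() by 10^scale: an unscaled value of 1250 with scale 2 is 1250/10^2, normalized to 25/2 (not integral, so the double branch), while 1200 with scale 2 normalizes to 12/1 and stays integral. A sketch of the integrality test under that representation; is_integral_decimal is a hypothetical helper:

#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::cpp_int;
using boost::multiprecision::cpp_rational;

// Sketch: a decimal with unscaled value u and scale s equals u / 10^s;
// cpp_rational keeps the fraction normalized, so the denominator is 1
// exactly when the decimal is integral.
bool is_integral_decimal(const cpp_int& unscaled, unsigned scale) {
    cpp_rational r(unscaled, boost::multiprecision::pow(cpp_int(10), scale));
    return denominator(r) == 1;
}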

main.cc

@@ -830,6 +830,7 @@ int main(int ac, char** av) {
storage_proxy_smp_service_group_config.max_nonlocal_requests = 5000;
spcfg.read_smp_service_group = create_smp_service_group(storage_proxy_smp_service_group_config).get0();
spcfg.write_smp_service_group = create_smp_service_group(storage_proxy_smp_service_group_config).get0();
spcfg.hints_write_smp_service_group = create_smp_service_group(storage_proxy_smp_service_group_config).get0();
spcfg.write_ack_smp_service_group = create_smp_service_group(storage_proxy_smp_service_group_config).get0();
static db::view::node_update_backlog node_backlog(smp::count, 10ms);
scheduling_group_key_config storage_proxy_stats_cfg =
@@ -967,12 +968,16 @@ int main(int ac, char** av) {
mm.init_messaging_service();
}).get();
supervisor::notify("initializing storage proxy RPC verbs");
proxy.invoke_on_all([] (service::storage_proxy& p) {
p.init_messaging_service();
}).get();
proxy.invoke_on_all(&service::storage_proxy::init_messaging_service).get();
auto stop_proxy_handlers = defer_verbose_shutdown("storage proxy RPC verbs", [&proxy] {
proxy.invoke_on_all(&service::storage_proxy::uninit_messaging_service).get();
});
supervisor::notify("starting streaming service");
streaming::stream_session::init_streaming_service(db, sys_dist_ks, view_update_generator).get();
auto stop_streaming_service = defer_verbose_shutdown("streaming service", [] {
streaming::stream_session::uninit_streaming_service().get();
});
api::set_server_stream_manager(ctx).get();
supervisor::notify("starting hinted handoff manager");
@@ -1005,6 +1010,9 @@ int main(int ac, char** av) {
rs.stop().get();
});
repair_init_messaging_service_handler(rs, sys_dist_ks, view_update_generator).get();
auto stop_repair_messages = defer_verbose_shutdown("repair message handlers", [] {
repair_uninit_messaging_service_handler().get();
});
supervisor::notify("starting storage service", true);
auto& ss = service::get_local_storage_service();
ss.init_messaging_service_part().get();


@@ -572,7 +572,12 @@ messaging_service::initial_scheduling_info() const {
scheduling_group
messaging_service::scheduling_group_for_verb(messaging_verb verb) const {
return _scheduling_info_for_connection_index[get_rpc_client_idx(verb)].sched_group;
// We are not using get_rpc_client_idx() because it figures out the client
// index based on the current scheduling group, which is relevant when
// selecting the right client for sending a message, but is not relevant
// when registering handlers.
const auto idx = s_rpc_client_idx_table[static_cast<size_t>(verb)];
return _scheduling_info_for_connection_index[idx].sched_group;
}
scheduling_group
@@ -791,6 +796,10 @@ void messaging_service::register_stream_mutation_fragments(std::function<future<
register_handler(this, messaging_verb::STREAM_MUTATION_FRAGMENTS, std::move(func));
}
future<> messaging_service::unregister_stream_mutation_fragments() {
return unregister_handler(messaging_verb::STREAM_MUTATION_FRAGMENTS);
}
template<class SinkType, class SourceType>
future<rpc::sink<SinkType>, rpc::source<SourceType>>
do_make_sink_source(messaging_verb verb, uint32_t repair_meta_id, shared_ptr<messaging_service::rpc_protocol_client_wrapper> rpc_client, std::unique_ptr<messaging_service::rpc_protocol_wrapper>& rpc) {
@@ -822,6 +831,9 @@ rpc::sink<repair_row_on_wire_with_cmd> messaging_service::make_sink_for_repair_g
void messaging_service::register_repair_get_row_diff_with_rpc_stream(std::function<future<rpc::sink<repair_row_on_wire_with_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_hash_with_cmd> source)>&& func) {
register_handler(this, messaging_verb::REPAIR_GET_ROW_DIFF_WITH_RPC_STREAM, std::move(func));
}
future<> messaging_service::unregister_repair_get_row_diff_with_rpc_stream() {
return unregister_handler(messaging_verb::REPAIR_GET_ROW_DIFF_WITH_RPC_STREAM);
}
// Wrapper for REPAIR_PUT_ROW_DIFF_WITH_RPC_STREAM
future<rpc::sink<repair_row_on_wire_with_cmd>, rpc::source<repair_stream_cmd>>
@@ -841,6 +853,9 @@ rpc::sink<repair_stream_cmd> messaging_service::make_sink_for_repair_put_row_dif
void messaging_service::register_repair_put_row_diff_with_rpc_stream(std::function<future<rpc::sink<repair_stream_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_row_on_wire_with_cmd> source)>&& func) {
register_handler(this, messaging_verb::REPAIR_PUT_ROW_DIFF_WITH_RPC_STREAM, std::move(func));
}
future<> messaging_service::unregister_repair_put_row_diff_with_rpc_stream() {
return unregister_handler(messaging_verb::REPAIR_PUT_ROW_DIFF_WITH_RPC_STREAM);
}
// Wrapper for REPAIR_GET_FULL_ROW_HASHES_WITH_RPC_STREAM
future<rpc::sink<repair_stream_cmd>, rpc::source<repair_hash_with_cmd>>
@@ -860,6 +875,9 @@ rpc::sink<repair_hash_with_cmd> messaging_service::make_sink_for_repair_get_full
void messaging_service::register_repair_get_full_row_hashes_with_rpc_stream(std::function<future<rpc::sink<repair_hash_with_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_stream_cmd> source)>&& func) {
register_handler(this, messaging_verb::REPAIR_GET_FULL_ROW_HASHES_WITH_RPC_STREAM, std::move(func));
}
future<> messaging_service::unregister_repair_get_full_row_hashes_with_rpc_stream() {
return unregister_handler(messaging_verb::REPAIR_GET_FULL_ROW_HASHES_WITH_RPC_STREAM);
}
// Send a message for verb
template <typename MsgIn, typename... MsgOut>
@@ -943,6 +961,9 @@ future<streaming::prepare_message> messaging_service::send_prepare_message(msg_a
return send_message<streaming::prepare_message>(this, messaging_verb::PREPARE_MESSAGE, id,
std::move(msg), plan_id, std::move(description), reason);
}
future<> messaging_service::unregister_prepare_message() {
return unregister_handler(messaging_verb::PREPARE_MESSAGE);
}
// PREPARE_DONE_MESSAGE
void messaging_service::register_prepare_done_message(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, unsigned dst_cpu_id)>&& func) {
@@ -952,6 +973,9 @@ future<> messaging_service::send_prepare_done_message(msg_addr id, UUID plan_id,
return send_message<void>(this, messaging_verb::PREPARE_DONE_MESSAGE, id,
plan_id, dst_cpu_id);
}
future<> messaging_service::unregister_prepare_done_message() {
return unregister_handler(messaging_verb::PREPARE_DONE_MESSAGE);
}
// STREAM_MUTATION
void messaging_service::register_stream_mutation(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, frozen_mutation fm, unsigned dst_cpu_id, rpc::optional<bool> fragmented, rpc::optional<streaming::stream_reason> reason)>&& func) {
@@ -976,6 +1000,9 @@ future<> messaging_service::send_stream_mutation_done(msg_addr id, UUID plan_id,
return send_message<void>(this, messaging_verb::STREAM_MUTATION_DONE, id,
plan_id, std::move(ranges), cf_id, dst_cpu_id);
}
future<> messaging_service::unregister_stream_mutation_done() {
return unregister_handler(messaging_verb::STREAM_MUTATION_DONE);
}
// COMPLETE_MESSAGE
void messaging_service::register_complete_message(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, unsigned dst_cpu_id, rpc::optional<bool> failed)>&& func) {
@@ -985,6 +1012,9 @@ future<> messaging_service::send_complete_message(msg_addr id, UUID plan_id, uns
return send_message<void>(this, messaging_verb::COMPLETE_MESSAGE, id,
plan_id, dst_cpu_id, failed);
}
future<> messaging_service::unregister_complete_message() {
return unregister_handler(messaging_verb::COMPLETE_MESSAGE);
}
void messaging_service::register_gossip_echo(std::function<future<> ()>&& func) {
register_handler(this, messaging_verb::GOSSIP_ECHO, std::move(func));
@@ -1199,14 +1229,14 @@ future<partition_checksum> messaging_service::send_repair_checksum_range(
}
// Wrapper for REPAIR_GET_FULL_ROW_HASHES
void messaging_service::register_repair_get_full_row_hashes(std::function<future<std::unordered_set<repair_hash>> (const rpc::client_info& cinfo, uint32_t repair_meta_id)>&& func) {
void messaging_service::register_repair_get_full_row_hashes(std::function<future<repair_hash_set> (const rpc::client_info& cinfo, uint32_t repair_meta_id)>&& func) {
register_handler(this, messaging_verb::REPAIR_GET_FULL_ROW_HASHES, std::move(func));
}
future<> messaging_service::unregister_repair_get_full_row_hashes() {
return unregister_handler(messaging_verb::REPAIR_GET_FULL_ROW_HASHES);
}
future<std::unordered_set<repair_hash>> messaging_service::send_repair_get_full_row_hashes(msg_addr id, uint32_t repair_meta_id) {
return send_message<future<std::unordered_set<repair_hash>>>(this, messaging_verb::REPAIR_GET_FULL_ROW_HASHES, std::move(id), repair_meta_id);
future<repair_hash_set> messaging_service::send_repair_get_full_row_hashes(msg_addr id, uint32_t repair_meta_id) {
return send_message<future<repair_hash_set>>(this, messaging_verb::REPAIR_GET_FULL_ROW_HASHES, std::move(id), repair_meta_id);
}
// Wrapper for REPAIR_GET_COMBINED_ROW_HASH
@@ -1231,13 +1261,13 @@ future<get_sync_boundary_response> messaging_service::send_repair_get_sync_bound
}
// Wrapper for REPAIR_GET_ROW_DIFF
void messaging_service::register_repair_get_row_diff(std::function<future<repair_rows_on_wire> (const rpc::client_info& cinfo, uint32_t repair_meta_id, std::unordered_set<repair_hash> set_diff, bool needs_all_rows)>&& func) {
void messaging_service::register_repair_get_row_diff(std::function<future<repair_rows_on_wire> (const rpc::client_info& cinfo, uint32_t repair_meta_id, repair_hash_set set_diff, bool needs_all_rows)>&& func) {
register_handler(this, messaging_verb::REPAIR_GET_ROW_DIFF, std::move(func));
}
future<> messaging_service::unregister_repair_get_row_diff() {
return unregister_handler(messaging_verb::REPAIR_GET_ROW_DIFF);
}
future<repair_rows_on_wire> messaging_service::send_repair_get_row_diff(msg_addr id, uint32_t repair_meta_id, std::unordered_set<repair_hash> set_diff, bool needs_all_rows) {
future<repair_rows_on_wire> messaging_service::send_repair_get_row_diff(msg_addr id, uint32_t repair_meta_id, repair_hash_set set_diff, bool needs_all_rows) {
return send_message<future<repair_rows_on_wire>>(this, messaging_verb::REPAIR_GET_ROW_DIFF, std::move(id), repair_meta_id, std::move(set_diff), needs_all_rows);
}


@@ -297,10 +297,12 @@ public:
streaming::prepare_message msg, UUID plan_id, sstring description, rpc::optional<streaming::stream_reason> reason)>&& func);
future<streaming::prepare_message> send_prepare_message(msg_addr id, streaming::prepare_message msg, UUID plan_id,
sstring description, streaming::stream_reason);
future<> unregister_prepare_message();
// Wrapper for PREPARE_DONE_MESSAGE verb
void register_prepare_done_message(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, unsigned dst_cpu_id)>&& func);
future<> send_prepare_done_message(msg_addr id, UUID plan_id, unsigned dst_cpu_id);
future<> unregister_prepare_done_message();
// Wrapper for STREAM_MUTATION verb
void register_stream_mutation(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, frozen_mutation fm, unsigned dst_cpu_id, rpc::optional<bool>, rpc::optional<streaming::stream_reason>)>&& func);
@@ -309,6 +311,7 @@ public:
// Wrapper for STREAM_MUTATION_FRAGMENTS
// The receiver of STREAM_MUTATION_FRAGMENTS sends a status code to the sender to report any error on the receiver side. The status code is of type int32_t: 0 means success, -1 means error, and other status code values are reserved for future use.
void register_stream_mutation_fragments(std::function<future<rpc::sink<int32_t>> (const rpc::client_info& cinfo, UUID plan_id, UUID schema_id, UUID cf_id, uint64_t estimated_partitions, rpc::optional<streaming::stream_reason> reason_opt, rpc::source<frozen_mutation_fragment, rpc::optional<streaming::stream_mutation_fragments_cmd>> source)>&& func);
future<> unregister_stream_mutation_fragments();
rpc::sink<int32_t> make_sink_for_stream_mutation_fragments(rpc::source<frozen_mutation_fragment, rpc::optional<streaming::stream_mutation_fragments_cmd>>& source);
future<rpc::sink<frozen_mutation_fragment, streaming::stream_mutation_fragments_cmd>, rpc::source<int32_t>> make_sink_and_source_for_stream_mutation_fragments(utils::UUID schema_id, utils::UUID plan_id, utils::UUID cf_id, uint64_t estimated_partitions, streaming::stream_reason reason, msg_addr id);
@@ -316,22 +319,27 @@ public:
future<rpc::sink<repair_hash_with_cmd>, rpc::source<repair_row_on_wire_with_cmd>> make_sink_and_source_for_repair_get_row_diff_with_rpc_stream(uint32_t repair_meta_id, msg_addr id);
rpc::sink<repair_row_on_wire_with_cmd> make_sink_for_repair_get_row_diff_with_rpc_stream(rpc::source<repair_hash_with_cmd>& source);
void register_repair_get_row_diff_with_rpc_stream(std::function<future<rpc::sink<repair_row_on_wire_with_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_hash_with_cmd> source)>&& func);
future<> unregister_repair_get_row_diff_with_rpc_stream();
// Wrapper for REPAIR_PUT_ROW_DIFF_WITH_RPC_STREAM
future<rpc::sink<repair_row_on_wire_with_cmd>, rpc::source<repair_stream_cmd>> make_sink_and_source_for_repair_put_row_diff_with_rpc_stream(uint32_t repair_meta_id, msg_addr id);
rpc::sink<repair_stream_cmd> make_sink_for_repair_put_row_diff_with_rpc_stream(rpc::source<repair_row_on_wire_with_cmd>& source);
void register_repair_put_row_diff_with_rpc_stream(std::function<future<rpc::sink<repair_stream_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_row_on_wire_with_cmd> source)>&& func);
future<> unregister_repair_put_row_diff_with_rpc_stream();
// Wrapper for REPAIR_GET_FULL_ROW_HASHES_WITH_RPC_STREAM
future<rpc::sink<repair_stream_cmd>, rpc::source<repair_hash_with_cmd>> make_sink_and_source_for_repair_get_full_row_hashes_with_rpc_stream(uint32_t repair_meta_id, msg_addr id);
rpc::sink<repair_hash_with_cmd> make_sink_for_repair_get_full_row_hashes_with_rpc_stream(rpc::source<repair_stream_cmd>& source);
void register_repair_get_full_row_hashes_with_rpc_stream(std::function<future<rpc::sink<repair_hash_with_cmd>> (const rpc::client_info& cinfo, uint32_t repair_meta_id, rpc::source<repair_stream_cmd> source)>&& func);
future<> unregister_repair_get_full_row_hashes_with_rpc_stream();
void register_stream_mutation_done(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, dht::token_range_vector ranges, UUID cf_id, unsigned dst_cpu_id)>&& func);
future<> send_stream_mutation_done(msg_addr id, UUID plan_id, dht::token_range_vector ranges, UUID cf_id, unsigned dst_cpu_id);
future<> unregister_stream_mutation_done();
void register_complete_message(std::function<future<> (const rpc::client_info& cinfo, UUID plan_id, unsigned dst_cpu_id, rpc::optional<bool> failed)>&& func);
future<> send_complete_message(msg_addr id, UUID plan_id, unsigned dst_cpu_id, bool failed = false);
future<> unregister_complete_message();
// Wrapper for REPAIR_CHECKSUM_RANGE verb
void register_repair_checksum_range(std::function<future<partition_checksum> (sstring keyspace, sstring cf, dht::token_range range, rpc::optional<repair_checksum> hash_version)>&& func);
@@ -339,9 +347,9 @@ public:
future<partition_checksum> send_repair_checksum_range(msg_addr id, sstring keyspace, sstring cf, dht::token_range range, repair_checksum hash_version);
// Wrapper for REPAIR_GET_FULL_ROW_HASHES
void register_repair_get_full_row_hashes(std::function<future<std::unordered_set<repair_hash>> (const rpc::client_info& cinfo, uint32_t repair_meta_id)>&& func);
void register_repair_get_full_row_hashes(std::function<future<repair_hash_set> (const rpc::client_info& cinfo, uint32_t repair_meta_id)>&& func);
future<> unregister_repair_get_full_row_hashes();
future<std::unordered_set<repair_hash>> send_repair_get_full_row_hashes(msg_addr id, uint32_t repair_meta_id);
future<repair_hash_set> send_repair_get_full_row_hashes(msg_addr id, uint32_t repair_meta_id);
// Wrapper for REPAIR_GET_COMBINED_ROW_HASH
void register_repair_get_combined_row_hash(std::function<future<get_combined_row_hash_response> (const rpc::client_info& cinfo, uint32_t repair_meta_id, std::optional<repair_sync_boundary> common_sync_boundary)>&& func);
@@ -354,9 +362,9 @@ public:
future<get_sync_boundary_response> send_repair_get_sync_boundary(msg_addr id, uint32_t repair_meta_id, std::optional<repair_sync_boundary> skipped_sync_boundary);
// Wrapper for REPAIR_GET_ROW_DIFF
void register_repair_get_row_diff(std::function<future<repair_rows_on_wire> (const rpc::client_info& cinfo, uint32_t repair_meta_id, std::unordered_set<repair_hash> set_diff, bool needs_all_rows)>&& func);
void register_repair_get_row_diff(std::function<future<repair_rows_on_wire> (const rpc::client_info& cinfo, uint32_t repair_meta_id, repair_hash_set set_diff, bool needs_all_rows)>&& func);
future<> unregister_repair_get_row_diff();
future<repair_rows_on_wire> send_repair_get_row_diff(msg_addr id, uint32_t repair_meta_id, std::unordered_set<repair_hash> set_diff, bool needs_all_rows);
future<repair_rows_on_wire> send_repair_get_row_diff(msg_addr id, uint32_t repair_meta_id, repair_hash_set set_diff, bool needs_all_rows);
// Wrapper for REPAIR_PUT_ROW_DIFF
void register_repair_put_row_diff(std::function<future<> (const rpc::client_info& cinfo, uint32_t repair_meta_id, repair_rows_on_wire row_diff)>&& func);


@@ -300,10 +300,9 @@ flat_mutation_reader read_context::create_reader(
}
auto& table = _db.local().find_column_family(schema);
auto class_config = _db.local().make_query_class_config();
if (!rm.rparts) {
rm.rparts = make_foreign(std::make_unique<reader_meta::remote_parts>(class_config.semaphore));
rm.rparts = make_foreign(std::make_unique<reader_meta::remote_parts>(semaphore()));
}
rm.rparts->range = std::make_unique<const dht::partition_range>(pr);
@@ -513,18 +512,28 @@ future<> read_context::lookup_readers() {
}
return parallel_for_each(boost::irange(0u, smp::count), [this] (shard_id shard) {
return _db.invoke_on(shard, [shard, cmd = &_cmd, ranges = &_ranges, gs = global_schema_ptr(_schema),
return _db.invoke_on(shard, [this, shard, cmd = &_cmd, ranges = &_ranges, gs = global_schema_ptr(_schema),
gts = tracing::global_trace_state_ptr(_trace_state)] (database& db) mutable {
auto schema = gs.get();
auto querier_opt = db.get_querier_cache().lookup_shard_mutation_querier(cmd->query_uuid, *schema, *ranges, cmd->slice, gts.get());
auto& table = db.find_column_family(schema);
auto& semaphore = db.make_query_class_config().semaphore;
auto& semaphore = this->semaphore();
if (!querier_opt) {
return reader_meta(reader_state::inexistent, reader_meta::remote_parts(semaphore));
}
auto& q = *querier_opt;
if (&q.permit().semaphore() != &semaphore) {
on_internal_error(mmq_log, format("looked-up reader belongs to different semaphore than the one appropriate for this query class: "
"looked-up reader belongs to {} (0x{:x}) the query class appropriate is {} (0x{:x})",
q.permit().semaphore().name(),
reinterpret_cast<uintptr_t>(&q.permit().semaphore()),
semaphore.name(),
reinterpret_cast<uintptr_t>(&semaphore)));
}
auto handle = pause(semaphore, std::move(q).reader());
return reader_meta(
reader_state::successful_lookup,


@@ -1721,7 +1721,7 @@ void row::apply_monotonically(const schema& s, column_kind kind, row&& other) {
// we erase the live cells according to the shadowable_tombstone rules.
static bool dead_marker_shadows_row(const schema& s, column_kind kind, const row_marker& marker) {
return s.is_view()
&& !s.view_info()->base_non_pk_columns_in_view_pk().empty()
&& s.view_info()->has_base_non_pk_columns_in_view_pk()
&& !marker.is_live()
&& kind == column_kind::regular_column; // not applicable to static rows
}


@@ -114,9 +114,6 @@ class reconcilable_result_builder {
const schema& _schema;
const query::partition_slice& _slice;
utils::chunked_vector<partition> _result;
uint32_t _live_rows{};
bool _return_static_content_on_partition_with_no_rows{};
bool _static_row_is_alive{};
uint32_t _total_live_rows = 0;
@@ -124,6 +121,10 @@ class reconcilable_result_builder {
stop_iteration _stop;
bool _short_read_allowed;
std::optional<streamed_mutation_freezer> _mutation_consumer;
uint32_t _live_rows{};
// make this the last member so it is destroyed first. #7240
utils::chunked_vector<partition> _result;
public:
reconcilable_result_builder(const schema& s, const query::partition_slice& slice,
query::result_memory_accounter&& accounter)
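
The "#7240" comment relies on a language guarantee: non-static data members are destroyed in reverse order of declaration, so moving _result to the end of the class makes it the first member destroyed. A minimal demonstration:

#include <iostream>

// Sketch of the rule the reordering above depends on.
struct tracer {
    const char* name;
    ~tracer() { std::cout << "destroying " << name << '\n'; }
};

struct builder {
    tracer early{"earlier member (destroyed last)"};
    tracer result{"last-declared member (destroyed first)"};
};

int main() {
    builder b;
    // prints the last-declared member's line first, then the earlier one
}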


@@ -30,6 +30,7 @@
#include "schema_registry.hh"
#include "mutation_compactor.hh"
logging::logger mrlog("mutation_reader");
static constexpr size_t merger_small_vector_size = 4;
@@ -659,6 +660,8 @@ flat_mutation_reader make_combined_reader(schema_ptr schema,
return make_combined_reader(std::move(schema), std::move(v), fwd_sm, fwd_mr);
}
const ssize_t new_reader_base_cost{16 * 1024};
class restricting_mutation_reader : public flat_mutation_reader::impl {
struct mutation_source_and_params {
mutation_source _ms;
@@ -685,8 +688,6 @@ class restricting_mutation_reader : public flat_mutation_reader::impl {
};
std::variant<pending_state, admitted_state> _state;
static const ssize_t new_reader_base_cost{16 * 1024};
template<typename Function>
requires std::is_move_constructible<Function>::value
&& requires(Function fn, flat_mutation_reader& reader) {
@@ -1026,6 +1027,13 @@ private:
bool _reader_created = false;
bool _drop_partition_start = false;
bool _drop_static_row = false;
// Trim range tombstones at the start of the buffer to the start of the read
// range (_next_position_in_partition). Set after reader recreation.
// Also validate the position of the first non-trimmed mutation fragment.
bool _trim_range_tombstones = false;
// Validate the partition key of the first emitted partition, set after the
// reader was recreated.
bool _validate_partition_key = false;
position_in_partition::tri_compare _tri_cmp;
std::optional<dht::decorated_key> _last_pkey;
@@ -1047,7 +1055,10 @@ private:
void adjust_partition_slice();
flat_mutation_reader recreate_reader();
flat_mutation_reader resume_or_create_reader();
void maybe_validate_partition_start(const circular_buffer<mutation_fragment>& buffer);
void validate_position_in_partition(position_in_partition_view pos) const;
bool should_drop_fragment(const mutation_fragment& mf);
bool maybe_trim_range_tombstone(mutation_fragment& mf) const;
future<> do_fill_buffer(flat_mutation_reader& reader, db::timeout_clock::time_point timeout);
future<> fill_buffer(flat_mutation_reader& reader, db::timeout_clock::time_point timeout);
@@ -1120,16 +1131,11 @@ void evictable_reader::update_next_position(flat_mutation_reader& reader) {
_next_position_in_partition = position_in_partition::before_all_clustered_rows();
break;
case partition_region::clustered:
if (reader.is_buffer_empty()) {
_next_position_in_partition = position_in_partition::after_key(last_pos);
} else {
const auto& next_frag = reader.peek_buffer();
if (next_frag.is_end_of_partition()) {
if (!reader.is_buffer_empty() && reader.peek_buffer().is_end_of_partition()) {
push_mutation_fragment(reader.pop_mutation_fragment());
_next_position_in_partition = position_in_partition::for_partition_start();
} else {
_next_position_in_partition = position_in_partition(next_frag.position());
}
} else {
_next_position_in_partition = position_in_partition::after_key(last_pos);
}
break;
case partition_region::partition_end:
@@ -1154,6 +1160,9 @@ flat_mutation_reader evictable_reader::recreate_reader() {
const dht::partition_range* range = _pr;
const query::partition_slice* slice = &_ps;
_range_override.reset();
_slice_override.reset();
if (_last_pkey) {
bool partition_range_is_inclusive = true;
@@ -1190,6 +1199,9 @@ flat_mutation_reader evictable_reader::recreate_reader() {
range = &*_range_override;
}
_trim_range_tombstones = true;
_validate_partition_key = true;
return _ms.make_reader(
_schema,
_permit,
@@ -1216,6 +1228,78 @@ flat_mutation_reader evictable_reader::resume_or_create_reader() {
return recreate_reader();
}
template <typename... Arg>
static void require(bool condition, const char* msg, const Arg&... arg) {
if (!condition) {
on_internal_error(mrlog, format(msg, arg...));
}
}
void evictable_reader::maybe_validate_partition_start(const circular_buffer<mutation_fragment>& buffer) {
if (!_validate_partition_key || buffer.empty()) {
return;
}
// If this is set we can assume the first fragment is a partition-start.
const auto& ps = buffer.front().as_partition_start();
const auto tri_cmp = dht::ring_position_comparator(*_schema);
// If we recreated the reader after fast-forwarding it we won't have
// _last_pkey set. In this case it is enough to check if the partition
// is in range.
if (_last_pkey) {
const auto cmp_res = tri_cmp(*_last_pkey, ps.key());
if (_drop_partition_start) { // should be the same partition
require(
cmp_res == 0,
"{}(): validation failed, expected partition with key equal to _last_pkey {} due to _drop_partition_start being set, but got {}",
__FUNCTION__,
*_last_pkey,
ps.key());
} else { // should be a larger partition
require(
cmp_res < 0,
"{}(): validation failed, expected partition with key larger than _last_pkey {} due to _drop_partition_start being unset, but got {}",
__FUNCTION__,
*_last_pkey,
ps.key());
}
}
const auto& prange = _range_override ? *_range_override : *_pr;
require(
// TODO: somehow avoid this copy
prange.contains(ps.key(), tri_cmp),
"{}(): validation failed, expected partition with key that falls into current range {}, but got {}",
__FUNCTION__,
prange,
ps.key());
_validate_partition_key = false;
}
void evictable_reader::validate_position_in_partition(position_in_partition_view pos) const {
require(
_tri_cmp(_next_position_in_partition, pos) <= 0,
"{}(): validation failed, expected position in partition that is larger-than-equal than _next_position_in_partition {}, but got {}",
__FUNCTION__,
_next_position_in_partition,
pos);
if (_slice_override && pos.region() == partition_region::clustered) {
const auto ranges = _slice_override->row_ranges(*_schema, _last_pkey->key());
const bool any_contains = std::any_of(ranges.begin(), ranges.end(), [this, &pos] (const query::clustering_range& cr) {
// TODO: somehow avoid this copy
auto range = position_range(cr);
return range.contains(*_schema, pos);
});
require(
any_contains,
"{}(): validation failed, expected clustering fragment that is included in the slice {}, but got {}",
__FUNCTION__,
*_slice_override,
pos);
}
}
bool evictable_reader::should_drop_fragment(const mutation_fragment& mf) {
if (_drop_partition_start && mf.is_partition_start()) {
_drop_partition_start = false;
@@ -1228,12 +1312,50 @@ bool evictable_reader::should_drop_fragment(const mutation_fragment& mf) {
return false;
}
bool evictable_reader::maybe_trim_range_tombstone(mutation_fragment& mf) const {
// We either didn't read a partition yet (evicted after fast-forwarding) or
// didn't stop in a clustering region. We don't need to trim range
// tombstones in either case.
if (!_last_pkey || _next_position_in_partition.region() != partition_region::clustered) {
return false;
}
if (!mf.is_range_tombstone()) {
validate_position_in_partition(mf.position());
return false;
}
if (_tri_cmp(mf.position(), _next_position_in_partition) >= 0) {
validate_position_in_partition(mf.position());
return false; // rt in range, no need to trim
}
auto& rt = mf.as_mutable_range_tombstone();
require(
_tri_cmp(_next_position_in_partition, rt.end_position()) <= 0,
"{}(): validation failed, expected range tombstone with end pos larger than _next_position_in_partition {}, but got {}",
__FUNCTION__,
_next_position_in_partition,
rt.end_position());
rt.set_start(*_schema, position_in_partition_view::before_key(_next_position_in_partition));
return true;
}
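
The trimming above clips a range tombstone that straddles the resume position so the recreated reader never re-emits keys it already produced, and the require() guards that the tombstone's end is not behind the resume position. The same rule on plain integer intervals, as a sketch:

#include <algorithm>
#include <cassert>

// Sketch: a tombstone [start, end) that begins before the resume position
// is clipped to start at the resume position.
struct interval { int start, end; };

interval trim_to_resume_point(interval rt, int next_pos) {
    assert(next_pos <= rt.end);            // mirrors the require() above
    rt.start = std::max(rt.start, next_pos);
    return rt;
}
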
future<> evictable_reader::do_fill_buffer(flat_mutation_reader& reader, db::timeout_clock::time_point timeout) {
if (!_drop_partition_start && !_drop_static_row) {
return reader.fill_buffer(timeout);
auto fill_buf_fut = reader.fill_buffer(timeout);
if (_validate_partition_key) {
fill_buf_fut = fill_buf_fut.then([this, &reader] {
maybe_validate_partition_start(reader.buffer());
});
}
return fill_buf_fut;
}
return repeat([this, &reader, timeout] {
return reader.fill_buffer(timeout).then([this, &reader] {
maybe_validate_partition_start(reader.buffer());
while (!reader.is_buffer_empty() && should_drop_fragment(reader.peek_buffer())) {
reader.pop_mutation_fragment();
}
@@ -1247,6 +1369,11 @@ future<> evictable_reader::fill_buffer(flat_mutation_reader& reader, db::timeout
if (reader.is_buffer_empty()) {
return make_ready_future<>();
}
while (_trim_range_tombstones && !reader.is_buffer_empty()) {
auto mf = reader.pop_mutation_fragment();
_trim_range_tombstones = maybe_trim_range_tombstone(mf);
push_mutation_fragment(std::move(mf));
}
reader.move_buffer_content_to(*this);
auto stop = [this, &reader] {
// The only problematic fragment kind is the range tombstone.
@@ -1287,7 +1414,13 @@ future<> evictable_reader::fill_buffer(flat_mutation_reader& reader, db::timeout
if (reader.is_buffer_empty()) {
return do_fill_buffer(reader, timeout);
}
push_mutation_fragment(reader.pop_mutation_fragment());
if (_trim_range_tombstones) {
auto mf = reader.pop_mutation_fragment();
_trim_range_tombstones = maybe_trim_range_tombstone(mf);
push_mutation_fragment(std::move(mf));
} else {
push_mutation_fragment(reader.pop_mutation_fragment());
}
return make_ready_future<>();
});
}).then([this, &reader] {


@@ -304,6 +304,8 @@ public:
mutation_source make_empty_mutation_source();
snapshot_source make_empty_snapshot_source();
extern const ssize_t new_reader_base_cost;
// Creates a restricted reader whose resource usage will be tracked
// during its lifetime. If there are not enough resources (due to
// existing readers) to create the new reader, its construction will


@@ -163,6 +163,11 @@ public:
return {partition_region::clustered, bound_weight::before_all_prefixed, &ck};
}
// Returns a view to before_key(pos._ck) if pos.is_clustering_row() else returns pos as-is.
static position_in_partition_view before_key(position_in_partition_view pos) {
return {partition_region::clustered, pos._bound_weight == bound_weight::equal ? bound_weight::before_all_prefixed : pos._bound_weight, pos._ck};
}
partition_region region() const { return _type; }
bound_weight get_bound_weight() const { return _bound_weight; }
bool is_partition_start() const { return _type == partition_region::partition_start; }


@@ -104,7 +104,7 @@ reader_concurrency_semaphore::inactive_read_handle reader_concurrency_semaphore:
const auto [it, _] = _inactive_reads.emplace(_next_id++, std::move(ir));
(void)_;
++_inactive_read_stats.population;
return inactive_read_handle(it->first);
return inactive_read_handle(*this, it->first);
}
// The evicted reader will release its permit, hopefully allowing us to
@@ -115,6 +115,17 @@ reader_concurrency_semaphore::inactive_read_handle reader_concurrency_semaphore:
}
std::unique_ptr<reader_concurrency_semaphore::inactive_read> reader_concurrency_semaphore::unregister_inactive_read(inactive_read_handle irh) {
if (irh && irh._sem != this) {
throw std::runtime_error(fmt::format(
"reader_concurrency_semaphore::unregister_inactive_read(): "
"attempted to unregister an inactive read with a handle belonging to another semaphore: "
"this is {} (0x{:x}) but the handle belongs to {} (0x{:x})",
name(),
reinterpret_cast<uintptr_t>(this),
irh._sem->name(),
reinterpret_cast<uintptr_t>(irh._sem)));
}
if (auto it = _inactive_reads.find(irh._id); it != _inactive_reads.end()) {
auto ir = std::move(it->second);
_inactive_reads.erase(it);


@@ -60,18 +60,20 @@ public:
};
class inactive_read_handle {
reader_concurrency_semaphore* _sem = nullptr;
uint64_t _id = 0;
friend class reader_concurrency_semaphore;
explicit inactive_read_handle(uint64_t id)
: _id(id) {
explicit inactive_read_handle(reader_concurrency_semaphore& sem, uint64_t id)
: _sem(&sem), _id(id) {
}
public:
inactive_read_handle() = default;
inactive_read_handle(inactive_read_handle&& o) : _id(std::exchange(o._id, 0)) {
inactive_read_handle(inactive_read_handle&& o) : _sem(std::exchange(o._sem, nullptr)), _id(std::exchange(o._id, 0)) {
}
inactive_read_handle& operator=(inactive_read_handle&& o) {
_sem = std::exchange(o._sem, nullptr);
_id = std::exchange(o._id, 0);
return *this;
}
@@ -105,6 +107,7 @@ private:
};
private:
const resources _initial_resources;
resources _resources;
expiring_fifo<entry, expiry_handler, db::timeout_clock> _wait_list;
@@ -135,7 +138,8 @@ public:
sstring name,
size_t max_queue_length = std::numeric_limits<size_t>::max(),
std::function<void()> prethrow_action = nullptr)
: _resources(count, memory)
: _initial_resources(count, memory)
, _resources(count, memory)
, _wait_list(expiry_handler(name))
, _name(std::move(name))
, _max_queue_length(max_queue_length)
@@ -144,11 +148,11 @@ public:
/// Create a semaphore with practically unlimited count and memory.
///
/// And conversely, no queue limit either.
explicit reader_concurrency_semaphore(no_limits)
explicit reader_concurrency_semaphore(no_limits, sstring name = "unlimited reader_concurrency_semaphore")
: reader_concurrency_semaphore(
std::numeric_limits<int>::max(),
std::numeric_limits<ssize_t>::max(),
"unlimited reader_concurrency_semaphore") {}
std::move(name)) {}
~reader_concurrency_semaphore();
@@ -158,6 +162,13 @@ public:
reader_concurrency_semaphore(reader_concurrency_semaphore&&) = delete;
reader_concurrency_semaphore& operator=(reader_concurrency_semaphore&&) = delete;
/// Returns the name of the semaphore
///
/// If the semaphore has no name, "unnamed reader concurrency semaphore" is returned.
std::string_view name() const {
return _name.empty() ? "unnamed reader concurrency semaphore" : std::string_view(_name);
}
/// Register an inactive read.
///
/// The semaphore will evict this read when there is a shortage of
@@ -193,6 +204,10 @@ public:
reader_permit make_permit();
const resources initial_resources() const {
return _initial_resources;
}
const resources available_resources() const {
return _resources;
}


@@ -42,12 +42,20 @@ struct reader_resources {
return count >= other.count && memory >= other.memory;
}
reader_resources operator-(const reader_resources& other) const {
return reader_resources{count - other.count, memory - other.memory};
}
reader_resources& operator-=(const reader_resources& other) {
count -= other.count;
memory -= other.memory;
return *this;
}
reader_resources operator+(const reader_resources& other) const {
return reader_resources{count + other.count, memory + other.memory};
}
reader_resources& operator+=(const reader_resources& other) {
count += other.count;
memory += other.memory;


@@ -62,7 +62,7 @@ shared_ptr<abstract_command> exists::prepare(service::storage_proxy& proxy, requ
}
future<redis_message> exists::execute(service::storage_proxy& proxy, redis::redis_options& options, service_permit permit) {
return seastar::do_for_each(_keys, [&proxy, &options, &permit, this] (bytes key) {
return seastar::do_for_each(_keys, [&proxy, &options, permit, this] (bytes& key) {
return redis::read_strings(proxy, options, key, permit).then([this] (lw_shared_ptr<strings_result> result) {
if (result->has_result()) {
_count++;


@@ -44,15 +44,15 @@ mkdir -p $BUILDDIR/scylla-package
tar -C $BUILDDIR/scylla-package -xpf $RELOC_PKG
cd $BUILDDIR/scylla-package
PRODUCT=$(cat scylla/SCYLLA-PRODUCT-FILE)
SCYLLA_VERSION=$(cat scylla/SCYLLA-VERSION-FILE)
SCYLLA_RELEASE=$(cat scylla/SCYLLA-RELEASE-FILE)
ln -fv $RELOC_PKG ../$PRODUCT-server_$SCYLLA_VERSION-$SCYLLA_RELEASE.orig.tar.gz
if $DIST; then
export DEB_BUILD_OPTIONS="housekeeping"
fi
mv scylla/debian debian
PKG_NAME=$(dpkg-parsechangelog --show-field Source)
# XXX: Drop revision number from version string.
# Since it is always '1', this should be okay for now.
PKG_VERSION=$(dpkg-parsechangelog --show-field Version |sed -e 's/-1$//')
ln -fv $RELOC_PKG ../"$PKG_NAME"_"$PKG_VERSION".orig.tar.gz
debuild -rfakeroot -us -uc


@@ -1633,6 +1633,7 @@ future<> bootstrap_with_repair(seastar::sharded<database>& db, locator::token_me
auto& ks = db.local().find_keyspace(keyspace_name);
auto& strat = ks.get_replication_strategy();
dht::token_range_vector desired_ranges = strat.get_pending_address_ranges(tm, tokens, myip);
bool find_node_in_local_dc_only = strat.get_type() == locator::replication_strategy_type::network_topology;
//Active ranges
auto metadata_clone = tm.clone_only_token_map();
@@ -1719,6 +1720,9 @@ future<> bootstrap_with_repair(seastar::sharded<database>& db, locator::token_me
mandatory_neighbors = get_node_losing_the_ranges(old_endpoints, new_endpoints);
neighbors = mandatory_neighbors;
} else if (old_endpoints.size() < strat.get_replication_factor()) {
if (!find_node_in_local_dc_only) {
neighbors = old_endpoints;
} else {
if (old_endpoints_in_local_dc.size() == rf_in_local_dc) {
// Local DC has enough replica nodes.
mandatory_neighbors = get_node_losing_the_ranges(old_endpoints_in_local_dc, new_endpoints);
@@ -1746,6 +1750,7 @@ future<> bootstrap_with_repair(seastar::sharded<database>& db, locator::token_me
throw std::runtime_error(format("bootstrap_with_repair: keyspace={}, range={}, wrong number of old_endpoints_in_local_dc={}, rf_in_local_dc={}",
keyspace_name, desired_range, old_endpoints_in_local_dc.size(), rf_in_local_dc));
}
}
} else {
throw std::runtime_error(format("bootstrap_with_repair: keyspace={}, range={}, wrong number of old_endpoints={}, rf={}",
keyspace_name, desired_range, old_endpoints, strat.get_replication_factor()));


@@ -23,6 +23,7 @@
#include <unordered_map>
#include <exception>
#include <absl/container/btree_set.h>
#include <seastar/core/sstring.hh>
#include <seastar/core/sharded.hh>
@@ -339,6 +340,8 @@ public:
}
};
using repair_hash_set = absl::btree_set<repair_hash>;
enum class repair_row_level_start_status: uint8_t {
ok,
no_such_column_family,


@@ -47,6 +47,7 @@
#include "gms/gossiper.hh"
#include "repair/row_level.hh"
#include "mutation_source_metadata.hh"
#include "utils/stall_free.hh"
extern logging::logger rlogger;
@@ -529,7 +530,7 @@ public:
sstables::shared_sstable sst = use_view_update_path ? t->make_streaming_staging_sstable() : t->make_streaming_sstable_for_write();
schema_ptr s = reader.schema();
auto& pc = service::get_local_streaming_priority();
return sst->write_components(std::move(reader), std::max(1ul, adjusted_estimated_partitions), s,
return sst->write_components(std::move(reader), adjusted_estimated_partitions, s,
t->get_sstables_manager().configure_writer(),
encoding_stats{}, pc).then([sst] {
return sst->open_data();
@@ -666,7 +667,7 @@ private:
// Tracks current sync boundary
std::optional<repair_sync_boundary> _current_sync_boundary;
// Contains the hashes of rows in the _working_row_buf for all peer nodes
std::vector<std::unordered_set<repair_hash>> _peer_row_hash_sets;
std::vector<repair_hash_set> _peer_row_hash_sets;
// Gate used to make sure pending operation of meta data is done
seastar::gate _gate;
sink_source_for_get_full_row_hashes _sink_source_for_get_full_row_hashes;
@@ -754,11 +755,12 @@ public:
public:
future<> stop() {
auto gate_future = _gate.close();
auto writer_future = _repair_writer.wait_for_writer_done();
auto f1 = _sink_source_for_get_full_row_hashes.close();
auto f2 = _sink_source_for_get_row_diff.close();
auto f3 = _sink_source_for_put_row_diff.close();
return when_all_succeed(std::move(gate_future), std::move(writer_future), std::move(f1), std::move(f2), std::move(f3)).discard_result();
return when_all_succeed(std::move(gate_future), std::move(f1), std::move(f2), std::move(f3)).discard_result().finally([this] {
return _repair_writer.wait_for_writer_done();
});
}
static std::unordered_map<node_repair_meta_id, lw_shared_ptr<repair_meta>>& repair_meta_map() {
@@ -886,9 +888,9 @@ public:
}
// Must run inside a seastar thread
static std::unordered_set<repair_hash>
get_set_diff(const std::unordered_set<repair_hash>& x, const std::unordered_set<repair_hash>& y) {
std::unordered_set<repair_hash> set_diff;
static repair_hash_set
get_set_diff(const repair_hash_set& x, const repair_hash_set& y) {
repair_hash_set set_diff;
// Note: std::set_difference requires x and y to be sorted.
std::copy_if(x.begin(), x.end(), std::inserter(set_diff, set_diff.end()),
[&y] (auto& item) { thread::maybe_yield(); return y.find(item) == y.end(); });
@@ -906,14 +908,14 @@ public:
}
std::unordered_set<repair_hash>& peer_row_hash_sets(unsigned node_idx) {
repair_hash_set& peer_row_hash_sets(unsigned node_idx) {
return _peer_row_hash_sets[node_idx];
}
// Get a list of row hashes in _working_row_buf
future<std::unordered_set<repair_hash>>
future<repair_hash_set>
working_row_hashes() {
return do_with(std::unordered_set<repair_hash>(), [this] (std::unordered_set<repair_hash>& hashes) {
return do_with(repair_hash_set(), [this] (repair_hash_set& hashes) {
return do_for_each(_working_row_buf, [&hashes] (repair_row& r) {
hashes.emplace(r.hash());
}).then([&hashes] {
@@ -1090,24 +1092,32 @@ private:
});
}
future<> clear_row_buf() {
return utils::clear_gently(_row_buf);
}
future<> clear_working_row_buf() {
return utils::clear_gently(_working_row_buf).then([this] {
_working_row_buf_combined_hash.clear();
});
}
// Read rows from disk until _max_row_buf_size of rows are filled into _row_buf.
// Calculate the combined checksum of the rows
// Calculate the total size of the rows in _row_buf
future<get_sync_boundary_response>
get_sync_boundary(std::optional<repair_sync_boundary> skipped_sync_boundary) {
auto f = make_ready_future<>();
if (skipped_sync_boundary) {
_current_sync_boundary = skipped_sync_boundary;
_row_buf.clear();
_working_row_buf.clear();
_working_row_buf_combined_hash.clear();
} else {
_working_row_buf.clear();
_working_row_buf_combined_hash.clear();
f = clear_row_buf();
}
// Here is the place we update _last_sync_boundary
rlogger.trace("SET _last_sync_boundary from {} to {}", _last_sync_boundary, _current_sync_boundary);
_last_sync_boundary = _current_sync_boundary;
return row_buf_size().then([this, sb = std::move(skipped_sync_boundary)] (size_t cur_size) {
return f.then([this, sb = std::move(skipped_sync_boundary)] () mutable {
return clear_working_row_buf().then([this, sb = sb] () mutable {
return row_buf_size().then([this, sb = std::move(sb)] (size_t cur_size) {
return read_rows_from_disk(cur_size).then([this, sb = std::move(sb)] (std::list<repair_row> new_rows, size_t new_rows_size) mutable {
size_t new_rows_nr = new_rows.size();
_row_buf.splice(_row_buf.end(), new_rows);
@@ -1124,6 +1134,8 @@ private:
});
});
});
});
});
}
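
utils::clear_gently (from the newly included utils/stall_free.hh) frees a large container piece by piece instead of in one long destructor run that could stall the reactor. A simplified sketch of the idea, assuming a seastar::thread context; the real helper is future-based:

#include <list>
#include <seastar/core/thread.hh>

// Sketch: destroy one element at a time, yielding between elements so other
// tasks get to run.
template <typename T>
void clear_gently_in_thread(std::list<T>& l) {
    while (!l.empty()) {
        l.pop_front();
        seastar::thread::maybe_yield();
    }
}
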
future<> move_row_buf_to_working_row_buf() {
@@ -1199,9 +1211,9 @@ private:
}
future<std::list<repair_row>>
copy_rows_from_working_row_buf_within_set_diff(std::unordered_set<repair_hash> set_diff) {
copy_rows_from_working_row_buf_within_set_diff(repair_hash_set set_diff) {
return do_with(std::list<repair_row>(), std::move(set_diff),
[this] (std::list<repair_row>& rows, std::unordered_set<repair_hash>& set_diff) {
[this] (std::list<repair_row>& rows, repair_hash_set& set_diff) {
return do_for_each(_working_row_buf, [this, &set_diff, &rows] (const repair_row& r) {
if (set_diff.count(r.hash()) > 0) {
rows.push_back(r);
@@ -1216,7 +1228,7 @@ private:
// Given a set of row hashes, return the corresponding rows
// If needs_all_rows is set, return all the rows in _working_row_buf, ignoring the set_diff
future<std::list<repair_row>>
get_row_diff(std::unordered_set<repair_hash> set_diff, needs_all_rows_t needs_all_rows = needs_all_rows_t::no) {
get_row_diff(repair_hash_set set_diff, needs_all_rows_t needs_all_rows = needs_all_rows_t::no) {
if (needs_all_rows) {
if (!_repair_master || _nr_peer_nodes == 1) {
return make_ready_future<std::list<repair_row>>(std::move(_working_row_buf));
@@ -1227,19 +1239,28 @@ private:
}
}
future<> do_apply_rows(std::list<repair_row>& row_diff, unsigned node_idx, update_working_row_buf update_buf) {
return with_semaphore(_repair_writer.sem(), 1, [this, node_idx, update_buf, &row_diff] {
_repair_writer.create_writer(_db, node_idx);
return do_for_each(row_diff, [this, node_idx, update_buf] (repair_row& r) {
if (update_buf) {
_working_row_buf_combined_hash.add(r.hash());
}
// The repair_row here is supposed to have a
// mutation_fragment attached, because we stored it in
// to_repair_rows_list above, where the repair_row is created.
mutation_fragment mf = std::move(r.get_mutation_fragment());
auto dk_with_hash = r.get_dk_with_hash();
return _repair_writer.do_write(node_idx, std::move(dk_with_hash), std::move(mf));
future<> do_apply_rows(std::list<repair_row>&& row_diff, unsigned node_idx, update_working_row_buf update_buf) {
return do_with(std::move(row_diff), [this, node_idx, update_buf] (std::list<repair_row>& row_diff) {
return with_semaphore(_repair_writer.sem(), 1, [this, node_idx, update_buf, &row_diff] {
_repair_writer.create_writer(_db, node_idx);
return repeat([this, node_idx, update_buf, &row_diff] () mutable {
if (row_diff.empty()) {
return make_ready_future<stop_iteration>(stop_iteration::yes);
}
repair_row& r = row_diff.front();
if (update_buf) {
_working_row_buf_combined_hash.add(r.hash());
}
// The repair_row here is supposed to have a
// mutation_fragment attached, because we stored it in
// to_repair_rows_list above, where the repair_row is created.
mutation_fragment mf = std::move(r.get_mutation_fragment());
auto dk_with_hash = r.get_dk_with_hash();
return _repair_writer.do_write(node_idx, std::move(dk_with_hash), std::move(mf)).then([&row_diff] {
row_diff.pop_front();
return make_ready_future<stop_iteration>(stop_iteration::no);
});
});
});
});
}
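The rewritten do_apply_rows takes row_diff by value, pins it with do_with, and drains it with repeat, so each row can be released as soon as its write completes instead of the whole list staying alive until the end. The idiom, reduced to a sketch with a hypothetical write callback:

future<> drain_rows(std::list<repair_row> rows, std::function<future<>(repair_row)> write) {
    return do_with(std::move(rows), std::move(write), [] (std::list<repair_row>& rows, auto& write) {
        return repeat([&rows, &write] {
            if (rows.empty()) {
                return make_ready_future<stop_iteration>(stop_iteration::yes);
            }
            repair_row r = std::move(rows.front());
            rows.pop_front(); // release the node before awaiting the write
            return write(std::move(r)).then([] { return stop_iteration::no; });
        });
    });
}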
@@ -1257,19 +1278,17 @@ private:
stats().rx_row_nr += row_diff.size();
stats().rx_row_nr_peer[from] += row_diff.size();
if (update_buf) {
std::list<repair_row> tmp;
tmp.swap(_working_row_buf);
// Both row_diff and _working_row_buf are ordered; merging the
// two sorted lists makes sure the combination of row_diff
// and _working_row_buf stays ordered.
std::merge(tmp.begin(), tmp.end(), row_diff.begin(), row_diff.end(), std::back_inserter(_working_row_buf),
[this] (const repair_row& x, const repair_row& y) { thread::maybe_yield(); return _cmp(x.boundary(), y.boundary()) < 0; });
utils::merge_to_gently(_working_row_buf, row_diff,
[this] (const repair_row& x, const repair_row& y) { return _cmp(x.boundary(), y.boundary()) < 0; });
}
if (update_hash_set) {
_peer_row_hash_sets[node_idx] = boost::copy_range<std::unordered_set<repair_hash>>(row_diff |
_peer_row_hash_sets[node_idx] = boost::copy_range<repair_hash_set>(row_diff |
boost::adaptors::transformed([] (repair_row& r) { thread::maybe_yield(); return r.hash(); }));
}
do_apply_rows(row_diff, node_idx, update_buf).get();
do_apply_rows(std::move(row_diff), node_idx, update_buf).get();
}
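utils::merge_to_gently is assumed to splice one sorted list into another in place while yielding periodically, which is what lets it replace the swap-plus-std::merge sequence above without stalling the reactor. A hypothetical implementation with those semantics:

template <typename T, typename Less>
void merge_to_gently(std::list<T>& dst, std::list<T>& src, Less less) {
    auto it = dst.begin();
    while (!src.empty()) {
        thread::maybe_yield(); // requires a seastar thread
        while (it != dst.end() && !less(src.front(), *it)) {
            ++it; // skip dst elements that sort at or before src.front()
        }
        it = dst.insert(it, std::move(src.front()));
        ++it;
        src.pop_front();
    }
}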
future<>
@@ -1277,11 +1296,9 @@ private:
if (rows.empty()) {
return make_ready_future<>();
}
return to_repair_rows_list(rows).then([this] (std::list<repair_row> row_diff) {
return do_with(std::move(row_diff), [this] (std::list<repair_row>& row_diff) {
unsigned node_idx = 0;
return do_apply_rows(row_diff, node_idx, update_working_row_buf::no);
});
return to_repair_rows_list(std::move(rows)).then([this] (std::list<repair_row> row_diff) {
unsigned node_idx = 0;
return do_apply_rows(std::move(row_diff), node_idx, update_working_row_buf::no);
});
}
@@ -1360,13 +1377,13 @@ private:
public:
// RPC API
// Return the hashes of the rows in _working_row_buf
future<std::unordered_set<repair_hash>>
future<repair_hash_set>
get_full_row_hashes(gms::inet_address remote_node) {
if (remote_node == _myip) {
return get_full_row_hashes_handler();
}
return netw::get_local_messaging_service().send_repair_get_full_row_hashes(msg_addr(remote_node),
_repair_meta_id).then([this, remote_node] (std::unordered_set<repair_hash> hashes) {
_repair_meta_id).then([this, remote_node] (repair_hash_set hashes) {
rlogger.debug("Got full hashes from peer={}, nr_hashes={}", remote_node, hashes.size());
_metrics.rx_hashes_nr += hashes.size();
stats().rx_hashes_nr += hashes.size();
@@ -1377,7 +1394,7 @@ public:
private:
future<> get_full_row_hashes_source_op(
lw_shared_ptr<std::unordered_set<repair_hash>> current_hashes,
lw_shared_ptr<repair_hash_set> current_hashes,
gms::inet_address remote_node,
unsigned node_idx,
rpc::source<repair_hash_with_cmd>& source) {
@@ -1415,12 +1432,12 @@ private:
}
public:
future<std::unordered_set<repair_hash>>
future<repair_hash_set>
get_full_row_hashes_with_rpc_stream(gms::inet_address remote_node, unsigned node_idx) {
if (remote_node == _myip) {
return get_full_row_hashes_handler();
}
auto current_hashes = make_lw_shared<std::unordered_set<repair_hash>>();
auto current_hashes = make_lw_shared<repair_hash_set>();
return _sink_source_for_get_full_row_hashes.get_sink_source(remote_node, node_idx).then(
[this, current_hashes, remote_node, node_idx]
(rpc::sink<repair_stream_cmd>& sink, rpc::source<repair_hash_with_cmd>& source) mutable {
@@ -1435,7 +1452,7 @@ public:
}
// RPC handler
future<std::unordered_set<repair_hash>>
future<repair_hash_set>
get_full_row_hashes_handler() {
return with_gate(_gate, [this] {
return working_row_hashes();
@@ -1585,7 +1602,7 @@ public:
// RPC API
// Return rows in _working_row_buf whose hash is within the given set_diff
// Must run inside a seastar thread
void get_row_diff(std::unordered_set<repair_hash> set_diff, needs_all_rows_t needs_all_rows, gms::inet_address remote_node, unsigned node_idx) {
void get_row_diff(repair_hash_set set_diff, needs_all_rows_t needs_all_rows, gms::inet_address remote_node, unsigned node_idx) {
if (needs_all_rows || !set_diff.empty()) {
if (remote_node == _myip) {
return;
@@ -1654,11 +1671,11 @@ private:
}
future<> get_row_diff_sink_op(
std::unordered_set<repair_hash> set_diff,
repair_hash_set set_diff,
needs_all_rows_t needs_all_rows,
rpc::sink<repair_hash_with_cmd>& sink,
gms::inet_address remote_node) {
return do_with(std::move(set_diff), [needs_all_rows, remote_node, &sink] (std::unordered_set<repair_hash>& set_diff) mutable {
return do_with(std::move(set_diff), [needs_all_rows, remote_node, &sink] (repair_hash_set& set_diff) mutable {
if (inject_rpc_stream_error) {
return make_exception_future<>(std::runtime_error("get_row_diff: Inject sender error in sink loop"));
}
@@ -1685,7 +1702,7 @@ private:
public:
// Must run inside a seastar thread
void get_row_diff_with_rpc_stream(
std::unordered_set<repair_hash> set_diff,
repair_hash_set set_diff,
needs_all_rows_t needs_all_rows,
update_peer_row_hash_sets update_hash_set,
gms::inet_address remote_node,
@@ -1711,7 +1728,7 @@ public:
}
// RPC handler
future<repair_rows_on_wire> get_row_diff_handler(std::unordered_set<repair_hash> set_diff, needs_all_rows_t needs_all_rows) {
future<repair_rows_on_wire> get_row_diff_handler(repair_hash_set set_diff, needs_all_rows_t needs_all_rows) {
return with_gate(_gate, [this, set_diff = std::move(set_diff), needs_all_rows] () mutable {
return get_row_diff(std::move(set_diff), needs_all_rows).then([this] (std::list<repair_row> row_diff) {
return to_repair_rows_on_wire(std::move(row_diff));
@@ -1721,15 +1738,16 @@ public:
// RPC API
// Send rows in _working_row_buf whose hash is within the given set_diff
future<> put_row_diff(std::unordered_set<repair_hash> set_diff, needs_all_rows_t needs_all_rows, gms::inet_address remote_node) {
future<> put_row_diff(repair_hash_set set_diff, needs_all_rows_t needs_all_rows, gms::inet_address remote_node) {
if (!set_diff.empty()) {
if (remote_node == _myip) {
return make_ready_future<>();
}
auto sz = set_diff.size();
size_t sz = set_diff.size();
return get_row_diff(std::move(set_diff), needs_all_rows).then([this, remote_node, sz] (std::list<repair_row> row_diff) {
if (row_diff.size() != sz) {
throw std::runtime_error("row_diff.size() != set_diff.size()");
rlogger.warn("Hash conflict detected, keyspace={}, table={}, range={}, row_diff.size={}, set_diff.size={}. It is recommended to compact the table and rerun repair for the range.",
_schema->ks_name(), _schema->cf_name(), _range, row_diff.size(), sz);
}
return do_with(std::move(row_diff), [this, remote_node] (std::list<repair_row>& row_diff) {
return get_repair_rows_size(row_diff).then([this, remote_node, &row_diff] (size_t row_bytes) mutable {
@@ -1796,17 +1814,18 @@ private:
public:
future<> put_row_diff_with_rpc_stream(
std::unordered_set<repair_hash> set_diff,
repair_hash_set set_diff,
needs_all_rows_t needs_all_rows,
gms::inet_address remote_node, unsigned node_idx) {
if (!set_diff.empty()) {
if (remote_node == _myip) {
return make_ready_future<>();
}
auto sz = set_diff.size();
size_t sz = set_diff.size();
return get_row_diff(std::move(set_diff), needs_all_rows).then([this, remote_node, node_idx, sz] (std::list<repair_row> row_diff) {
if (row_diff.size() != sz) {
throw std::runtime_error("row_diff.size() != set_diff.size()");
rlogger.warn("Hash conflict detected, keyspace={}, table={}, range={}, row_diff.size={}, set_diff.size={}. It is recommended to compact the table and rerun repair for the range.",
_schema->ks_name(), _schema->cf_name(), _range, row_diff.size(), sz);
}
return do_with(std::move(row_diff), [this, remote_node, node_idx] (std::list<repair_row>& row_diff) {
return get_repair_rows_size(row_diff).then([this, remote_node, node_idx, &row_diff] (size_t row_bytes) mutable {
@@ -1845,7 +1864,7 @@ static future<stop_iteration> repair_get_row_diff_with_rpc_stream_process_op(
rpc::sink<repair_row_on_wire_with_cmd> sink,
rpc::source<repair_hash_with_cmd> source,
bool &error,
std::unordered_set<repair_hash>& current_set_diff,
repair_hash_set& current_set_diff,
std::optional<std::tuple<repair_hash_with_cmd>> hash_cmd_opt) {
repair_hash_with_cmd hash_cmd = std::get<0>(hash_cmd_opt.value());
rlogger.trace("Got repair_hash_with_cmd from peer={}, hash={}, cmd={}", from, hash_cmd.hash, int(hash_cmd.cmd));
@@ -1858,7 +1877,7 @@ static future<stop_iteration> repair_get_row_diff_with_rpc_stream_process_op(
}
bool needs_all_rows = hash_cmd.cmd == repair_stream_cmd::needs_all_rows;
_metrics.rx_hashes_nr += current_set_diff.size();
auto fp = make_foreign(std::make_unique<std::unordered_set<repair_hash>>(std::move(current_set_diff)));
auto fp = make_foreign(std::make_unique<repair_hash_set>(std::move(current_set_diff)));
return smp::submit_to(src_cpu_id % smp::count, [from, repair_meta_id, needs_all_rows, fp = std::move(fp)] {
auto rm = repair_meta::get_repair_meta(from, repair_meta_id);
if (fp.get_owner_shard() == this_shard_id()) {
@@ -1936,12 +1955,12 @@ static future<stop_iteration> repair_get_full_row_hashes_with_rpc_stream_process
if (status == repair_stream_cmd::get_full_row_hashes) {
return smp::submit_to(src_cpu_id % smp::count, [from, repair_meta_id] {
auto rm = repair_meta::get_repair_meta(from, repair_meta_id);
return rm->get_full_row_hashes_handler().then([] (std::unordered_set<repair_hash> hashes) {
return rm->get_full_row_hashes_handler().then([] (repair_hash_set hashes) {
_metrics.tx_hashes_nr += hashes.size();
return hashes;
});
}).then([sink] (std::unordered_set<repair_hash> hashes) mutable {
return do_with(std::move(hashes), [sink] (std::unordered_set<repair_hash>& hashes) mutable {
}).then([sink] (repair_hash_set hashes) mutable {
return do_with(std::move(hashes), [sink] (repair_hash_set& hashes) mutable {
return do_for_each(hashes, [sink] (const repair_hash& hash) mutable {
return sink(repair_hash_with_cmd{repair_stream_cmd::hash_data, hash});
}).then([sink] () mutable {
@@ -1964,7 +1983,7 @@ static future<> repair_get_row_diff_with_rpc_stream_handler(
uint32_t repair_meta_id,
rpc::sink<repair_row_on_wire_with_cmd> sink,
rpc::source<repair_hash_with_cmd> source) {
return do_with(false, std::unordered_set<repair_hash>(), [from, src_cpu_id, repair_meta_id, sink, source] (bool& error, std::unordered_set<repair_hash>& current_set_diff) mutable {
return do_with(false, repair_hash_set(), [from, src_cpu_id, repair_meta_id, sink, source] (bool& error, repair_hash_set& current_set_diff) mutable {
return repeat([from, src_cpu_id, repair_meta_id, sink, source, &error, &current_set_diff] () mutable {
return source().then([from, src_cpu_id, repair_meta_id, sink, source, &error, &current_set_diff] (std::optional<std::tuple<repair_hash_with_cmd>> hash_cmd_opt) mutable {
if (hash_cmd_opt) {
@@ -2107,7 +2126,7 @@ future<> repair_init_messaging_service_handler(repair_service& rs, distributed<d
auto from = cinfo.retrieve_auxiliary<gms::inet_address>("baddr");
return smp::submit_to(src_cpu_id % smp::count, [from, repair_meta_id] {
auto rm = repair_meta::get_repair_meta(from, repair_meta_id);
return rm->get_full_row_hashes_handler().then([] (std::unordered_set<repair_hash> hashes) {
return rm->get_full_row_hashes_handler().then([] (repair_hash_set hashes) {
_metrics.tx_hashes_nr += hashes.size();
return hashes;
});
@@ -2135,11 +2154,11 @@ future<> repair_init_messaging_service_handler(repair_service& rs, distributed<d
});
});
ms.register_repair_get_row_diff([] (const rpc::client_info& cinfo, uint32_t repair_meta_id,
std::unordered_set<repair_hash> set_diff, bool needs_all_rows) {
repair_hash_set set_diff, bool needs_all_rows) {
auto src_cpu_id = cinfo.retrieve_auxiliary<uint32_t>("src_cpu_id");
auto from = cinfo.retrieve_auxiliary<gms::inet_address>("baddr");
_metrics.rx_hashes_nr += set_diff.size();
auto fp = make_foreign(std::make_unique<std::unordered_set<repair_hash>>(std::move(set_diff)));
auto fp = make_foreign(std::make_unique<repair_hash_set>(std::move(set_diff)));
return smp::submit_to(src_cpu_id % smp::count, [from, repair_meta_id, fp = std::move(fp), needs_all_rows] () mutable {
auto rm = repair_meta::get_repair_meta(from, repair_meta_id);
if (fp.get_owner_shard() == this_shard_id()) {
@@ -2207,6 +2226,25 @@ future<> repair_init_messaging_service_handler(repair_service& rs, distributed<d
});
}
future<> repair_uninit_messaging_service_handler() {
return netw::get_messaging_service().invoke_on_all([] (auto& ms) {
return when_all_succeed(
ms.unregister_repair_get_row_diff_with_rpc_stream(),
ms.unregister_repair_put_row_diff_with_rpc_stream(),
ms.unregister_repair_get_full_row_hashes_with_rpc_stream(),
ms.unregister_repair_get_full_row_hashes(),
ms.unregister_repair_get_combined_row_hash(),
ms.unregister_repair_get_sync_boundary(),
ms.unregister_repair_get_row_diff(),
ms.unregister_repair_put_row_diff(),
ms.unregister_repair_row_level_start(),
ms.unregister_repair_row_level_stop(),
ms.unregister_repair_get_estimated_partitions(),
ms.unregister_repair_set_estimated_partitions(),
ms.unregister_repair_get_diff_algorithms()).discard_result();
});
}
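The uninit handler mirrors the registration list one-to-one: every verb registered in repair_init_messaging_service_handler gets a matching unregister call. when_all_succeed resolves once all of them complete (and fails if any fails), and discard_result drops the resulting empty tuple so a plain future<> comes back. The shape, abbreviated here to two verbs:

future<> unregister_some_verbs(netw::messaging_service& ms) {
    return when_all_succeed(
            ms.unregister_repair_get_row_diff(),
            ms.unregister_repair_put_row_diff()).discard_result();
}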
class row_level_repair {
repair_info& _ri;
sstring _cf_name;
@@ -2439,7 +2477,7 @@ private:
// sequentially because the rows from repair follower 1 to
// repair master might reduce the amount of missing data
// between repair master and repair follower 2.
std::unordered_set<repair_hash> set_diff = repair_meta::get_set_diff(master.peer_row_hash_sets(node_idx), master.working_row_hashes().get0());
repair_hash_set set_diff = repair_meta::get_set_diff(master.peer_row_hash_sets(node_idx), master.working_row_hashes().get0());
// Request missing sets from peer node
rlogger.debug("Before get_row_diff to node {}, local={}, peer={}, set_diff={}",
node, master.working_row_hashes().get0().size(), master.peer_row_hash_sets(node_idx).size(), set_diff.size());
@@ -2462,9 +2500,9 @@ private:
// So we can figure out which rows the peer nodes are missing and send the missing rows to them
check_in_shutdown();
_ri.check_in_abort();
std::unordered_set<repair_hash> local_row_hash_sets = master.working_row_hashes().get0();
repair_hash_set local_row_hash_sets = master.working_row_hashes().get0();
auto sz = _all_live_peer_nodes.size();
std::vector<std::unordered_set<repair_hash>> set_diffs(sz);
std::vector<repair_hash_set> set_diffs(sz);
for (size_t idx : boost::irange(size_t(0), sz)) {
set_diffs[idx] = repair_meta::get_set_diff(local_row_hash_sets, master.peer_row_hash_sets(idx));
}

View File

@@ -45,6 +45,7 @@ private:
};
future<> repair_init_messaging_service_handler(repair_service& rs, distributed<db::system_distributed_keyspace>& sys_dist_ks, distributed<db::view::view_update_generator>& view_update_generator);
future<> repair_uninit_messaging_service_handler();
class repair_info;

View File

@@ -19,6 +19,7 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include <seastar/core/on_internal_error.hh>
#include <map>
#include "utils/UUID_gen.hh"
#include "cql3/column_identifier.hh"
@@ -43,6 +44,8 @@
constexpr int32_t schema::NAME_LENGTH;
extern logging::logger dblog;
sstring to_sstring(column_kind k) {
switch (k) {
case column_kind::partition_key: return "PARTITION_KEY";
@@ -592,11 +595,15 @@ schema::get_column_definition(const bytes& name) const {
const column_definition&
schema::column_at(column_kind kind, column_id id) const {
return _raw._columns.at(column_offset(kind) + id);
return column_at(static_cast<ordinal_column_id>(column_offset(kind) + id));
}
const column_definition&
schema::column_at(ordinal_column_id ordinal_id) const {
if (size_t(ordinal_id) >= _raw._columns.size()) [[unlikely]] {
on_internal_error(dblog, format("{}.{}@{}: column id {:d} >= {:d}",
ks_name(), cf_name(), version(), size_t(ordinal_id), _raw._columns.size()));
}
return _raw._columns.at(static_cast<column_count_type>(ordinal_id));
}

View File

@@ -92,7 +92,8 @@ executables = ['build/{}/scylla'.format(args.mode),
'/usr/sbin/ethtool',
'/usr/bin/netstat',
'/usr/bin/hwloc-distrib',
'/usr/bin/hwloc-calc']
'/usr/bin/hwloc-calc',
'/usr/bin/lsblk']
output = args.dest

View File

@@ -597,7 +597,7 @@ def current_shard():
def find_db(shard=None):
if not shard:
if shard is None:
shard = current_shard()
return gdb.parse_and_eval('::debug::db')['_instances']['_M_impl']['_M_start'][shard]['service']['_p']

View File

@@ -63,6 +63,17 @@ MemoryHigh=1200M
MemoryMax=1400M
MemoryLimit=1400M
EOS
# On CentOS 7, systemd does not support percentage-based parameters.
# To apply the memory parameter on CentOS 7, we need to override it
# in bytes instead of as a percentage.
elif [ "$RHEL" -a "$VERSION_ID" = "7" ]; then
MEMORY_LIMIT=$((MEMTOTAL_BYTES / 100 * 5))
mkdir -p /etc/systemd/system/scylla-helper.slice.d/
cat << EOS > /etc/systemd/system/scylla-helper.slice.d/memory.conf
[Slice]
MemoryLimit=$MEMORY_LIMIT
EOS
fi
systemctl --system daemon-reload >/dev/null || true

Submodule seastar updated: 11e86172ba...61b88d1da4

View File

@@ -25,6 +25,7 @@
#include <seastar/util/bool_class.hh>
#include <boost/range/algorithm/for_each.hpp>
#include "utils/small_vector.hh"
#include <absl/container/btree_set.h>
namespace ser {
@@ -81,6 +82,17 @@ static inline void serialize_array(Output& out, const Container& v) {
template<typename Container>
struct container_traits;
template<typename T>
struct container_traits<absl::btree_set<T>> {
struct back_emplacer {
absl::btree_set<T>& c;
back_emplacer(absl::btree_set<T>& c_) : c(c_) {}
void operator()(T&& v) {
c.emplace(std::move(v));
}
};
};
template<typename T>
struct container_traits<std::unordered_set<T>> {
struct back_emplacer {
@@ -253,6 +265,27 @@ struct serializer<std::list<T>> {
}
};
template<typename T>
struct serializer<absl::btree_set<T>> {
template<typename Input>
static absl::btree_set<T> read(Input& in) {
auto sz = deserialize(in, boost::type<uint32_t>());
absl::btree_set<T> v;
deserialize_array_helper<false, T>::doit(in, v, sz);
return v;
}
template<typename Output>
static void write(Output& out, const absl::btree_set<T>& v) {
safe_serialize_as_uint32(out, v.size());
serialize_array_helper<false, T>::doit(out, v);
}
template<typename Input>
static void skip(Input& in) {
auto sz = deserialize(in, boost::type<uint32_t>());
skip_array<T>(in, sz);
}
};
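One practical property of absl::btree_set over std::unordered_set: it iterates in key order, so the length-prefixed stream write() produces is deterministic for a given set. A sketch of the same wire shape, with hypothetical encode_u32/encode_elem sinks standing in for the real serialization helpers:

template <typename T, typename Sink>
void write_set(Sink& out, const absl::btree_set<T>& v) {
    encode_u32(out, static_cast<uint32_t>(v.size())); // length prefix, as above
    for (const auto& e : v) {
        encode_elem(out, e); // elements come out in sorted key order
    }
}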
template<typename T>
struct serializer<std::unordered_set<T>> {
template<typename Input>

View File

@@ -1760,6 +1760,7 @@ storage_proxy::storage_proxy(distributed<database>& db, storage_proxy::config cf
, _token_metadata(tm)
, _read_smp_service_group(cfg.read_smp_service_group)
, _write_smp_service_group(cfg.write_smp_service_group)
, _hints_write_smp_service_group(cfg.hints_write_smp_service_group)
, _write_ack_smp_service_group(cfg.write_ack_smp_service_group)
, _next_response_id(std::chrono::system_clock::now().time_since_epoch()/1ms)
, _hints_resource_manager(cfg.available_memory / 10)
@@ -1803,39 +1804,48 @@ storage_proxy::response_id_type storage_proxy::unique_response_handler::release(
}
future<>
storage_proxy::mutate_locally(const mutation& m, tracing::trace_state_ptr tr_state, clock_type::time_point timeout) {
storage_proxy::mutate_locally(const mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout, smp_service_group smp_grp) {
auto shard = _db.local().shard_of(m);
get_stats().replica_cross_shard_ops += shard != this_shard_id();
return _db.invoke_on(shard, {_write_smp_service_group, timeout},
[s = global_schema_ptr(m.schema()), m = freeze(m), gtr = tracing::global_trace_state_ptr(std::move(tr_state)), timeout] (database& db) mutable -> future<> {
return db.apply(s, m, gtr.get(), db::commitlog::force_sync::no, timeout);
return _db.invoke_on(shard, {smp_grp, timeout},
[s = global_schema_ptr(m.schema()),
m = freeze(m),
gtr = tracing::global_trace_state_ptr(std::move(tr_state)),
timeout,
sync] (database& db) mutable -> future<> {
return db.apply(s, m, gtr.get(), sync, timeout);
});
}
future<>
storage_proxy::mutate_locally(const schema_ptr& s, const frozen_mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout) {
storage_proxy::mutate_locally(const schema_ptr& s, const frozen_mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout,
smp_service_group smp_grp) {
auto shard = _db.local().shard_of(m);
get_stats().replica_cross_shard_ops += shard != this_shard_id();
return _db.invoke_on(shard, {_write_smp_service_group, timeout},
return _db.invoke_on(shard, {smp_grp, timeout},
[&m, gs = global_schema_ptr(s), gtr = tracing::global_trace_state_ptr(std::move(tr_state)), timeout, sync] (database& db) mutable -> future<> {
return db.apply(gs, m, gtr.get(), sync, timeout);
});
}
future<>
storage_proxy::mutate_locally(std::vector<mutation> mutations, tracing::trace_state_ptr tr_state, clock_type::time_point timeout) {
return do_with(std::move(mutations), [this, timeout, tr_state = std::move(tr_state)] (std::vector<mutation>& pmut) mutable {
return parallel_for_each(pmut.begin(), pmut.end(), [this, tr_state = std::move(tr_state), timeout] (const mutation& m) mutable {
return mutate_locally(m, tr_state, timeout);
storage_proxy::mutate_locally(std::vector<mutation> mutations, tracing::trace_state_ptr tr_state, clock_type::time_point timeout, smp_service_group smp_grp) {
return do_with(std::move(mutations), [this, timeout, tr_state = std::move(tr_state), smp_grp] (std::vector<mutation>& pmut) mutable {
return parallel_for_each(pmut.begin(), pmut.end(), [this, tr_state = std::move(tr_state), timeout, smp_grp] (const mutation& m) mutable {
return mutate_locally(m, tr_state, db::commitlog::force_sync::no, timeout, smp_grp);
});
});
}
future<>
storage_proxy::mutate_locally(std::vector<mutation> mutation, tracing::trace_state_ptr tr_state, clock_type::time_point timeout) {
return mutate_locally(std::move(mutation), tr_state, timeout, _write_smp_service_group);
}
future<>
storage_proxy::mutate_hint(const schema_ptr& s, const frozen_mutation& m, tracing::trace_state_ptr tr_state, clock_type::time_point timeout) {
auto shard = _db.local().shard_of(m);
get_stats().replica_cross_shard_ops += shard != this_shard_id();
return _db.invoke_on(shard, {_write_smp_service_group, timeout}, [&m, gs = global_schema_ptr(s), tr_state = std::move(tr_state), timeout] (database& db) mutable -> future<> {
return _db.invoke_on(shard, {_hints_write_smp_service_group, timeout}, [&m, gs = global_schema_ptr(s), tr_state = std::move(tr_state), timeout] (database& db) mutable -> future<> {
return db.apply_hint(gs, m, std::move(tr_state), timeout);
});
}
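An smp_service_group caps the number of in-flight cross-shard requests per group, so giving hint writes their own group keeps hint replay from monopolizing the slots foreground writes need. Creating and tagging requests with a dedicated group looks roughly like this (the budget value is illustrative, and get0() assumes a seastar thread):

seastar::smp_service_group_config cfg;
cfg.max_nonlocal_requests = 32; // illustrative per-group concurrency budget
seastar::smp_service_group hints_grp = seastar::create_smp_service_group(cfg).get0();
// Cross-shard applies are then tagged with the group:
//   _db.invoke_on(shard, {hints_grp, timeout}, [] (database& db) { /* apply */ });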
@@ -4849,7 +4859,7 @@ void storage_proxy::init_messaging_service() {
});
};
auto receive_mutation_handler = [] (const rpc::client_info& cinfo, rpc::opt_time_point t, frozen_mutation in, std::vector<gms::inet_address> forward,
auto receive_mutation_handler = [] (smp_service_group smp_grp, const rpc::client_info& cinfo, rpc::opt_time_point t, frozen_mutation in, std::vector<gms::inet_address> forward,
gms::inet_address reply_to, unsigned shard, storage_proxy::response_id_type response_id, rpc::optional<std::optional<tracing::trace_info>> trace_info) {
tracing::trace_state_ptr trace_state_ptr;
auto src_addr = netw::messaging_service::get_source(cinfo);
@@ -4857,9 +4867,9 @@ void storage_proxy::init_messaging_service() {
utils::UUID schema_version = in.schema_version();
return handle_write(src_addr, t, schema_version, std::move(in), std::move(forward), reply_to, shard, response_id,
trace_info ? *trace_info : std::nullopt,
/* apply_fn */ [] (shared_ptr<storage_proxy>& p, tracing::trace_state_ptr tr_state, schema_ptr s, const frozen_mutation& m,
/* apply_fn */ [smp_grp] (shared_ptr<storage_proxy>& p, tracing::trace_state_ptr tr_state, schema_ptr s, const frozen_mutation& m,
clock_type::time_point timeout) {
return p->mutate_locally(std::move(s), m, std::move(tr_state), db::commitlog::force_sync::no, timeout);
return p->mutate_locally(std::move(s), m, std::move(tr_state), db::commitlog::force_sync::no, timeout, smp_grp);
},
/* forward_fn */ [] (netw::messaging_service::msg_addr addr, clock_type::time_point timeout, const frozen_mutation& m,
gms::inet_address reply_to, unsigned shard, response_id_type response_id,
@@ -4868,8 +4878,8 @@ void storage_proxy::init_messaging_service() {
return ms.send_mutation(addr, timeout, m, {}, reply_to, shard, response_id, std::move(trace_info));
});
};
ms.register_mutation(receive_mutation_handler);
ms.register_hint_mutation(receive_mutation_handler);
ms.register_mutation(std::bind_front<>(receive_mutation_handler, _write_smp_service_group));
ms.register_hint_mutation(std::bind_front<>(receive_mutation_handler, _hints_write_smp_service_group));
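std::bind_front pins the smp group as the leading argument, so a single handler body serves both verbs under different groups. In effect (illustrative):

auto handler = std::bind_front(receive_mutation_handler, _write_smp_service_group);
// handler(cinfo, t, fm, ...) now calls
// receive_mutation_handler(_write_smp_service_group, cinfo, t, fm, ...)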
ms.register_paxos_learn([] (const rpc::client_info& cinfo, rpc::opt_time_point t, paxos::proposal decision,
std::vector<gms::inet_address> forward, gms::inet_address reply_to, unsigned shard,
@@ -5112,18 +5122,22 @@ void storage_proxy::init_messaging_service() {
future<> storage_proxy::uninit_messaging_service() {
auto& ms = netw::get_local_messaging_service();
return when_all_succeed(
ms.unregister_counter_mutation(),
ms.unregister_mutation(),
ms.unregister_hint_mutation(),
ms.unregister_mutation_done(),
ms.unregister_mutation_failed(),
ms.unregister_read_data(),
ms.unregister_read_mutation_data(),
ms.unregister_read_digest(),
ms.unregister_truncate(),
ms.unregister_get_schema_version(),
ms.unregister_paxos_prepare(),
ms.unregister_paxos_accept(),
ms.unregister_paxos_learn(),
ms.unregister_paxos_prune()
).discard_result();
}
future<rpc::tuple<foreign_ptr<lw_shared_ptr<reconcilable_result>>, cache_temperature>>
@@ -5217,8 +5231,7 @@ future<> storage_proxy::drain_on_shutdown() {
future<>
storage_proxy::stop() {
// FIXME: hints manager should be stopped here but it seems like this function is never called
return uninit_messaging_service();
return make_ready_future<>();
}
}

View File

@@ -166,6 +166,7 @@ public:
size_t available_memory;
smp_service_group read_smp_service_group = default_smp_service_group();
smp_service_group write_smp_service_group = default_smp_service_group();
smp_service_group hints_write_smp_service_group = default_smp_service_group();
// Write acknowledgments might not be received on the correct shard, and
// they need a separate smp_service_group to prevent an ABBA deadlock
// with writes.
@@ -256,6 +257,7 @@ private:
locator::token_metadata& _token_metadata;
smp_service_group _read_smp_service_group;
smp_service_group _write_smp_service_group;
smp_service_group _hints_write_smp_service_group;
smp_service_group _write_ack_smp_service_group;
response_id_type _next_response_id;
response_handlers_map _response_handlers;
@@ -314,7 +316,6 @@ private:
cdc_stats _cdc_stats;
private:
future<> uninit_messaging_service();
future<coordinator_query_result> query_singular(lw_shared_ptr<query::read_command> cmd,
dht::partition_range_vector&& partition_ranges,
db::consistency_level cl,
@@ -469,13 +470,31 @@ public:
return next;
}
void init_messaging_service();
future<> uninit_messaging_service();
private:
// Applies mutation on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(const mutation& m, tracing::trace_state_ptr tr_state, clock_type::time_point timeout = clock_type::time_point::max());
future<> mutate_locally(const mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout, smp_service_group smp_grp);
// Applies mutation on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(const schema_ptr&, const frozen_mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout = clock_type::time_point::max());
future<> mutate_locally(const schema_ptr&, const frozen_mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout,
smp_service_group smp_grp);
// Applies mutations on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(std::vector<mutation> mutation, tracing::trace_state_ptr tr_state, clock_type::time_point timeout, smp_service_group smp_grp);
public:
// Applies mutation on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(const mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout = clock_type::time_point::max()) {
return mutate_locally(m, tr_state, sync, timeout, _write_smp_service_group);
}
// Applies mutation on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(const schema_ptr& s, const frozen_mutation& m, tracing::trace_state_ptr tr_state, db::commitlog::force_sync sync, clock_type::time_point timeout = clock_type::time_point::max()) {
return mutate_locally(s, m, tr_state, sync, timeout, _write_smp_service_group);
}
// Applies mutations on this node.
// Resolves with timed_out_error when timeout is reached.
future<> mutate_locally(std::vector<mutation> mutation, tracing::trace_state_ptr tr_state, clock_type::time_point timeout = clock_type::time_point::max());

View File

@@ -369,6 +369,9 @@ void storage_service::prepare_to_join(std::vector<inet_address> loaded_endpoints
app_states.emplace(gms::application_state::CDC_STREAMS_TIMESTAMP, versioned_value::cdc_streams_timestamp(_cdc_streams_ts));
app_states.emplace(gms::application_state::STATUS, versioned_value::normal(my_tokens));
}
if (replacing_a_node_with_same_ip || replacing_a_node_with_diff_ip) {
app_states.emplace(gms::application_state::TOKENS, versioned_value::tokens(_bootstrap_tokens));
}
slogger.info("Starting up server gossip");
auto generation_number = db::system_keyspace::increment_and_get_generation().get0();
@@ -698,6 +701,8 @@ bool storage_service::do_handle_cdc_generation_intercept_nonfatal_errors(db_cloc
throw cdc_generation_handling_nonfatal_exception(e.what());
} catch (exceptions::unavailable_exception& e) {
throw cdc_generation_handling_nonfatal_exception(e.what());
} catch (exceptions::read_failure_exception& e) {
throw cdc_generation_handling_nonfatal_exception(e.what());
} catch (...) {
const auto ep = std::current_exception();
if (is_timeout_exception(ep)) {
@@ -890,12 +895,14 @@ future<> storage_service::check_and_repair_cdc_streams() {
cdc_log.error("Aborting CDC generation repair due to missing STATUS");
return;
}
// Update _cdc_streams_ts first, so that do_handle_cdc_generation (which will get called due to the status update)
// won't try to update the gossiper, which would result in a deadlock inside add_local_application_state
_cdc_streams_ts = new_streams_ts;
_gossiper.add_local_application_state({
{ gms::application_state::CDC_STREAMS_TIMESTAMP, versioned_value::cdc_streams_timestamp(new_streams_ts) },
{ gms::application_state::STATUS, *status }
}).get();
db::system_keyspace::update_cdc_streams_timestamp(new_streams_ts).get();
_cdc_streams_ts = new_streams_ts;
});
}
@@ -1884,9 +1891,11 @@ future<std::map<gms::inet_address, float>> storage_service::effective_ownership(
return do_with(dht::token::describe_ownership(ss._token_metadata.sorted_tokens()),
ss._token_metadata.get_topology().get_datacenter_endpoints(),
std::map<gms::inet_address, float>(),
[&ss, keyspace_name](const std::map<token, float>& token_ownership, std::unordered_map<sstring,
std::move(keyspace_name),
[&ss](const std::map<token, float>& token_ownership, std::unordered_map<sstring,
std::unordered_set<gms::inet_address>>& datacenter_endpoints,
std::map<gms::inet_address, float>& final_ownership) {
std::map<gms::inet_address, float>& final_ownership,
sstring& keyspace_name) {
return do_for_each(datacenter_endpoints, [&ss, &keyspace_name, &final_ownership, &token_ownership](std::pair<sstring,std::unordered_set<inet_address>>&& endpoints) mutable {
return do_with(std::unordered_set<inet_address>(endpoints.second), [&ss, &keyspace_name, &final_ownership, &token_ownership](const std::unordered_set<inet_address>& endpoints_map) mutable {
return do_for_each(endpoints_map, [&ss, &keyspace_name, &final_ownership, &token_ownership](const gms::inet_address& endpoint) mutable {

View File

@@ -602,7 +602,7 @@ private:
// - add support to merge summary (message: Partition merge counts were {%s}.).
// - there is no easy way, currently, to know the exact number of total partitions.
// For the time being, use the estimated key count.
sstring formatted_msg = fmt::format("{} sstables to [{}]. {} to {} (~{} of original) in {}ms = {}. " \
sstring formatted_msg = fmt::format("{} sstables to [{}]. {} to {} (~{}% of original) in {}ms = {}. " \
"~{} total partitions merged to {}.",
_info->sstables, new_sstables_msg, pretty_printed_data_size(_info->start_size), pretty_printed_data_size(_info->end_size), int(ratio * 100),
std::chrono::duration_cast<std::chrono::milliseconds>(duration).count(), pretty_printed_throughput(_info->end_size, duration),
@@ -1236,11 +1236,8 @@ private:
// return estimated partitions per sstable for a given shard
uint64_t partitions_per_sstable(shard_id s) const {
uint64_t estimated_sstables = std::max(uint64_t(1), uint64_t(ceil(double(_estimation_per_shard[s].estimated_size) / _max_sstable_size)));
// As we adjust this estimate downwards from the compaction strategy, it can get to 0 so
// make sure we're returning at least 1.
return std::max(uint64_t(1),
std::min(uint64_t(ceil(double(_estimation_per_shard[s].estimated_partitions) / estimated_sstables)),
_cf.get_compaction_strategy().adjust_partition_estimate(_ms_metadata, _estimation_per_shard[s].estimated_partitions)));
return std::min(uint64_t(ceil(double(_estimation_per_shard[s].estimated_partitions) / estimated_sstables)),
_cf.get_compaction_strategy().adjust_partition_estimate(_ms_metadata, _estimation_per_shard[s].estimated_partitions));
}
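Illustrative arithmetic for the estimate above (all numbers made up): with 10 GB estimated on a shard, a 3 GB max sstable size and 8M estimated partitions:

uint64_t estimated_sstables = std::max<uint64_t>(1, std::ceil(10.0 / 3.0)); // == 4
uint64_t per_sstable = std::ceil(8'000'000.0 / estimated_sstables);         // == 2'000'000
// the result is further capped by the strategy's adjust_partition_estimate().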
public:
resharding_compaction(column_family& cf, sstables::compaction_descriptor descriptor)

View File

@@ -92,6 +92,9 @@ public:
void transfer_ongoing_charges(compaction_backlog_tracker& new_bt, bool move_read_charges = true);
void revert_charges(sstables::shared_sstable sst);
private:
// Returns true if this SSTable can be added or removed from the tracker.
bool sstable_belongs_to_tracker(const sstables::shared_sstable& sst);
void disable() {
_disabled = true;
_ongoing_writes = {};

View File

@@ -218,7 +218,7 @@ std::vector<sstables::shared_sstable> compaction_manager::get_candidates(const c
auto& cs = cf.get_compaction_strategy();
// Filter out sstables that are being compacted.
for (auto& sst : cf.candidates_for_compaction()) {
for (auto& sst : cf.non_staging_sstables()) {
if (_compacting_sstables.count(sst)) {
continue;
}
@@ -708,8 +708,8 @@ future<> compaction_manager::rewrite_sstables(column_family* cf, sstables::compa
return task->compaction_done.get_future().then([task] {});
}
static bool needs_cleanup(const sstables::shared_sstable& sst,
const dht::token_range_vector& owned_ranges,
bool needs_cleanup(const sstables::shared_sstable& sst,
const dht::token_range_vector& sorted_owned_ranges,
schema_ptr s) {
auto first = sst->get_first_partition_key();
auto last = sst->get_last_partition_key();
@@ -717,29 +717,40 @@ static bool needs_cleanup(const sstables::shared_sstable& sst,
auto last_token = dht::get_token(*s, last);
dht::token_range sst_token_range = dht::token_range::make(first_token, last_token);
auto r = std::lower_bound(sorted_owned_ranges.begin(), sorted_owned_ranges.end(), first_token,
[] (const range<dht::token>& a, const dht::token& b) {
// check that range a is before token b.
return a.after(b, dht::token_comparator());
});
// return true iff sst partition range isn't fully contained in any of the owned ranges.
for (auto& r : owned_ranges) {
if (r.contains(sst_token_range, dht::token_comparator())) {
if (r != sorted_owned_ranges.end()) {
if (r->contains(sst_token_range, dht::token_comparator())) {
return false;
}
}
return true;
}
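The linear scan over owned ranges becomes a binary search: since the ranges are sorted and non-overlapping, the first range not wholly before the sstable's first token is the only one that can contain the sstable's span. The same idea in miniature, with closed integer intervals (standard headers omitted):

using interval = std::pair<int, int>; // [lo, hi], sorted, non-overlapping

bool fully_contained(const std::vector<interval>& sorted, interval probe) {
    auto it = std::lower_bound(sorted.begin(), sorted.end(), probe.first,
            [] (const interval& a, int lo) { return a.second < lo; }); // a wholly before lo
    return it != sorted.end() && it->first <= probe.first && probe.second <= it->second;
}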
future<> compaction_manager::perform_cleanup(column_family* cf) {
future<> compaction_manager::perform_cleanup(database& db, column_family* cf) {
if (check_for_cleanup(cf)) {
throw std::runtime_error(format("cleanup request failed: there is an ongoing cleanup on {}.{}",
cf->schema()->ks_name(), cf->schema()->cf_name()));
}
return rewrite_sstables(cf, sstables::compaction_options::make_cleanup(), [this] (const table& table) {
auto schema = table.schema();
auto owned_ranges = service::get_local_storage_service().get_local_ranges(schema->ks_name());
return seastar::async([this, cf, &db] {
auto schema = cf->schema();
auto& rs = db.find_keyspace(schema->ks_name()).get_replication_strategy();
auto sorted_owned_ranges = rs.get_ranges_in_thread(utils::fb_utilities::get_broadcast_address());
auto sstables = std::vector<sstables::shared_sstable>{};
const auto candidates = table.candidates_for_compaction();
std::copy_if(candidates.begin(), candidates.end(), std::back_inserter(sstables), [&owned_ranges, schema] (const sstables::shared_sstable& sst) {
return owned_ranges.empty() || needs_cleanup(sst, owned_ranges, schema);
const auto candidates = get_candidates(*cf);
std::copy_if(candidates.begin(), candidates.end(), std::back_inserter(sstables), [&sorted_owned_ranges, schema] (const sstables::shared_sstable& sst) {
seastar::thread::maybe_yield();
return sorted_owned_ranges.empty() || needs_cleanup(sst, sorted_owned_ranges, schema);
});
return sstables;
}).then([this, cf] (std::vector<sstables::shared_sstable> sstables) {
return rewrite_sstables(cf, sstables::compaction_options::make_cleanup(),
[sstables = std::move(sstables)] (const table&) { return sstables; });
});
}
@@ -754,7 +765,7 @@ future<> compaction_manager::perform_sstable_upgrade(column_family* cf, bool exc
return cf->run_with_compaction_disabled([this, cf, &tables, exclude_current_version] {
auto last_version = cf->get_sstables_manager().get_highest_supported_format();
for (auto& sst : cf->candidates_for_compaction()) {
for (auto& sst : get_candidates(*cf)) {
// if we are a "normal" upgrade, we only care about
// tables with older versions, but potentially
// we are to actually rewrite everything. (-a)
@@ -779,8 +790,8 @@ future<> compaction_manager::perform_sstable_upgrade(column_family* cf, bool exc
// Submit a column family to be scrubbed and wait for its termination.
future<> compaction_manager::perform_sstable_scrub(column_family* cf, bool skip_corrupted) {
return rewrite_sstables(cf, sstables::compaction_options::make_scrub(skip_corrupted), [] (const table& cf) {
return cf.candidates_for_compaction();
return rewrite_sstables(cf, sstables::compaction_options::make_scrub(skip_corrupted), [this] (const table& cf) {
return get_candidates(cf);
});
}
@@ -857,7 +868,7 @@ double compaction_backlog_tracker::backlog() const {
}
void compaction_backlog_tracker::add_sstable(sstables::shared_sstable sst) {
if (_disabled) {
if (_disabled || !sstable_belongs_to_tracker(sst)) {
return;
}
_ongoing_writes.erase(sst);
@@ -870,7 +881,7 @@ void compaction_backlog_tracker::add_sstable(sstables::shared_sstable sst) {
}
void compaction_backlog_tracker::remove_sstable(sstables::shared_sstable sst) {
if (_disabled) {
if (_disabled || !sstable_belongs_to_tracker(sst)) {
return;
}
@@ -883,6 +894,10 @@ void compaction_backlog_tracker::remove_sstable(sstables::shared_sstable sst) {
}
}
bool compaction_backlog_tracker::sstable_belongs_to_tracker(const sstables::shared_sstable& sst) {
return !sst->requires_view_building();
}
void compaction_backlog_tracker::register_partially_written_sstable(sstables::shared_sstable sst, backlog_write_progress_manager& wp) {
if (_disabled) {
return;

View File

@@ -205,7 +205,7 @@ public:
// Cleanup is about discarding keys that are no longer relevant for a
// given sstable, e.g. after node loses part of its token range because
// of a newly added node.
future<> perform_cleanup(column_family* cf);
future<> perform_cleanup(database& db, column_family* cf);
// Submit a column family to be upgraded and wait for its termination.
future<> perform_sstable_upgrade(column_family* cf, bool exclude_current_version);
@@ -271,3 +271,5 @@ public:
friend class compaction_weight_registration;
};
bool needs_cleanup(const sstables::shared_sstable& sst, const dht::token_range_vector& owned_ranges, schema_ptr s);

View File

@@ -438,8 +438,8 @@ std::unique_ptr<sstable_set_impl> leveled_compaction_strategy::make_sstable_set(
return std::make_unique<partitioned_sstable_set>(std::move(schema));
}
std::unique_ptr<sstable_set_impl> make_partitioned_sstable_set(schema_ptr schema, bool use_level_metadata) {
return std::make_unique<partitioned_sstable_set>(std::move(schema), use_level_metadata);
sstable_set make_partitioned_sstable_set(schema_ptr schema, lw_shared_ptr<sstable_list> all, bool use_level_metadata) {
return sstables::sstable_set(std::make_unique<partitioned_sstable_set>(schema, use_level_metadata), schema, std::move(all));
}
compaction_descriptor compaction_strategy_impl::get_major_compaction_job(column_family& cf, std::vector<sstables::shared_sstable> candidates) {

View File

@@ -453,9 +453,16 @@ private:
auto indexes = std::move(entries_reader->_consumer.indexes);
return entries_reader->_context.close().then([indexes = std::move(indexes), ex = std::move(ex)] () mutable {
if (ex) {
std::rethrow_exception(std::move(ex));
return do_with(std::move(indexes), [ex = std::move(ex)] (index_list& indexes) mutable {
return parallel_for_each(indexes, [] (index_entry& ie) mutable {
return ie.close_pi_stream();
}).then_wrapped([ex = std::move(ex)] (future<>&& fut) mutable {
fut.ignore_ready_future();
return make_exception_future<index_list>(std::move(ex));
});
});
}
return std::move(indexes);
return make_ready_future<index_list>(std::move(indexes));
});
});

View File

@@ -178,7 +178,13 @@ leveled_compaction_strategy::get_reshaping_job(std::vector<shared_sstable> input
size_t offstrategy_threshold = std::max(schema->min_compaction_threshold(), 4);
size_t max_sstables = std::max(schema->max_compaction_threshold(), int(offstrategy_threshold));
unsigned tolerance = mode == reshape_mode::strict ? 0 : leveled_manifest::leveled_fan_out * 2;
auto tolerance = [mode] (unsigned level) -> unsigned {
if (mode == reshape_mode::strict) {
return 0;
}
constexpr unsigned fan_out = leveled_manifest::leveled_fan_out;
return std::max(double(fan_out), std::ceil(std::pow(fan_out, level) * 0.1));
};
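With leveled_fan_out assumed to be 10, the tolerance schedule above evaluates as sketched below; deeper levels hold exponentially more sstables, so proportionally more overlap is tolerated before a full reshape is forced:

unsigned tolerance_at(unsigned level) {
    constexpr unsigned fan_out = 10; // assumed value of leveled_manifest::leveled_fan_out
    return std::max<double>(fan_out, std::ceil(std::pow(fan_out, level) * 0.1));
}
// tolerance_at(0) == tolerance_at(1) == tolerance_at(2) == 10
// tolerance_at(3) == 100, tolerance_at(4) == 1000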
if (level_info[0].size() > offstrategy_threshold) {
level_info[0].resize(std::min(level_info[0].size(), max_sstables));
@@ -193,7 +199,7 @@ leveled_compaction_strategy::get_reshaping_job(std::vector<shared_sstable> input
}
max_filled_level = std::max(max_filled_level, level);
if (!is_disjoint(level_info[level], tolerance)) {
if (!is_disjoint(level_info[level], tolerance(level))) {
leveled_manifest::logger.warn("Turns out that level {} is not disjoint, so compacting everything on behalf of {}.{}", level, schema->ks_name(), schema->cf_name());
// Unfortunately no good limit to limit input size to max_sstables for LCS major
compaction_descriptor desc(std::move(input), std::optional<sstables::sstable_set>(), iop, max_filled_level, _max_sstable_size_in_mb * 1024 * 1024);

View File

@@ -741,6 +741,11 @@ public:
, _run_identifier(cfg.run_identifier)
, _write_regular_as_static(cfg.correctly_serialize_static_compact_in_mc && s.is_static_compact_table())
{
// This can be 0 in some cases, which, albeit benign in itself, can wreak
// havoc in lower-level writer code, so clamp it to [1, +inf) here, which is
// exactly what callers used to do anyway.
estimated_partitions = std::max(uint64_t(1), estimated_partitions);
_sst.generate_toc(_schema.get_compressor_params().get_compressor(), _schema.bloom_filter_fp_chance());
_sst.write_toc(_pc);
_sst.create_data().get();

View File

@@ -27,7 +27,7 @@
namespace sstables {
std::vector<std::pair<sstables::shared_sstable, uint64_t>>
size_tiered_compaction_strategy::create_sstable_and_length_pairs(const std::vector<sstables::shared_sstable>& sstables) const {
size_tiered_compaction_strategy::create_sstable_and_length_pairs(const std::vector<sstables::shared_sstable>& sstables) {
std::vector<std::pair<sstables::shared_sstable, uint64_t>> sstable_length_pairs;
sstable_length_pairs.reserve(sstables.size());
@@ -43,7 +43,7 @@ size_tiered_compaction_strategy::create_sstable_and_length_pairs(const std::vect
}
std::vector<std::vector<sstables::shared_sstable>>
size_tiered_compaction_strategy::get_buckets(const std::vector<sstables::shared_sstable>& sstables) const {
size_tiered_compaction_strategy::get_buckets(const std::vector<sstables::shared_sstable>& sstables, size_tiered_compaction_strategy_options options) {
// sstables sorted by size of its data file.
auto sorted_sstables = create_sstable_and_length_pairs(sstables);
@@ -64,8 +64,8 @@ size_tiered_compaction_strategy::get_buckets(const std::vector<sstables::shared_
for (auto it = buckets.begin(); it != buckets.end(); it++) {
size_t old_average_size = it->first;
if ((size > (old_average_size * _options.bucket_low) && size < (old_average_size * _options.bucket_high)) ||
(size < _options.min_sstable_size && old_average_size < _options.min_sstable_size)) {
if ((size > (old_average_size * options.bucket_low) && size < (old_average_size * options.bucket_high)) ||
(size < options.min_sstable_size && old_average_size < options.min_sstable_size)) {
auto bucket = std::move(it->second);
size_t total_size = bucket.size() * old_average_size;
size_t new_average_size = (total_size + size) / (bucket.size() + 1);
@@ -97,6 +97,11 @@ size_tiered_compaction_strategy::get_buckets(const std::vector<sstables::shared_
return bucket_list;
}
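For intuition, the membership test reads as below with the stock STCS defaults, which are assumptions here (bucket_low = 0.5, bucket_high = 1.5, min_sstable_size = 50 MB):

bool joins_bucket(uint64_t size, uint64_t avg) {
    constexpr double bucket_low = 0.5, bucket_high = 1.5; // assumed defaults
    constexpr uint64_t min_sstable_size = 50ull << 20;    // 50 MB, assumed default
    return (size > avg * bucket_low && size < avg * bucket_high)
        || (size < min_sstable_size && avg < min_sstable_size);
}
// joins_bucket(120 << 20, 100 << 20) == true  (50 MB < 120 MB < 150 MB)
// joins_bucket(300 << 20, 100 << 20) == false (outside both conditions)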
std::vector<std::vector<sstables::shared_sstable>>
size_tiered_compaction_strategy::get_buckets(const std::vector<sstables::shared_sstable>& sstables) const {
return get_buckets(sstables, _options);
}
std::vector<sstables::shared_sstable>
size_tiered_compaction_strategy::most_interesting_bucket(std::vector<std::vector<sstables::shared_sstable>> buckets,
unsigned min_threshold, unsigned max_threshold)
@@ -176,23 +181,28 @@ size_tiered_compaction_strategy::get_sstables_for_compaction(column_family& cfs,
return sstables::compaction_descriptor();
}
int64_t size_tiered_compaction_strategy::estimated_pending_compactions(const std::vector<sstables::shared_sstable>& sstables,
int min_threshold, int max_threshold, size_tiered_compaction_strategy_options options) {
int64_t n = 0;
for (auto& bucket : get_buckets(sstables, options)) {
if (bucket.size() >= size_t(min_threshold)) {
n += std::ceil(double(bucket.size()) / max_threshold);
}
}
return n;
}
int64_t size_tiered_compaction_strategy::estimated_pending_compactions(column_family& cf) const {
int min_threshold = cf.min_compaction_threshold();
int max_threshold = cf.schema()->max_compaction_threshold();
std::vector<sstables::shared_sstable> sstables;
int64_t n = 0;
sstables.reserve(cf.sstables_count());
for (auto& entry : *cf.get_sstables()) {
sstables.push_back(entry);
}
for (auto& bucket : get_buckets(sstables)) {
if (bucket.size() >= size_t(min_threshold)) {
n += std::ceil(double(bucket.size()) / max_threshold);
}
}
return n;
return estimated_pending_compactions(sstables, min_threshold, max_threshold, _options);
}
std::vector<sstables::shared_sstable>

View File

@@ -116,9 +116,11 @@ class size_tiered_compaction_strategy : public compaction_strategy_impl {
compaction_backlog_tracker _backlog_tracker;
// Return a list of pair of shared_sstable and its respective size.
std::vector<std::pair<sstables::shared_sstable, uint64_t>> create_sstable_and_length_pairs(const std::vector<sstables::shared_sstable>& sstables) const;
static std::vector<std::pair<sstables::shared_sstable, uint64_t>> create_sstable_and_length_pairs(const std::vector<sstables::shared_sstable>& sstables);
// Group files of similar size into buckets.
static std::vector<std::vector<sstables::shared_sstable>> get_buckets(const std::vector<sstables::shared_sstable>& sstables, size_tiered_compaction_strategy_options options);
std::vector<std::vector<sstables::shared_sstable>> get_buckets(const std::vector<sstables::shared_sstable>& sstables) const;
// Maybe return a bucket of sstables to compact
@@ -154,6 +156,8 @@ public:
virtual compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<sstables::shared_sstable> candidates) override;
static int64_t estimated_pending_compactions(const std::vector<sstables::shared_sstable>& sstables,
int min_threshold, int max_threshold, size_tiered_compaction_strategy_options options);
virtual int64_t estimated_pending_compactions(column_family& cf) const override;
virtual compaction_strategy_type type() const {

View File

@@ -101,7 +101,7 @@ public:
incremental_selector make_incremental_selector() const;
};
std::unique_ptr<sstable_set_impl> make_partitioned_sstable_set(schema_ptr schema, bool use_level_metadata = true);
sstable_set make_partitioned_sstable_set(schema_ptr schema, lw_shared_ptr<sstable_list> all, bool use_level_metadata = true);
std::ostream& operator<<(std::ostream& os, const sstables::sstable_run& run);

View File

@@ -2012,6 +2012,11 @@ components_writer::components_writer(sstable& sst, const schema& s, file_writer&
, _tombstone_written(false)
, _range_tombstones(s)
{
// This can be 0 in some cases, which, albeit benign in itself, can wreak
// havoc in lower-level writer code, so clamp it to [1, +inf) here, which is
// exactly what callers used to do anyway.
estimated_partitions = std::max(uint64_t(1), estimated_partitions);
_sst._components->filter = utils::i_filter::get_filter(estimated_partitions, _schema.bloom_filter_fp_chance(), utils::filter_format::k_l_format);
_sst._pi_write.desired_block_size = cfg.promoted_index_block_size;
_sst._correctly_serialize_non_compound_range_tombstones = cfg.correctly_serialize_non_compound_range_tombstones;

View File

@@ -125,4 +125,194 @@ time_window_compaction_strategy::get_reshaping_job(std::vector<shared_sstable> i
return compaction_descriptor();
}
compaction_descriptor
time_window_compaction_strategy::get_sstables_for_compaction(column_family& cf, std::vector<shared_sstable> candidates) {
auto gc_before = gc_clock::now() - cf.schema()->gc_grace_seconds();
if (candidates.empty()) {
return compaction_descriptor();
}
// Find fully expired SSTables. Those will be included no matter what.
std::unordered_set<shared_sstable> expired;
if (db_clock::now() - _last_expired_check > _options.expired_sstable_check_frequency) {
clogger.debug("TWCS expired check sufficiently far in the past, checking for fully expired SSTables");
expired = get_fully_expired_sstables(cf, candidates, gc_before);
_last_expired_check = db_clock::now();
} else {
clogger.debug("TWCS skipping check for fully expired SSTables");
}
if (!expired.empty()) {
auto is_expired = [&] (const shared_sstable& s) { return expired.find(s) != expired.end(); };
candidates.erase(boost::remove_if(candidates, is_expired), candidates.end());
}
auto compaction_candidates = get_next_non_expired_sstables(cf, std::move(candidates), gc_before);
if (!expired.empty()) {
compaction_candidates.insert(compaction_candidates.end(), expired.begin(), expired.end());
}
return compaction_descriptor(std::move(compaction_candidates), cf.get_sstable_set(), service::get_local_compaction_priority());
}
time_window_compaction_strategy::bucket_compaction_mode
time_window_compaction_strategy::compaction_mode(const bucket_t& bucket, timestamp_type bucket_key,
timestamp_type now, size_t min_threshold) const {
// STCS will also be performed on older window buckets, to avoid bad write
// and space amplification when something like read repair causes small
// updates to those past windows.
if (bucket.size() >= 2 && !is_last_active_bucket(bucket_key, now) && _recent_active_windows.contains(bucket_key)) {
return bucket_compaction_mode::major;
} else if (bucket.size() >= size_t(min_threshold)) {
return bucket_compaction_mode::size_tiered;
}
return bucket_compaction_mode::none;
}
std::vector<shared_sstable>
time_window_compaction_strategy::get_next_non_expired_sstables(column_family& cf,
std::vector<shared_sstable> non_expiring_sstables, gc_clock::time_point gc_before) {
auto most_interesting = get_compaction_candidates(cf, non_expiring_sstables);
if (!most_interesting.empty()) {
return most_interesting;
}
// if there is no sstable to compact in the standard way, try compacting a single sstable
// whose droppable tombstone ratio is greater than the threshold.
auto e = boost::range::remove_if(non_expiring_sstables, [this, &gc_before] (const shared_sstable& sst) -> bool {
return !worth_dropping_tombstones(sst, gc_before);
});
non_expiring_sstables.erase(e, non_expiring_sstables.end());
if (non_expiring_sstables.empty()) {
return {};
}
auto it = boost::min_element(non_expiring_sstables, [] (auto& i, auto& j) {
return i->get_stats_metadata().min_timestamp < j->get_stats_metadata().min_timestamp;
});
return { *it };
}
std::vector<shared_sstable>
time_window_compaction_strategy::get_compaction_candidates(column_family& cf, std::vector<shared_sstable> candidate_sstables) {
auto p = get_buckets(std::move(candidate_sstables), _options);
// Update the highest window seen, if necessary
_highest_window_seen = std::max(_highest_window_seen, p.second);
update_estimated_compaction_by_tasks(p.first, cf.min_compaction_threshold(), cf.schema()->max_compaction_threshold());
return newest_bucket(std::move(p.first), cf.min_compaction_threshold(), cf.schema()->max_compaction_threshold(),
_options.sstable_window_size, _highest_window_seen, _stcs_options);
}
timestamp_type
time_window_compaction_strategy::get_window_lower_bound(std::chrono::seconds sstable_window_size, timestamp_type timestamp) {
using namespace std::chrono;
auto timestamp_in_sec = duration_cast<seconds>(microseconds(timestamp)).count();
// mask the timestamp down to a multiple of the window size to get its window's lower bound
auto window_lower_bound_in_sec = seconds(timestamp_in_sec - (timestamp_in_sec % sstable_window_size.count()));
return timestamp_type(duration_cast<microseconds>(window_lower_bound_in_sec).count());
}
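A worked example of the rounding, assuming a one-day window (timestamps are microseconds since the epoch, per the strategy's convention):

int64_t window_lower_bound_us(int64_t window_s, int64_t ts_us) {
    int64_t ts_s = ts_us / 1'000'000;
    return (ts_s - ts_s % window_s) * 1'000'000;
}
// window_lower_bound_us(86400, 1601469296000000 /* 2020-09-30 12:34:56 UTC */)
//   == 1601424000000000 /* 2020-09-30 00:00:00 UTC */
// so every sstable whose max timestamp falls on that day shares one bucket key.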
std::pair<std::map<timestamp_type, std::vector<shared_sstable>>, timestamp_type>
time_window_compaction_strategy::get_buckets(std::vector<shared_sstable> files, time_window_compaction_strategy_options& options) {
std::map<timestamp_type, std::vector<shared_sstable>> buckets;
timestamp_type max_timestamp = 0;
// Create a map to represent the buckets:
// for each sstable, add it to the time bucket
// keyed by the file's max timestamp rounded down to its window's lower bound
for (auto&& f : files) {
timestamp_type ts = to_timestamp_type(options.timestamp_resolution, f->get_stats_metadata().max_timestamp);
timestamp_type lower_bound = get_window_lower_bound(options.sstable_window_size, ts);
buckets[lower_bound].push_back(std::move(f));
max_timestamp = std::max(max_timestamp, lower_bound);
}
return std::make_pair(std::move(buckets), max_timestamp);
}
static std::ostream& operator<<(std::ostream& os, const std::map<timestamp_type, std::vector<shared_sstable>>& buckets) {
os << " buckets = {\n";
for (auto& bucket : buckets | boost::adaptors::reversed) {
os << format(" key={}, size={}\n", bucket.first, bucket.second.size());
}
os << " }\n";
return os;
}
std::vector<shared_sstable>
time_window_compaction_strategy::newest_bucket(std::map<timestamp_type, std::vector<shared_sstable>> buckets,
int min_threshold, int max_threshold, std::chrono::seconds sstable_window_size, timestamp_type now,
size_tiered_compaction_strategy_options& stcs_options) {
clogger.debug("time_window_compaction_strategy::newest_bucket:\n now {}\n{}", now, buckets);
for (auto&& key_bucket : buckets | boost::adaptors::reversed) {
auto key = key_bucket.first;
auto& bucket = key_bucket.second;
if (is_last_active_bucket(key, now)) {
_recent_active_windows.insert(key);
}
switch (compaction_mode(bucket, key, now, min_threshold)) {
case bucket_compaction_mode::size_tiered: {
// If we're in the newest bucket, we'll use STCS to prioritize sstables.
auto stcs_interesting_bucket = size_tiered_compaction_strategy::most_interesting_bucket(bucket, min_threshold, max_threshold, stcs_options);
// If the tables in the current bucket aren't eligible in the STCS strategy, we'll skip it and look for other buckets
if (!stcs_interesting_bucket.empty()) {
clogger.debug("bucket size {} >= 2, key {}, performing STCS on what's here", bucket.size(), key);
return stcs_interesting_bucket;
}
break;
}
case bucket_compaction_mode::major:
_recent_active_windows.erase(key);
clogger.debug("bucket size {} >= 2 and not in current bucket, key {}, compacting what's here", bucket.size(), key);
return trim_to_threshold(std::move(bucket), max_threshold);
default:
clogger.debug("No compaction necessary for bucket size {} , key {}, now {}", bucket.size(), key, now);
break;
}
}
return {};
}
std::vector<shared_sstable>
time_window_compaction_strategy::trim_to_threshold(std::vector<shared_sstable> bucket, int max_threshold) {
auto n = std::min(bucket.size(), size_t(max_threshold));
// Trim the largest sstables off the end to meet the maxThreshold
boost::partial_sort(bucket, bucket.begin() + n, [] (auto& i, auto& j) {
return i->ondisk_data_size() < j->ondisk_data_size();
});
bucket.resize(n);
return bucket;
}
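The partial sort above only needs to order the first n elements. A tiny standalone sketch (plain integers standing in for on-disk sizes, std::partial_sort in place of the boost wrapper, not part of the patch):

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<long> sizes{40, 10, 30, 20, 50};  // hypothetical on-disk sizes
    const std::size_t max_threshold = 3;
    const std::size_t n = std::min(sizes.size(), max_threshold);
    // Order the n smallest first, as trim_to_threshold() does...
    std::partial_sort(sizes.begin(), sizes.begin() + n, sizes.end());
    sizes.resize(n);  // ...then drop the largest sstables off the end.
    for (long s : sizes) std::printf("%ld ", s);  // prints: 10 20 30
}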
void time_window_compaction_strategy::update_estimated_compaction_by_tasks(std::map<timestamp_type, std::vector<shared_sstable>>& tasks,
int min_threshold, int max_threshold) {
int64_t n = 0;
timestamp_type now = _highest_window_seen;
for (auto& task : tasks) {
const bucket_t& bucket = task.second;
timestamp_type bucket_key = task.first;
switch (compaction_mode(bucket, bucket_key, now, min_threshold)) {
case bucket_compaction_mode::size_tiered:
n += size_tiered_compaction_strategy::estimated_pending_compactions(bucket, min_threshold, max_threshold, _stcs_options);
break;
case bucket_compaction_mode::major:
n++;
default:
break;
}
}
_estimated_remaining_tasks = n;
}
}

View File

@@ -141,6 +141,8 @@ class time_window_compaction_strategy : public compaction_strategy_impl {
int64_t _estimated_remaining_tasks = 0;
db_clock::time_point _last_expired_check;
timestamp_type _highest_window_seen;
// Keep track of all recent active windows that still need to be compacted into a single SSTable
std::unordered_set<timestamp_type> _recent_active_windows;
size_tiered_compaction_strategy_options _stcs_options;
compaction_backlog_tracker _backlog_tracker;
public:
@@ -149,37 +151,11 @@ public:
// Better to co-locate some windows into the same sstables than to OOM.
static constexpr uint64_t max_data_segregation_window_count = 100;
using bucket_t = std::vector<shared_sstable>;
enum class bucket_compaction_mode { none, size_tiered, major };
public:
time_window_compaction_strategy(const std::map<sstring, sstring>& options);
virtual compaction_descriptor get_sstables_for_compaction(column_family& cf, std::vector<shared_sstable> candidates) override {
auto gc_before = gc_clock::now() - cf.schema()->gc_grace_seconds();
if (candidates.empty()) {
return compaction_descriptor();
}
// Find fully expired SSTables. Those will be included no matter what.
std::unordered_set<shared_sstable> expired;
if (db_clock::now() - _last_expired_check > _options.expired_sstable_check_frequency) {
clogger.debug("TWCS expired check sufficiently far in the past, checking for fully expired SSTables");
expired = get_fully_expired_sstables(cf, candidates, gc_before);
_last_expired_check = db_clock::now();
} else {
clogger.debug("TWCS skipping check for fully expired SSTables");
}
if (!expired.empty()) {
auto is_expired = [&] (const shared_sstable& s) { return expired.find(s) != expired.end(); };
candidates.erase(boost::remove_if(candidates, is_expired), candidates.end());
}
auto compaction_candidates = get_next_non_expired_sstables(cf, std::move(candidates), gc_before);
if (!expired.empty()) {
compaction_candidates.insert(compaction_candidates.end(), expired.begin(), expired.end());
}
return compaction_descriptor(std::move(compaction_candidates), cf.get_sstable_set(), service::get_local_compaction_priority());
}
virtual compaction_descriptor get_sstables_for_compaction(column_family& cf, std::vector<shared_sstable> candidates) override;
private:
static timestamp_type
to_timestamp_type(time_window_compaction_strategy_options::timestamp_resolutions resolution, int64_t timestamp_from_sstable) {
@@ -193,114 +169,36 @@ private:
};
}
// Returns true if the bucket is the last (most recently active) one.
bool is_last_active_bucket(timestamp_type bucket_key, timestamp_type now) const {
return bucket_key >= now;
}
// Returns which compaction type should be performed on a given window bucket.
bucket_compaction_mode
compaction_mode(const bucket_t& bucket, timestamp_type bucket_key, timestamp_type now, size_t min_threshold) const;
std::vector<shared_sstable>
get_next_non_expired_sstables(column_family& cf, std::vector<shared_sstable> non_expiring_sstables, gc_clock::time_point gc_before) {
auto most_interesting = get_compaction_candidates(cf, non_expiring_sstables);
get_next_non_expired_sstables(column_family& cf, std::vector<shared_sstable> non_expiring_sstables, gc_clock::time_point gc_before);
if (!most_interesting.empty()) {
return most_interesting;
}
// if there is no sstable to compact the standard way, try compacting a single sstable whose droppable tombstone
// ratio is greater than the threshold.
auto e = boost::range::remove_if(non_expiring_sstables, [this, &gc_before] (const shared_sstable& sst) -> bool {
return !worth_dropping_tombstones(sst, gc_before);
});
non_expiring_sstables.erase(e, non_expiring_sstables.end());
if (non_expiring_sstables.empty()) {
return {};
}
auto it = boost::min_element(non_expiring_sstables, [] (auto& i, auto& j) {
return i->get_stats_metadata().min_timestamp < j->get_stats_metadata().min_timestamp;
});
return { *it };
}
std::vector<shared_sstable> get_compaction_candidates(column_family& cf, std::vector<shared_sstable> candidate_sstables) {
auto p = get_buckets(std::move(candidate_sstables), _options);
// Update the highest window seen, if necessary
_highest_window_seen = std::max(_highest_window_seen, p.second);
update_estimated_compaction_by_tasks(p.first, cf.min_compaction_threshold());
return newest_bucket(std::move(p.first), cf.min_compaction_threshold(), cf.schema()->max_compaction_threshold(),
_options.sstable_window_size, _highest_window_seen, _stcs_options);
}
std::vector<shared_sstable> get_compaction_candidates(column_family& cf, std::vector<shared_sstable> candidate_sstables);
public:
// Find the lowest timestamp for a window of the given size
static timestamp_type
get_window_lower_bound(std::chrono::seconds sstable_window_size, timestamp_type timestamp) {
using namespace std::chrono;
auto timestamp_in_sec = duration_cast<seconds>(microseconds(timestamp)).count();
// mask out window size from timestamp to get lower bound of its window
auto window_lower_bound_in_sec = seconds(timestamp_in_sec - (timestamp_in_sec % sstable_window_size.count()));
return timestamp_type(duration_cast<microseconds>(window_lower_bound_in_sec).count());
}
get_window_lower_bound(std::chrono::seconds sstable_window_size, timestamp_type timestamp);
// Group files with similar max timestamp into buckets.
// @return A pair, where the left element is the bucket representation (map of timestamp to sstables),
// and the right is the highest timestamp seen
static std::pair<std::map<timestamp_type, std::vector<shared_sstable>>, timestamp_type>
get_buckets(std::vector<shared_sstable> files, time_window_compaction_strategy_options& options) {
std::map<timestamp_type, std::vector<shared_sstable>> buckets;
get_buckets(std::vector<shared_sstable> files, time_window_compaction_strategy_options& options);
timestamp_type max_timestamp = 0;
// Create a map to represent the buckets:
// each sstable is added to the time bucket
// keyed by the file's max timestamp rounded down to the lower bound of its window
for (auto&& f : files) {
timestamp_type ts = to_timestamp_type(options.timestamp_resolution, f->get_stats_metadata().max_timestamp);
timestamp_type lower_bound = get_window_lower_bound(options.sstable_window_size, ts);
buckets[lower_bound].push_back(std::move(f));
max_timestamp = std::max(max_timestamp, lower_bound);
}
return std::make_pair(std::move(buckets), max_timestamp);
}
static std::vector<shared_sstable>
std::vector<shared_sstable>
newest_bucket(std::map<timestamp_type, std::vector<shared_sstable>> buckets, int min_threshold, int max_threshold,
std::chrono::seconds sstable_window_size, timestamp_type now, size_tiered_compaction_strategy_options& stcs_options) {
// If the current bucket has at least minThreshold SSTables, choose that one.
// For any other bucket, at least two SSTables are enough.
// In any case, limit to maxThreshold SSTables.
for (auto&& key_bucket : buckets | boost::adaptors::reversed) {
auto key = key_bucket.first;
auto& bucket = key_bucket.second;
clogger.trace("Key {}, now {}", key, now);
if (bucket.size() >= size_t(min_threshold) && key >= now) {
// If we're in the newest bucket, we'll use STCS to prioritize sstables
auto stcs_interesting_bucket = size_tiered_compaction_strategy::most_interesting_bucket(bucket, min_threshold, max_threshold, stcs_options);
// If the tables in the current bucket aren't eligible in the STCS strategy, we'll skip it and look for other buckets
if (!stcs_interesting_bucket.empty()) {
return stcs_interesting_bucket;
}
} else if (bucket.size() >= 2 && key < now) {
clogger.debug("bucket size {} >= 2 and not in current bucket, compacting what's here", bucket.size());
return trim_to_threshold(std::move(bucket), max_threshold);
} else {
clogger.debug("No compaction necessary for bucket size {} , key {}, now {}", bucket.size(), key, now);
}
}
return {};
}
std::chrono::seconds sstable_window_size, timestamp_type now, size_tiered_compaction_strategy_options& stcs_options);
static std::vector<shared_sstable>
trim_to_threshold(std::vector<shared_sstable> bucket, int max_threshold) {
auto n = std::min(bucket.size(), size_t(max_threshold));
// Trim the largest sstables off the end to meet the maxThreshold
boost::partial_sort(bucket, bucket.begin() + n, [] (auto& i, auto& j) {
return i->ondisk_data_size() < j->ondisk_data_size();
});
bucket.resize(n);
return bucket;
}
trim_to_threshold(std::vector<shared_sstable> bucket, int max_threshold);
static int64_t
get_window_for(const time_window_compaction_strategy_options& options, api::timestamp_type ts) {
@@ -312,23 +210,8 @@ public:
return timestamp_type(std::chrono::duration_cast<std::chrono::microseconds>(options.get_sstable_window_size()).count());
}
private:
void update_estimated_compaction_by_tasks(std::map<timestamp_type, std::vector<shared_sstable>>& tasks, int min_threshold) {
int64_t n = 0;
timestamp_type now = _highest_window_seen;
for (auto task : tasks) {
auto key = task.first;
// For current window, make sure it's compactable
auto count = task.second.size();
if (key >= now && count >= size_t(min_threshold)) {
n++;
} else if (key < now && count >= 2) {
n++;
}
}
_estimated_remaining_tasks = n;
}
void update_estimated_compaction_by_tasks(std::map<timestamp_type, std::vector<shared_sstable>>& tasks,
int min_threshold, int max_threshold);
friend class time_window_backlog_tracker;
public:

View File

@@ -229,7 +229,7 @@ void stream_session::init_messaging_service_handler() {
schema_ptr s = reader.schema();
auto& pc = service::get_local_streaming_priority();
return sst->write_components(std::move(reader), std::max(1ul, adjusted_estimated_partitions), s,
return sst->write_components(std::move(reader), adjusted_estimated_partitions, s,
cf->get_sstables_manager().configure_writer(),
encoding_stats{}, pc).then([sst] {
return sst->open_data();
@@ -317,6 +317,15 @@ void stream_session::init_messaging_service_handler() {
});
}
future<> stream_session::uninit_messaging_service_handler() {
return when_all_succeed(
ms().unregister_prepare_message(),
ms().unregister_prepare_done_message(),
ms().unregister_stream_mutation_fragments(),
ms().unregister_stream_mutation_done(),
ms().unregister_complete_message()).discard_result();
}
distributed<database>* stream_session::_db;
distributed<db::system_distributed_keyspace>* stream_session::_sys_dist_ks;
distributed<db::view::view_update_generator>* stream_session::_view_update_generator;
@@ -340,9 +349,13 @@ future<> stream_session::init_streaming_service(distributed<database>& db, distr
// });
return get_stream_manager().start().then([] {
gms::get_local_gossiper().register_(get_local_stream_manager().shared_from_this());
return _db->invoke_on_all([] (auto& db) {
init_messaging_service_handler();
});
return smp::invoke_on_all([] { init_messaging_service_handler(); });
});
}
future<> stream_session::uninit_streaming_service() {
return smp::invoke_on_all([] {
return uninit_messaging_service_handler();
});
}

View File

@@ -142,6 +142,7 @@ private:
using token = dht::token;
using ring_position = dht::ring_position;
static void init_messaging_service_handler();
static future<> uninit_messaging_service_handler();
static distributed<database>* _db;
static distributed<db::system_distributed_keyspace>* _sys_dist_ks;
static distributed<db::view::view_update_generator>* _view_update_generator;
@@ -152,6 +153,7 @@ public:
static database& get_local_db() { return _db->local(); }
static distributed<database>& get_db() { return *_db; };
static future<> init_streaming_service(distributed<database>& db, distributed<db::system_distributed_keyspace>& sys_dist_ks, distributed<db::view::view_update_generator>& view_update_generator);
static future<> uninit_streaming_service();
public:
/**
* Streaming endpoint.

View File

@@ -23,7 +23,6 @@
#include "sstables/sstables.hh"
#include "sstables/sstables_manager.hh"
#include "service/priority_manager.hh"
#include "db/view/view_updating_consumer.hh"
#include "db/schema_tables.hh"
#include "cell_locking.hh"
#include "mutation_fragment.hh"
@@ -326,6 +325,32 @@ flat_mutation_reader make_range_sstable_reader(schema_ptr s,
fwd_mr);
}
flat_mutation_reader make_restricted_range_sstable_reader(schema_ptr s,
reader_permit permit,
lw_shared_ptr<sstables::sstable_set> sstables,
const dht::partition_range& pr,
const query::partition_slice& slice,
const io_priority_class& pc,
tracing::trace_state_ptr trace_state,
streamed_mutation::forwarding fwd,
mutation_reader::forwarding fwd_mr,
sstables::read_monitor_generator& monitor_generator)
{
auto ms = mutation_source([sstables=std::move(sstables), &monitor_generator] (
schema_ptr s,
reader_permit permit,
const dht::partition_range& pr,
const query::partition_slice& slice,
const io_priority_class& pc,
tracing::trace_state_ptr trace_state,
streamed_mutation::forwarding fwd,
mutation_reader::forwarding fwd_mr) {
return make_range_sstable_reader(std::move(s), std::move(permit), std::move(sstables), pr, slice, pc,
std::move(trace_state), fwd, fwd_mr, monitor_generator);
});
return make_restricted_flat_reader(std::move(ms), std::move(s), std::move(permit), pr, slice, pc, std::move(trace_state), fwd, fwd_mr);
}
flat_mutation_reader
table::make_sstable_reader(schema_ptr s,
reader_permit permit,
@@ -1396,7 +1421,7 @@ future<std::unordered_set<sstring>> table::get_sstables_by_partition_key(const s
[this] (std::unordered_set<sstring>& filenames, lw_shared_ptr<sstables::sstable_set::incremental_selector>& sel, partition_key& pk) {
return do_with(dht::decorated_key(dht::decorate_key(*_schema, pk)),
[this, &filenames, &sel, &pk](dht::decorated_key& dk) mutable {
auto sst = sel->select(dk).sstables;
const auto& sst = sel->select(dk).sstables;
auto hk = sstables::sstable::make_hashed_key(*_schema, dk.key());
return do_for_each(sst, [this, &filenames, &dk, hk = std::move(hk)] (std::vector<sstables::shared_sstable>::const_iterator::reference s) mutable {
@@ -1425,7 +1450,7 @@ std::vector<sstables::shared_sstable> table::select_sstables(const dht::partitio
return _sstables->select(range);
}
std::vector<sstables::shared_sstable> table::candidates_for_compaction() const {
std::vector<sstables::shared_sstable> table::non_staging_sstables() const {
return boost::copy_range<std::vector<sstables::shared_sstable>>(*get_sstables()
| boost::adaptors::filtered([this] (auto& sst) {
return !_sstables_need_rewrite.count(sst->generation()) && !_sstables_staging.count(sst->generation());
@@ -1958,6 +1983,11 @@ void table::set_schema(schema_ptr s) {
}
_schema = std::move(s);
for (auto&& v : _views) {
v->view_info()->set_base_info(
v->view_info()->make_base_dependent_view_info(*_schema));
}
set_compaction_strategy(_schema->compaction_strategy());
trigger_compaction();
}
@@ -1969,7 +1999,8 @@ static std::vector<view_ptr>::iterator find_view(std::vector<view_ptr>& views, c
}
void table::add_or_update_view(view_ptr v) {
v->view_info()->initialize_base_dependent_fields(*schema());
v->view_info()->set_base_info(
v->view_info()->make_base_dependent_view_info(*_schema));
auto existing = find_view(_views, v);
if (existing != _views.end()) {
*existing = std::move(v);
@@ -2022,7 +2053,7 @@ static size_t memory_usage_of(const std::vector<frozen_mutation_and_schema>& ms)
* @return a future resolving to the mutations to apply to the views, which can be empty.
*/
future<> table::generate_and_propagate_view_updates(const schema_ptr& base,
std::vector<view_ptr>&& views,
std::vector<db::view::view_and_base>&& views,
mutation&& m,
flat_mutation_reader_opt existings,
tracing::trace_state_ptr tr_state,
@@ -2134,7 +2165,7 @@ table::local_base_lock(
* @return a future that resolves when the updates have been acknowledged by the view replicas
*/
future<> table::populate_views(
std::vector<view_ptr> views,
std::vector<db::view::view_and_base> views,
dht::token base_token,
flat_mutation_reader&& reader,
gc_clock::time_point now) {
@@ -2505,7 +2536,7 @@ future<row_locker::lock_holder> table::do_push_view_replica_updates(const schema
utils::get_local_injector().inject("table_push_view_replica_updates_stale_time_point", [&now] {
now -= 10s;
});
auto views = affected_views(base, m, now);
auto views = db::view::with_base_info_snapshot(affected_views(base, m, now));
if (views.empty()) {
return make_ready_future<row_locker::lock_holder>();
}
@@ -2588,16 +2619,3 @@ table::as_mutation_source_excluding(std::vector<sstables::shared_sstable>& ssts)
return this->make_reader_excluding_sstables(std::move(s), std::move(permit), ssts, range, slice, pc, std::move(trace_state), fwd, fwd_mr);
});
}
stop_iteration db::view::view_updating_consumer::consume_end_of_partition() {
if (_as->abort_requested()) {
return stop_iteration::yes;
}
try {
auto lock_holder = _table->stream_view_replica_updates(_schema, std::move(*_m), db::no_timeout, _excluded_sstables).get();
} catch (...) {
tlogger.warn("Failed to push replica updates for table {}.{}: {}", _schema->ks_name(), _schema->cf_name(), std::current_exception());
}
_m.reset();
return stop_iteration::no;
}

View File

@@ -447,6 +447,9 @@ async def run_test(test, options, gentle_kill=False, env=dict()):
env=dict(os.environ,
UBSAN_OPTIONS=":".join(filter(None, UBSAN_OPTIONS)),
ASAN_OPTIONS=":".join(filter(None, ASAN_OPTIONS)),
# The TMPDIR env variable is used by every seastar/scylla
# test as the directory in which to store temporary test data.
TMPDIR=os.path.join(options.tmpdir, test.mode),
**env,
),
preexec_fn=os.setsid,

View File

@@ -28,8 +28,8 @@ fi
SCYLLA_IP=127.1.$(($$ >> 8 & 255)).$(($$ & 255))
echo "Running Scylla on $SCYLLA_IP"
tmp_dir=/tmp/alternator-test-$$
mkdir $tmp_dir
tmp_dir="$(readlink -e ${TMPDIR-/tmp})"/alternator-test-$$
mkdir "$tmp_dir"
# We run the cleanup() function on exit for any reason - successful finish
# of the script, an error (since we have "set -e"), or a signal.
@@ -76,7 +76,7 @@ done
# argv[0] isn't good enough - because killall inspects the actual executable
# filename in /proc/<pid>/stat. So we need to name the executable differently.
# Luckily, using a symbolic link is good enough.
SCYLLA_LINK=$tmp_dir/test_scylla
SCYLLA_LINK="$tmp_dir"/test_scylla
ln -s "$SCYLLA" "$SCYLLA_LINK"
"$SCYLLA_LINK" --options-file "$source_path/conf/scylla.yaml" \

View File

@@ -522,6 +522,15 @@ def test_update_expected_1_null(test_table_s):
Expected={'a': {'ComparisonOperator': 'NULL', 'AttributeValueList': [2]}}
)
# When ComparisonOperator = "NULL", AttributeValueList should be empty if it
# exists, but as this test verifies, it may also be missing completely.
def test_update_expected_1_null_missing_list(test_table_s):
p = random_string()
test_table_s.update_item(Key={'p': p},
AttributeUpdates={'a': {'Value': 2, 'Action': 'PUT'}},
Expected={'a': {'ComparisonOperator': 'NULL'}})
assert test_table_s.get_item(Key={'p': p}, ConsistentRead=True)['Item']['a'] == 2
# Tests for Expected with ComparisonOperator = "CONTAINS":
def test_update_expected_1_contains(test_table_s):
# true cases. CONTAINS can be used for two unrelated things: check substrings

View File

@@ -308,3 +308,17 @@ def test_list_tables_wrong_limit(dynamodb):
# lower limit (min. 1) is imposed by boto3 library checks
with pytest.raises(ClientError, match='ValidationException'):
dynamodb.meta.client.list_tables(Limit=101)
# Even before Alternator gains support for configuring server-side encryption
# ("encryption at rest") with CreateTable's SSESpecification option, we should
# support the option "Enabled=false" which is the default, and means the server
# takes care of whatever server-side encryption is done, on its own.
# Reproduces issue #7031.
def test_table_sse_off(dynamodb):
# If SSESpecification is given but has Enabled=false, it's as if
# SSESpecification was missing, and fine. No other attributes are
# necessary.
table = create_test_table(dynamodb, SSESpecification = {'Enabled': False},
KeySchema=[{ 'AttributeName': 'p', 'KeyType': 'HASH' }],
AttributeDefinitions=[{ 'AttributeName': 'p', 'AttributeType': 'S' }]);
table.delete();

View File

@@ -25,7 +25,7 @@ import pytest
from botocore.exceptions import ClientError
import re
import time
from util import multiset, create_test_table
from util import multiset, create_test_table, test_table_name
def delete_tags(table, arn):
got = table.meta.client.list_tags_of_resource(ResourceArn=arn)
@@ -156,6 +156,56 @@ def test_tag_resource_write_isolation_values(scylla_only, test_table):
with pytest.raises(ClientError, match='ValidationException'):
test_table.meta.client.tag_resource(ResourceArn=arn, Tags=[{'Key':'system:write_isolation', 'Value':'bah'}])
# Test that when trying to create a table with forbidden tags (in this test,
# a list of tags longer than the maximum allowed of 50 tags), the table
# is not created at all.
def test_too_long_tags_from_creation(dynamodb):
# The feature of creating a table already with tags was only added to
# DynamoDB in April 2019, and to the botocore library in version 1.12.136,
# so older versions of the library cannot run this test.
import botocore
from distutils.version import LooseVersion
if (LooseVersion(botocore.__version__) < LooseVersion('1.12.136')):
pytest.skip("Botocore version 1.12.136 or above required to run this test")
name = test_table_name()
# Setting 100 tags is not allowed, the following table creation should fail:
with pytest.raises(ClientError, match='ValidationException'):
dynamodb.create_table(TableName=name,
BillingMode='PAY_PER_REQUEST',
KeySchema=[{ 'AttributeName': 'p', 'KeyType': 'HASH' }],
AttributeDefinitions=[{ 'AttributeName': 'p', 'AttributeType': 'S' }],
Tags=[{'Key': str(i), 'Value': str(i)} for i in range(100)])
# After the table creation failed, the table should not exist.
with pytest.raises(ClientError, match='ResourceNotFoundException'):
dynamodb.meta.client.describe_table(TableName=name)
# This test is similar to the above, but uses another case of forbidden tags -
# here an illegal value for the system::write_isolation tag. This is a
# scylla_only test because only Alternator checks the validity of the
# system::write_isolation tag.
# Reproduces issue #6809, where the table creation appeared to fail, but it
# was actually created (without the tag).
def test_forbidden_tags_from_creation(scylla_only, dynamodb):
# The feature of creating a table already with tags was only added to
# DynamoDB in April 2019, and to the botocore library in version 1.12.136,
# so older versions of the library cannot run this test.
import botocore
from distutils.version import LooseVersion
if (LooseVersion(botocore.__version__) < LooseVersion('1.12.136')):
pytest.skip("Botocore version 1.12.136 or above required to run this test")
name = test_table_name()
# It is not allowed to set the system:write_isolation to "dog", so the
# following table creation should fail:
with pytest.raises(ClientError, match='ValidationException'):
dynamodb.create_table(TableName=name,
BillingMode='PAY_PER_REQUEST',
KeySchema=[{ 'AttributeName': 'p', 'KeyType': 'HASH' }],
AttributeDefinitions=[{ 'AttributeName': 'p', 'AttributeType': 'S' }],
Tags=[{'Key': 'system:write_isolation', 'Value': 'dog'}])
# After the table creation failed, the table should not exist.
with pytest.raises(ClientError, match='ResourceNotFoundException'):
dynamodb.meta.client.describe_table(TableName=name)
# Test checking that unicode tags are allowed
@pytest.mark.xfail(reason="unicode tags not yet supported")
def test_tag_resource_unicode(test_table):

View File

@@ -64,7 +64,7 @@ SEASTAR_TEST_CASE(test_execute_batch) {
auto version = netw::messaging_service::current_version;
auto bm = bp.get_batch_log_mutation_for({ m }, s->id(), version, db_clock::now() - db_clock::duration(3h));
return qp.proxy().mutate_locally(bm, tracing::trace_state_ptr()).then([&bp] () mutable {
return qp.proxy().mutate_locally(bm, tracing::trace_state_ptr(), db::commitlog::force_sync::no).then([&bp] () mutable {
return bp.count_all_batches().then([](auto n) {
BOOST_CHECK_EQUAL(n, 1);
}).then([&bp] () mutable {

View File

@@ -157,6 +157,13 @@ BOOST_AUTO_TEST_CASE(test_big_decimal_div) {
test_div("-0.25", 10, "-0.02");
test_div("-0.26", 10, "-0.03");
test_div("-10E10", 3, "-3E10");
// Document a small oddity: 1e1 has -1 decimal places, so dividing
// it by 2 produces 0 (i.e. 0e1). This is not the behavior in cassandra, but
// scylla doesn't expose arithmetic operations, so this doesn't
// seem to be visible from CQL.
test_div("10", 2, "5");
test_div("1e1", 2, "0e1");
}
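To make the oddity concrete, here is a hypothetical sketch of the scaled-decimal arithmetic involved (not Scylla's big_decimal API): a decimal stores an unscaled integer plus a scale, and dividing by an integer acts on the unscaled part.

#include <cstdint>
#include <cstdio>

// Hypothetical scaled decimal: value = unscaled * 10^(-scale).
// "1e1" is stored as unscaled=1, scale=-1, the "-1 decimal places" above.
struct dec { int64_t unscaled; int scale; };

dec div_by_int(dec d, int64_t n) {
    // Integer division of the unscaled part keeps the scale, so
    // 1e1 / 2 -> unscaled 1 / 2 == 0 with scale -1, i.e. "0e1".
    return {d.unscaled / n, d.scale};
}

int main() {
    dec r = div_by_int(dec{1, -1}, 2);                          // 1e1 / 2
    std::printf("%llde%d\n", (long long)r.unscaled, -r.scale);  // prints 0e1
}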
BOOST_AUTO_TEST_CASE(test_big_decimal_assignadd) {

View File

@@ -142,6 +142,19 @@ SEASTAR_TEST_CASE(test_decimal_to_bigint) {
});
}
SEASTAR_TEST_CASE(test_decimal_to_float) {
return do_with_cql_env_thread([&](auto& e) {
e.execute_cql("CREATE TABLE test (key text primary key, value decimal)").get();
e.execute_cql("INSERT INTO test (key, value) VALUES ('k1', 10)").get();
e.execute_cql("INSERT INTO test (key, value) VALUES ('k2', 1e1)").get();
auto v = e.execute_cql("SELECT key, CAST(value as float) from test").get0();
assert_that(v).is_rows().with_rows_ignore_order({
{{serialized("k1")}, {serialized(float(10))}},
{{serialized("k2")}, {serialized(float(10))}},
});
});
}
SEASTAR_TEST_CASE(test_varint_to_bigint) {
return do_with_cql_env_thread([&](auto& e) {
e.execute_cql("CREATE TABLE test (key text primary key, value varint)").get();

View File

@@ -3479,10 +3479,13 @@ SEASTAR_TEST_CASE(test_select_with_mixed_order_table) {
}
uint64_t
run_and_examine_cache_stat_change(cql_test_env& e, uint64_t cache_tracker::stats::*metric, std::function<void (cql_test_env& e)> func) {
run_and_examine_cache_read_stats_change(cql_test_env& e, std::string_view cf_name, std::function<void (cql_test_env& e)> func) {
auto read_stat = [&] {
auto local_read_metric = [metric] (database& db) { return db.row_cache_tracker().get_stats().*metric; };
return e.db().map_reduce0(local_read_metric, uint64_t(0), std::plus<uint64_t>()).get0();
return e.db().map_reduce0([&cf_name] (const database& db) {
auto& t = db.find_column_family("ks", cf_name);
auto& stats = t.get_row_cache().stats();
return stats.reads_with_misses.count() + stats.reads_with_no_misses.count();
}, uint64_t(0), std::plus<uint64_t>()).get0();
};
auto before = read_stat();
func(e);
@@ -3493,11 +3496,11 @@ run_and_examine_cache_stat_change(cql_test_env& e, uint64_t cache_tracker::stats
SEASTAR_TEST_CASE(test_cache_bypass) {
return do_with_cql_env_thread([] (cql_test_env& e) {
e.execute_cql("CREATE TABLE t (k int PRIMARY KEY)").get();
auto with_cache = run_and_examine_cache_stat_change(e, &cache_tracker::stats::reads, [] (cql_test_env& e) {
auto with_cache = run_and_examine_cache_read_stats_change(e, "t", [] (cql_test_env& e) {
e.execute_cql("SELECT * FROM t").get();
});
BOOST_REQUIRE(with_cache >= smp::count); // scan may make multiple passes per shard
auto without_cache = run_and_examine_cache_stat_change(e, &cache_tracker::stats::reads, [] (cql_test_env& e) {
auto without_cache = run_and_examine_cache_read_stats_change(e, "t", [] (cql_test_env& e) {
e.execute_cql("SELECT * FROM t BYPASS CACHE").get();
});
BOOST_REQUIRE_EQUAL(without_cache, 0);
@@ -4583,3 +4586,30 @@ SEASTAR_TEST_CASE(test_internal_alter_table_on_a_distributed_table) {
});
});
}
SEASTAR_TEST_CASE(test_impossible_where) {
return do_with_cql_env_thread([] (cql_test_env& e) {
cquery_nofail(e, "CREATE TABLE t(p int PRIMARY KEY, r int)");
cquery_nofail(e, "INSERT INTO t(p,r) VALUES (0, 0)");
cquery_nofail(e, "INSERT INTO t(p,r) VALUES (1, 10)");
cquery_nofail(e, "INSERT INTO t(p,r) VALUES (2, 20)");
require_rows(e, "SELECT * FROM t WHERE r>10 AND r<10 ALLOW FILTERING", {});
require_rows(e, "SELECT * FROM t WHERE r>=10 AND r<=0 ALLOW FILTERING", {});
cquery_nofail(e, "CREATE TABLE t2(p int, c int, PRIMARY KEY(p, c)) WITH CLUSTERING ORDER BY (c DESC)");
cquery_nofail(e, "INSERT INTO t2(p,c) VALUES (0, 0)");
cquery_nofail(e, "INSERT INTO t2(p,c) VALUES (1, 10)");
cquery_nofail(e, "INSERT INTO t2(p,c) VALUES (2, 20)");
require_rows(e, "SELECT * FROM t2 WHERE c>10 AND c<10 ALLOW FILTERING", {});
require_rows(e, "SELECT * FROM t2 WHERE c>=10 AND c<=0 ALLOW FILTERING", {});
});
}
SEASTAR_TEST_CASE(test_counter_column_added_into_non_counter_table) {
return do_with_cql_env_thread([] (cql_test_env& e) {
cquery_nofail(e, "CREATE TABLE t (pk int, ck int, PRIMARY KEY(pk, ck))");
BOOST_REQUIRE_THROW(e.execute_cql("ALTER TABLE t ADD \"c\" counter;").get(),
exceptions::configuration_exception);
});
}

View File

@@ -1134,6 +1134,9 @@ SEASTAR_TEST_CASE(test_filtering) {
{ int32_type->decompose(8), int32_type->decompose(3) },
{ int32_type->decompose(9), int32_type->decompose(3) },
});
require_rows(e, "SELECT k FROM cf WHERE k=12 AND (m,n)>=(4,0) ALLOW FILTERING;", {
{ int32_type->decompose(12), int32_type->decompose(4), int32_type->decompose(5)},
});
}
// test filtering on clustering keys

View File

@@ -870,8 +870,6 @@ SEASTAR_TEST_CASE(reader_selector_fast_forwarding_test) {
});
}
static const std::size_t new_reader_base_cost{16 * 1024};
sstables::shared_sstable create_sstable(sstables::test_env& env, simple_schema& sschema, const sstring& path) {
std::vector<mutation> mutations;
mutations.reserve(1 << 14);
@@ -2588,6 +2586,7 @@ SEASTAR_THREAD_TEST_CASE(test_queue_reader) {
BOOST_REQUIRE_THROW(handle.push(partition_end{}).get(), std::runtime_error);
BOOST_REQUIRE_THROW(handle.push_end_of_stream(), std::runtime_error);
BOOST_REQUIRE_THROW(fill_buffer_fut.get(), broken_promise);
}
// Abandoned handle aborts, move-assignment
@@ -2850,3 +2849,488 @@ SEASTAR_THREAD_TEST_CASE(test_manual_paused_evictable_reader_is_mutation_source)
run_mutation_source_tests(make_populate);
}
namespace {
std::deque<mutation_fragment> copy_fragments(const schema& s, const std::deque<mutation_fragment>& o) {
std::deque<mutation_fragment> buf;
for (const auto& mf : o) {
buf.emplace_back(s, mf);
}
return buf;
}
flat_mutation_reader create_evictable_reader_and_evict_after_first_buffer(
schema_ptr schema,
reader_permit permit,
const dht::partition_range& prange,
const query::partition_slice& slice,
std::deque<mutation_fragment> first_buffer,
position_in_partition_view last_fragment_position,
std::deque<mutation_fragment> second_buffer,
size_t max_buffer_size) {
class factory {
schema_ptr _schema;
std::optional<std::deque<mutation_fragment>> _first_buffer;
std::optional<std::deque<mutation_fragment>> _second_buffer;
size_t _max_buffer_size;
private:
std::optional<std::deque<mutation_fragment>> copy_buffer(const std::optional<std::deque<mutation_fragment>>& o) {
if (!o) {
return {};
}
return copy_fragments(*_schema, *o);
}
public:
factory(schema_ptr schema, std::deque<mutation_fragment> first_buffer, std::deque<mutation_fragment> second_buffer, size_t max_buffer_size)
: _schema(std::move(schema)), _first_buffer(std::move(first_buffer)), _second_buffer(std::move(second_buffer)), _max_buffer_size(max_buffer_size) {
}
factory(const factory& o)
: _schema(o._schema)
, _first_buffer(copy_buffer(o._first_buffer))
, _second_buffer(copy_buffer(o._second_buffer)) {
}
factory(factory&& o) = default;
flat_mutation_reader operator()(
schema_ptr s,
reader_permit permit,
const dht::partition_range& range,
const query::partition_slice& slice,
const io_priority_class& pc,
tracing::trace_state_ptr trace_state,
streamed_mutation::forwarding fwd_sm,
mutation_reader::forwarding fwd_mr) {
BOOST_REQUIRE(s == _schema);
if (_first_buffer) {
auto buf = *std::exchange(_first_buffer, {});
auto rd = make_flat_mutation_reader_from_fragments(_schema, std::move(buf));
rd.set_max_buffer_size(_max_buffer_size);
return rd;
}
if (_second_buffer) {
auto buf = *std::exchange(_second_buffer, {});
auto rd = make_flat_mutation_reader_from_fragments(_schema, std::move(buf));
rd.set_max_buffer_size(_max_buffer_size);
return rd;
}
return make_empty_flat_reader(_schema);
}
};
auto ms = mutation_source(factory(schema, std::move(first_buffer), std::move(second_buffer), max_buffer_size));
auto [rd, handle] = make_manually_paused_evictable_reader(
std::move(ms),
schema,
permit,
prange,
slice,
seastar::default_priority_class(),
nullptr,
mutation_reader::forwarding::yes);
rd.set_max_buffer_size(max_buffer_size);
rd.fill_buffer(db::no_timeout).get0();
const auto eq_cmp = position_in_partition::equal_compare(*schema);
BOOST_REQUIRE(rd.is_buffer_full());
BOOST_REQUIRE(eq_cmp(rd.buffer().back().position(), last_fragment_position));
BOOST_REQUIRE(!rd.is_end_of_stream());
rd.detach_buffer();
handle.pause();
while(permit.semaphore().try_evict_one_inactive_read());
return std::move(rd);
}
}
SEASTAR_THREAD_TEST_CASE(test_evictable_reader_trim_range_tombstones) {
reader_concurrency_semaphore semaphore(reader_concurrency_semaphore::no_limits{}, get_name());
simple_schema s;
const auto pkey = s.make_pkey();
size_t max_buffer_size = 512;
const int first_ck = 100;
const int second_buffer_ck = first_ck + 100;
size_t mem_usage = 0;
std::deque<mutation_fragment> first_buffer;
first_buffer.emplace_back(partition_start{pkey, {}});
mem_usage = first_buffer.back().memory_usage(*s.schema());
for (int i = 0; i < second_buffer_ck; ++i) {
first_buffer.emplace_back(s.make_row(s.make_ckey(i++), "v"));
mem_usage += first_buffer.back().memory_usage(*s.schema());
}
const auto last_fragment_position = position_in_partition(first_buffer.back().position());
max_buffer_size = mem_usage;
first_buffer.emplace_back(s.make_row(s.make_ckey(second_buffer_ck), "v"));
std::deque<mutation_fragment> second_buffer;
second_buffer.emplace_back(partition_start{pkey, {}});
mem_usage = second_buffer.back().memory_usage(*s.schema());
second_buffer.emplace_back(s.make_range_tombstone(query::clustering_range::make_ending_with(s.make_ckey(second_buffer_ck + 10))));
int ckey = second_buffer_ck;
while (mem_usage <= max_buffer_size) {
second_buffer.emplace_back(s.make_row(s.make_ckey(ckey++), "v"));
mem_usage += second_buffer.back().memory_usage(*s.schema());
}
second_buffer.emplace_back(partition_end{});
auto rd = create_evictable_reader_and_evict_after_first_buffer(s.schema(), semaphore.make_permit(), query::full_partition_range,
s.schema()->full_slice(), std::move(first_buffer), last_fragment_position, std::move(second_buffer), max_buffer_size);
rd.fill_buffer(db::no_timeout).get();
const auto tri_cmp = position_in_partition::tri_compare(*s.schema());
BOOST_REQUIRE(tri_cmp(last_fragment_position, rd.peek_buffer().position()) < 0);
}
namespace {
void check_evictable_reader_validation_is_triggered(
std::string_view test_name,
std::string_view error_prefix, // empty str if no exception is expected
schema_ptr schema,
reader_permit permit,
const dht::partition_range& prange,
const query::partition_slice& slice,
std::deque<mutation_fragment> first_buffer,
position_in_partition_view last_fragment_position,
std::deque<mutation_fragment> second_buffer,
size_t max_buffer_size) {
testlog.info("check_evictable_reader_validation_is_triggered(): checking {} test case: {}", error_prefix.empty() ? "positive" : "negative", test_name);
auto rd = create_evictable_reader_and_evict_after_first_buffer(std::move(schema), std::move(permit), prange, slice, std::move(first_buffer),
last_fragment_position, std::move(second_buffer), max_buffer_size);
const bool fail = !error_prefix.empty();
try {
rd.fill_buffer(db::no_timeout).get0();
} catch (std::runtime_error& e) {
if (fail) {
if (error_prefix == std::string_view(e.what(), error_prefix.size())) {
testlog.trace("Expected exception caught: {}", std::current_exception());
return;
} else {
BOOST_FAIL(fmt::format("Exception with unexpected message caught: {}", std::current_exception()));
}
} else {
BOOST_FAIL(fmt::format("Unexpected exception caught: {}", std::current_exception()));
}
}
if (fail) {
BOOST_FAIL(fmt::format("Expected exception not thrown"));
}
}
}
SEASTAR_THREAD_TEST_CASE(test_evictable_reader_self_validation) {
set_abort_on_internal_error(false);
auto reset_on_internal_abort = defer([] {
set_abort_on_internal_error(true);
});
reader_concurrency_semaphore semaphore(reader_concurrency_semaphore::no_limits{}, get_name());
simple_schema s;
auto pkeys = s.make_pkeys(4);
std::ranges::sort(pkeys, dht::decorated_key::less_comparator(s.schema()));
size_t max_buffer_size = 512;
const int first_ck = 100;
const int second_buffer_ck = first_ck + 100;
const int last_ck = second_buffer_ck + 100;
static const char partition_error_prefix[] = "maybe_validate_partition_start(): validation failed";
static const char position_in_partition_error_prefix[] = "validate_position_in_partition(): validation failed";
static const char trim_range_tombstones_error_prefix[] = "maybe_trim_range_tombstone(): validation failed";
const auto prange = dht::partition_range::make(
dht::partition_range::bound(pkeys[1], true),
dht::partition_range::bound(pkeys[2], true));
const auto ckrange = query::clustering_range::make(
query::clustering_range::bound(s.make_ckey(first_ck), true),
query::clustering_range::bound(s.make_ckey(last_ck), true));
const auto slice = partition_slice_builder(*s.schema()).with_range(ckrange).build();
std::deque<mutation_fragment> first_buffer;
first_buffer.emplace_back(partition_start{pkeys[1], {}});
size_t mem_usage = first_buffer.back().memory_usage(*s.schema());
for (int i = 0; i < second_buffer_ck; ++i) {
first_buffer.emplace_back(s.make_row(s.make_ckey(i++), "v"));
mem_usage += first_buffer.back().memory_usage(*s.schema());
}
max_buffer_size = mem_usage;
auto last_fragment_position = position_in_partition(first_buffer.back().position());
first_buffer.emplace_back(s.make_row(s.make_ckey(second_buffer_ck), "v"));
auto make_second_buffer = [&s, &max_buffer_size, second_buffer_ck] (dht::decorated_key pkey, std::optional<int> first_ckey = {},
bool inject_range_tombstone = false) mutable {
auto ckey = first_ckey ? *first_ckey : second_buffer_ck;
std::deque<mutation_fragment> second_buffer;
second_buffer.emplace_back(partition_start{std::move(pkey), {}});
size_t mem_usage = second_buffer.back().memory_usage(*s.schema());
if (inject_range_tombstone) {
second_buffer.emplace_back(s.make_range_tombstone(query::clustering_range::make_ending_with(s.make_ckey(last_ck))));
}
while (mem_usage <= max_buffer_size) {
second_buffer.emplace_back(s.make_row(s.make_ckey(ckey++), "v"));
mem_usage += second_buffer.back().memory_usage(*s.schema());
}
second_buffer.emplace_back(partition_end{});
return second_buffer;
};
//
// Continuing the same partition
//
check_evictable_reader_validation_is_triggered(
"pkey < _last_pkey; pkey ∉ prange",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[0]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey",
"",
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∉ ckrange (<)",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], first_ck - 10),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∉ ckrange (<); start with trimmable range-tombstone",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], first_ck - 10, true),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∉ ckrange; position_in_partition < _next_position_in_partition",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], second_buffer_ck - 2),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∉ ckrange; position_in_partition < _next_position_in_partition; start with trimmable range-tombstone",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], second_buffer_ck - 2, true),
max_buffer_size);
{
auto second_buffer = make_second_buffer(pkeys[1], second_buffer_ck);
second_buffer[1] = s.make_range_tombstone(query::clustering_range::make_ending_with(s.make_ckey(second_buffer_ck - 10)));
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; end(range_tombstone) < _next_position_in_partition",
trim_range_tombstones_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
std::move(second_buffer),
max_buffer_size);
}
{
auto second_buffer = make_second_buffer(pkeys[1], second_buffer_ck);
second_buffer[1] = s.make_range_tombstone(query::clustering_range::make_ending_with(s.make_ckey(second_buffer_ck + 10)));
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; end(range_tombstone) > _next_position_in_partition",
"",
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
std::move(second_buffer),
max_buffer_size);
}
{
auto second_buffer = make_second_buffer(pkeys[1], second_buffer_ck);
second_buffer[1] = s.make_range_tombstone(query::clustering_range::make_starting_with(s.make_ckey(last_ck + 10)));
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; start(range_tombstone) ∉ ckrange (>)",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
std::move(second_buffer),
max_buffer_size);
}
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∈ ckrange",
"",
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], second_buffer_ck),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey; position_in_partition ∉ ckrange (>)",
position_in_partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1], last_ck + 10),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey > _last_pkey; pkey ∈ pkrange",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[2]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey > _last_pkey; pkey ∉ pkrange",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[3]),
max_buffer_size);
//
// Continuing from next partition
//
first_buffer.clear();
first_buffer.emplace_back(partition_start{pkeys[1], {}});
mem_usage = first_buffer.back().memory_usage(*s.schema());
for (int i = 0; i < second_buffer_ck; ++i) {
first_buffer.emplace_back(s.make_row(s.make_ckey(i++), "v"));
mem_usage += first_buffer.back().memory_usage(*s.schema());
}
first_buffer.emplace_back(partition_end{});
mem_usage += first_buffer.back().memory_usage(*s.schema());
last_fragment_position = position_in_partition(first_buffer.back().position());
max_buffer_size = mem_usage;
first_buffer.emplace_back(partition_start{pkeys[2], {}});
check_evictable_reader_validation_is_triggered(
"pkey < _last_pkey; pkey ∉ pkrange",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[0]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey == _last_pkey",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[1]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey > _last_pkey; pkey ∈ pkrange",
"",
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[2]),
max_buffer_size);
check_evictable_reader_validation_is_triggered(
"pkey > _last_pkey; pkey ∉ pkrange",
partition_error_prefix,
s.schema(),
semaphore.make_permit(),
prange,
slice,
copy_fragments(*s.schema(), first_buffer),
last_fragment_position,
make_second_buffer(pkeys[3]),
max_buffer_size);
}

View File

@@ -769,3 +769,27 @@ SEASTAR_THREAD_TEST_CASE(test_immediate_evict_on_insert) {
fut.get();
}
namespace {
class inactive_read : public reader_concurrency_semaphore::inactive_read {
public:
virtual void evict() override {
}
};
}
SEASTAR_THREAD_TEST_CASE(test_unique_inactive_read_handle) {
reader_concurrency_semaphore sem1(reader_concurrency_semaphore::no_limits{}, "sem1");
reader_concurrency_semaphore sem2(reader_concurrency_semaphore::no_limits{}, ""); // to see the message for an unnamed semaphore
auto sem1_h1 = sem1.register_inactive_read(std::make_unique<inactive_read>());
auto sem2_h1 = sem2.register_inactive_read(std::make_unique<inactive_read>());
// Sanity check that lookup still works with empty handle.
BOOST_REQUIRE(!sem1.unregister_inactive_read(reader_concurrency_semaphore::inactive_read_handle{}));
BOOST_REQUIRE_THROW(sem1.unregister_inactive_read(std::move(sem2_h1)), std::runtime_error);
BOOST_REQUIRE_THROW(sem2.unregister_inactive_read(std::move(sem1_h1)), std::runtime_error);
}

View File

@@ -3168,6 +3168,8 @@ SEASTAR_TEST_CASE(time_window_strategy_correctness_test) {
sstables.push_back(make_sstable_containing(sst_gen, {std::move(mut)}));
}
std::map<sstring, sstring> options;
time_window_compaction_strategy twcs(options);
std::map<api::timestamp_type, std::vector<shared_sstable>> buckets;
// We'll put 3 sstables into the newest bucket
@@ -3177,13 +3179,13 @@ SEASTAR_TEST_CASE(time_window_strategy_correctness_test) {
}
sstables::size_tiered_compaction_strategy_options stcs_options;
auto now = api::timestamp_clock::now().time_since_epoch().count();
auto new_bucket = time_window_compaction_strategy::newest_bucket(buckets, 4, 32, duration_cast<seconds>(hours(1)),
auto new_bucket = twcs.newest_bucket(buckets, 4, 32, duration_cast<seconds>(hours(1)),
time_window_compaction_strategy::get_window_lower_bound(duration_cast<seconds>(hours(1)), now), stcs_options);
// the incoming bucket should not be accepted when it has fewer than the min threshold of SSTables
BOOST_REQUIRE(new_bucket.empty());
now = api::timestamp_clock::now().time_since_epoch().count();
new_bucket = time_window_compaction_strategy::newest_bucket(buckets, 2, 32, duration_cast<seconds>(hours(1)),
new_bucket = twcs.newest_bucket(buckets, 2, 32, duration_cast<seconds>(hours(1)),
time_window_compaction_strategy::get_window_lower_bound(duration_cast<seconds>(hours(1)), now), stcs_options);
// the incoming bucket should be accepted when it has at least the min threshold of SSTables
BOOST_REQUIRE(!new_bucket.empty());
@@ -3218,13 +3220,88 @@ SEASTAR_TEST_CASE(time_window_strategy_correctness_test) {
}
now = api::timestamp_clock::now().time_since_epoch().count();
new_bucket = time_window_compaction_strategy::newest_bucket(buckets, 4, 32, duration_cast<seconds>(hours(1)),
new_bucket = twcs.newest_bucket(buckets, 4, 32, duration_cast<seconds>(hours(1)),
time_window_compaction_strategy::get_window_lower_bound(duration_cast<seconds>(hours(1)), now), stcs_options);
// new bucket should be trimmed to max threshold of 32
BOOST_REQUIRE(new_bucket.size() == size_t(32));
});
}
// Check that TWCS will perform size-tiered compaction only on the current window and
// on past windows that were already compacted into a single SSTable.
SEASTAR_TEST_CASE(time_window_strategy_size_tiered_behavior_correctness) {
using namespace std::chrono;
return test_env::do_with_async([] (test_env& env) {
storage_service_for_tests ssft;
auto s = schema_builder("tests", "time_window_strategy")
.with_column("id", utf8_type, column_kind::partition_key)
.with_column("value", int32_type).build();
auto tmp = tmpdir();
auto sst_gen = [&env, s, &tmp, gen = make_lw_shared<unsigned>(1)] () mutable {
return env.make_sstable(s, tmp.path().string(), (*gen)++, la, big);
};
auto make_insert = [&] (partition_key key, api::timestamp_type t) {
mutation m(s, key);
m.set_clustered_cell(clustering_key::make_empty(), bytes("value"), data_value(int32_t(1)), t);
return m;
};
std::map<sstring, sstring> options;
sstables::size_tiered_compaction_strategy_options stcs_options;
time_window_compaction_strategy twcs(options);
std::map<api::timestamp_type, std::vector<shared_sstable>> buckets; // windows
int min_threshold = 4;
int max_threshold = 32;
auto window_size = duration_cast<seconds>(hours(1));
auto add_new_sstable_to_bucket = [&] (api::timestamp_type ts, api::timestamp_type window_ts) {
auto key = partition_key::from_exploded(*s, {to_bytes("key" + to_sstring(ts))});
auto mut = make_insert(std::move(key), ts);
auto sst = make_sstable_containing(sst_gen, {std::move(mut)});
auto bound = time_window_compaction_strategy::get_window_lower_bound(window_size, window_ts);
buckets[bound].push_back(std::move(sst));
};
api::timestamp_type current_window_ts = api::timestamp_clock::now().time_since_epoch().count();
api::timestamp_type past_window_ts = current_window_ts - duration_cast<microseconds>(seconds(2L * 3600L)).count();
// create 1 sstable into past time window and let the strategy know about it
add_new_sstable_to_bucket(0, past_window_ts);
auto now = time_window_compaction_strategy::get_window_lower_bound(window_size, past_window_ts);
// past window cannot be compacted because it has a single SSTable
BOOST_REQUIRE(twcs.newest_bucket(buckets, min_threshold, max_threshold, window_size, now, stcs_options).size() == 0);
// create min_threshold-1 sstables into current time window
for (api::timestamp_type t = 0; t < min_threshold - 1; t++) {
add_new_sstable_to_bucket(t, current_window_ts);
}
// add 1 sstable into past window.
add_new_sstable_to_bucket(1, past_window_ts);
now = time_window_compaction_strategy::get_window_lower_bound(window_size, current_window_ts);
// past window can now be compacted into a single SSTable because it was the previous current (active) window.
// current window cannot be compacted because it has less than min_threshold SSTables
BOOST_REQUIRE(twcs.newest_bucket(buckets, min_threshold, max_threshold, window_size, now, stcs_options).size() == 2);
// now the past window cannot be compacted again because it was already compacted into a single SSTable; from this point on it switches to STCS mode.
BOOST_REQUIRE(twcs.newest_bucket(buckets, min_threshold, max_threshold, window_size, now, stcs_options).size() == 0);
// bring the past window up to min_threshold similar-sized SSTables, allowing it to be compacted again.
for (api::timestamp_type t = 2; t < min_threshold; t++) {
add_new_sstable_to_bucket(t, past_window_ts);
}
// now the past window can be compacted again because it switched to STCS mode and reached min_threshold SSTables.
BOOST_REQUIRE(twcs.newest_bucket(buckets, min_threshold, max_threshold, window_size, now, stcs_options).size() == size_t(min_threshold));
});
}
SEASTAR_TEST_CASE(test_promoted_index_read) {
// create table promoted_index_read (
// pk int,
@@ -4720,8 +4797,8 @@ SEASTAR_TEST_CASE(sstable_scrub_test) {
table->add_sstable_and_update_cache(sst).get();
BOOST_REQUIRE(table->candidates_for_compaction().size() == 1);
BOOST_REQUIRE(table->candidates_for_compaction().front() == sst);
BOOST_REQUIRE(table->non_staging_sstables().size() == 1);
BOOST_REQUIRE(table->non_staging_sstables().front() == sst);
auto verify_fragments = [&] (sstables::shared_sstable sst, const std::vector<mutation_fragment>& mfs) {
auto r = assert_that(sst->as_mutation_source().make_reader(schema, tests::make_permit()));
@@ -4742,7 +4819,7 @@ SEASTAR_TEST_CASE(sstable_scrub_test) {
// We expect the scrub with skip_corrupted=false to stop on the first invalid fragment.
compaction_manager.perform_sstable_scrub(table.get(), false).get();
BOOST_REQUIRE(table->candidates_for_compaction().size() == 1);
BOOST_REQUIRE(table->non_staging_sstables().size() == 1);
verify_fragments(sst, corrupt_fragments);
testlog.info("Scrub with --skip-corrupted=true");
@@ -4750,9 +4827,9 @@ SEASTAR_TEST_CASE(sstable_scrub_test) {
// We expect the scrub with skip_corrupted=true to get rid of all invalid data.
compaction_manager.perform_sstable_scrub(table.get(), true).get();
BOOST_REQUIRE(table->candidates_for_compaction().size() == 1);
BOOST_REQUIRE(table->candidates_for_compaction().front() != sst);
verify_fragments(table->candidates_for_compaction().front(), scrubbed_fragments);
BOOST_REQUIRE(table->non_staging_sstables().size() == 1);
BOOST_REQUIRE(table->non_staging_sstables().front() != sst);
verify_fragments(table->non_staging_sstables().front(), scrubbed_fragments);
});
}, test_cfg);
}
@@ -5847,3 +5924,156 @@ SEASTAR_TEST_CASE(test_bug_6472) {
return make_ready_future<>();
});
}
SEASTAR_TEST_CASE(sstable_needs_cleanup_test) {
test_env env;
auto s = make_lw_shared(schema({}, some_keyspace, some_column_family,
{{"p1", utf8_type}}, {}, {}, {}, utf8_type));
auto tokens = token_generation_for_current_shard(10);
auto sst_gen = [&env, s, gen = make_lw_shared<unsigned>(1)] (sstring first, sstring last) mutable {
return sstable_for_overlapping_test(env, s, (*gen)++, first, last);
};
auto token = [&] (size_t index) -> dht::token {
return tokens[index].second;
};
auto key_from_token = [&] (size_t index) -> sstring {
return tokens[index].first;
};
auto token_range = [&] (size_t first, size_t last) -> dht::token_range {
return dht::token_range::make(token(first), token(last));
};
{
auto local_ranges = { token_range(0, 9) };
auto sst = sst_gen(key_from_token(0), key_from_token(9));
BOOST_REQUIRE(!needs_cleanup(sst, local_ranges, s));
}
{
auto local_ranges = { token_range(0, 1), token_range(3, 4), token_range(5, 6) };
auto sst = sst_gen(key_from_token(0), key_from_token(1));
BOOST_REQUIRE(!needs_cleanup(sst, local_ranges, s));
auto sst2 = sst_gen(key_from_token(2), key_from_token(2));
BOOST_REQUIRE(needs_cleanup(sst2, local_ranges, s));
auto sst3 = sst_gen(key_from_token(0), key_from_token(6));
BOOST_REQUIRE(needs_cleanup(sst3, local_ranges, s));
auto sst5 = sst_gen(key_from_token(7), key_from_token(7));
BOOST_REQUIRE(needs_cleanup(sst5, local_ranges, s));
}
return make_ready_future<>();
}
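The cases above pin down the predicate: cleanup is needed whenever the SSTable may hold keys outside the node's owned ranges. A simplified sketch over integer tokens (an assumption standing in for dht::token; the real needs_cleanup() works on sorted, non-overlapping dht::token_range values):

// Sketch only: integer tokens stand in for dht::token.
#include <utility>
#include <vector>

using token_range_sketch = std::pair<int, int>;  // inclusive [first, last]

bool needs_cleanup_sketch(token_range_sketch sst, const std::vector<token_range_sketch>& owned) {
    for (const auto& r : owned) {
        if (sst.first >= r.first && sst.second <= r.second) {
            return false;  // fully contained in one owned range: nothing to clean
        }
    }
    return true;  // spans a gap or unowned tokens, as with sst2, sst3 and sst5 above
}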
SEASTAR_TEST_CASE(test_twcs_partition_estimate) {
return test_setup::do_with_tmp_directory([] (test_env& env, sstring tmpdir_path) {
auto builder = schema_builder("tests", "test_bug_6472")
.with_column("id", utf8_type, column_kind::partition_key)
.with_column("cl", int32_type, column_kind::clustering_key)
.with_column("value", int32_type);
builder.set_compaction_strategy(sstables::compaction_strategy_type::time_window);
std::map<sstring, sstring> opts = {
{ time_window_compaction_strategy_options::COMPACTION_WINDOW_UNIT_KEY, "HOURS" },
{ time_window_compaction_strategy_options::COMPACTION_WINDOW_SIZE_KEY, "1" },
};
builder.set_compaction_strategy_options(opts);
builder.set_gc_grace_seconds(0);
auto s = builder.build();
const auto rows_per_partition = 200;
auto sst_gen = [&env, s, tmpdir_path, gen = make_lw_shared<unsigned>(1)] () mutable {
return env.make_sstable(s, tmpdir_path, (*gen)++, la, big);
};
auto next_timestamp = [] (int sstable_idx, int ck_idx) {
using namespace std::chrono;
auto window = hours(sstable_idx * rows_per_partition + ck_idx);
return (gc_clock::now().time_since_epoch() - duration_cast<microseconds>(window)).count();
};
auto tokens = token_generation_for_shard(4, this_shard_id(), test_db_config.murmur3_partitioner_ignore_msb_bits(), smp::count);
auto make_sstable = [&] (int sstable_idx) {
static thread_local int32_t value = 1;
auto key_str = tokens[sstable_idx].first;
auto key = partition_key::from_exploded(*s, {to_bytes(key_str)});
mutation m(s, key);
for (auto ck = 0; ck < rows_per_partition; ++ck) {
auto c_key = clustering_key::from_exploded(*s, {int32_type->decompose(value++)});
m.set_clustered_cell(c_key, bytes("value"), data_value(int32_t(value)), next_timestamp(sstable_idx, ck));
}
return make_sstable_containing(sst_gen, {m});
};
auto cm = make_lw_shared<compaction_manager>();
column_family::config cfg = column_family_test_config();
cfg.datadir = tmpdir_path;
cfg.enable_disk_writes = true;
cfg.enable_commitlog = false;
cfg.enable_cache = false;
cfg.enable_incremental_backups = false;
auto tracker = make_lw_shared<cache_tracker>();
cell_locker_stats cl_stats;
auto cf = make_lw_shared<column_family>(s, cfg, column_family::no_commitlog(), *cm, cl_stats, *tracker);
cf->mark_ready_for_writes();
cf->start();
std::vector<shared_sstable> sstables_spanning_many_windows = {
make_sstable(0),
make_sstable(1),
make_sstable(2),
make_sstable(3),
};
auto ret = compact_sstables(sstables::compaction_descriptor(sstables_spanning_many_windows,
cf->get_sstable_set(), default_priority_class()), *cf, sst_gen, replacer_fn_no_op()).get0();
// The real test here is that we don't assert() in
// sstables::prepare_summary() with the compact_sstables() call above;
// the assertion below is only a sanity check.
BOOST_REQUIRE_EQUAL(ret.new_sstables.size(), std::min(sstables_spanning_many_windows.size() * rows_per_partition,
sstables::time_window_compaction_strategy::max_data_segregation_window_count));
return make_ready_future<>();
});
}
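The hazard this test guards against is an output partition estimate that rounds down to zero once data is segregated into hundreds of hour-wide windows. A sketch of the clamped split this implies; this is an assumption about the shape of the fix, not the actual sstables::prepare_summary() logic:

// Sketch only: assumed per-window estimate split with a lower bound of 1.
#include <algorithm>
#include <cstdint>

uint64_t per_window_partition_estimate(uint64_t total_estimated_partitions,
                                       uint64_t window_count) {
    // Dividing a small estimate across many windows must never yield 0,
    // or summary preparation could hit its assert.
    return std::max<uint64_t>(1, total_estimated_partitions / std::max<uint64_t>(1, window_count));
}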
SEASTAR_TEST_CASE(test_zero_estimated_partitions) {
return test_setup::do_with_tmp_directory([] (test_env& env, sstring tmpdir_path) {
simple_schema ss;
auto s = ss.schema();
auto pk = ss.make_pkey(make_local_key(s));
auto mut = mutation(s, pk);
ss.add_row(mut, ss.make_ckey(0), "val");
for (const auto version : all_sstable_versions) {
testlog.info("version={}", sstables::to_string(version));
auto mr = flat_mutation_reader_from_mutations({mut});
auto sst = env.make_sstable(s, tmpdir_path, 0, version, big);
sstable_writer_config cfg = test_sstables_manager.configure_writer();
sst->write_components(std::move(mr), 0, s, cfg, encoding_stats{}).get();
sst->load().get();
auto sst_mr = sst->as_mutation_source().make_reader(s, tests::make_permit(), query::full_partition_range, s->full_slice());
auto sst_mut = read_mutation_from_flat_mutation_reader(sst_mr, db::no_timeout).get0();
// The real test here is that we don't assert() in
// sstables::prepare_summary() with the write_components() call above;
// the checks below are only a sanity check.
BOOST_REQUIRE(sst_mr.is_buffer_empty());
BOOST_REQUIRE(sst_mr.is_end_of_stream());
BOOST_REQUIRE_EQUAL(mut, sst_mut);
}
return make_ready_future<>();
});
}
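Here write_components() is handed an estimated partition count of 0 outright, so the writer must clamp the value rather than assert. A one-line sketch of such a guard (an assumption about where the clamp lives):

// Sketch only: assumed sanitization of a zero partition estimate.
#include <cstdint>

inline uint64_t sanitize_partition_estimate(uint64_t estimated_partitions) {
    return estimated_partitions ? estimated_partitions : 1;  // never size the summary for 0
}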


@@ -0,0 +1,55 @@
/*
* Copyright (C) 2020 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include <seastar/testing/thread_test_case.hh>
#include "utils/stall_free.hh"
SEASTAR_THREAD_TEST_CASE(test_merge1) {
std::list<int> l1{1, 2, 5, 8};
std::list<int> l2{3};
std::list<int> expected{1,2,3,5,8};
utils::merge_to_gently(l1, l2, std::less<int>());
BOOST_CHECK(l1 == expected);
}
SEASTAR_THREAD_TEST_CASE(test_merge2) {
std::list<int> l1{1};
std::list<int> l2{3, 5, 6};
std::list<int> expected{1,3,5,6};
utils::merge_to_gently(l1, l2, std::less<int>());
BOOST_CHECK(l1 == expected);
}
SEASTAR_THREAD_TEST_CASE(test_merge3) {
std::list<int> l1{};
std::list<int> l2{3, 5, 6};
std::list<int> expected{3,5,6};
utils::merge_to_gently(l1, l2, std::less<int>());
BOOST_CHECK(l1 == expected);
}
SEASTAR_THREAD_TEST_CASE(test_merge4) {
std::list<int> l1{1};
std::list<int> l2{};
std::list<int> expected{1};
utils::merge_to_gently(l1, l2, std::less<int>());
BOOST_CHECK(l1 == expected);
}
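These four cases cover the boundary conditions of utils::merge_to_gently(): merging into the middle, at the tail, into an empty list, and from an empty list. A sketch of the underlying idea, splicing the sorted l2 into l1 node by node and yielding periodically so large merges do not stall the reactor (the yield point is shown as a comment; the real implementation may differ):

// Sketch only: plain C++, with the Seastar yield point reduced to a comment.
#include <list>

template <typename T, typename Less>
void merge_to_gently_sketch(std::list<T>& l1, std::list<T>& l2, Less less) {
    auto it = l1.begin();
    while (!l2.empty()) {
        // advance to the first element of l1 that is not less than l2's head
        while (it != l1.end() && less(*it, l2.front())) {
            ++it;
        }
        // splice moves one node without copying or allocating
        l1.splice(it, l2, l2.begin());
        // seastar::thread::maybe_yield();  // keep the shard responsive on large inputs
    }
}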


@@ -26,12 +26,16 @@
#include "db/system_keyspace.hh"
#include <seastar/testing/test_case.hh>
#include <seastar/testing/thread_test_case.hh>
#include "test/lib/cql_test_env.hh"
#include "test/lib/cql_assertions.hh"
#include "test/lib/sstable_utils.hh"
#include "schema_builder.hh"
#include "service/priority_manager.hh"
#include "test/lib/test_services.hh"
#include "test/lib/data_model.hh"
#include "test/lib/log.hh"
#include "utils/ranges.hh"
using namespace std::literals::chrono_literals;
@@ -421,23 +425,49 @@ SEASTAR_TEST_CASE(test_view_update_generator) {
auto& view_update_generator = e.local_view_update_generator();
auto s = test_table_schema();
std::vector<shared_sstable> ssts;
lw_shared_ptr<table> t = e.local_db().find_column_family("ks", "t").shared_from_this();
auto write_to_sstable = [&] (mutation m) {
auto sst = t->make_streaming_staging_sstable();
sstables::sstable_writer_config sst_cfg = test_sstables_manager.configure_writer();
auto& pc = service::get_local_streaming_priority();
sst->write_components(flat_mutation_reader_from_mutations({m}), 1ul, s, sst_cfg, {}, pc).get();
sst->open_data().get();
t->add_sstable_and_update_cache(sst).get();
return sst;
};
auto key = partition_key::from_exploded(*s, {to_bytes(key1)});
mutation m(s, key);
auto col = s->get_column_definition("v");
for (int i = 1024; i < 1280; ++i) {
auto& row = m.partition().clustered_row(*s, clustering_key::from_exploded(*s, {to_bytes(fmt::format("c{}", i))}));
row.cells().apply(*col, atomic_cell::make_live(*col->type, 2345, col->type->decompose(sstring(fmt::format("v{}", i)))));
// Scatter the data in a bunch of different sstables, so we
// can test the registration semaphore of the view update
// generator
if (!(i % 10)) {
ssts.push_back(write_to_sstable(std::exchange(m, mutation(s, key))));
}
}
ssts.push_back(write_to_sstable(std::move(m)));
BOOST_REQUIRE_EQUAL(view_update_generator.available_register_units(), db::view::view_update_generator::registration_queue_size);
parallel_for_each(ssts.begin(), ssts.begin() + 10, [&] (shared_sstable& sst) {
return view_update_generator.register_staging_sstable(sst, t);
}).get();
BOOST_REQUIRE_EQUAL(view_update_generator.available_register_units(), db::view::view_update_generator::registration_queue_size);
parallel_for_each(ssts.begin() + 10, ssts.end(), [&] (shared_sstable& sst) {
return view_update_generator.register_staging_sstable(sst, t);
}).get();
BOOST_REQUIRE_EQUAL(view_update_generator.available_register_units(), db::view::view_update_generator::registration_queue_size);
eventually([&, key1, key2] {
auto msg = e.execute_cql(fmt::format("SELECT * FROM t WHERE p = '{}'", key1)).get0();
@@ -464,5 +494,373 @@ SEASTAR_TEST_CASE(test_view_update_generator) {
}
});
BOOST_REQUIRE_EQUAL(view_update_generator.available_register_units(), db::view::view_update_generator::registration_queue_size);
});
}
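The register-units bookkeeping above reflects that register_staging_sstable() admission is throttled by a small semaphore of registration_queue_size units. A hedged sketch of that pattern using Seastar's semaphore API (with_semaphore and available_units exist in Seastar; the class and constant here are assumptions):

// Sketch only: the queue size and handler body are assumptions.
#include <seastar/core/future.hh>
#include <seastar/core/semaphore.hh>

class registrar_sketch {
    static constexpr size_t registration_queue_size = 5;
    seastar::semaphore _sem{registration_queue_size};
public:
    // Holding a unit for the duration of a registration bounds the number
    // of concurrently queued staging SSTables.
    seastar::future<> register_one() {
        return seastar::with_semaphore(_sem, 1, [] {
            return seastar::make_ready_future<>();  // enqueue the real work here
        });
    }
    ssize_t available_register_units() const { return _sem.available_units(); }
};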
SEASTAR_THREAD_TEST_CASE(test_view_update_generator_deadlock) {
cql_test_config test_cfg;
auto& db_cfg = *test_cfg.db_config;
db_cfg.enable_cache(false);
db_cfg.enable_commitlog(false);
test_cfg.dbcfg.emplace();
test_cfg.dbcfg->available_memory = memory::stats().total_memory();
test_cfg.dbcfg->statement_scheduling_group = seastar::create_scheduling_group("statement", 1000).get0();
test_cfg.dbcfg->streaming_scheduling_group = seastar::create_scheduling_group("streaming", 200).get0();
do_with_cql_env([] (cql_test_env& e) -> future<> {
e.execute_cql("create table t (p text, c text, v text, primary key (p, c))").get();
e.execute_cql("create materialized view tv as select * from t "
"where p is not null and c is not null and v is not null "
"primary key (v, c, p)").get();
auto msb = e.local_db().get_config().murmur3_partitioner_ignore_msb_bits();
auto key1 = token_generation_for_shard(1, this_shard_id(), msb).front().first;
for (auto i = 0; i < 1024; ++i) {
e.execute_cql(fmt::format("insert into t (p, c, v) values ('{}', 'c{}', 'x')", key1, i)).get();
}
// We need data on the disk so that the pre-image reader is forced to go to disk.
e.db().invoke_on_all([] (database& db) {
return db.flush_all_memtables();
}).get();
auto& view_update_generator = e.local_view_update_generator();
auto s = test_table_schema();
lw_shared_ptr<table> t = e.local_db().find_column_family("ks", "t").shared_from_this();
auto key = partition_key::from_exploded(*s, {to_bytes(key1)});
mutation m(s, key);
auto col = s->get_column_definition("v");
const auto filler_val_size = 4 * 1024;
const auto filler_val = sstring(filler_val_size, 'a');
for (int i = 0; i < 1024; ++i) {
auto& row = m.partition().clustered_row(*s, clustering_key::from_exploded(*s, {to_bytes(fmt::format("c{}", i))}));
row.cells().apply(*col, atomic_cell::make_live(*col->type, 2345, col->type->decompose(filler_val)));
}
auto sst = t->make_streaming_staging_sstable();
sstables::sstable_writer_config sst_cfg = test_sstables_manager.configure_writer();
auto& pc = service::get_local_streaming_priority();
sst->write_components(flat_mutation_reader_from_mutations({m}), 1ul, s, sst_cfg, {}, pc).get();
sst->open_data().get();
t->add_sstable_and_update_cache(sst).get();
auto& sem = *with_scheduling_group(e.local_db().get_streaming_scheduling_group(), [&] () {
return &e.local_db().make_query_class_config().semaphore;
}).get0();
// consume all units except what is needed to admit a single reader.
sem.consume(sem.initial_resources() - reader_resources{1, new_reader_base_cost});
testlog.info("res = [.count={}, .memory={}]", sem.available_resources().count, sem.available_resources().memory);
BOOST_REQUIRE_EQUAL(sem.get_inactive_read_stats().permit_based_evictions, 0);
view_update_generator.register_staging_sstable(sst, t).get();
eventually_true([&] {
return sem.get_inactive_read_stats().permit_based_evictions > 0;
});
return make_ready_future<>();
}, std::move(test_cfg)).get();
}
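The test avoids the deadlock because the paused staging reader is registered as inactive with the concurrency semaphore: when the pre-image reader cannot be admitted, the semaphore evicts the staging reader and reuses its units, and permit_based_evictions is the counter that proves it happened. A conceptual sketch with hypothetical names (the real mechanism is reader_concurrency_semaphore's inactive-read registry):

// Sketch only: hypothetical registry illustrating permit-based eviction.
#include <cstddef>
#include <functional>
#include <vector>

struct inactive_registry_sketch {
    std::vector<std::function<void()>> evictable;  // paused, evictable readers
    size_t permit_based_evictions = 0;

    // When admission stalls for lack of units, evict a paused reader to
    // free its resources instead of deadlocking.
    bool try_evict_one() {
        if (evictable.empty()) {
            return false;
        }
        evictable.back()();        // close the reader, releasing its units
        evictable.pop_back();
        ++permit_based_evictions;  // the counter the test polls
        return true;
    }
};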
// Test that registered sstables (and semaphore units) are not leaked when
// sstables are registered *while* a batch of sstables is being processed.
SEASTAR_THREAD_TEST_CASE(test_view_update_generator_register_semaphore_unit_leak) {
cql_test_config test_cfg;
auto& db_cfg = *test_cfg.db_config;
db_cfg.enable_cache(false);
db_cfg.enable_commitlog(false);
do_with_cql_env([] (cql_test_env& e) -> future<> {
e.execute_cql("create table t (p text, c text, v text, primary key (p, c))").get();
e.execute_cql("create materialized view tv as select * from t "
"where p is not null and c is not null and v is not null "
"primary key (v, c, p)").get();
auto msb = e.local_db().get_config().murmur3_partitioner_ignore_msb_bits();
auto key1 = token_generation_for_shard(1, this_shard_id(), msb).front().first;
for (auto i = 0; i < 1024; ++i) {
e.execute_cql(fmt::format("insert into t (p, c, v) values ('{}', 'c{}', 'x')", key1, i)).get();
}
// We need data on the disk so that the pre-image reader is forced to go to disk.
e.db().invoke_on_all([] (database& db) {
return db.flush_all_memtables();
}).get();
auto& view_update_generator = e.local_view_update_generator();
auto s = test_table_schema();
lw_shared_ptr<table> t = e.local_db().find_column_family("ks", "t").shared_from_this();
auto key = partition_key::from_exploded(*s, {to_bytes(key1)});
auto make_sstable = [&] {
mutation m(s, key);
auto col = s->get_column_definition("v");
const auto val = sstring(10, 'a');
for (int i = 0; i < 1024; ++i) {
auto& row = m.partition().clustered_row(*s, clustering_key::from_exploded(*s, {to_bytes(fmt::format("c{}", i))}));
row.cells().apply(*col, atomic_cell::make_live(*col->type, 2345, col->type->decompose(val)));
}
auto sst = t->make_streaming_staging_sstable();
sstables::sstable_writer_config sst_cfg = test_sstables_manager.configure_writer();
auto& pc = service::get_local_streaming_priority();
sst->write_components(flat_mutation_reader_from_mutations({m}), 1ul, s, sst_cfg, {}, pc).get();
sst->open_data().get();
t->add_sstable_and_update_cache(sst).get();
return sst;
};
std::vector<sstables::shared_sstable> prepared_sstables;
// We need 2 * N + 1 sstables, where N is the number of units on the
// register semaphore (5) plus 1 (to make sure the returned future
// blocks). While the initial batch is processed we register N more
// sstables, plus 1 to detect a leak (only N units will be returned by
// the initial batch). See below for more details.
const auto num_sstables = (view_update_generator.available_register_units() + 1) * 2 + 1;
for (auto i = 0; i < num_sstables; ++i) {
prepared_sstables.push_back(make_sstable());
}
// First batch: register N sstables.
while (view_update_generator.available_register_units()) {
auto fut = view_update_generator.register_staging_sstable(std::move(prepared_sstables.back()), t);
prepared_sstables.pop_back();
BOOST_REQUIRE(fut.available());
}
// Make sure we consumed all units and thus the register future blocks.
auto fut1 = view_update_generator.register_staging_sstable(std::move(prepared_sstables.back()), t);
prepared_sstables.pop_back();
BOOST_REQUIRE(!fut1.available());
std::vector<future<>> futures;
futures.reserve(prepared_sstables.size());
// While the first batch is processed, concurrently register the
// remaining N + 1 sstables, yielding in-between so the first batch
// processing can progress.
while (!prepared_sstables.empty()) {
thread::yield();
futures.emplace_back(view_update_generator.register_staging_sstable(std::move(prepared_sstables.back()), t));
prepared_sstables.pop_back();
}
// Make sure the first batch is processed.
fut1.get();
auto fut_res = when_all_succeed(futures.begin(), futures.end());
// Watchdog timer which will break out of the deadlock and fail the test.
timer watchdog_timer([&view_update_generator] {
// Re-start it so stop() on shutdown doesn't crash.
(void)view_update_generator.stop().then([&] { return view_update_generator.start(); });
});
watchdog_timer.arm(std::chrono::seconds(60));
// Wait on the second batch; this will fail if the watchdog timer fires.
fut_res.get();
watchdog_timer.cancel();
return make_ready_future<>();
}, std::move(test_cfg)).get();
}
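The sstable count works out as follows, assuming N = 6 (5 register units plus 1 registration that blocks): the first batch consumes all units, the second batch of N re-consumes them as the first drains, and the final extra sstable only completes if every unit came back. A trivially checkable version of that arithmetic:

// Sketch only: the unit accounting the test relies on, with the queue size assumed to be 5.
#include <cassert>

int main() {
    const int register_units = 5;        // registration_queue_size
    const int n = register_units + 1;    // first batch: 5 admitted + 1 blocked
    const int num_sstables = 2 * n + 1;  // 13 in total
    // The second batch (n) plus the leak detector (+1) must be coverable
    // by the units the first batch returns.
    assert(num_sstables - n == n + 1);
    return 0;
}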
SEASTAR_THREAD_TEST_CASE(test_view_update_generator_buffering) {
using partition_size_map = std::map<dht::decorated_key, size_t, dht::ring_position_less_comparator>;
class consumer_verifier {
schema_ptr _schema;
reader_permit _permit;
const partition_size_map& _partition_rows;
std::vector<mutation>& _collected_muts;
bool& _failed;
std::unique_ptr<row_locker> _rl;
std::unique_ptr<row_locker::stats> _rl_stats;
clustering_key::less_compare _less_cmp;
const size_t _max_rows_soft;
const size_t _max_rows_hard;
size_t _buffer_rows = 0;
private:
static size_t rows_in_limit(size_t l) {
const size_t _100kb = 100 * 1024;
// round up
return l / _100kb + std::min(size_t(1), l % _100kb);
}
static size_t rows_in_mut(const mutation& m) {
return std::distance(m.partition().clustered_rows().begin(), m.partition().clustered_rows().end());
}
void check(mutation mut) {
// First we check that we would be able to create a reader, even
// though the staging reader consumed all resources.
auto fut = _permit.wait_admission(new_reader_base_cost, db::timeout_clock::now());
BOOST_REQUIRE(!fut.failed());
auto res_units = fut.get0();
const size_t current_rows = rows_in_mut(mut);
const auto total_rows = _partition_rows.at(mut.decorated_key());
_buffer_rows += current_rows;
testlog.trace("consumer_verifier::check(): key={}, rows={}/{}, _buffer={}",
partition_key::with_schema_wrapper(*_schema, mut.key()),
current_rows,
total_rows,
_buffer_rows);
BOOST_REQUIRE(current_rows);
BOOST_REQUIRE(current_rows <= _max_rows_hard);
BOOST_REQUIRE(_buffer_rows <= _max_rows_hard);
// The current partition doesn't have all of its rows yet; verify
// that the new mutation contains the next rows of the same
// partition.
if (!_collected_muts.empty() && rows_in_mut(_collected_muts.back()) < _partition_rows.at(_collected_muts.back().decorated_key())) {
BOOST_REQUIRE(_collected_muts.back().decorated_key().equal(*mut.schema(), mut.decorated_key()));
const auto& previous_ckey = (--_collected_muts.back().partition().clustered_rows().end())->key();
const auto& next_ckey = mut.partition().clustered_rows().begin()->key();
BOOST_REQUIRE(_less_cmp(previous_ckey, next_ckey));
mutation_application_stats stats;
_collected_muts.back().partition().apply(*_schema, mut.partition(), *mut.schema(), stats);
// The new mutation is a new partition.
} else {
if (!_collected_muts.empty()) {
BOOST_REQUIRE(!_collected_muts.back().decorated_key().equal(*mut.schema(), mut.decorated_key()));
}
_collected_muts.push_back(std::move(mut));
}
if (_buffer_rows >= _max_rows_hard) { // buffer flushed on hard limit
_buffer_rows = 0;
testlog.trace("consumer_verifier::check(): buffer ends on hard limit");
} else if (_buffer_rows >= _max_rows_soft) { // buffer flushed on soft limit
_buffer_rows = 0;
testlog.trace("consumer_verifier::check(): buffer ends on soft limit");
}
}
public:
consumer_verifier(schema_ptr schema, reader_permit permit, const partition_size_map& partition_rows, std::vector<mutation>& collected_muts, bool& failed)
: _schema(std::move(schema))
, _permit(std::move(permit))
, _partition_rows(partition_rows)
, _collected_muts(collected_muts)
, _failed(failed)
, _rl(std::make_unique<row_locker>(_schema))
, _rl_stats(std::make_unique<row_locker::stats>())
, _less_cmp(*_schema)
, _max_rows_soft(rows_in_limit(db::view::view_updating_consumer::buffer_size_soft_limit))
, _max_rows_hard(rows_in_limit(db::view::view_updating_consumer::buffer_size_hard_limit))
{ }
future<row_locker::lock_holder> operator()(mutation mut) {
try {
check(std::move(mut));
} catch (...) {
testlog.error("consumer_verifier::operator(): caught unexpected exception {}", std::current_exception());
_failed |= true;
}
return _rl->lock_pk(_collected_muts.back().decorated_key(), true, db::no_timeout, *_rl_stats);
}
};
reader_concurrency_semaphore sem(1, new_reader_base_cost, get_name());
auto schema = schema_builder("ks", "cf")
.with_column("pk", int32_type, column_kind::partition_key)
.with_column("ck", int32_type, column_kind::clustering_key)
.with_column("v", bytes_type)
.build();
const auto blob_100kb = bytes(100 * 1024, bytes::value_type(0xab));
const abort_source as;
const auto partition_size_sets = std::vector<std::vector<int>>{{12}, {8, 4}, {8, 16}, {22}, {8, 8, 8, 8}, {8, 8, 8, 16, 8}, {8, 20, 16, 16}, {50}, {21}, {21, 2}};
const auto max_partition_set_size = std::ranges::max_element(partition_size_sets, [] (const std::vector<int>& a, const std::vector<int>& b) { return a.size() < b.size(); })->size();
auto pkeys = ranges::to<std::vector<dht::decorated_key>>(std::views::iota(size_t{0}, max_partition_set_size) | std::views::transform([schema] (int i) {
return dht::decorate_key(*schema, partition_key::from_single_value(*schema, int32_type->decompose(data_value(i))));
}));
std::ranges::sort(pkeys, dht::ring_position_less_comparator(*schema));
for (auto partition_sizes_100kb : partition_size_sets) {
testlog.debug("partition_sizes_100kb={}", partition_sizes_100kb);
partition_size_map partition_rows{dht::ring_position_less_comparator(*schema)};
std::vector<mutation> muts;
auto pk = 0;
for (auto partition_size_100kb : partition_sizes_100kb) {
auto mut_desc = tests::data_model::mutation_description(pkeys.at(pk++).key().explode(*schema));
for (auto ck = 0; ck < partition_size_100kb; ++ck) {
mut_desc.add_clustered_cell({int32_type->decompose(data_value(ck))}, "v", tests::data_model::mutation_description::value(blob_100kb));
}
muts.push_back(mut_desc.build(schema));
partition_rows.emplace(muts.back().decorated_key(), partition_size_100kb);
}
std::ranges::sort(muts, [less = dht::ring_position_less_comparator(*schema)] (const mutation& a, const mutation& b) {
return less(a.decorated_key(), b.decorated_key());
});
auto permit = sem.make_permit();
auto mt = make_lw_shared<memtable>(schema);
for (const auto& mut : muts) {
mt->apply(mut);
}
auto ms = mutation_source([mt] (
schema_ptr s,
reader_permit permit,
const dht::partition_range& pr,
const query::partition_slice& ps,
const io_priority_class& pc,
tracing::trace_state_ptr ts,
streamed_mutation::forwarding fwd_ms,
mutation_reader::forwarding fwd_mr) {
return make_restricted_flat_reader(mt->as_data_source(), s, std::move(permit), pr, ps, pc, std::move(ts), fwd_ms, fwd_mr);
});
auto [staging_reader, staging_reader_handle] = make_manually_paused_evictable_reader(
std::move(ms),
schema,
permit,
query::full_partition_range,
schema->full_slice(),
service::get_local_streaming_priority(),
nullptr,
::mutation_reader::forwarding::no);
std::vector<mutation> collected_muts;
bool failed = false;
staging_reader.consume_in_thread(db::view::view_updating_consumer(schema, as, staging_reader_handle,
consumer_verifier(schema, permit, partition_rows, collected_muts, failed)), db::no_timeout);
BOOST_REQUIRE(!failed);
BOOST_REQUIRE_EQUAL(muts.size(), collected_muts.size());
for (size_t i = 0; i < muts.size(); ++i) {
testlog.trace("compare mutation {}", i);
BOOST_REQUIRE_EQUAL(muts[i], collected_muts[i]);
}
}
}
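consumer_verifier mirrors the buffering contract it expects from db::view::view_updating_consumer: rows accumulate up to a soft limit, which flushes at a convenient boundary, and a hard limit, which flushes immediately even mid-partition. A sketch of that policy as read from the limits above (an assumption about the real consumer, which derives its limits from buffer_size_soft_limit and buffer_size_hard_limit):

// Sketch only: hypothetical policy class, not db::view::view_updating_consumer.
#include <cstddef>

class buffering_policy_sketch {
    std::size_t _soft;
    std::size_t _hard;
    std::size_t _buffered = 0;
public:
    buffering_policy_sketch(std::size_t soft, std::size_t hard) : _soft(soft), _hard(hard) {}

    // Returns true when the accumulated buffer should be flushed.
    bool add_rows(std::size_t rows, bool at_partition_boundary) {
        _buffered += rows;
        if (_buffered >= _hard) {                           // hard limit: flush now,
            _buffered = 0;                                  // even splitting a partition
            return true;
        }
        if (_buffered >= _soft && at_partition_boundary) {  // soft limit: flush at a boundary
            _buffered = 0;
            return true;
        }
        return false;
    }
};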


@@ -118,6 +118,25 @@ SEASTAR_TEST_CASE(test_access_and_schema) {
});
}
SEASTAR_TEST_CASE(test_column_dropped_from_base) {
return do_with_cql_env_thread([] (auto& e) {
e.execute_cql("create table cf (p int, c ascii, a int, v int, primary key (p, c));").get();
e.execute_cql("create materialized view vcf as select p, c, v from cf "
"where v is not null and p is not null and c is not null "
"primary key (v, p, c)").get();
e.execute_cql("alter table cf drop a;").get();
e.execute_cql("insert into cf (p, c, v) values (0, 'foo', 1);").get();
eventually([&] {
auto msg = e.execute_cql("select v from vcf").get0();
assert_that(msg).is_rows()
.with_size(1)
.with_row({
{int32_type->decompose(1)}
});
});
});
}
SEASTAR_TEST_CASE(test_updates) {
return do_with_cql_env_thread([] (auto& e) {
e.execute_cql("create table base (k int, v int, primary key (k));").get();
