When v.u.g. stops, it first aborts the generation background fiber by
requesting abort on the internal abort source and signalling the fiber
in case it's waiting. Right now v.u.g.::stop() is defer-scheduled last
in main(), so this move doesn't change much -- when stop_signal fires,
it will kick v.u.g.::do_abort() just a bit earlier, and nothing that
depends on it happens between that point and the real ::stop() call.
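As an illustration only -- standard-library stand-ins, not Seastar's actual abort_source / condition_variable API, and the class name is made up -- the abort-then-signal pattern looks roughly like:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch: do_abort() requests abort and signals the fiber in case it
// is sleeping, so it can observe the flag and exit promptly; stop()
// may call do_abort() again, which is harmless (idempotent).
class generator {
    std::mutex _mu;
    std::condition_variable _cv;
    bool _abort_requested = false;
    std::thread _fiber;
public:
    std::atomic<bool> finished{false};
    void start() {
        _fiber = std::thread([this] {
            std::unique_lock lk(_mu);
            // The background loop sleeps here until aborted.
            _cv.wait(lk, [this] { return _abort_requested; });
            finished = true;
        });
    }
    void do_abort() {
        { std::lock_guard lk(_mu); _abort_requested = true; }
        _cv.notify_all();  // wake the fiber in case it is waiting
    }
    void stop() {
        do_abort();        // may have already been kicked earlier by the signal
        if (_fiber.joinable()) _fiber.join();
    }
};
```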
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When sending a mutation to a remote endpoint,
the selected endpoints must be in sync with
the current effective_replication_map.
Currently, the endpoints are passed down the storage_proxy
stack, and later on an effective_replication_map is retrieved
again; it might not match the target or pending endpoints,
similar to the case seen in https://github.com/scylladb/scylladb/issues/15138
The correct way is to carry the same effective replication map
used to select said endpoints and pass it down the stack.
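A hypothetical sketch of the shape of the fix (these structs and names are illustrative, not Scylla's actual types): the endpoints travel together with the exact map snapshot that produced them, instead of the map being re-fetched later.

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in for the replication map; the real one is versioned and
// ref-counted so a snapshot can be pinned across the write path.
struct effective_replication_map {
    long version;
    std::vector<std::string> replicas_for(const std::string&) const {
        return {"n1", "n2", "n3"};  // placeholder selection logic
    }
};

// The endpoints and the snapshot they were selected from are bundled
// and passed down the stack as one unit, so they cannot drift apart.
struct write_plan {
    std::shared_ptr<const effective_replication_map> erm;
    std::vector<std::string> endpoints;  // selected from *erm
};

write_plan make_plan(std::shared_ptr<const effective_replication_map> erm,
                     const std::string& key) {
    auto endpoints = erm->replicas_for(key);
    return {std::move(erm), std::move(endpoints)};
}
```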
See also https://github.com/scylladb/scylladb/pull/15141
Fixes scylladb/scylladb#15144
Fixes scylladb/scylladb#14730
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #15142
We see the abort_requested_exception error from time
to time, instead of the sleep_aborted that was expected
and quietly ignored (at debug log level).
Treat abort_requested_exception the same way, since
the error is expected on shutdown; this also reduces
test flakiness, as seen for example in
https://jenkins.scylladb.com/job/scylla-master/job/scylla-ci/3033/artifact/logs-full.release.010/1691896356104_repair_additional_test.py%3A%3ATestRepairAdditional%3A%3Atest_repair_schema/node2.log
```
INFO 2023-08-13 03:12:29,151 [shard 0] compaction_manager - Asked to stop
WARN 2023-08-13 03:12:29,152 [shard 0] gossip - failure_detector_loop: Got error in the loop, live_nodes={}: seastar::sleep_aborted (Sleep is aborted)
INFO 2023-08-13 03:12:29,152 [shard 0] gossip - failure_detector_loop: Finished main loop
WARN 2023-08-13 03:12:29,152 [shard 0] cdc - Aborted update CDC description table with generation (2023/08/13 03:12:17, d74aad4b-6d30-4f22-947b-282a6e7c9892)
INFO 2023-08-13 03:12:29,152 [shard 1] compaction_manager - Asked to stop
INFO 2023-08-13 03:12:29,152 [shard 1] compaction_manager - Stopped
INFO 2023-08-13 03:12:29,153 [shard 0] init - Signal received; shutting down
INFO 2023-08-13 03:12:29,153 [shard 0] init - Shutting down view builder ops
INFO 2023-08-13 03:12:29,153 [shard 0] view - Draining view builder
INFO 2023-08-13 03:12:29,153 [shard 1] view - Draining view builder
INFO 2023-08-13 03:12:29,153 [shard 0] compaction_manager - Stopped
ERROR 2023-08-13 03:12:29,153 [shard 0] view - start failed: seastar::abort_requested_exception (abort requested)
ERROR 2023-08-13 03:12:29,153 [shard 1] view - start failed: seastar::abort_requested_exception (abort requested)
```
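A minimal sketch of the shape of the fix, with local stand-in exception types (the real ones are seastar::sleep_aborted and seastar::abort_requested_exception; the helper name is made up):

```cpp
#include <stdexcept>

// Stand-ins for the two Seastar exception types involved.
struct sleep_aborted : std::runtime_error {
    sleep_aborted() : std::runtime_error("Sleep is aborted") {}
};
struct abort_requested_exception : std::runtime_error {
    abort_requested_exception() : std::runtime_error("abort requested") {}
};

// Returns true when the error is an expected shutdown artifact that
// should only be logged at debug level, not treated as a failure.
bool is_expected_shutdown_error(std::exception_ptr ep) {
    try {
        std::rethrow_exception(ep);
    } catch (const sleep_aborted&) {
        return true;  // already quietly ignored before the fix
    } catch (const abort_requested_exception&) {
        return true;  // the fix: treat it the same way
    } catch (...) {
        return false; // anything else is still a real error
    }
}
```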
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #15029
Maps related to column families in the database are extracted
into a column_families_data class. Access to them is possible only
through methods. All methods that may preempt hold the rwlock
in the relevant mode, so that iterators can't become invalid.
Fixes: #13290
Closes #13349
* github.com:scylladb/scylladb:
replica: make tables_metadata's attributes private
replica: add methods to get a filtered copy of tables map
replica: add methods to check if given table exists
replica: add methods to get table or table id
replica: api: return table_id instead of const table_id&
replica: iterate safely over tables related maps
replica: pass tables_metadata to phased_barrier_top_10_counts
replica: add methods to safely add and remove table
replica: wrap column families related maps into tables_metadata
replica: futurize database::add_column_family and database::remove
All partitions will have a corresponding mutation object in the buffer.
These objects have non-negligible sizes, yet the consumer did not bump
the _buffer_size when a new partition was consumed. This resulted in
empty partitions not moving the _buffer_size at all, and thus they could
accumulate without bound in the buffer, never triggering a flush just
by themselves. We have recently seen this causing OOM.
This patch fixes that by bumping the _buffer_size by the size of the
freshly created mutation object.
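A toy model of the accounting fix (all names and sizes are illustrative): every consumed partition creates a mutation object, so its base size must be charged to _buffer_size, or empty partitions never trigger a flush.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a mutation: even an empty one has a fixed base cost.
struct mutation_stub {
    size_t external_memory_usage() const { return 128; }
};

class consumer {
    size_t _buffer_size = 0;
    size_t _buffer_size_soft_limit;
    std::vector<mutation_stub> _buffer;
public:
    explicit consumer(size_t limit) : _buffer_size_soft_limit(limit) {}
    void consume_new_partition() {
        _buffer.emplace_back();
        // The fix: charge the freshly created mutation object itself,
        // not only the fragments later appended to it.
        _buffer_size += _buffer.back().external_memory_usage();
    }
    bool needs_flush() const { return _buffer_size >= _buffer_size_soft_limit; }
    size_t buffer_size() const { return _buffer_size; }
};
```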
As a preparation for ensuring access safety for column families
related maps, add tables_metadata, access to members of which
would be protected by rwlock.
By making it independent of the number of units the view update
generator's registration semaphore is created with. We want to increase
this number significantly, and that would destabilize this test.
To prevent this, detach the test from the number of units
completely, while still preserving the original intent behind it, as best
as it could be determined.
Closes #14727
This reverts commit d3034e0fab.
The test modified by this commit
(view_build_test.test_view_update_generator_register_semaphore_unit_leak)
often fails, breaking build jobs.
When repair writes a sstable to disk, we check if the sstable needs view
update processing. If yes, the sstable will be placed into the staging
dir for processing, with the _registration_sem semaphore to prevent too
many pending unprocessed sstables.
We have seen multiple cases in the field where view update processing is
inefficient and way too slow, which blocks the base table repair from
finishing on time.
This patch increases the registration_queue_size to a bigger number to
mitigate the problem of slow view update processing blocking repair.
It is better to have a consistent base table + inconsistent view table
than an inconsistent base table + inconsistent view table.
Currently, sstables in the staging dir are not compacted, so we cannot
increase _registration_sem by too large a number, to avoid accumulating
too many sstables.
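A toy model of the registration semaphore (names and the blocking behavior are simplified; the real semaphore makes repair wait rather than fail): each staging sstable takes one unit before it is queued for view-update processing, so the unit count bounds how many unprocessed sstables can pile up.

```cpp
#include <cstddef>

class registration_sem {
    size_t _units;
public:
    explicit registration_sem(size_t units) : _units(units) {}
    // In the real code a full semaphore makes the repair writer wait;
    // here we just report whether it would have to.
    bool try_register_staging_sstable() {
        if (_units == 0) return false;  // too many pending sstables
        --_units;
        return true;
    }
    void processed() { ++_units; }      // view update done, unit returned
    size_t available() const { return _units; }
};
```

Raising the unit count (the patch) widens this window so slow view update processing no longer stalls repair as quickly.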
The view_build_test.cc is updated to make the test pass.
Closes #14241
View update routines accept `mutation` objects.
But what comes out of staging sstable readers is a stream of
mutation_fragment_v2 objects.
To build view updates after a repair/streaming, we have to
convert the fragment stream into `mutation`s. This is done by piping
the stream to mutation_rebuilder_v2.
To keep memory usage limited, the stream for a single partition might
have to be split into multiple partial `mutation` objects.
view_update_consumer does that, but in an improper way -- when the
split/flush happens inside an active range tombstone, the range
tombstone isn't closed properly. This is illegal, and triggers an
internal error.
This patch fixes the problem by closing the active range tombstone
(and reopening in the same position in the next `mutation` object).
The tombstone is closed just after the last seen clustered position.
This is not necessary for correctness -- for example we could delay
all processing of the range tombstone until we see its end
bound -- but it seems like the most natural semantic.
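A toy model of the split (positions as ints, a tombstone as a half-open range; none of these names exist in the codebase): when the flush lands inside an active tombstone, the current chunk closes it at the flush position and the next chunk reopens it at the same position.

```cpp
#include <vector>

// A range tombstone covering [start, end).
struct tomb { int start; int end; };

// One partial "mutation" produced by a flush.
struct chunk { std::vector<tomb> tombs; };

std::vector<chunk> split_stream(tomb rt, int flush_at) {
    std::vector<chunk> out;
    if (flush_at > rt.start && flush_at < rt.end) {
        out.push_back({{ {rt.start, flush_at} }});  // close at the last seen position
        out.push_back({{ {flush_at, rt.end} }});    // reopen in the same position
    } else {
        out.push_back({{ rt }});                    // flush outside: no split needed
    }
    return out;
}
```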
Fixes #14503
Very helpful for users to understand how fast view update generation
is processing the staging sstables. Today, logs are completely
silent on that. It's not uncommon for operators to peek into the
staging dir and deduce the throughput from the removal of files,
which is terrible.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Adding a function declaration to expression.hh causes many
recompilations. Reduce that by:
- moving some restrictions-related definitions to
the existing expr/restrictions.hh
- moving evaluation related names to a new header
expr/evaluate.hh
- moving utilities to a new header
expr/expr-utilities.hh
Now expression.hh contains only expression definitions and the most
basic and common helpers, like printing.
Spans are slightly cleaner and slightly faster (as they avoid an
indirection), and allow replacing some of the arguments with small_vector.
Closes #14313
There are some headers that include tracing/*.hh even though all they
need is a forward-declared trace_state_ptr.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #14155
At that level no io_priority_class-es exist. Instead, all the IO happens
in the context of the current sched-group. The file API no longer accepts
a prio class argument (and makes the io_intent arg mandatory for impls).
So the change consists of
- removing all usage of io_priority_class
- patching file_impl's inheritants to updated API
- priority manager goes away altogether
- IO bandwidth update is performed on respective sched group
- tune-up scylla-gdb.py io_queues command
The first change is huge and was made semi-automatically by:
- grep io_priority_class | default_priority_class
- remove all calls, found methods' args and class' fields
Patching file_impl-s is smaller, but also mechanical:
- replace the io_priority_class& argument with an io_intent* one
- pass the intent to the lower file (if applicable)
Dropping the priority manager is:
- git-rm .cc and .hh
- sed out all the #include-s
- fix configure.py and cmakefile
The scylla-gdb.py update is a bit hairy -- it needs to use the task
queues list for IO classes' names and shares, but to detect whether it
should, it checks whether the "commitlog" group is present.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #13963
The `view_update_write_response_handler` class, which is a subclass of
`abstract_write_response_handler`, was created for a single purpose:
to make it possible to cancel a handler for a view update write,
which means we stop waiting for a response to the write, timing out
the handler immediately. This was done to solve an issue with node
shutdown hanging because it was waiting for a view update to finish;
view updates were configured with a 5 minute timeout. See #3966, #4028.
Now we're having a similar problem with hint updates causing shutdown
to hang in tests (#8079).
`view_update_write_response_handler` implements cancelling by adding
itself to an intrusive list, which we then iterate over to time out each
handler when we shut down or when the gossiper notifies `storage_proxy`
that a node is down.
To make it possible to reuse this algorithm for other handlers, move
the functionality into `abstract_write_response_handler`. We inherit
from `bi::list_base_hook`, so this introduces a small memory overhead
(2 pointers) to each write handler, which was previously only present
for view update handlers. But those handlers are already quite large;
the overhead is small compared to their size.
Use this new functionality to also cancel hint write handlers when we
shut down. This fixes #8079.
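A sketch of the pattern with std::list standing in for the intrusive list (the real code uses bi::list_base_hook, so handlers carry their own hooks; these class names are illustrative):

```cpp
#include <list>

// Every cancellable handler registers itself in one list; on shutdown
// (or when a node is marked dead) we walk the list and time each
// handler out immediately instead of waiting for its response.
struct write_handler {
    bool timed_out = false;
    void timeout_cb() { timed_out = true; }  // stop waiting for the response
};

class proxy {
    std::list<write_handler*> _cancellable;
public:
    void register_cancellable(write_handler& h) { _cancellable.push_back(&h); }
    void cancel_all() {
        while (!_cancellable.empty()) {
            auto* h = _cancellable.front();
            _cancellable.pop_front();  // unlink before firing, as the callback
            h->timeout_cb();           // may destroy the handler
        }
    }
};
```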
Closes #14047
* github.com:scylladb/scylladb:
test: reproducer for hints manager shutdown hang
test: pylib: ScyllaCluster: generalize config type for `server_add`
test: pylib: scylla_cluster: add explicit timeout for graceful server stop
service: storage_proxy: make hint write handlers cancellable
service: storage_proxy: rename `view_update_handlers_list`
service: storage_proxy: make it possible to cancel all write handler types
Whether a write handler should be cancellable is now controlled by a
parameter passed to `create_write_response_handler`. We plumb it down
from `send_to_endpoint` which is called by hints manager.
This will cause hint write handlers to immediately timeout when we
shutdown or when a destination node is marked as dead.
Fixes #8079
Some assorted cleanups here: consolidation of schema agreement waiting
into a single place and removing unused code from the gossiper.
CI: https://jenkins.scylladb.com/job/scylla-master/job/scylla-ci/1458/
Reviewed-by: Konstantin Osipov <kostja@scylladb.com>
* gleb/gossiper-cleanups of github.com:scylladb/scylla-dev:
storage_service: avoid unneeded copies in on_change
storage_service: remove check that is always true
storage_service: rename handle_state_removing to handle_state_removed
storage_service: avoid string copy
storage_service: delete code that handled REMOVING_TOKENS state
gossiper: remove code related to advertising REMOVING_TOKEN state
migration_manager: add wait_for_schema_agreement() function
After a schema change, memtable and cache have to be upgraded to the new schema. Currently, they are upgraded (on the first access after a schema change) atomically, i.e. all rows of the entry are upgraded in one non-preemptible call. This is one of the last vestiges of the times when partitions were treated atomically, and it is a well-known source of numerous large stalls.
This series makes schema upgrades gentle (preemptible). This is done by co-opting the existing MVCC machinery.
Before the series, all partition_versions in the partition_entry chain have the same schema, and an entry upgrade replaces the entire chain with a single squashed and upgraded version.
After the series, each partition_version has its own schema. A partition entry upgrade happens simply by adding an empty version with the new schema to the head of the chain. Row entries are upgraded to the current schema on-the-fly by the cursor during reads, and by the MVCC version merge ongoing in the background after the upgrade.
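A toy model of the new scheme (vastly simplified; the real chain is an intrusive linked list under MVCC, and these members are illustrative): each version carries its own schema, and an upgrade just pushes an empty version with the new schema to the head instead of squashing the chain.

```cpp
#include <memory>
#include <string>
#include <vector>

struct schema_t { int version; };

// Each version in the chain has its own schema after the series.
struct partition_version {
    std::shared_ptr<schema_t> schema;
    std::vector<std::string> rows;  // stand-in for row entries
};

struct partition_entry {
    std::vector<partition_version> chain;  // head is chain.front()
    // Cheap and preemption-friendly: no squashing of the whole chain.
    // Old rows are upgraded lazily by cursors and background merge.
    void upgrade(std::shared_ptr<schema_t> new_schema) {
        chain.insert(chain.begin(), {std::move(new_schema), {}});
    }
    int head_schema_version() const { return chain.front().schema->version; }
    size_t versions() const { return chain.size(); }
};
```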
The series:
1. Does some code cleanup in the mutation_partition area.
2. Adds a schema field to partition_version and removes it from its containers (partition_snapshot, cache_entry, memtable_entry).
3. Adds upgrading variants of constructors and apply() for `row` and its wrappers.
4. Prepares partition_snapshot_row_cursor, mutation_partition_v2::apply_monotonically and partition_snapshot::merge_partition_versions for dealing with heterogeneous version chains.
5. Modifies partition_entry::upgrade to perform upgrades by extending the version chain with a new schema instead of squashing it to a single upgraded version.
Fixes #2577
Closes #13761
* github.com:scylladb/scylladb:
test: mvcc_test: add a test for gentle schema upgrades
partition_version: make partition_entry::upgrade() gentle
partition_version: handle multi-schema snapshots in merge_partition_versions
mutation_partition_v2: handle schema upgrades in apply_monotonically()
partition_version: remove the unused "from" argument in partition_entry::upgrade()
row_cache_test: prepare test_eviction_after_schema_change for gentle schema upgrades
partition_version: handle multi-schema entries in partition_entry::squashed
partition_snapshot_row_cursor: handle multi-schema snapshots
partiton_version: prepare partition_snapshot::squashed() for multi-schema snapshots
partition_version: prepare partition_snapshot::static_row() for multi-schema snapshots
partition_version: add a logalloc::region argument to partition_entry::upgrade()
memtable: propagate the region to memtable_entry::upgrade_schema()
mutation_partition: add an upgrading variant of lazy_row::apply()
mutation_partition: add an upgrading variant of rows_entry::rows_entry
mutation_partition: switch an apply() call to apply_monotonically()
mutation_partition: add an upgrading variant of rows_entry::apply_monotonically()
mutation_fragment: add an upgrading variant of clustering_row::apply()
mutation_partition: add an upgrading variant of row::row
partition_version: remove _schema from partition_entry::operator<<
partition_version: remove the schema argument from partition_entry::read()
memtable: remove _schema from memtable_entry
row_cache: remove _schema from cache_entry
partition_version: remove the _schema field from partition_snapshot
partition_version: add a _schema field to partition_version
mutation_partition: change schema_ptr to schema& in mutation_partition::difference
mutation_partition: change schema_ptr to schema& in mutation_partition constructor
mutation_partition_v2: change schema_ptr to schema& in mutation_partition_v2 constructor
mutation_partition: add upgrading variants of row::apply()
partition_version: update the comment to apply_to_incomplete()
mutation_partition_v2: clean up variants of apply()
mutation_partition: remove apply_weak()
mutation_partition_v2: remove a misleading comment in apply_monotonically()
row_cache_test: add schema changes to test_concurrent_reads_and_eviction
mutation_partition: fix mixed-schema apply()
We already use the new pending_endpoints from the erm through
the get_pending_ranges virtual function; in this commit
we update all the remaining places to use the new
implementation in the erm, and remove the old implementation
in token_metadata.
The only reason why it's there (right next to compaction_fwd.hh) is
that the database::table_truncate_state subclass needs the definition
of the compaction_manager::compaction_reenabler subclass.
However, the former subclass is not used outside of database.cc and can
be defined in the .cc file. Keeping it out of the header allows dropping
compaction_manager.hh from database.hh, greatly reducing its fanout
over the code (from ~180 indirect inclusions down to ~20).
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #13622
The method needs the proxy to get data_dictionary::database from, to pass it down to select_statement::prepare(), plus a legacy bit that can come with data_dictionary::database as well. Fortunately, all the call traces that end up at select_statement() start inside table:: methods that have the view_update_generator, or at view_builder::consumer, which has a reference to the view_builder. Both services can share the database reference. However, the call traces in question pass through several code layers, so the PR adds data_dictionary::database to those layers one by one.
Closes #13591
* github.com:scylladb/scylladb:
view_info: Drop calls to get_local_storage_proxy()
view_info: Add data_dictionary argument to select_statement()
view_info: Add data_dictionary argument to partition_slice() method
view_filter_checking_visitor: Construct with data_dictionary
view: Carry data_dictionary arg through standalone helpers
view_updates: Carry data_dictionary argument throug methods
view_update_builder: Construct with data dictionary
table: Push view_update_generator arg to affected_views()
view: Add database getters to v._update_generator and v._builder
The view_builder::view_build_statuses() needs the topology to walk its
nodes. Now it gets one from the global proxy via its token metadata, but
the database also has tokens, and the view_builder has a reference to
the database.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
In both cases the proxy is called to get the data_dictionary from. Now
it's available as a call argument.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This method needs the data_dictionary to work. Fortunately, all its
callers already have the dictionary at hand and can just pass it as an
argument.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The caller is calculate_affected_clustering_ranges(), which has the
dictionary argument; this method needs the dictionary to call
view_info::select_statement() later.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The visitor is a wait-free helper for matches_view_filter(), which has
the dictionary as its argument. Later the visitor will pass the
dictionary to view_info::select_statement().
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
There's a bunch of functions in view.{hh|cc} that don't belong to any
class and perform view-related calculations for view updates. Lots of
them eventually call view_info::select_statement(), which will later need
the dictionary.
By now all those functions' callers have the data dictionary at hand and
can share it via an argument.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The goal is to have the dictionary at the places that later wrap calls
to view_info::select_statement(). This graph of calls starts at the only
public view_updates::generate_update() method, which, in turn, is called
from view_update_builder, which already has the data dictionary at hand.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The caller is a table with the view-update-generator at hand (which it
calls mutate_MV on). The builder here is used as a temporary object that
is destroyed once the caller coroutine co_return-s, so keeping the
database obtained from the view-update-generator is safe.
Later the v.u.b. object will propagate its data dictionary down the
callstacks.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Both services carry the database, which will be used by auxiliary
objects like view_updates, view_update_builder, consumer, etc. in the
next patches.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When the view builder constructs, it populates itself with view updates.
Later the updates may instantiate the value_getter-s which, in turn,
would need to check if the view is backing a secondary index.
The good news is that when the view builder constructs, it has at hand
all the information needed to evaluate this "backing" bit. It's then
propagated down to the value_getter via the corresponding view_updates.
The getter's _view field becomes unused after this change and is
(void)-ed to make this patch compile.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The getter needs to check if the view is backing a secondary index.
Currently it's done inside the handle_computed_column() method, but it's
more convenient if this bit is known during construction, so move it
there. There are no places that can change this property between the
time the view_getter is created and the time the method in question is
called.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Don't maintain a "shadow" endpoint_to_host_id_map in token_metadata_impl.
Instead, get the nodes_by_endpoint map from topology
and use it to build the endpoint_to_host_id_map.
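A minimal sketch of the idea (type aliases and field names are made up): derive the endpoint-to-host-id mapping from topology's nodes_by_endpoint map instead of maintaining a shadow copy that can drift.

```cpp
#include <map>
#include <string>

using endpoint = std::string;
using host_id = int;
struct node { host_id id; };

// Built on demand from the authoritative topology map, so there is
// no second copy to keep in sync.
std::map<endpoint, host_id>
build_endpoint_to_host_id(const std::map<endpoint, node>& nodes_by_endpoint) {
    std::map<endpoint, host_id> out;
    for (const auto& [ep, n] : nodes_by_endpoint) {
        out.emplace(ep, n.id);
    }
    return out;
}
```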
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
This is not really an error, so print it at debug log level
rather than error log level.
Fixes #13374
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #13462
That's courtesy of 153813d3b8, which annotates Seastar smart pointer classes with Clang's consumed attributes, to help Clang statically spot use-after-move bugs.
Closes #13386
* github.com:scylladb/scylladb:
replica: Fix use-after-move in table::make_streaming_reader
index/built_indexes_virtual_reader.hh: Fix use-after-move
db/view/build_progress_virtual_reader: Fix use-after-move
sstables: Fix use-after-move when making reader in reverse mode
Use-after-free in the ctor, which potentially leads to a failure
when locating the table from a moved schema object.
Static report:
In file included from db/system_keyspace.cc:51:
./db/view/build_progress_virtual_reader.hh:202:40: warning: invalid invocation of method 'operator->' on object 's' while it is in the 'consumed' state [-Wconsumed]
_db.find_column_family(s->ks_name(), system_keyspace::v3::SCYLLA_VIEWS_BUILDS_IN_PROGRESS),
Fixes #13395.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Now mutate_MV is a method of the v.u.generator, which has a reference to
the sharded<storage_proxy>. A few static helper wrappers are patched to
get the needed proxy or database reference from the mutate_MV call.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>