The table::get_hit_rate() method needs a gossiper to get the hit-rate
state from. There's no way to carry a gossiper reference on the table
itself, so it's up to the callers of that method to provide it.
Fortunately, there's only one caller -- the proxy -- but the call chain
to carry the reference is not very short ... oh, well.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Said method has to evict all querier cache entries belonging to the to-be-dropped table. This was already the case, but there was a window where new entries could sneak in, causing a stale reference to the table to be dereferenced later, when they are evicted due to TTL. This window is now closed: the entries are evicted only after the method has waited for all ongoing operations on said table to stop.
Fixes: #10450
Closes #10451
* github.com:scylladb/scylla:
replica/database: drop_column_family(): drop querier cache entries after waiting for ops
replica/database: finish coroutinizing drop_column_family()
replica/database: make remove(const column_family&) private
The underlying mutation representation is still v1, so the
implementation still has to do conversion. This happens right above the
lsa reader component.
Reads (a kind of operation) running concurrently with `drop_column_family()`
can create querier cache entries while we wait for them to finish in
`await_pending_ops()`. Move the cache entry eviction to after this
point, to ensure such entries are also cleaned up before destroying the
table object.
This moves the `_querier_cache.evict_all_for_table()` from
`database::remove()` to `database::drop_column_family()`. With that the
former doesn't have to return `future<>` anymore. While at it (changing
the signature) also rename `column_family` -> `table`.
Also add a regression unit test.
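The ordering bug can be sketched in plain, single-threaded C++ (names here are simplified stand-ins, not Scylla's actual `replica::database` API): a pending read may insert a querier cache entry while we wait for it, so the eviction has to come after the wait.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative sketch of the fix: evicting querier-cache entries *before*
// waiting for pending reads leaves a window where a finishing read
// re-inserts an entry that then outlives the dropped table.
struct querier_cache {
    std::unordered_map<std::string, int> entries; // table name -> entry count
    void insert(const std::string& table) { ++entries[table]; }
    void evict_all_for_table(const std::string& table) { entries.erase(table); }
};

struct database {
    querier_cache cache;
    std::vector<std::function<void()>> pending_ops; // reads still in flight

    void await_pending_ops() {
        for (auto& op : pending_ops) {
            op(); // a concurrent read may insert a cache entry here
        }
        pending_ops.clear();
    }

    // Fixed ordering: wait for ongoing operations first, evict after,
    // so no entry created by a late read can survive the drop.
    void drop_column_family(const std::string& table) {
        await_pending_ops();
        cache.evict_all_for_table(table);
    }
};
```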
Said method was already coroutinized, but only halfway, possibly because
of the difficulty of expressing `finally()` with coroutines. We now have
`coroutine::as_future()`, which makes this easier, so finish the job.
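The pattern `coroutine::as_future()` enables can be emulated in plain C++ (a hedged sketch; `run_with_finally` and its shape are illustrative, not Seastar's API): capture the step's failure as a value instead of letting it propagate, run the cleanup, then rethrow.

```cpp
#include <exception>
#include <stdexcept>

// Illustrative sketch of the finally() pattern: the step's exception is
// captured (analogous to: auto f = co_await coroutine::as_future(step());),
// the cleanup always runs, and only then is the failure re-raised
// (analogous to: co_return co_await std::move(f);).
template <typename Fn, typename Cleanup>
void run_with_finally(Fn step, Cleanup cleanup) {
    std::exception_ptr ex;
    try {
        step();
    } catch (...) {
        ex = std::current_exception(); // capture instead of propagating
    }
    cleanup();                         // always runs, like finally()
    if (ex) {
        std::rethrow_exception(ex);    // surface the original failure
    }
}
```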
These bring in wasm.hh (though they really shouldn't) and make
everyone suffer. Forward declare instead and add missing includes
where needed.
Closes #10444
Reduce #include load by standardizing on std::any.
In keys.cc, we just drop the unneeded include.
One instance of boost::any remains in config_file, due to a tie-in with
other boost components.
Closes #10441
We don't need the database to determine the shard of a mutation,
only its schema. So move the implementation to the respective
definitions of mutation and frozen_mutation.
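The point can be illustrated with a minimal sketch (the modulo mapping and the names below are simplifications, not Scylla's real sharding algorithm): the owning shard is a pure function of the schema's sharding parameters and the mutation's token, so no database reference is needed.

```cpp
#include <cstdint>

// Illustrative only: the schema carries the sharding parameters, so the
// shard of a mutation can be computed from the schema alone.
struct schema {
    unsigned shard_count;
    unsigned shard_of(uint64_t token) const {
        return static_cast<unsigned>(token % shard_count);
    }
};

struct mutation {
    uint64_t token;
    // No database needed: the schema alone determines the owning shard.
    unsigned shard_of(const schema& s) const {
        return s.shard_of(token);
    }
};
```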
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #10430
Currently an exception is thrown in the apply stage
when the schema is not synced, but that is too late:
returning an error doesn't pinpoint which code path
was using an unsynced schema. So move the check
earlier, before _apply_stage is called.
We need to make sure the schema is synced earlier,
when the mutation is applied, so call on_internal_error
to generate a backtrace in testing while still throwing
an error in production.
Typically storage_proxy::mutate_locally implicitly
ensures the schema is synced by making a global_schema_ptr
for it.
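The on_internal_error() semantics relied on here can be sketched in plain C++ (an illustrative stand-in, not Seastar's implementation): abort with a backtrace when the testing flag is set, otherwise throw, so production still reports an error instead of crashing.

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Illustrative sketch of on_internal_error(): in testing the flag is set
// and we abort, leaving a backtrace that pinpoints the offending caller;
// in production we throw instead.
inline bool abort_on_internal_error = false;

inline void on_internal_error(const std::string& msg) {
    if (abort_on_internal_error) {
        std::abort();
    }
    throw std::runtime_error(msg);
}

// The check runs before entering the apply stage, so the backtrace points
// at the code path that passed an unsynced schema.
inline void apply_mutation(bool schema_is_synced) {
    if (!schema_is_synced) {
        on_internal_error("attempted to apply a mutation with an unsynced schema");
    }
    // ... enter _apply_stage ...
}
```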
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20220424110057.3957597-1-bhalevy@scylladb.com>
And adjust callers. The factory functions just sprinkle upgrade_to_v2()
on returned readers for now.
One test in row_cache_test.cc had to be disabled, because the upgrade to
v2 wrapper we now have over cache readers doesn't allow it to directly
control the reader's buffer size and so the test fails. There is a FIXME
left in the test code and the test will be re-enabled once a native v2
reader implementation allows us to get rid of the upgrade wrapper.
"
The database::shutdown() and ::drain() methods are called inside the
invoke_on_all()s synchronizing with each other via the cross-shard
_stop_barrier.
If either shard throws in between, all the others may get stuck waiting
for the barrier to collect all arrivals. To fix it, the throwing shard
should wake the others up, resolving the wait somehow.
The actual fix is patch #4; the first and the second add the abort()
method to the barrier itself.
Fixes: #10304
tests: unit(dev), manual
"
* 'br-barrier-exception-2' of https://github.com/xemul/scylla:
database: Abort barriers on exception
database: Coroutinize close_tables
test: Add test for cross_shard_barrier::abort()
cross-shard-barrier: Add .abort() method
The database::shutdown() and ::drain() methods are called inside the
container().invoke_on_all() and synchronize with each other via the
cross-shard _stop_barrier. If either shard throws in between, all the
others may get stuck waiting for the barrier to collect all arrivals.
The fix is to abort the barrier on exception, thus making all the
shards sitting in shutdown or drain bail out with exceptions too.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The STORAGE option is designed to hold a map of options
used for customizing storage for a given keyspace.
The option is kept in the system_schema.scylla_keyspaces table.
The option is only available once the whole cluster is aware
of it -- guarded by a cluster feature.
Example of the table contents:
```
cassandra@cqlsh> select * from system_schema.scylla_keyspaces;
keyspace_name | storage_options | storage_type
---------------+------------------------------------------------+--------------
ksx | {'bucket': '/tmp/xx', 'endpoint': 'localhost'} | S3
```
There's a public call on replica::table to get back the compaction
manager reference. It's not actually needed. The users of the call are
the distributed loader, which already has the database at hand, and a
test that creates its own instance of compaction manager for its testing
tables and thus also has it available.
tests: unit(dev)
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20220406171351.3050-1-xemul@scylladb.com>
"
First migrate all users to the v2 variant, all of which are tests.
However, to be able to properly migrate all tests off it, a v2 variant
of the restricted reader is also needed. All restricted reader users are
then migrated to the freshly introduced v2 variant and the v1 variant is
removed.
Users include:
* replica::table::make_reader_v2()
* streaming_virtual_table::as_mutation_source()
* sstables::make_reader()
* tests
This allows us to get rid of a bunch of conversions on the query path,
which was mostly v2 already.
With a few tests we did kick the can down the road by wrapping the v2
reader in `downgrade_to_v1()`, but this series is long enough already.
Tests: unit(dev), unit(boost/flat_mutation_reader_test:debug)
"
* 'remove-reader-from-mutations-v1/v3' of https://github.com/denesb/scylla:
readers: remove now unused v1 reader from mutations
test: move away from v1 reader from mutations
test/boost/mutation_reader_test: use fragment_scatterer
test/boost/mutation_fragment_test: extract fragment_scatterer into a separate hh
test/boost: mutation_fragment_test: refactor fragment_scatterer
readers: remove now unused v1 reversing reader
test/boost/flat_mutation_reader_test: convert to v2
frozen_mutation: fragment_and_freeze(): convert to v2
frozen_mutation: coroutinize fragment_and_freeze()
readers: migrate away from v1 reversing reader
db/virtual_table: use v2 variant of reversing and forwardable readers
replica/table: use v2 variant of reversing reader
sstables/sstable: remove unused make_crawling_reader_v1()
sstables/sstable: remove make_reader_v1()
readers: add v2 variant of reversing reader
readers/reversing: remove FIXME
readers: reader from mutations: use mutation's own schema when slicing
In most files it was unused. Ideally these removals would go into the
patch which moved the last interesting reader out of mutation_reader.hh
(and added the corresponding new header include), but it's probably not
worth the effort.
Some other files still relied on mutation_reader.hh to provide reader
concurrency semaphore and some other misc reader related definitions.
"
There's a static global sharded<local_cache> variable in system keyspace
that keeps several bits on board that other subsystems need to get from
the system keyspace, but want to have in a future<>-less manner.
Some time ago the system_keyspace became a classical sharded<> service
that references the qctx and the local cache. This set removes the global
cache variable and makes its instances unique_ptr's sitting on the
system keyspace instances.
The biggest obstacle on this route is the local_host_id that was cached,
but at some point was copied onto db::config to simplify getting the value
from the sstables manager (there's no system keyspace at hand there at all).
So the first thing this set does is remove the cached host_id and make
all the users get it from db::config.
(There's a BUG with the config copy of the host id -- replace node doesn't
update it. This set also fixes that place.)
De-globalizing the cache is a prerequisite for untangling the snitch-
-messaging-gossiper-system_keyspace knot. Currently the cache is initialized
too late -- when main calls system_keyspace.start() on all shards -- but
before that time messaging should already have access to it to store
its preferred IP mappings.
tests: unit(dev), dtest.simple_boot_shutdown(dev)
"
* 'br-trade-local-hostid-for-global-cache' of https://github.com/xemul/scylla:
system_keyspace: Make set_local_host_id non-static
system_keyspace: Make load_local_host_id non-static
system_keyspace: Remove global cache instance
system_keyspace: Make it peering service
system_keyspace,snitch: Make load_dc_rack_info non-static
system_keyspace,cdc,storage_service: Make bootstrap manipulations non-static
system_keyspace: Coroutinize set_bootstrap_state
gossiper: Add system keyspace dependency
cdc_generation_service: Add system keyspace dependency
system_keyspace: Remove local host id from local cache
storage_service: Update config.host_id on replace
storage_service: Indentation fix after previous patch
storage_service: Coroutinize prepare_replacement_info()
system_distributed_keyspace: Indentation fix after previous patch
code,system_keyspace: Relax system_keyspace::load_local_host_id() usage
code,system_keyspace: Remove system_keyspace::get_local_host_id()
"
By way of having an implementation of `data_dictionary` and using that.
The schema loader only needs a database to parse cql3 statements, which
are all coordinator-side objects and hence have largely been migrated to
use the data dictionary instead.
A few hard-dependencies on replica:: objects were found and resolved:
* index::secondary_index_manager
* tombstone_gc
The former was migrated to use `data_dictionary::table` instead of
`replica::table`. This in turn requires disentangling
`replica::data_dictionary_impl` from `replica::database`, as currently
the former can only really be used by the latter.
What all of this achieves is that we no longer have to instantiate a
`replica::database` object in `tools::load_schema()`. We want to use the
standard allocator in tools, which means they cannot use LSA memory at
all. Database, on the other hand, creates memtable and row-cache
instances, so it had to go.
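The approach can be sketched in a few lines of plain C++ (names simplified and illustrative, not Scylla's actual `data_dictionary` interfaces): consumers program against an abstract interface, so a tool can plug in a lightweight implementation instead of a full `replica::database`.

```cpp
#include <string>
#include <vector>

// Illustrative sketch: coordinator-side code depends only on an abstract
// dictionary interface, so a tool can supply an implementation backed by
// plain containers -- no LSA memory, memtables, or row caches.
struct table_info {
    std::string name;
};

class data_dictionary {
public:
    virtual ~data_dictionary() = default;
    virtual std::vector<table_info> tables() const = 0;
};

// Tool-side implementation: just a vector, no database machinery.
class tool_dictionary : public data_dictionary {
    std::vector<table_info> _tables;
public:
    void add_table(std::string name) { _tables.push_back({std::move(name)}); }
    std::vector<table_info> tables() const override { return _tables; }
};

// A consumer in the style of the schema loader: only sees the interface.
inline bool has_table(const data_dictionary& dd, const std::string& name) {
    for (const auto& t : dd.tables()) {
        if (t.name == name) {
            return true;
        }
    }
    return false;
}
```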
Refs: #9882
Tests: unit(dev, schema_loader_test:debug,
cql-pytest/test_tools.py:debug)
"
* 'tools-schema-loader-database-impl/v2' of https://github.com/denesb/scylla:
tools/schema_loader: use own data dictionary impl
tombstone_gc: switch to using data dictionary
index/secondary_index_manager: switch to using data dictionary
replica/table: add as_data_dictionary()
replica: disentangle data_dictionary_impl from database
replica: move data_dictionary_impl into own header
The host id is cached on the db::config object, which is available in
all the places that need it. This allows removing the method in
question from system_keyspace, and means that anyone who needs the
host_id no longer has to depend on a system_keyspace instance.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Make it a standalone class, instead of private subclass of database.
Unfriend database and instead make wrap/unwrap methods public, so anyone
can use them.
"
The only real user is view building, which is converted to v2, and then
the v1 version of the mutation-from-fragments reader is removed.
Tests: unit(dev, release)
"
* 'v2-only-from-fragments-mutations/v1' of https://github.com/denesb/scylla:
readers: remove now unused v1 reader from fragments
test/boost: flat_mutation_reader_test: remove reader from fragments test
replica/table: migrate generate_and_propagate_view_updates() to v2
replica/table: migrate populate_views() to v2
db/view: convert view_update_builder interface to v2
db/view: migrate view_update_builder to v2
Flushing the base table triggers view building
and corresponding compactions on the view tables.
Temporarily disable compaction on both the base
table and all its views before flush and snapshot,
since those flushed sstables are about to be truncated
anyway, right after the snapshot is taken.
This should make truncate go faster.
In the process, this series also embeds `database::truncate_views`
into `truncate` and coroutinizes both.
Refs #6309
Test: unit(dev)
Closes #10203
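The disable-during-flush flow can be sketched in plain C++ (a hedged sketch; `run_with_compaction_disabled` here mirrors the lambda named in the shortlog but its shape is illustrative, not Scylla's API): compaction is switched off on the base table and its views for the duration of the flush + snapshot, then re-enabled even on failure.

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch: disable compaction on a set of tables, run the
// flush + snapshot step, then re-enable -- including on the error path.
struct table {
    std::string name;
    bool compaction_enabled = true;
};

inline void run_with_compaction_disabled(std::vector<table*> tables,
                                         const std::function<void()>& fn) {
    for (auto* t : tables) {
        t->compaction_enabled = false;
    }
    try {
        fn(); // flush + snapshot happen here
    } catch (...) {
        for (auto* t : tables) {
            t->compaction_enabled = true;
        }
        throw;
    }
    for (auto* t : tables) {
        t->compaction_enabled = true;
    }
}
```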
* github.com:scylladb/scylla:
replica/database: truncate: fixup indentation
replica/database: truncate: temporarily disable compaction on table and views before flush
replica/database: truncate: coroutinize per-view logic
replica/database: open-code truncate_view in truncate
replica/database: truncate: coroutinize run_with_compaction_disabled lambda
replica/database: coroutinize truncate
compaction_manager: add disable_compaction method
"
Making the system-keyspace into a standard sharded instance will
help to fix several dependency knots.
First, the global qctx and local-cache will both be moved onto the
sys-ks, and all their users will be patched to depend on the
system-keyspace. That's not quite the case yet, but we're moving
towards this state.
Second, the snitch instance now sits in the middle of another dependency
loop. To untie it, the preferred IP and dc/rack info should be
moved onto the system keyspace altogether (currently they're scattered
over several places). The sys-ks thus needs to be a sharded service
with some state.
This set makes the system-keyspace a sharded instance, equips it with
all the dependencies it needs, and passes it as a dependency into the
storage service, migration manager and API. This helps eliminate a good
portion of the global qctx/cache usage and prepares the ground for the
snitch rework.
tests: unit(dev)
v1: unit(debug), dtest.simple_boot_shutdown(dev)
"
* 'br-sharded-system-keyspace-instance-2' of https://github.com/xemul/scylla: (25 commits)
system_keyspace: Make load_host_ids non-static
system_keyspace: Make load_tokens non-static
system_keyspace: Make remove_endpoint and update_tokens non-static
system_keyspace: Coroutinize update_tokens
system_keyspace: Coroutinize remove_endpoint
system_keyspace: Make update_cached_values non-static
system_keyspace: Coroutinize update_peer_info
system_keyspace: Make update_schema_version non-static
schema_tables: Add sharded<system_keyspace> argument to update_schema_version_and_announce
replica: Push sharded<system_keyspace> down to parse_system_tables
api: Carry sharded<system_keyspace> reference along
storage_service: Keep sharded<system_keyspace> reference
migration_manager: Keep sharded<system_keyspace> reference
system_keyspace: Remove temporary qp variable
system_keyspace: Make get_preferred_ips non-static
system_keyspace: Make cache_truncation_record non-static
system_keyspace: Make check_health non-static
system_keyspace: Make build_bootstrap_info non-static
system_keyspace: Make build_dc_rack_info non-static
system_keyspace: Make setup_version non-static
...
In https://github.com/scylladb/scylla/issues/10218
we see off-strategy compaction happening on a table
during the initial phases of
`distributed_loader::populate_column_family`.
It was caused by triggering offstrategy compaction
too early, when sstables are populated from the staging
directory, in a144d30162.
We need to trigger offstrategy compaction only on the base
table directory, never on the staging or quarantine dirs.
Fixes #10218
Test: unit(dev)
DTest: materialized_views_test.py::TestInterruptBuildProcess
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20220316152812.3344634-1-bhalevy@scylladb.com>
All its (indirect) callers have been patched to have it, so now it's
possible to add the argument to it. The next patch will make use of it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The method needs to call merge_schema(), which will need a system
keyspace instance at hand. The parse_system_tables() method is a
boot-time one; pushing the main-local instance through it is fine.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The compaction manager calls back into the table to run off-strategy
compaction, but the logic clearly belongs to the manager, which should
perform the operation independently and only call the table to update
its state with the result.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20220315174504.107926-2-raphaelsc@scylladb.com>
Flushing the base table triggers view building
and corresponding compactions on the view tables.
Temporarily disable compaction on both the base
table and all its views before flush and snapshot,
since those flushed sstables are about to be truncated
anyway, right after the snapshot is taken.
This should make truncate go faster.
Refs #6309
Test: unit(dev)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>