Replace get_storage_proxy() and get_local_storage_proxy() with
constructor-provided references. Some unneeded uses were removed.
Test: unit (dev)
Closes #9816
* github.com:scylladb/scylla:
migration_manager: replace uses of get_storage_proxy and get_local_storage_proxy with constructor-provided reference
migration_manager: don't keep storage_proxy alive during schema_check verb
mm: don't capture storage proxy shared_ptr during background schema merge
mm: remove stats on schema version get
The thrift layer started partially having admission control
after commit ef1de114f0,
but code inspection suggests that it might cause use-after-free
in a few cases where a permit is obtained more than once per
request handling, because some functions tail-call other
functions which also obtain a permit.
These extraneous permits are not taken anymore.
Tests: "please trust me" + cassandra-stress in thrift mode
Message-Id: <ac5d711288b22c5fed566937722cceeabc234e16.1639394937.git.sarna@scylladb.com>
The schema_check verb doesn't leak tasks, so when the verb is
unregistered it will be drained, and the protection for storage_proxy's
lifetime can be removed.
The definitions_update() verb captures a shared_ptr to storage_proxy
to keep it alive while the background task executes.
This was introduced in (2016!):
commit 1429213b4c
Author: Pekka Enberg <penberg@scylladb.com>
Date: Mon Mar 14 17:57:08 2016 +0200
main: Defer migration manager RPC verb registration after commitlog replay
Defer registering migration manager RPC verbs until after commitlog has
been replayed, so that our own schema is fully loaded before other
nodes start querying it or sending schema updates.
Message-Id: <1457971028-7325-1-git-send-email-penberg@scylladb.com>
when moving this code from storage_proxy.cc.
Later, better protection with a gate was added:
commit 14de126ff8
Author: Pavel Emelyanov <xemul@scylladb.com>
Date: Mon Mar 16 18:03:48 2020 +0300
migration_manager: Run background schema merge in gate
The call for merge_schema_from in some cases is run in the
background and thus is not aborted/waited on shutdown. This
may result in use-after-free, one example of which is:
merge_schema_from
-> read_schema_for_keyspace
-> db::system_keyspace::query
-> storage_proxy::query
-> query_partition_key_range_concurrent
in the latter function the proxy._token_metadata is accessed,
while the respective object can already be freed (unlike the
storage_proxy itself, which is still leaked on shutdown).
Related bug: #5903, #5999 (cannot reproduce though)
Tests: unit(dev), manual start-stop
dtest(consistency.TestConsistency, dev)
dtest(schema_management, dev)
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Reviewed-by: Pekka Enberg <penberg@scylladb.com>
Message-Id: <20200316150348.31118-1-xemul@scylladb.com>
Since task execution is now protected by the gate, and
therefore by the migration_manager lifetime (which is contained within
that of storage_proxy, as it is constructed afterwards), capturing
the shared_ptr is not needed; we therefore remove it, as
it uses the deprecated global storage_proxy accessors.
This patch helps to speed up node boot up for test setups like dtest.
Nadav reported
```
With Asias's two patches on Scylla, and my patch to enable it in dtest:
Boot time of 5 nodes is now down to 9 seconds!
Remember we started this exercise with 214 seconds? :-)
```
Closes #9808
* github.com:scylladb/scylla:
storage_service: Recheck tokens before throw in storage_service::bootstrap
gossip: Do not wait for gossip to settle if skip_wait_for_gossip_to_settle is zero
Appending an empty range adjacent to an existing range tombstone would
not deoverlap (by dropping the empty range tombstone), resulting in a
different (non-canonical) result depending on the order of appending.
Suppose that range tombstone [a, b) covers the empty range tombstone [x, x), and that [a, x) and [x, b) are the range tombstones which correspond to [a, b) split around position x.
Appending [a, x) then [x, b) then [x, x) would give [a, b).
Appending [a, x) then [x, x) then [x, b) would give [a, x), [x, x), [x, b).
The fix is to drop empty range tombstones in range_tombstone_list so that the result is canonical.
Fixes #9661
Closes #9764
* github.com:scylladb/scylla:
range_tombstone_list: Deoverlap adjacent empty ranges
range_tombstone_list: Convert to work in terms of position_in_partition
The full user-defined structure of the database (keyspaces,
tables, user-defined types, and similar metadata, often known
as the schema in other databases) is needed by much of the
front-end code. But in Scylla it is deeply intertwined with the
replica data management code - ::database, ::keyspace, and
::table. Not only does the front-end not need data access, it
cannot get correct data via these objects since they represent
just one replica out of many.
This dual role is a frequent cause of recompilations. It was solved
to some degree by forward declarations, but there are still a lot
of incidental dependencies.
To solve this, we introduce a data_dictionary module (and
namespace) to exclusively deal with greater schema metadata.
It is an interface, with a backing implementation by the existing code,
so it doesn't add a new source of truth. The plan is to allow mock
implementations for testing as well.
Test: unit (dev, release, debug).
Closes #9783
* github.com:scylladb/scylla:
cql3, related: switch to data_dictionary
test: cql_test_env: provide access to data_dictionary
storage_proxy: provide access to data_dictionary
database: implement data_dictionary interface
data_dictionary: add database/keyspace/table objects
data_dictionary: move keyspace_metadata to data_dictionary
data_dictionary: move user_types_metadata to new module data_dictionary
"
Start converting small functions in gossiper code
from using `seastar::thread` context to coroutines.
For now, the changes are quite trivial.
Later, larger code fragments will be converted
to eliminate uses of `seastar::async` function calls.
Moving the code to coroutines makes it a bit
more readable, and also makes it immediately evident that a given
function is async just by looking at the signature (for
example, a void-returning coroutine
will return `future<>`, whereas a
seastar::thread-using function returns plain `void`).
Tests: unit(dev)
"
* 'coro_gossip_v1' of https://github.com/ManManson/scylla:
gms: gossiper: coroutinize `maybe_enable_features`
gms: gossiper: coroutinize `wait_alive`
gms: gossiper: coroutinize `add_saved_endpoint`
gms: gossiper: coroutinize `evict_from_membership`
Stop using database (and including database.hh) for schema related
purposes and use data_dictionary instead.
data_dictionary::database::real_database() is called from several
places, for these reasons:
- calling yet-to-be-converted code
- callers with a legitimate need to access data (e.g. system_keyspace)
but with the ::database accessor removed from query_processor.
We'll need to find another way to supply system_keyspace with
data access.
- to gain access to the wasm engine for testing whether user-defined
functions compile. We'll have to find another way to
do this as well.
The change is a straightforward replacement. One case in
modification_statement had to change a capture, but everything else
was just a search-and-replace.
Some files that lost "database.hh" gained "mutation.hh", which they
previously had access to through "database.hh".
Probably storage_proxy is not the correct place to supply
data_dictionary, but it is available to practically all of
the coordinator code, so it is convenient.
Implement the new data_dictionary interface using the existing
::database, ::keyspace, and ::table classes. The implementation
is straightforward. This will allow the coordinator code to access
the full schema without depending on the gnarly bits that compose
::database, like reader_concurrency_semaphore or the backlog
controller.
Add metadata-only counterparts to ::database, ::keyspace, and ::table.
Apart from being metadata-only objects suitable for the coordinator,
the new types are also type-erased and so they can be mocked without
being linked to ::database and friends.
We use a single abstract class to mediate between data_dictionary
objects and the objects they represent (data_dictionary::impl).
This makes the data_dictionary objects very lightweight - they
only contain a pointer to the impl object (of which only one
needs to be instantiated), and a reference to the object that
is represented. This allows these objects to be easily passed
by value.
The abstraction is leaky: in one place it is outright breached
with data_dictionary::database::real_database() that returns
a ::database reference. This is used so that we can perform the
transition incrementally. Another place is that one of the
methods returns a secondary_index_manager, which in turn grants
access to the real objects. This will be addressed later, probably
by type erasing as well.
This patch only contains the interface, and no implementation. It
is somewhat messy since it mimics the years-old evolution of the
real objects, but maybe it will be easier to improve it now.
The new module will contain all schema-related metadata, detached from
actual data access (provided by the database class). User types are the
first content to be moved to the new module.
In case the count_normal_token_owners check fails, sleep and retry. This
makes test setups like skip_wait_for_gossip_to_settle = 0 and
ring_delay_ms = 0 work.
Setting skip_wait_for_gossip_to_settle to 0 means do not wait for
gossip to settle at all. However, it is not respected by the
initial sleeps in gossiper::wait_for_range_setup and
gossiper::wait_for_gossip.
Setting skip_wait_for_gossip_to_settle to zero is not allowed in a
production cluster anyway; it is mostly used by tests like dtest to
reduce cluster boot-up time. Respect the
zero flag there too and avoid any sleep and wait completely.
"
These two readers are crucial for writing tests for any composable
reader so we need v2 versions of them before we can convert and test the
combined reader (for example). As these two readers are often used in
situations where the payload they deliver is specially crafted for the
test at hand, we keep their v1 versions too to avoid conversion meddling
with the tests.
Tests: unit(dev)
"
* 'forwarding-and-fragment-reader-v2/v1' of https://github.com/denesb/scylla:
flat_mutation_reader_v2: add make_flat_mutation_reader_from_fragments()
test/lib/mutation_source_test: don't force v1 reader in reverse run
mutation_source: add native_version() getter
flat_mutation_reader_v2: add make_forwardable()
position_in_partition: add after_key(position_in_partition_view)
flat_mutation_reader: make_forwardable(): fix indentation
flat_mutation_reader: make_forwardable(): coroutinize reader
The test in its current form is invalid, as database::remove
removes the table's name from its listing
as well as from the keyspace metadata, so the table won't be found
after that.
That said, database::drop_column_family then proceeds
to truncate and stop the table, after calling await_pending_ops,
and the latter should indeed block on the lock taken by the test.
This change modifies the test to create some sstables in the
table's directory before starting the sstable_directory.
Then, when executing "drop table" in the background,
wait until the table is no longer found by db.find_column_family.
That would fail the test before this change.
See https://jenkins.scylladb.com/job/scylla-enterprise/job/next/1442/artifact/testlog/x86_64_debug/sstable_directory_test.sstable_directory_test_table_lock_works.4720.log
```
INFO 2021-12-13 14:00:17,298 [shard 0] schema_tables - Dropping ks.cf id=00487bc0-5c1d-11ec-9e3b-a44f824027ae version=b10c4994-31c7-3f5a-9591-7fedb0273c82
test/boost/sstable_directory_test.cc(453): fatal error: in "sstable_directory_test_table_lock_works": unexpected exception thrown by table_ok.get()
```
At this point, the test verifies again that the sstables are still on
disk (and no truncate happened), and only after drop completed,
the table should not exist on disk.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211214104407.2225080-1-bhalevy@scylladb.com>
When the UpdateTable operation is called for a non-existent table, the
appropriate error is ResourceNotFoundException, but before this patch
we ran into an exception, which resulted in an ugly "internal server
error".
In this patch we use the existing get_table() function which most other
operations use, and which does all the appropriate verifications and
generates the appropriate Alternator api_error instead of letting
internal Scylla exceptions escape to the user.
This patch also includes a test for UpdateTable on a non-existent table,
which used to fail before this patch and pass afterwards. We also add a
test for DeleteTable in the same scenario, and see it didn't have this
bug. As usual, both tests pass on DynamoDB, which confirms we generate
the right error codes.
Fixes #9747.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211206181605.1182431-1-nyh@scylladb.com>
The series is on top of "wire up schema raft state machine". It will
apply without it, but obviously will not work (when raft is disabled it
does nothing anyway).
This series does not provide any linearisability just yet, though. It
only uses raft as a means to distribute schema mutations. To achieve
linearisability more work is needed. We need to at least make sure
that schema mutations use monotonically increasing timestamps and,
since schema altering statements are RMW, that no modifications to the
schema were done between schema mutation creation and application. If
there were, the operation needs to be restarted.
* scylla-dev/gleb/raft-schema-v5: (59 commits)
cql3: cleanup mutation creation code in ALTER TYPE
cql3: use migration_manager::schema_read_barrier() before accessing a schema in altering statements
cql3: bounce schema altering statement to shard 0
migration_manager: add is_raft_enabled() to check if raft is enabled on a cluster
migration_manager: add schema_read_barrier() function
migration_manager: make announce() raft aware
migration_manager: co-routinize announce() function
migration_manager: pass raft_gr to the migration manager
migration_manager: drop view_ptr array from announce_column_family_update()
mm: drop unused announce_ methods
cql3: drop schema_altering_statement::announce_migration()
cql3: drop has_prepare_schema_mutations() from schema altering statement
cql3: drop announce_migration() usage from schema_altering_statement
cql3: move DROP AGGREGATE statement to prepare_schema_mutations() api
migration_manager: add prepare_aggregate_drop_announcement() function
cql3: move DROP FUNCTION statement to prepare_schema_mutations() api
migration_manager: add prepare_function_drop_announcement() function
cql3: move CREATE AGGREGATE statement to prepare_schema_mutations() api
migration_manager: add prepare_new_aggregate_announcement() function
cql3: move CREATE FUNCTION statement to prepare_schema_mutations() api
...
column_identifier serves two purposes: a value type used to denote an
identifier (which may or may not map to a table column), and `selectable`
implementation used for selecting table columns. This stands in the way
of further refactoring - the unification of the WHERE clause prepare path
(prepare_expression()) and the SELECT clause prepare path
(prepare_selectable()).
Reduce the entanglement by moving the selectable-specific parts to a new
type, selectable_column, and leaving column_identifier as a pure value type.
Closes #9729
* github.com:scylladb/scylla:
cql3: move selectable_column to selectable.cc
cql3: column_identifier: split selectable functionality off from column_identifier
The "ADD" operator in UpdateItem's AttributeUpdates supports a number of
types (numbers, sets and strings), and should result in a ValidationException
if the attribute's existing type is different from the type of the
operand - e.g., trying to ADD a number to an attribute which has a set
as a value.
So far we only had partial testing for this (we tested the case where
both operands are sets, but of different types) so this patch adds the
missing tests. The new tests pass (on both Alternator and DynamoDB) -
we don't have a bug there.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211213195023.1415248-1-nyh@scylladb.com>
Currently, when scrub/validate is stopped (e.g. via the api),
scrub_validate_mode_validate_reader co_return:s without
closing the reader passed to it - causing a crash due
to an internal error check, see #9766.
By throwing a compaction_stopped_exception rather than co_return:ing,
the stop is handled as any other exception, including closing
the reader.
Fixes #9766
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211213125528.2422745-1-bhalevy@scylladb.com>
auto_compaction has been disabled, so sstables
may have already accumulated and require compaction.
Do not wait for new sstables to be written to trigger
compaction; trigger compaction right away.
Refs #9784
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20211212090632.1257829-1-bhalevy@scylladb.com>
Schema altering statements do read/modify/write on the schema. To make
sure a statement accesses the latest schema, it needs to execute a raft read
barrier before accessing the local schema copy.
Appending an empty range adjacent to an existing range tombstone would
not deoverlap (by dropping the empty range tombstone), resulting in a
different (non-canonical) result depending on the order of appending.
Suppose that [a, b) covers [x, x).
Appending [a, x) then [x, b) then [x, x) would give [a, b).
Appending [a, x) then [x, x) then [x, b) would give [a, x), [x, x), [x, b).
Fix by dropping empty range tombstones.
The code in test/cql-pytest/run.py can start Scylla (or Cassandra, or
Redis, etc.) in a random IP address in 127.*.*.*. We explained in a
comment that 127.0.0.* is used by CCM so we avoid it in case someone
runs both dtest and test.py in parallel on the same machine.
But this observation was not accurate: Although the original CCM did use
only 127.0.0.*, in Scylla's CCM we added in 2017, in commit
00d3ba5562567ab83190dd4580654232f4590962, the ability to run multiple
copies of CCM in parallel; CCM now uses 127.0.*.*, not just 127.0.0.*.
So we need to correct this in the comment.
Luckily, the code doesn't need to change! We already avoided the entire
127.0.*.* for simplicity, not just 127.0.0.*.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211212151339.1361451-1-nyh@scylladb.com>
"
Today, data from different buckets (e.g. windows) cannot be compacted together, because
mutation compaction happens inside each consumer, where each consumer is done on behalf
of a particular bucket. To solve this problem, the mutation compaction process is being
moved from the consumer into the producer, such that the interposer consumer, which is
responsible for segregation, will be fed with compacted data and forward it to the owner bucket.
Fixes #9662.
tests: unit(debug).
"
* 'compact_across_buckets_v2' of github.com:raphaelsc/scylla:
tests: sstable_compaction_test: add test_twcs_compaction_across_buckets
compaction: Move mutation compaction into producer for TWCS
compaction: make enable_garbage_collected_sstable_writer() more precise