In force_blocking_flush() there's an invoke-on-all invocation of
replica::database::flush() and a FIXME to get the replica database from
somewhere other than via the query-processor -> data_dictionary chain.
Now that force_blocking_flush() is non-static, the invoke-on-all can
happen via system_keyspace's container, and the database can be obtained
directly from the local system_keyspace instance.
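A minimal sketch of the resulting shape, assuming a seastar sharded service; the member names here are illustrative, not the actual code:

    future<> system_keyspace::force_blocking_flush(sstring cf_name) {
        // invoke-on-all now goes through the sharded<system_keyspace> container
        return container().invoke_on_all([cf_name] (system_keyspace& sys_ks) {
            // the replica::database is reachable from the local instance,
            // no query_processor -> data_dictionary detour needed
            return sys_ks._db.local().flush(db::system_keyspace::NAME, cf_name);
        });
    }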
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
While in SQL DISTINCT applies to the result set, in CQL it applies
to the table being selected, so GROUP BY with clustering keys is not
allowed in that case. Reject the combination, like Cassandra does.
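A hedged sketch of the rejection in the statement validation path; the helper and member names are illustrative:

    // reject SELECT DISTINCT ... GROUP BY <clustering column>, as Cassandra does
    if (_parameters->is_distinct() && group_by_contains_clustering_columns()) {
        throw exceptions::invalid_request_exception(
            "Grouping on clustering columns is not allowed for SELECT DISTINCT queries");
    }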
While this is not an important issue to fix, it blocks un-xfailing
other issues, so I'm clearing it ahead of fixing those issues.
An issue is unmarked as xfail, and other xfails lose this issue
as a blocker.
Fixes#12479
Closes#14970
Knowing that a server gained or lost leadership in group 0 is
sometimes useful for debugging, so we log
information about these events at the INFO level.
Gaining and losing leadership are relatively rare events, so
this change shouldn't flood the logs.
Closes#14877
For `removenode`, we make a removed node a non-voter early. There is no
downside to it because the node is already dead. Moreover, it improves
availability in some situations.
For `decommission`, if we decommission a node when the number of nodes
is even, we make it a non-voter early to improve availability. All
majorities containing this node will remain majorities when we make this
node a non-voter and remove it from the set because the required size of
a majority decreases.
We don't change `decommission` when the number of nodes is odd since
this may reduce availability.
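The arithmetic behind this, as an illustrative sketch:

    #include <cstddef>

    // majority size for n raft voters
    constexpr std::size_t majority(std::size_t n) { return n / 2 + 1; }

    static_assert(majority(6) == 4 && majority(5) == 3); // even -> odd: the majority shrinks,
                                                         // so every old majority still wins
    static_assert(majority(5) == 3 && majority(4) == 3); // odd -> even: the majority size stays,
                                                         // so demoting early can reduce availability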
Fixes#13959
Closes#14911
* github.com:scylladb/scylladb:
raft: make a decommissioning node a non-voter early
raft: topology_coordinator: implement step_down_as_nonvoter
raft: make a removed node a non-voter early
Rewrite the test that checks whether task_manager/wait_task works properly.
The old version didn't work. Delete the functions used only by the old version.
Closes#14959
* github.com:scylladb/scylladb:
test: rewrite wait_task test
test: move ThreadWrapper to rest_util.py
When deleting multiple sstables with the same prefix,
the deletion atomicity is ensured by the pending_delete_log file,
so if scylla crashes in the middle, the deletions will be replayed on
restart.
Therefore, we don't have to ensure atomicity of each individual
`unlink`. We just need to sync the directory once, before
removing the pending_delete_log file.
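A minimal sketch of the resulting deletion flow, assuming seastar's file utilities; the function shape is illustrative, not the actual code:

    future<> remove_sstables_with_log(sstring dir, std::vector<sstring> files,
                                      sstring pending_delete_log) {
        for (auto& f : files) {
            co_await remove_file(f);              // no per-unlink durability needed
        }
        co_await sync_directory(dir);             // one sync covers all the unlinks
        co_await remove_file(pending_delete_log); // removing the log marks completion
    }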
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes#14967
This makes it possible to remove the remaining users of the global qctx.
The thing is that db::schema_tables code needs wasm's engine, alien runner and instance cache to build a wasm context for a merged function, or to drop it from the cache in the opposite case. To get the wasm stuff, this code uses the global qctx -> query_processor -> wasm chain. However, the function (un)merging code already has the database reference at hand, and it's natural to get the wasm stuff from it, not from the query_processor, which is not always available.
So this PR packs the wasm engine, runner and cache into a sharded<wasm::manager> instance, makes the manager referenced by both the query_processor and the database, and removes the qctx from the schema tables code.
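A hedged sketch of the resulting wiring in main; the start arguments are assumptions:

    seastar::sharded<wasm::manager> wasm_mgr;
    wasm_mgr.start().get();                         // started early, depends on nothing
    // both sides take a reference instead of going through qctx:
    db.start(std::ref(wasm_mgr), /* ... */).get();  // replica::database
    qp.start(std::ref(wasm_mgr), /* ... */).get();  // cql3::query_processor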
Closes#14933
* github.com:scylladb/scylladb:
schema_tables: Stop using qctx
database: Add wasm::manager& dependency
main, cql_test_env, wasm: Start wasm::manager earlier
wasm: Shuffle context::context()
wasm: Add manager::remove()
wasm: Add manager::precompile()
wasm: Move stop() out of query_processor
wasm: Make wasm sharded<manager>
query_processor: Wrap wasm stuff in a struct
The metrics are registered on demand when the load balancer is invoked, so that only the leader exports them. When the leader changes, the old leader stops exporting.
The metrics are divided into two levels: per-dc and per-node. In prometheus, they will have appropriate labels for the dc and host_id values.
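A sketch of the unregistration side, assuming a seastar::metrics::metric_groups member; names are illustrative:

    void tablet_allocator::on_leadership_lost() {
        // metric_groups::clear() unregisters everything it holds,
        // so the old leader stops exporting the load-balancer metrics
        _metrics.clear();
    }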
Closes#14962
* github.com:scylladb/scylladb:
tablet_allocator: unregister metrics when leadership is lost
tablets: load_balancer: Export metrics
service, raft: Move balance_tablets() to tablet_allocator
tablet_allocator: Start even if tablets feature is not enabled
main, storage_service: Pass tablet allocator to storage_service
Before the patch, a tablet metadata update was processed on local schema merge
before table changes.
When a table is dropped, this means that for a while the table will exist
without a corresponding tablet map. This can cause a memtable flush for
this table to fail, resulting in an intentional abort(). That's because
sstable writing attempts to access the tablet map to generate sharding
metadata.
If auto_snapshot is enabled, this is much more likely to happen,
because we flush memtables on table drop.
To fix the problem, process tablet metadata after dropping tables, but
before creating tables.
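Illustratively, the schema-merge steps are reordered like this (the names are not the actual code):

    co_await drop_tables(dropped);           // dropped tables go away first...
    co_await update_tablet_metadata(meta);   // ...so no live table ever lacks
                                             // its tablet map...
    co_await create_tables(created);         // ...and new tables find theirs ready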
Fixes#14943
Closes#14954
There are two places in there that use qctx to get the query_processor
and, in turn, the wasm::manager. Fortunately, both places have the
database reference at hand and can get the wasm::manager from it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The dependency is needed by db::schema_tables to get the wasm manager for
its needs. This patch prepares the ground: now the wasm::manager is
shared between replica::database and cql3::query_processor.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It will be needed by replica::database and should be available that
early. It doesn't depend on anything, so it can safely be moved earlier
in the startup order.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Add a constructor that builds a context out of a const manager reference.
The existing one needs to get the engine and instance cache, and does so
via the query_processor. This change allows removing those exports and,
finally, dropping the wasm::manager -> cql3::query_processor friendship.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This is one of the users of query_processor's export of wasm::manager's
instance cache. Remove it in advance.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
When the query_processor stops, it also "stops" the wasm manager. Move
this call into main. The cql test env doesn't need this change: it stops
the whole sharded service, which stops the instances on its own.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The wasm::manager is just cql3::wasm_context renamed. It now sits in
lang/wasm* and is started as a sharded service in main (and in the cql
test env). This move also needs some header shuffling, but it's not severe.
This change is required to make it possible for the wasm::manager to be
shared (by reference) between the query_processor and replica::database
later.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
There are three wasm-only fields on the query_processor -- engine, cache
and runner. This patch groups them in a single wasm_context structure to
make it easier to manipulate them in the next patches.
The 'friend' declaration is temporary and will go away soon.
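A sketch of the grouping; the field types are assumptions based on the description above:

    // groups the wasm-only state formerly spread across query_processor
    struct wasm_context {
        wasm::engine engine;
        wasm::instance_cache cache;
        wasm::alien_thread_runner runner;
    };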
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Currently, feature service uses `system_keyspace::load_topology_state`
to load information about features from the `system.topology` table.
This function implicitly assumes that it is called after schema
commitlog replay and will correspond to the state of the topology state
machine after some command is applied.
However, the feature check happens before the commitlog replay. If some
group 0 command consists of multiple mutations that are not applied
atomically, the `load_topology_state` function may fail to construct a
`service::topology` object based on the table state. Moreover, this
function checks not only `system.topology` but also
`system.cdc_generations_v3` - in the case of this issue, the entry that
was loaded from that table didn't contain the `num_ranges`
parameter.
In order to fix this, the feature check code now uses
`load_topology_features_state` which only loads enabled and supported
features from `system.topology`. Only this information is really
necessary for the feature check, and it doesn't have any invariants to
check.
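A hedged sketch of the narrower loader; the query shape, types and helper are assumptions:

    future<topology_features> system_keyspace::load_topology_features_state() {
        // reads just the two feature sets; no topology invariants are validated
        auto rs = co_await execute_cql(
            "SELECT enabled_features, supported_features FROM system.topology");
        co_return parse_features(*rs); // hypothetical helper
    }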
Fixes: #14944
Closes#14955
* github.com:scylladb/scylladb:
feature_service: don't load whole topology state to check features
system_keyspace: separate loading topology_features from topology
topology_state_machine: extract features-related fields to a struct
untyped_result_set: add missing_column_exception
Hold a (newly added) group0_state_machine gate
that is closed and waited on in group0_state_machine::abort(),
to prevent use-after-free when destroying the group0_state_machine
while transfer_snapshot runs.
Fixes#14907
Also, use an abort_source in group0_state_machine
to abort an ongoing transfer_snapshot operation
on group0_state_machine::abort()
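A minimal sketch of the combined pattern, with assumed member names:

    future<> group0_state_machine::transfer_snapshot(/* ... */) {
        auto holder = _gate.hold();        // keep the state machine alive
        co_await do_transfer(/* ... */, _abort_source); // abortable operation
    }

    future<> group0_state_machine::abort() {
        _abort_source.request_abort();     // cancel an ongoing transfer
        co_await _gate.close();            // and wait for it to drain
    }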
Closes#14952
* github.com:scylladb/scylladb:
raft: group0_state_machine: transfer_snapshot: make abortable
raft: group0_state_machine: transfer_snapshot: hold gate
Now, it is possible to load topology_features separately from the
topology struct. It will be used in the code that checks enabled
features on startup.
`enabled_features` and `supported_features` are now moved to a new
`topology::features` struct. This will allow loading this
information independently from the `topology` struct, which will be
needed for feature checking during startup.
Taking a reason argument in task_manager_module::get_progress is deceiving,
as the method works properly only for streaming::stream_reason::repair
(repair::shard_repair_task_impl::nr_ranges_finished isn't updated for
any other reason).
If some state update in database::add_column_family throws,
the info about the column family would be left inconsistent.
Undo the already performed operations in database::add_column_family
when one of them throws.
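The exception-safety shape, as an illustrative sketch (not the actual code):

    void database::add_column_family(table_ptr cf) {
        auto [it, inserted] = _tables.emplace(cf->id(), cf);
        try {
            register_with_keyspace(cf);    // any later step may throw...
        } catch (...) {
            _tables.erase(it);             // ...so roll back what was done
            throw;
        }
    }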
Fixes: #14666.
Closes#14672
* github.com:scylladb/scylladb:
replica: undo the changes if something fails
replica: start table earlier in database::add_column_family
All compaction task executors, except for the regular compaction one,
become task manager compaction tasks.
The creation and starting of major_compaction_task_executor is modified
to be consistent with the other compaction task executors.
Closes#14505
* github.com:scylladb/scylladb:
test: extend test_compaction_task.py to cover compaction group tasks
compaction: turn custom_task_executor into compaction_task_impl
compaction: turn sstables_task_executor into sstables_compaction_task_impl
compaction: change sstables compaction tasks type
compaction: move table_upgrade_sstables_compaction_task_impl
compaction: pass task_info through sstables compaction
compaction: turn offstrategy_compaction_task_executor into offstrategy_compaction_task_impl
compaction: turn cleanup_compaction_task_executor into cleanup_compaction_task_impl
compaction: use optional task info in major compaction
compaction: use perform_compaction in compaction_manager::perform_major_compaction
While describing a materialized view, print the `synchronous_updates` option
only if the tag is present in the schema's extensions map. Previously, if
the key wasn't present, the default (false) value was printed.
Fixes: #14924
Closes#14928
We wait for the same condition a couple of lines earlier, so there is no
need to check it again using `BOOST_CHECK_EQUAL()`.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes#14921
Currently, when one tries to access a column that an untyped_result_set
does not contain, a `std::bad_variant_access` exception is thrown. This
exception's message provides very little context, and it can be difficult
to even figure out where it is coming from.
In order to improve the situation, a new exception `missing_column` is
introduced which includes the missing column's name in its error
message. The exception derives from `std::bad_variant_access` for
compatibility with existing code that may want to catch it.
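A sketch of the new exception type; the exact message wording is an assumption:

    #include <string>
    #include <string_view>
    #include <variant>

    class missing_column : public std::bad_variant_access {
        std::string _msg;
    public:
        explicit missing_column(std::string_view column_name)
            : _msg("missing column: " + std::string(column_name)) {}
        // deriving from bad_variant_access keeps existing catch sites working
        const char* what() const noexcept override { return _msg.c_str(); }
    };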
`boost::program_options::value()` creates a new typed_value<T> object
without holding it in a shared_ptr. boost::program_options expects the
developer to construct a `bpo::option_description` right away from it,
and `boost::program_options::option_description` takes ownership
of the `typed_value<T>*` raw pointer and manages its life cycle with
a shared_ptr. but before it is passed to a `bpo::option_description`,
the pointer created by `boost::program_options::value()` is still
a raw pointer.
before this change, we initialize positional options as global
variables using `boost::program_options::value()`. but unfortunately,
we don't always initialize a `bpo::option_description` from it --
we only do this on demand when the corresponding subcommand is
called.
so, if the corresponding subcommand is not called, the created
`typed_value<T>` objects are leaked. hence LeakSanitizer warns us.
after this change, we create the option vector as a static
local variable in a function, so it is created on demand as well.
as an alternative, we could initialize the options vector as a local
variable where it is used, but to be more consistent with how
`global_option` is specified, and to colocate them in a single
place, let's keep the existing code layout.
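a sketch of the on-demand construction; the option itself is illustrative:

    #include <boost/program_options.hpp>
    namespace bpo = boost::program_options;

    // building the description inside a function means every typed_value
    // created by bpo::value() is immediately owned by an option_description
    static const bpo::options_description& positional_options() {
        static const bpo::options_description opts = [] {
            bpo::options_description d;
            d.add_options()
                ("keyspace", bpo::value<std::string>(), "keyspace to operate on");
            return d;
        }();
        return opts;
    }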
Fixes#14929
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes#14939
Commit bc5f6cf45d
added a reserve call to the `ranges` vector before
inserting all the returned token ranges into it.
However, that reservation is too small as we need
to express size+1 ranges for size tokens with
<unbound, token[0]> and <token[size-1], unbound>
ranges at the front and back, respectively.
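Illustratively, for n tokens (the range and token types are placeholders):

    // n sorted tokens induce n+1 ranges, including both unbound ends:
    // (-inf, t0], (t0, t1], ..., (t[n-1], +inf)
    std::vector<range> make_ranges(const std::vector<token>& tokens) {
        std::vector<range> ranges;
        ranges.reserve(tokens.size() + 1); // was tokens.size(): one short
        // ... fill in the n+1 ranges ...
        return ranges;
    }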
Fixes#14849
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes#14938
Use an abort_source in group0_state_machine
to abort an ongoing transfer_snapshot operation
on group0_state_machine::abort()
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Hold a (newly added) group0_state_machine gate
that is closed and waited on in group0_state_machine::abort(),
to prevent use-after-free when destroying the group0_state_machine
while transfer_snapshot runs.
Fixes#14907
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
This patch adds the ranges_parallelism option to the repair restful API.
Users can use this option to optionally lower the number of ranges repaired in parallel per repair job below the default max_repair_ranges_in_parallel calculated by the Scylla core.
Scylla manager can also use this option to provide more ranges (>N) in a single repair job while repairing only ranges_parallelism ranges in parallel, instead of providing N ranges per repair job.
To make it safer, unlike PR #4848, this patch does not allow users to exceed max_repair_ranges_in_parallel.
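Illustratively, the effective parallelism is clamped like this (the names are assumptions):

    // the user-supplied value may only lower the parallelism, never raise it
    size_t parallelism = ranges_parallelism
        ? std::min(*ranges_parallelism, max_repair_ranges_in_parallel)
        : max_repair_ranges_in_parallel;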
Fixes#4847
Closes#14886
* github.com:scylladb/scylladb:
repair: Add ranges_parallelism option
repair: Change to use coroutine in do_repair_ranges
before this change, if the object_store test fails, the tempdir
is preserved. and if our CI test pipeline is used to perform
the test, the test job scans for the artifacts; if the
test in question fails, it takes over 1 hour to scan the tempdir.
to alleviate the pain, let's just keep the scylla logging file
no matter whether the test fails or succeeds, so that jenkins can scan
the artifacts faster if the test fails.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes#14880
Per-table metrics are very valuable for users, but they come with a high load on both the reporting and the collecting metrics systems.
This patch adds a small subset of per-table metrics that will be reported on the node level.
The list of metrics is:
system_column_family_memtable_switch - Number of times flush has
resulted in the memtable being switched out
system_column_family_memtable_partition_writes - Number of write
operations performed on partitions in memtables
system_column_family_memtable_partition_hits - Number of times a write
operation was issued on an existing partition in memtables
system_column_family_memtable_row_writes - Number of row writes
performed in memtables
system_column_family_memtable_row_hits - Number of rows overwritten by write operations in memtables
system_column_family_total_disk_space - Total disk space used
system_column_family_live_sstable - Live sstable count
system_column_family_read_latency_count - Number of reads
system_column_family_write_latency_count - Number of writes
The names of the read/write metrics are based on the histogram convention, so when latency histograms are added, the names will not change.
The metrics are labeled with a specific label __per_table="node", so it will be possible to easily manipulate them.
The metrics will be available even when enable_metrics_reporting (the per-table full metrics flag) is off.
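A hedged sketch of how one such metric could be registered with the described label, using the seastar metrics API; the member names are assumptions:

    namespace sm = seastar::metrics;
    // _metrics: sm::metric_groups, _stats: per-table counters (both assumed)
    _metrics.add_group("column_family", {
        sm::make_counter("memtable_switch", _stats.memtable_switch_count,
            sm::description("Number of times flush has resulted in the "
                            "memtable being switched out"),
            {sm::label("__per_table")("node")}),
    });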
Fixes#2198
Closes#13293
* github.com:scylladb/scylladb:
replica/table.cc: Add node-per-table metrics
config: add enable_node_table_metrics flag