Commit Graph

225 Commits

Author SHA1 Message Date
Botond Dénes
5e422ceefb cql3/query_processor: for_each_cql_result(): move func to the coro frame
Said method has a func parameter (called just f), which it receives as
an rvalue reference but only ever uses as a reference. This means that if
the caller doesn't keep the functor alive, for_each_cql_result() will run
into a use-after-free after the first suspension point. This is surprising
for callers, who don't expect to have to keep alive something they passed
in with std::move().
Adjust the signature to take the parameter by value instead; value
parameters are moved to the coroutine frame and survive suspension points.
Adjust internal callers (query_internal()) the same way.

There are no known vulnerable external callers.

(cherry picked from commit 4e96e320b4)
2024-06-26 09:05:13 +00:00
Piotr Smaron
e04964ba17 Return response only when tablets are reallocated
Up until now we waited until the mutations were in place and then returned
directly to the caller of the ALTER statement, but that doesn't imply
that tablets were deleted/created, so we must wait until the whole
processing is done and only then return.
2024-05-30 08:33:26 +03:00
Piotr Smaron
544c424e89 Implement ALTER tablets KEYSPACE statement support
This commit adds support for executing ALTER KS for keyspaces with
tablets and utilizes all the previous commits.
The ALTER KS is handled in alter_keyspace_statement, where a global
topology request is generated with data attached to the system.topology
table. Then, once the topology state machine is ready, it starts to handle
this global topology event, which results in producing the mutations
required to change the schema of the keyspace, deleting the
system.topology global request, producing tablet mutations, and producing
additional mutations for a table tracking the lifetime of the whole
request. Tracking the lifetime is necessary so as not to return control
to the user too early: the query processor returns the response only
after the mutations are sent.
2024-05-30 08:33:25 +03:00
Piotr Smaron
adfad686b3 Allow query_processor to check if global topo queue is empty
With the current implementation only one global topology request can be
executed at a time, so when ALTER KS is executed we have to check whether
any other global topology request is ongoing and fail the new request if
that's the case.
2024-05-30 08:33:15 +03:00
Piotr Smaron
51b8b04d97 Add storage service to query processor
The query processor needs access to the storage service to check whether
a global topology request is still ongoing and to be able to wait until
it completes.
2024-05-30 08:33:15 +03:00
Piotr Dulikowski
35f456c483 Merge 'Extend ALTER TABLE ... DROP to allow specifying timestamp of column drop' from Michał Jadwiszczak
In order to correctly restore schema from `DESC SCHEMA WITH INTERNALS`, we need a way to drop a column with a timestamp in the past.

Example:
- table t(a int pk, b int)
- insert some data1
- drop column b
- add column b int
- insert some data2

If the sstables weren't compacted, after restoring the schema from the description:
- we will lose column b's values in data2 if we simply do `ALTER TABLE t DROP b` and `ALTER TABLE t ADD b int`
- we will resurrect column b's values from data1 if we skip dropping and re-adding the column
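
A sketch of the restore flow this enables; the timestamp value is illustrative and stands for the original drop's microsecond write time:

```cql
-- restoring t from DESC SCHEMA WITH INTERNALS:
CREATE TABLE t (a int PRIMARY KEY, b int);
-- replay the drop at its original point in time, so b's values written
-- before the drop (data1) stay dead, while values written after the
-- re-add (data2) survive:
ALTER TABLE t DROP b USING TIMESTAMP 1681234567890123;
ALTER TABLE t ADD b int;
```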

Test for this: https://github.com/scylladb/scylla-dtest/pull/4122

Fixes #16482

Closes scylladb/scylladb#18115

* github.com:scylladb/scylladb:
  docs/cql: update ALTER TABLE docs
  test/cqlpytest: add test for prepared `ALTER TABLE ... DROP ... USING TIMESTAMP ?`
  test/cql-pytest: remove `xfail` from alter table with timestamp tests
  cql3/statements: extend `ALTER TABLE ... DROP` to allow specifying timestamp of column drop
  cql3/statements: pass `query_options` to `prepare_schema_mutations()`
  cql3/statements: add bound terms to alter table statement
  cql3/statements: split alter_table_statement into raw and prepared
  schema: allow to specify timestamp of dropped column
2024-04-29 14:05:05 +02:00
Michał Jadwiszczak
7dc0d068c0 cql3/statements: pass query_options to prepare_schema_mutations()
The object is needed to get the timestamp from the attributes (in the
case when the statement was prepared with a parameter marker).
2024-04-25 21:27:40 +02:00
Marcin Maliszkiewicz
7085339f72 cql3: test: include get_mutations_internal log in test.py
We have a concurrent modification conflict in tests and suspect
duplicated requests, but since we don't log successful requests
we have no way to verify whether that's the case. The get_mutations_internal
log will help to tell which nodes are trying to push auth or
service levels mutations into raft.

Refs scylladb/scylladb#18319

Closes scylladb/scylladb#18413
2024-04-25 17:17:53 +03:00
Michał Jadwiszczak
da82c5f0b0 cql3:statements: run service level statements on shard0 with raft guard
To migrate service levels to be raft managed, obtain `group0_guard` to
be able to pass it to service_level_controller's methods.

Using this mechanism also automatically provides retries in case of
concurrent group0 operations.
2024-03-21 23:14:57 +01:00
Marcin Maliszkiewicz
bd444ed6f1 cql3: auth: add a way to create mutations without executing
To make table modifications go via raft we need to publish
mutations. Currently many system tables (especially auth) use
CQL to generate table modifications. The added function is the missing
link which will allow a seamless transition of certain
system tables to raft.
2024-03-01 16:25:14 +01:00
Marcin Maliszkiewicz
b482679857 cql3: run auth DML writes on shard 0 and with raft guard
Because we'll be doing group0 operations we need to run on shard 0. An
additional benefit is that with needs_guard set, query_processor will also
do automatic retries in case of concurrent group0 operations.
2024-03-01 16:25:14 +01:00
Michał Jadwiszczak
aac90c1f92 cql3:query_processor: add logged user to query tracing info 2024-01-23 10:25:34 +01:00
Eliran Sinvani
5e33d9346b query processor: treat view changes at least as table changes
When a base table is altered, so are the views that might
refer to the added column (which includes "SELECT *" views and also
views that might need to use this column for row lifetime via virtual
columns).
However, the query processor's implementation of the view change
notification was an empty function.
Since views are tables, the query processor needs to at least treat them
as such (and maybe, in the future, also do some MV-specific work).
This commit adds a call to `on_update_column_family` from within
`on_update_view`.
The practical effect to date is that prepared statements for views
which changed due to a base table change will be invalidated.

Fixes #16392

Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
2024-01-21 15:40:54 +02:00
Kefu Chai
2dbf044b91 cql3: do not include unused headers
These unused includes were identified by clangd. See
https://clangd.llvm.org/guides/include-cleaner#unused-include-warning
for more details on the "Unused include" warning.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#16791
2024-01-16 16:43:17 +02:00
Benny Halevy
6123dc6b09 query_processor: execute_internal: support unset values
Add overloads for execute_internal and friends
accepting a vector of optional<data_value>.

The caller can pass nullopt for any unset value.
The vector of optionals is translated internally to
`cql3::raw_value_vector_with_unset` by `make_internal_options`.

This path will be called by system_keyspace::update_peer_info
for updating a subset of the system.peers columns.
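
As a toy model of that translation (the names below mirror the commit text but are simplified stand-ins; the real `data_value` is reduced here to a plain string), nullopt maps to an "unset" marker that leaves the column untouched:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Stand-in for cql3::raw_value_vector_with_unset elements.
struct raw_value_with_unset {
    bool unset;
    std::string bytes;
};

// Sketch of the make_internal_options translation: each engaged optional
// becomes a regular value, each nullopt becomes an unset marker.
std::vector<raw_value_with_unset>
make_internal_options(const std::vector<std::optional<std::string>>& values) {
    std::vector<raw_value_with_unset> out;
    out.reserve(values.size());
    for (const auto& v : values) {
        if (v) {
            out.push_back({false, *v});
        } else {
            out.push_back({true, {}});  // unset: column not updated
        }
    }
    return out;
}
```

This is what lets a caller such as update_peer_info update only a subset of columns: it passes nullopt for the ones it does not want to touch.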

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-12-31 18:21:35 +02:00
Benny Halevy
328ce23c78 types: add data_value_list
data_value_list is a wrapper around std::initializer_list<data_value>.
Use it for passing values to `cql3::query_processor::execute_internal`
and friends.

A following patch will add a std::variant for data_value_or_unset
and extend data_value_list to support unset values.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2023-12-31 18:17:27 +02:00
Piotr Smaroń
8c464b2ddb guardrails: restrict replication strategy (RS)
Replacing `restrict_replication_simplestrategy` config option with
2 config options: `replication_strategy_{warn,fail}_list`, which
allow us to impose soft limits (issue a warning) and hard limits (not
execute CQL) on replication strategy when creating/altering a keyspace.
The reason to replace rather than extend the `restrict_replication_simplestrategy`
config option is that it was not used and we wanted to generalize it.
Only the soft guardrail is enabled by default, and it is set to SimpleStrategy,
which means that we'll generate a CQL warning whenever the replication strategy
is set to SimpleStrategy. For new cloud deployments we'll move
SimpleStrategy from the warn to the fail list.
Guardrails violations will be tracked by metrics.

Resolves #5224
Refs #8892 (the replication strategy part, not the RF part)

Closes scylladb/scylladb#15399
2023-10-31 18:34:41 +03:00
Botond Dénes
844a0e426f Merge 'Mark counters with skip when empty' from Amnon Heiman
This series marks multiple high-cardinality counters with the skip_when_empty flag.
After this patch the following counters will not be reported if they were never used:
```
scylla_transport_cql_errors_total
scylla_storage_proxy_coordinator_reads_local_node
scylla_storage_proxy_coordinator_completed_reads_local_node
```
Also marked are the CAS-related CQL operations.
Fixes #12751

Closes scylladb/scylladb#13558

* github.com:scylladb/scylladb:
  service/storage_proxy.cc: mark counters with skip_when_empty
  cql3/query_processor.cc: mark cas related metrics with skip_when_empty
  transport/server.cc: mark metric counter with skip_when_empty
2023-09-19 15:02:39 +03:00
Piotr Smaroń
eb46f1bd17 guardrails: restrict replication factor (RF)
Replacing `minimum_keyspace_rf` config option with 4 config options:
`{minimum,maximum}_replication_factor_{warn,fail}_threshold`, which
allow us to impose soft limits (issue a warning) and hard limits (not
execute CQL) on RF when creating/altering a keyspace.
The reason to replace rather than extend the `minimum_keyspace_rf` config
option is to stay aligned with Cassandra, which did the same and uses the
same parameter names.
Only the min soft limit is enabled by default and it is set to 3, which means
that we'll generate a CQL warning whenever RF is set to either 1 or 2.
An RF value of 0 is always allowed and means that there will not be any
replicas in a given DC. This was agreed with PM.
Because we don't allow changing guardrail values while Scylla is
running (per PM), there are no tests provided with this PR; dtests will be
provided separately.
Exceeding guardrails' thresholds will be tracked by metrics.
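
The warn/fail threshold logic can be sketched as follows; `rf_guardrails` and `check_rf` are illustrative names, not Scylla's actual implementation:

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>
#include <string>

// Toy model of the {minimum,maximum}_replication_factor_{warn,fail}_threshold
// options; the default (min warn = 3, everything else disabled) follows
// the commit message.
struct rf_guardrails {
    std::optional<int> min_warn = 3;
    std::optional<int> min_fail;  // disabled by default
    std::optional<int> max_warn;  // disabled by default
    std::optional<int> max_fail;  // disabled by default
};

// Returns a warning message when a soft limit is crossed and throws when
// a hard limit is crossed. RF == 0 is always allowed (no replicas in the DC).
std::optional<std::string> check_rf(int rf, const rf_guardrails& g) {
    if (rf == 0) {
        return std::nullopt;
    }
    if (g.min_fail && rf < *g.min_fail) {
        throw std::invalid_argument("RF below minimum hard limit");
    }
    if (g.max_fail && rf > *g.max_fail) {
        throw std::invalid_argument("RF above maximum hard limit");
    }
    if (g.min_warn && rf < *g.min_warn) {
        return "RF below recommended minimum";
    }
    if (g.max_warn && rf > *g.max_warn) {
        return "RF above recommended maximum";
    }
    return std::nullopt;
}
```

With the defaults, RF 1 or 2 produces a warning, RF 3 and above pass silently, and nothing is rejected until a fail threshold is configured.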

Resolves #8619
Refs #8892 (the RF part, not the replication-strategy part)

Closes #14262
2023-09-04 19:22:17 +03:00
Amnon Heiman
c279409d48 cql3/query_processor.cc: mark cas related metrics with skip_when_empty
This patch marks the conditional metrics counters with the skip_when_empty
flag, to reduce metrics reporting when CAS is not in use.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2023-08-23 09:30:35 -04:00
Gleb Natapov
4ffc39d885 cql3: Extend the scope of group0_guard during DDL statement execution
Currently we hold the group0_guard only during a DDL statement's execute()
function, but unfortunately some statements access the underlying schema
state also during the check_access() and validate() calls, which are made
by the query_processor before it calls execute(). We need to cover those
calls with the group0_guard as well, and also move the retry loop up. This
patch does it by introducing a new function in the cql_statement class, take_guard().
Schema-altering statements return a group0 guard while others do not
return any guard. The query processor takes this guard at the beginning of
statement execution and retries if service::group0_concurrent_modification
is thrown. The guard is passed to execute() in the query_state structure.
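
The retry loop can be sketched like this; the guard and exception types below are stand-ins for the real service:: types, and the statement is reduced to a callable:

```cpp
#include <cassert>
#include <exception>

// Stand-ins for service::group0_concurrent_modification and the guard type.
struct group0_concurrent_modification : std::exception {};
struct group0_guard {};

// Sketch of the loop the commit describes: take the guard before
// check_access()/validate()/execute(), and retry the whole statement when
// a concurrent group0 modification is detected.
template <typename Statement>
auto execute_with_retry(Statement&& stmt) {
    while (true) {
        group0_guard guard;  // take_guard() in the real code
        try {
            return stmt(guard);  // check_access + validate + execute
        } catch (const group0_concurrent_modification&) {
            // someone else committed a group0 change under us:
            // take a fresh guard and re-run the statement from the top
        }
    }
}
```

Re-running from the top matters because check_access() and validate() may have read schema state that the concurrent modification just changed.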

Fixes: #13942

Message-ID: <ZNsynXayKim2XAFr@scylladb.com>
2023-08-17 15:52:48 +03:00
Avi Kivity
d57a951d48 Revert "cql3: Extend the scope of group0_guard during DDL statement execution"
This reverts commit 70b5360a73. It generates
a failure in group0_test.test_concurrent_group0_modifications in debug
mode with about 4% probability.

Fixes #15050
2023-08-15 00:26:45 +03:00
Gleb Natapov
70b5360a73 cql3: Extend the scope of group0_guard during DDL statement execution
Currently we hold the group0_guard only during a DDL statement's execute()
function, but unfortunately some statements access the underlying schema
state also during the check_access() and validate() calls, which are made
by the query_processor before it calls execute(). We need to cover those
calls with the group0_guard as well, and also move the retry loop up. This
patch does it by introducing a new function in the cql_statement class, take_guard().
Schema-altering statements return a group0 guard while others do not
return any guard. The query processor takes this guard at the beginning of
statement execution and retries if service::group0_concurrent_modification
is thrown. The guard is passed to execute() in the query_state structure.

Fixes: #13942

Message-ID: <ZNSWF/cHuvcd+g1t@scylladb.com>
2023-08-13 14:19:39 +03:00
Patryk Jędrzejczak
27ddf78171 migration_manager: announce: provide descriptions for all calls
The system.group0_history table provides useful descriptions
for each command committed to Raft group 0. One way of applying
a command to group 0 is by calling migration_manager::announce.
This function has the description parameter set to an empty string
by default. Some calls to announce use this default value, which
causes null values in system.group0_history. We want
system.group0_history to have an actual description for every
command, so we change all default descriptions to reasonable ones.

We can't provide a reasonable description to announce in
query_processor::execute_thrift_schema_command because this
function is called in multiple situations. To solve this issue,
we add the description parameter to this function and to
handler::execute_schema_command that calls it.
2023-08-07 14:38:11 +02:00
Pavel Emelyanov
d58a2d65b5 wasm: Move stop() out of query_processor
When the q.p. stops, it also "stops" the wasm manager. Move this call
into main. The cql test env doesn't need this change: it stops the whole
sharded service, which stops its instances on its own.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-08-04 19:47:50 +03:00
Pavel Emelyanov
243f2217dd wasm: Make wasm sharded<manager>
The wasm::manager is just cql3::wasm_context renamed. It now sits in
lang/wasm* and is started as a sharded service in main (and the cql test
env). This move also needs some header shuffling, but it's not severe.

This change is required to make it possible for the wasm::manager to later
be shared (by reference) between the q.p. and replica::database.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-08-04 19:47:50 +03:00
Pavel Emelyanov
dde285e7e9 query_processor: Wrap wasm stuff in a struct
There are three wasm-only fields on the q.p. -- engine, cache and runner.
This patch groups them in a single wasm_context structure to make it
easier to manipulate them in the next patches.

The 'friend' declaration is temporary and will go away soon.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-08-04 19:47:50 +03:00
Patryk Jędrzejczak
233d801f39 cql3: query_processor::execute_thrift_schema_command: remove an unused parameter
After changing the prepare_ methods of migration_manager to
functions, the migration_manager& parameter of
query_processor::execute_thrift_schema_command and
thrift::handler::execute_schema_command (that calls
query_processor::execute_thrift_schema_command) has been unused.
2023-08-01 10:07:31 +02:00
Patryk Jędrzejczak
ffc3c1302e cql3: schema_altering_statement::prepare_schema_mutations: remove an unused parameter
After changing the prepare_ methods of migration_manager to
functions, the migration_manager& parameter of
schema_altering_statement::prepare_schema_mutations has been
unused by all classes inheriting from schema_altering_statement.
2023-08-01 10:07:31 +02:00
Kamil Braun
3d58e8e424 Revert "cql3: Extend the scope of group0_guard during DDL statement execution"
This reverts commit c42a91ec72.

A significant performance regression was observed due to this change.

From Avi:
> perf-simple-query --smp 1
>
> before:
>
> 216489.88 tps ( 61.1 allocs/op, 13.1 tasks/op, 43558 insns/op, 0 errors)
> 217708.69 tps ( 61.1 allocs/op, 13.1 tasks/op, 43542 insns/op, 0 errors)
> 219495.02 tps ( 61.1 allocs/op, 13.1 tasks/op, 43538 insns/op, 0 errors)
> 216863.84 tps ( 61.1 allocs/op, 13.1 tasks/op, 43567 insns/op, 0 errors)
> 218936.48 tps ( 61.1 allocs/op, 13.1 tasks/op, 43546 insns/op, 0 errors)
>
> after:
>
> 201773.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44600 insns/op, 0 errors)
> 210875.48 tps ( 63.1 allocs/op, 15.1 tasks/op, 44558 insns/op, 0 errors)
> 210186.55 tps ( 63.1 allocs/op, 15.1 tasks/op, 44588 insns/op, 0 errors)
> 211021.76 tps ( 63.1 allocs/op, 15.1 tasks/op, 44569 insns/op, 0 errors)
> 208597.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44587 insns/op, 0 errors)
>
> Two extra allocations, two extra tasks, 1k extra instructions, for
> something that is DDL only.

Fixes #14590
2023-07-10 13:20:49 +02:00
Gleb Natapov
c42a91ec72 cql3: Extend the scope of group0_guard during DDL statement execution
Currently we hold the group0_guard only during a DDL statement's execute()
function, but unfortunately some statements access the underlying schema
state also during the check_access() and validate() calls, which are made
by the query_processor before it calls execute(). We need to cover those
calls with the group0_guard as well, and also move the retry loop up. This
patch does it by introducing a new function in the cql_statement class, take_guard().
Schema-altering statements return a group0 guard while others do not
return any guard. The query processor takes this guard at the beginning of
statement execution and retries if service::group0_concurrent_modification
is thrown. The guard is passed to execute() in the query_state structure.

Fixes: #13942
Message-Id: <ZJ2aeNIBQCtnTaE2@scylladb.com>
2023-07-05 14:38:34 +02:00
Gleb Natapov
28f31bcbb1 query_processor: get rid of internal_state and create an individual query_state for each request
We want to put a per-request field in query_state, so make it unique for
each internal execution (it is already unique for non-internal ones).
2023-06-22 15:26:20 +03:00
Gleb Natapov
0820309c14 query_processor: co-routinise execute_prepared_without_checking_exception_message function 2023-06-22 13:57:36 +03:00
Gleb Natapov
818e72c029 query_processor: co-routinize execute_direct_without_checking_exception_message function 2023-06-22 13:57:36 +03:00
Gleb Natapov
d75a41ba30 query_processor: move statement::validate call into execute_with_params function
It is called before any call to the function anyway, so let's do it once instead.
2023-06-22 13:49:11 +03:00
Gleb Natapov
24e78059a5 query_processor: co-routinise execute_with_params function 2023-06-22 13:49:11 +03:00
Gleb Natapov
725fa5e0f3 query_processor: execute statement::validate before each execution of internal query instead of only during prepare
There is a discrepancy in how statement::validate is used. On the regular
path it is called before each execution, but on the internal execution
path it is called only once, during prepare. Such a discrepancy makes it
hard to reason about what can and cannot be done during the call. Call it
uniformly before each execution. This allows validate to check state that
can change after prepare.
2023-06-22 13:49:11 +03:00
Gleb Natapov
ce12a18135 query_processor: get rid of shared internal_query_state
internal_query_state was passed in a shared_ptr, a leftover from the Java
translation times. It can be a regular C++ type with its lifetime
bound to the execution of the function it was created in.
2023-06-22 13:49:11 +03:00
Gleb Natapov
aabd05e2ef query_processor: co-routinize execute_paged_internal function 2023-06-22 13:49:11 +03:00
Gleb Natapov
64a67a59d6 query_processor: co_routinize execute_batch_without_checking_exception_message function 2023-06-22 13:49:11 +03:00
Gleb Natapov
c4ca24e636 query_processor: co-routinize process_authorized_statement function 2023-06-22 13:49:11 +03:00
Kamil Braun
c212370cf1 cql3: query_processor: split remote initialization step
Pass `migration_manager&`, `forward_service&` and `raft_group0_client&`
in the remote init step which happens after the constructor.

Add a corresponding uninit remote step.
Make sure that any use of the `remote` services is finished before we
destroy the `remote` object by using a gate.

Thanks to this in a later commit we'll be able to move the construction
of `query_processor` earlier in the Scylla initialization procedure.
2023-06-16 14:29:59 +02:00
Kamil Braun
ec5b831c13 cql3: query_processor: move migration_manager&, forwarder&, group0_client& to a remote object
These services are used for performing distributed queries, which
require remote calls. As a preparation for 2-phase initialization of
`query_processor` (for local queries vs for distributed queries), move
them to a separate `remote` object which will be constructed in the
second phase.

Replace the getters for the different services with a single `remote()`
getter. Once we split the initialization into two phases, `remote()`
will include a safety protection.
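
A minimal sketch of this two-phase setup; all type names below are simplified stand-ins for the real Scylla services:

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>

// Simplified stand-ins for the distributed-query services.
struct migration_manager {};
struct forward_service {};
struct raft_group0_client {};

// The services needed only for distributed queries, grouped in the
// `remote` object constructed in the second initialization phase.
struct remote_services {
    migration_manager& mm;
    forward_service& fwd;
    raft_group0_client& group0_client;
};

class query_processor {
    std::unique_ptr<remote_services> _remote;  // absent until phase 2
public:
    // Phase 2: wire up the remote services after construction.
    void start_remote(migration_manager& mm, forward_service& fwd,
                      raft_group0_client& g0) {
        _remote = std::make_unique<remote_services>(remote_services{mm, fwd, g0});
    }
    // Single getter replacing the per-service getters; the safety
    // protection is that using remote() before phase 2 is an error.
    remote_services& remote() {
        if (!_remote) {
            throw std::runtime_error("remote services not initialized");
        }
        return *_remote;
    }
};
```

This lets `query_processor` be constructed early for local queries, with distributed-query machinery attached later.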
2023-06-16 14:08:21 +02:00
Kamil Braun
c2fa6406ad cql3: query_processor: make forwarder() private 2023-06-16 13:45:59 +02:00
Kamil Braun
f616408a87 cql3: query_processor: make get_group0_client() private 2023-06-16 13:45:19 +02:00
Kamil Braun
2e441e17cf cql3: query_processor: make get_migration_manager private
After previous commits it's no longer used outside `query_processor`.
Also remove the `const` version - not needed for anything.

Use the getter instead of directly accessing `_mm` in `query_processor`
methods. Later we will put `_mm` in a separate object.
2023-06-16 13:44:14 +02:00
Kamil Braun
9e25a3cbed thrift: handler: move implementation of execute_schema_command to query_processor
It's now named `execute_thrift_schema_command` in `query_processor`.

This allows us to remove yet another
`query_processor::get_migration_manager()` call.

Now that `execute_thrift_schema_command` sits near
`execute_schema_statement` (the latter used for CQL), we can see a
certain similarity. The Thrift version should also in theory get a retry
loop like the one CQL has, so the similarity would become even stronger.
Perhaps the two functions could be refactored to deduplicate some logic
later.
2023-06-15 09:48:54 +02:00
Kamil Braun
eace351ca3 cql3: statements: schema_altering_statement: move execute0 to query_processor
Rename it to `execute_schema_statement`.

This allows us to remove a call to
`query_processor::get_migration_manager`, the goal being to make it a
private member function.
2023-06-15 09:48:54 +02:00
Kamil Braun
30cc07b40d Merge 'Introduce tablets' from Tomasz Grabiec
This PR introduces an experimental feature called "tablets". Tablets are
a way to distribute data in the cluster, which is an alternative to the
current vnode-based replication. Vnode-based replication strategy tries
to evenly distribute the global token space shared by all tables among
nodes and shards. With tablets, the aim is to start from a different
side: divide the resources of a replica shard into tablets, with the goal of
having a fixed target tablet size, and then assign those tablets to
serve fragments of tables (also called tablets). This will allow us to
balance the load in a more flexible manner, by moving individual tablets
around. Also, unlike with vnode ranges, tablet replicas live on a
particular shard on a given node, which will allow us to bind raft
groups to tablets. Those goals are not yet achieved with this PR, but it
lays the groundwork for them.

Things achieved in this PR:

  - You can start a cluster and create a keyspace whose tables will use
    tablet-based replication. This is done by setting `initial_tablets`
    option:

    ```
        CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy',
                        'replication_factor': 3,
                        'initial_tablets': 8};
    ```

    All tables created in such a keyspace will be tablet-based.

    Tablet-based replication is a trait, not a separate replication
    strategy. Tablets don't change the spirit of the replication strategy;
    they just alter the way in which data ownership is managed. In theory, we
    could use them for other strategies as well, like
    EverywhereReplicationStrategy. Currently, only NetworkTopologyStrategy
    is augmented to support tablets.

  - You can create and drop tablet-based tables (no DDL language changes)

  - DML / DQL work with tablet-based tables

    Replicas for tablet-based tables are chosen from tablet metadata
    instead of token metadata

Things which are not yet implemented:

  - handling of views, indexes, CDC created on tablet-based tables
  - sharding is done using the old method, it ignores the shard allocated in tablet metadata
  - node operations (topology changes, repair, rebuild) are not handling tablet-based tables
  - not integrated with compaction groups
  - tablet allocator piggy-backs on tokens to choose replicas.
    Eventually we want to allocate based on current load, not statically

Closes #13387

* github.com:scylladb/scylladb:
  test: topology: Introduce test_tablets.py
  raft: Introduce 'raft_server_force_snapshot' error injection
  locator: network_topology_strategy: Support tablet replication
  service: Introduce tablet_allocator
  locator: Introduce tablet_aware_replication_strategy
  locator: Extract maybe_remove_node_being_replaced()
  dht: token_metadata: Introduce get_my_id()
  migration_manager: Send tablet metadata as part of schema pull
  storage_service: Load tablet metadata when reloading topology state
  storage_service: Load tablet metadata on boot and from group0 changes
  db, migration_manager: Notify about tablet metadata changes via migration_listener::on_update_tablet_metadata()
  migration_notifier: Introduce before_drop_keyspace()
  migration_manager: Make prepare_keyspace_drop_announcement() return a future<>
  test: perf: Introduce perf-tablets
  test: Introduce tablets_test
  test: lib: Do not override table id in create_table()
  utils, tablets: Introduce external_memory_usage()
  db: tablets: Add printers
  db: tablets: Add persistence layer
  dht: Use last_token_of_compaction_group() in split_token_range_msb()
  locator: Introduce tablet_metadata
  dht: Introduce first_token()
  dht: Introduce next_token()
  storage_proxy: Improve trace-level logging
  locator: token_metadata: Fix confusing comment on ring_range()
  dht, storage_proxy: Abstract token space splitting
  Revert "query_ranges_to_vnodes_generator: fix for exclusive boundaries"
  db: Exclude keyspace with per-table replication in get_non_local_strategy_keyspaces_erms()
  db: Introduce get_non_local_vnode_based_strategy_keyspaces()
  service: storage_proxy: Avoid copying keyspace name in write handler
  locator: Introduce per-table replication strategy
  treewide: Use replication_strategy_ptr as a shorter name for abstract_replication_strategy::ptr_type
  locator: Introduce effective_replication_map
  locator: Rename effective_replication_map to vnode_effective_replication_map
  locator: effective_replication_map: Abstract get_pending_endpoints()
  db: Propagate feature_service to abstract_replication_strategy::validate_options()
  db: config: Introduce experimental "TABLETS" feature
  db: Log replication strategy for debugging purposes
  db: Log full exception on error in do_parse_schema_tables()
  db: keyspace: Remove non-const replication strategy getter
  config: Reformat
2023-04-27 09:40:18 +02:00
Kefu Chai
c8aa7295d4 cql3: drop unused function
There are two variants of `query_processor::for_each_cql_result()`;
both of them perform pagination of the results returned by a CQL
statement. The one which accepts a function returning an immediate
value is not used now, so let's drop it.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes #13675
2023-04-26 08:43:22 +03:00