Commit Graph

32816 Commits

Author SHA1 Message Date
Petr Gusev
4ff0807cd0 raft server, status metric 2022-09-13 19:34:22 +04:00
Petr Gusev
1b5fa4088e raft server, abort group0 server on background errors 2022-09-12 10:16:43 +04:00
Petr Gusev
e92dc9c15b raft server, provide a callback to handle background errors
Fix: #11352
2022-09-12 10:16:43 +04:00
Petr Gusev
c57238d3d6 raft server, check aborted state on server public api's
Fix: #11352
2022-09-12 10:16:40 +04:00
Alejo Sanchez
67c91e8bcd test.py: random tables make DDL queries async
There are async timeouts for ALTER queries. Seems related to other issues
with the driver and async.

Make these queries synchronous for now.

Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>

Closes #11394
2022-08-28 10:38:39 +03:00
Felipe Mendes
fd5cb85a7a alternator - Doc - Update DescribeTable response and introduce hashing function differences
This commit introduces the following changes to the Alternator compatibility doc:

* As of https://github.com/scylladb/scylladb/pull/11298 Alternator will return ProvisionedThroughput in DescribeTable API calls. We add the fact that tables will default to a BillingMode of PAY_PER_REQUEST (this wasn't made explicit anywhere in the docs), and that the values for RCUs/WCUs are hardcoded to 0.
* Mention the fact that ScyllaDB's (and thus Alternator's) hashing function is different from AWS's proprietary implementation for DynamoDB. This is mostly an implementation detail rather than a bug, but it may cause user confusion when/if comparing the ResultSet returned from Table Scans between DynamoDB and Alternator.

Refs: https://github.com/scylladb/scylladb/issues/11222
Fixes: https://github.com/scylladb/scylladb/issues/11315

Closes #11360
2022-08-28 10:29:07 +03:00
Kamil Braun
6c16ae4868 Merge 'raft, limit for command size' from Gusev Petr
Commitlog imposes a limit on the size of mutations
and throws an exception if it's exceeded. Before Raft, for schema
changes this exception was delivered directly to the client. Now it is
thrown while saving the raft command in io_fiber, in
persistence->store_log_entries, and all the client gets is a timeout
exception, which doesn't say much about the cause of the problem.
This patch introduces an explicit command size limit
and provides a clear error message in this case.

Closes #11318

* github.com:scylladb/scylladb:
  raft, use max_command_size to satisfy commitlog limit
  raft, limit for command size
2022-08-26 12:20:58 +02:00
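
To illustrate the idea in the merge above, a minimal C++ sketch of validating a command against an explicit size limit before it reaches the commitlog, so the caller gets a descriptive error instead of a later timeout. The names (max_command_size, command_too_large) are illustrative, not ScyllaDB's actual API.

    #include <cstddef>
    #include <stdexcept>
    #include <string>

    // Hypothetical error type carrying a clear message about the cause.
    struct command_too_large : std::invalid_argument {
        command_too_large(std::size_t size, std::size_t limit)
            : std::invalid_argument("raft command size " + std::to_string(size) +
                                    " exceeds max_command_size " + std::to_string(limit)) {}
    };

    // Check up front, before the command is handed to the commitlog in io_fiber.
    void check_command_size(std::size_t command_size, std::size_t max_command_size) {
        if (command_size > max_command_size) {
            throw command_too_large(command_size, max_command_size);
        }
    }
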
Avi Kivity
0dbcd13a0f config: change logging::settings constructor call to use designated initializer
Safer wrt reordering, and more readable too.

Closes #11382
2022-08-26 06:14:01 +03:00
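
For reference, a small C++20 sketch of the designated-initializer style the commit above switches to; the settings struct here is a stand-in, not the actual logging::settings type. Naming each field at the call site keeps the code readable and robust against member reordering.

    #include <string>

    struct settings {                       // stand-in for logging::settings
        std::string default_level;
        bool stdout_enabled = true;
        bool syslog_enabled = false;
    };

    settings make_logging_settings() {
        // Field names document the call site; a reordered struct definition
        // cannot silently swap two same-typed arguments.
        return settings{
            .default_level = "info",
            .stdout_enabled = true,
            .syslog_enabled = false,
        };
    }
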
Konstantin Osipov
4e128bafb5 docs: clarify the tricky field of row existence in LWT
Closes #11372
2022-08-26 06:10:45 +03:00
Benny Halevy
765d2f5e46 release: define SCYLLA_BUILD_MODE_STR by stringifying SCYLLA_BUILD_MODE
Currently SCYLLA_BUILD_MODE is defined as a string by the cxxflags
generated by configure.py.  This is not very useful since one cannot use
it in an #if preprocessor directive.

Instead, use -DSCYLLA_BUILD_MODE=release, for example, and define
SCYLLA_BUILD_MODE_STR as the string representation of it.

In addition, define the respective
SCYLLA_BUILD_MODE_{RELEASE,DEV,DEBUG,SANITIZE} macros that can be easily
used in #ifdef (or #ifndef :)) for conditional compilation.

The planned use case for it is to enable a task_manager test module only
in non-release modes.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #11357
2022-08-25 16:50:42 +02:00
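
The stringification described above is the classic two-level preprocessor idiom; a self-contained sketch (the macro names and the in-source #define standing in for the -D compiler flag are assumptions, not the repository's actual ones):

    // Two-level expansion so the macro's value, not its name, is stringified.
    #define SCYLLA_STRINGIFY_IMPL(x) #x
    #define SCYLLA_STRINGIFY(x) SCYLLA_STRINGIFY_IMPL(x)

    #define SCYLLA_BUILD_MODE release          // stands in for -DSCYLLA_BUILD_MODE=release
    #define SCYLLA_BUILD_MODE_STR SCYLLA_STRINGIFY(SCYLLA_BUILD_MODE)  // expands to "release"
    #define SCYLLA_BUILD_MODE_RELEASE          // per-mode macro, usable in #ifdef

    #include <cstdio>

    int main() {
    #ifdef SCYLLA_BUILD_MODE_RELEASE
        std::printf("build mode: %s\n", SCYLLA_BUILD_MODE_STR);
    #endif
        return 0;
    }
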
Wojciech Mitros
49dba4f0c1 functions: fix dropping of a keyspace with an aggregate in it
Currently, if a keyspace has an aggregate and the keyspace
is dropped, the keyspace becomes corrupted and another keyspace
with the same name cannot be created again.

This is caused by the fact that when removing an aggregate, we
call create_aggregate() to get values for its name and signature.
In the create_aggregate(), we check whether the row and final
functions for the aggregate exist.
Normally, that's not an issue, because when dropping an existing
aggregate alone, we know that its UDFs also exist. But when dropping
an entire keyspace, we first drop the UDFs, making us unable to drop
the aggregate afterwards.

This patch fixes this behavior by removing the create_aggregate()
from the aggregate dropping implementation and replacing it with
specific calls for getting the aggregate name and signature.

Additionally, a test that would previously fail is added to
cql-pytest/test_uda.py where we drop a keyspace with an aggregate.

Fixes #11327

Closes #11375
2022-08-25 16:28:57 +02:00
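
A hedged sketch of the shape of the fix described above (hypothetical types, not the actual CQL3 classes): the drop path reads the aggregate's name and signature from the metadata it already has instead of calling create_aggregate(), which would try to look up functions that are already gone when a whole keyspace is being dropped.

    #include <string>
    #include <vector>

    struct user_aggregate_meta {
        std::string keyspace;
        std::string name;
        std::vector<std::string> argument_types;   // acts as the signature
    };

    struct drop_request {
        std::string keyspace;
        std::string name;
        std::vector<std::string> argument_types;
    };

    // No create_aggregate() and no lookup of the aggregate's UDFs here:
    // when the keyspace is dropped, the UDFs may already have been removed.
    drop_request prepare_aggregate_drop(const user_aggregate_meta& agg) {
        return drop_request{agg.keyspace, agg.name, agg.argument_types};
    }
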
Tomasz Grabiec
83850e247a Merge 'raft: server: handle aborts when waiting for config entry to commit' from Kamil Braun
Changing configuration involves two entries in the log: a 'joint
configuration entry' and a 'non-joint configuration entry'. We use
`wait_for_entry` to wait on the joint one. To wait on the non-joint one,
we use a separate promise field in `server`. This promise wasn't
connected to the `abort_source` passed into `set_configuration`.

The call could get stuck if the server got removed from the
configuration and lost leadership after committing the joint entry but
before committing the non-joint one, waiting on the promise. Aborting
wouldn't help. Fix this by subscribing to the `abort_source` and
resolving the promise exceptionally on abort.

Furthermore, make sure that two `set_configuration` calls don't step on
each other's toes by one setting the other's promise. To do that, reset
the promise field at the end of `set_configuration` and check that it's
not engaged at the beginning.

Fixes #11288.

Closes #11325

* github.com:scylladb/scylladb:
  test: raft: randomized_nemesis_test: additional logging
  raft: server: handle aborts when waiting for config entry to commit
2022-08-25 12:49:09 +02:00
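
A standalone sketch of the fix's shape, using a toy abort_source and std::promise rather than Seastar's types: the promise tracking the non-joint configuration entry is subscribed to the abort signal so it is resolved exceptionally on abort, and the field is checked/reset so two set_configuration calls cannot touch each other's promise.

    #include <exception>
    #include <functional>
    #include <future>
    #include <optional>
    #include <stdexcept>
    #include <vector>

    struct toy_abort_source {
        std::vector<std::function<void()>> subs;
        void subscribe(std::function<void()> f) { subs.push_back(std::move(f)); }
        void request_abort() { for (auto& f : subs) f(); }
    };

    struct toy_server {
        // Engaged only while a set_configuration() call is in flight.
        std::optional<std::promise<void>> non_joint_committed;

        std::future<void> set_configuration(toy_abort_source& as) {
            if (non_joint_committed) {
                throw std::logic_error("concurrent set_configuration");
            }
            non_joint_committed.emplace();
            auto f = non_joint_committed->get_future();
            as.subscribe([this] {
                if (non_joint_committed) {          // still waiting: fail the waiter
                    non_joint_committed->set_exception(
                        std::make_exception_ptr(std::runtime_error("configuration change aborted")));
                    non_joint_committed.reset();    // leave the field clean for the next call
                }
            });
            // ... append joint and non-joint entries; the promise is resolved when
            //     the non-joint entry commits, and the field reset at the end ...
            return f;
        }
    };
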
Avi Kivity
df87949241 Merge "Remove batch tokens update helper" from Pavel E
"
On token_metadata there are two update_normal_tokens() overloads --
one updates tokens for a single endpoint, another one -- for a set
(well -- std::map) of them. Other than updating the tokens both
methods also may add an endpoint to the t.m.'s topology object.

There's an ongoing effort in moving the dc/rack information from
snitch to topology, and one of the changes made in it is -- when
adding an entry to topology, the dc/rack info should be provided
by the caller (which in 99% of the cases is the storage service).
The batched tokens update is extremely unfriendly to the latter
change. Fortunately, this helper is only used by tests; the core
code always uses fine-grained tokens updating.
"

* 'br-tokens-update-relax' of https://github.com/xemul/scylla:
  token_metadata: Indentation fix after previous patch
  token_metadata: Remove excessive empty tokens check
  token_metadata: Remove batch tokens updating method
  tests: Use one-by-one tokens updating method
2022-08-25 12:01:58 +02:00
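
To illustrate what the tests do once the batch overload is gone, a small sketch (illustrative names, not the real token_metadata interface): tokens are applied one endpoint at a time, with the caller supplying the dc/rack location for each endpoint.

    #include <map>
    #include <set>
    #include <string>

    using endpoint = std::string;
    using token = long;
    struct location { std::string dc, rack; };

    struct token_metadata_stub {
        void update_topology(const endpoint&, const location&) {}
        void update_normal_tokens(const std::set<token>&, const endpoint&) {}
    };

    void update_all(token_metadata_stub& tm,
                    const std::map<endpoint, std::set<token>>& tokens,
                    const std::map<endpoint, location>& locations) {
        for (const auto& [ep, toks] : tokens) {
            tm.update_topology(ep, locations.at(ep));   // caller provides dc/rack
            tm.update_normal_tokens(toks, ep);          // fine-grained, one endpoint at a time
        }
    }
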
Wojciech Mitros
9e6e8de38f tests: prevent test_wasm from occasional failing
Some cases in test_wasm.py assumed that all cases
are run in the same order every time and depended
on values that should have been added to tables in
previous cases. Because of that, they were sometimes
failing. This patch removes this assumption by
adding the missing inserts to the affected cases.

Additionally, an assert that confirms a low miss
rate of UDFs is made more precise, and a comment is added
to explain it clearly.

Closes #11367
2022-08-25 11:32:06 +03:00
Kamil Braun
90233551be test: raft: randomized_nemesis_test: don't access failure detector service after it's stopped
It could happen that we accessed failure detector service after it was
stopped if a reconfiguration happened at the 'right' moment. This would
result in an assertion failure. Fix this.

Closes #11326
2022-08-25 11:32:06 +03:00
Tomasz Grabiec
1d0264e1a9 Merge 'Implement Raft upgrade procedure' from Kamil Braun
Start with a cluster with Raft disabled, end up with a cluster that performs
schema operations using group 0.

Design doc:
https://docs.google.com/document/d/1PvZ4NzK3S0ohMhyVNZZ-kCxjkK5URmz1VP65rrkTOCQ/
(TODO: replace this with .md file - we can do it as a follow-up)

The procedure, on a high level, works as follows:
- join group 0
- wait until every peer joined group 0 (peers are taken from `system.peers`
  table)
- enter `synchronize` upgrade state, in which group 0 operations are disabled
- wait until all members of group 0 entered `synchronize` state or some member
  entered the final state
- synchronize schema by comparing versions and pulling if necessary
- enter the final state (`use_new_procedures`), in which group 0 is used for
  schema operations.

With the procedure comes a recovery mode in case the upgrade procedure gets
stuck (and it may if we lose a node during recovery - the procedure, to
correctly establish a single group 0 cluster, requires contacting every node).

This recovery mode can also be used to recover clusters with group 0 already
established if they permanently lose a majority of nodes - killing two birds with
one stone. Details in the last commit message.

Read the design doc, then read the commits in topological order
for best reviewing experience.

---

I did some manual tests: upgrading a cluster, using the cluster to add nodes,
remove nodes (both with `decommission` and `removenode`), replacing nodes.
Performing recovery.

As a follow-up, we'll need to implement tests using the new framework (after
it's ready). It will be easy to test upgrades and recovery even with a single
Scylla version - we start with a cluster with the RAFT flag disabled, then
rolling-restart while enabling the flag (and recovery is done through simple
CQL statements).

Closes #10835

* github.com:scylladb/scylladb:
  service/raft: raft_group0: implement upgrade procedure
  service/raft: raft_group0: extract `tracker` from `persistent_discovery::run`
  service/raft: raft_group0: introduce local loggers for group 0 and upgrade
  service/raft: raft_group0: introduce GET_GROUP0_UPGRADE_STATE verb
  service/raft: raft_group0_client: prepare for upgrade procedure
  service/raft: introduce `group0_upgrade_state`
  db: system_keyspace: introduce `load_peers`
  idl-compiler: introduce cancellable verbs
  message: messaging_service: cancellable version of `send_schema_check`
2022-08-25 11:32:06 +03:00
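
A very high-level sketch of the upgrade flow described above, with the states named after the commit's wording (synchronize, use_new_procedures, recovery); the pre-upgrade state name and the helper functions are placeholders, and the real implementation wraps each step in retry loops to tolerate intermittent network failures.

    enum class group0_upgrade_state {
        recovery,             // admin-driven escape hatch: no group 0 operations
        use_old_procedures,   // placeholder name for the pre-upgrade state
        synchronize,          // group 0 operations disabled while schemas converge
        use_new_procedures,   // final state: schema changes go through group 0
    };

    // Placeholder steps; each one is retried until it succeeds in the real code.
    void join_group0() {}
    void wait_until_every_peer_joined() {}               // peers come from system.peers
    void enter_state(group0_upgrade_state) {}
    void wait_until_all_in_synchronize_or_any_in_final_state() {}
    void synchronize_schema() {}                         // compare versions, pull if needed

    void upgrade_to_group0() {
        join_group0();
        wait_until_every_peer_joined();
        enter_state(group0_upgrade_state::synchronize);
        wait_until_all_in_synchronize_or_any_in_final_state();
        synchronize_schema();
        enter_state(group0_upgrade_state::use_new_procedures);
    }
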
Pavel Emelyanov
d8c5044eee token_metadata: Indentation fix after previous patch
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-08-24 08:24:21 +03:00
Pavel Emelyanov
8238c38e9f token_metadata: Remove excessive empty tokens check
After the previous patch, passing empty tokens makes the helper co_return
early, so this if is dead code.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-08-24 08:24:21 +03:00
Pavel Emelyanov
056d21c050 token_metadata: Remove batch tokens updating method
No users left.

The endpoint_tokens.empty() check is removed; only tests could trigger
it, but they didn't and are patched out.

Indentation is left broken

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-08-24 08:24:21 +03:00
Pavel Emelyanov
1d437302a8 tests: Use one-by-one tokens updating method
Tests are the only users of the batch tokens updating "sugar", which
actually makes things more complicated.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-08-24 08:24:21 +03:00
Pavel Emelyanov
18fa5038b1 replication_strategy: Remove unused method
The get_pending_address_ranges() overload accepting a single token is not in use;
its peer that accepts a set of tokens is.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes #11358
2022-08-23 20:23:50 +02:00
Avi Kivity
6ce5e9079c Merge 'utils/logalloc: consolidate lsa state in shard tracker' from Botond Dénes
Currently the state of LSA is scattered across a handful of global variables. This series consolidates all of these into a single one: the shard tracker. Beyond reducing the number of globals (the fewer globals, the better) this paves the way for a planned de-globalization of the shard tracker itself.
There is one separate global left, the static migrators registry. This is left as-is for now.

Closes #11284

* github.com:scylladb/scylladb:
  utils/logalloc: remove reclaim_timer:: globals
  utils/logalloc: make s_sanitizer_report_backtrace global a member of tracker
  utils/logalloc: tracker_reclaimer_lock: get shard tracker via constructor arg
  utils/logalloc: move global stat accessors to tracker
  utils/logalloc: allocating_section: don't use the global tracker
  utils/logalloc: pass down tracker::impl reference to segment_pool
  utils/logalloc: move segment pool into tracker
  utils/logalloc: add tracker member to basic_region_impl
  utils/logalloc: make segment independent of segment pool
2022-08-23 18:51:14 +02:00
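
The consolidation pattern in this series, reduced to a toy sketch (names illustrative): state that used to live in free-standing globals becomes members of the shard-wide tracker, which is still reached through a single global accessor for now.

    #include <cstddef>

    class tracker {
    public:
        struct stats { std::size_t segments_allocated = 0; };
        const stats& statistics() const { return _stats; }    // was a global stat accessor
    private:
        stats _stats;                        // was a free-standing global
        // segment_pool _segment_pool;       // was a separate global instance
        // bool _sanitizer_report_backtrace = false;   // was s_sanitizer_report_backtrace
    };

    // The one remaining access point; later this can become a locally passed reference.
    tracker& shard_tracker() {
        static thread_local tracker t;
        return t;
    }
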
Benny Halevy
a980510654 table: seal_active_memtable: handle ENOSPC error
Aborting too soon on ENOSPC is too harsh, leading to loss of
availability of the node for reads, while restarting it won't
solve the ENOSPC condition.

Fixes #11245

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #11246
2022-08-23 17:58:20 +02:00
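
A hedged sketch of the general idea (plain std::errc rather than the actual Seastar/Scylla error-handling path): an out-of-space error from a memtable flush is treated as non-fatal, since aborting the node would lose read availability without freeing any disk space.

    #include <system_error>

    // Decide whether a failed flush should bring the node down.
    bool should_abort_on_flush_error(const std::system_error& e) {
        if (e.code() == std::errc::no_space_left_on_device) {
            // Keep serving reads; the flush can be retried once space is freed.
            return false;
        }
        return true;   // other storage errors are still treated as fatal
    }
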
Tomasz Grabiec
9c4e32d2e2 Merge 'raft: server: drop waiters in applier_fiber instead of io_fiber' from Kamil Braun
When `io_fiber` fetched a batch with a configuration that does not
contain this node, it would send the entries committed in this batch to
`applier_fiber` and then proceed to drop waiters for any remaining
entries (if the node was no longer a leader).

If there were waiters for entries committed in this batch, it could
either happen that `applier_fiber` received and processed those entries
first, notifying the waiters that the entries were committed and/or
applied, or it could happen that `io_fiber` reaches the dropping waiters
code first, causing the waiters to be resolved with
`commit_status_unknown`.

The second scenario is undesirable. For example, when a follower tries
to remove the current leader from the configuration using
`modify_config`, if the second scenario happens, the follower will get
`commit_status_unknown` - this can happen even though there are no node
or network failures. In particular, this caused
`randomized_nemesis_test.remove_leader_with_forwarding_finishes` to fail
from time to time.

Fix it by serializing the notifying and dropping of waiters in a single
fiber - `applier_fiber`. We decided to move all management of waiters
into `applier_fiber`, because most of that management was already there
(there was already one `drop_waiters` call, and two `notify_waiters`
calls). Now, when `io_fiber` observes that we've been removed from the
config and are no longer a leader, instead of dropping waiters, it sends a
message to `applier_fiber`. `applier_fiber` will drop waiters when
receiving that message.

Improve an existing test to reproduce this scenario more frequently.

Fixes #11235.

Closes #11308

* github.com:scylladb/scylladb:
  test: raft: randomized_nemesis_test: more chaos in `remove_leader_with_forwarding_finishes`
  raft: server: drop waiters in `applier_fiber` instead of `io_fiber`
  raft: server: use `visit` instead of `holds_alternative`+`get`
2022-08-23 17:19:44 +02:00
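
The shape of the fix, as a standalone sketch with toy types (not Scylla's fiber or waiter machinery): io_fiber never touches the waiters itself; it enqueues a message for applier_fiber, so notifying and dropping waiters are serialized in a single consumer.

    #include <deque>
    #include <type_traits>
    #include <variant>
    #include <vector>

    struct committed_entries { std::vector<int> indexes; };
    struct removed_from_config {};     // "drop any remaining waiters"
    using applier_message = std::variant<committed_entries, removed_from_config>;

    struct toy_server {
        std::deque<applier_message> applier_queue;

        void io_fiber_step(committed_entries batch, bool removed_from_cfg) {
            applier_queue.push_back(std::move(batch));
            if (removed_from_cfg) {
                // Previously the waiters were dropped right here, racing with
                // applier_fiber; now the drop request goes through the same queue.
                applier_queue.push_back(removed_from_config{});
            }
        }

        void applier_fiber_step() {
            while (!applier_queue.empty()) {
                applier_message msg = std::move(applier_queue.front());
                applier_queue.pop_front();
                std::visit([](auto&& m) {
                    using T = std::decay_t<decltype(m)>;
                    if constexpr (std::is_same_v<T, committed_entries>) {
                        // notify waiters for these entries (committed / applied)
                    } else {
                        // drop remaining waiters with commit_status_unknown
                    }
                }, msg);
            }
        }
    };
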
Avi Kivity
fd9d8ddb3e Merge 'distributed_loader: Restore separate processing of keyspace init prio/normal' from Calle Wilund
Fixes #11349

In 7396de7 (and refactorings before it) the set of prioritized keyspaces (and processing thereof)
was removed due to apparent non-usage (which is true for the open-source version).

This functionality is, however, required for certain features of the enterprise version (ear).
As such it needs to be restored and re-enabled. This patch set does so, adapted
to the recent version of this file.

Closes #11350

* github.com:scylladb/scylladb:
  distributed_loader: Restore separate processing of keyspace init prio/normal
  Revert "distributed_loader: Remove unused load-prio manipulations"
2022-08-23 16:25:48 +02:00
Kamil Braun
e350e37605 service/raft: raft_group0: implement upgrade procedure
A listener is created inside `raft_group0` that acts when the
SUPPORTS_RAFT feature is enabled. The listener is established after the
node enters NORMAL status (in `raft_group0::finish_setup_after_join()`,
called at the end of `storage_service::join_cluster()`).

The listener starts the `upgrade_to_group0` procedure.

The procedure, on a high level, works as follows:
- join group 0
- wait until every peer joined group 0 (peers are taken from
  `system.peers` table)
- enter `synchronize` upgrade state, in which group 0 operations are
  disabled (see earlier commit which implemented this logic)
- wait until all members of group 0 entered `synchronize` state or some
  member entered the final state
- synchronize schema by comparing versions and pulling if necessary
- enter the final state (`use_new_procedures`), in which group 0 is used
  for schema operations (only those for now).

The devil lies in the details, and the implementation is ugly compared
to this nice description; for example there are many retry loops for
handling intermittent network failures. Read the code.

`leave_group0` and `remove_group0` were adjusted to handle the upgrade
procedure being run correctly; if necessary, they will wait for the
procedure to finish.

If the upgrade procedure gets stuck (and it may, since it requires all
nodes to be available so they can be contacted to correctly establish a
single group 0 raft cluster), or if a running cluster permanently loses a
majority of nodes, causing group 0 unavailability, the cluster admin
is not left without help.

We introduce a recovery mode, which allows the admin to
completely get rid of traces of existing group 0 and restart the
upgrade procedure - which will establish a new group 0. This works even
in clusters that never upgraded but were bootstrapped using group 0 from
scratch.

To do that, the admin does the following on every node:
- writes 'recovery' under 'group0_upgrade_state' key
  in `system.scylla_local` table,
- truncates the `system.discovery` table,
- truncates the `system.group0_history` table,
- deletes group 0 ID and group 0 server ID from `system.scylla_local`
  (the keys are `raft_group0_id` and `raft_server_id`).
Then the admin performs a rolling restart of their cluster. The nodes
restart in a "group 0 recovery mode", which simply means that the nodes
won't try to perform any group 0 operations. Then the admin calls
`removenode` to remove the nodes that are down. Finally, the admin
removes the `group0_upgrade_state` key from `system.scylla_local`,
rolling-restarts the cluster, and the cluster should establish group 0
anew.

Note that this recovery procedure will have to be extended when new
stuff is added to group 0 - like topology change state. Indeed, observe
that a minority of nodes aren't able to receive committed entries from a
leader, so they may end up in inconsistent group 0 states. It wouldn't
be safe to simply create group 0 on those nodes without first ensuring
that they have the same state from which group 0 will start.
Right now the state only consists of schema tables, and the upgrade
procedure makes sure to synchronize them, so even if the nodes started in
inconsistent schema states, group 0 will correctly be established.
(TODO: create a tracking issue? something needs to remind us of this
 whenever we extend group 0 with new stuff...)
2022-08-23 13:51:01 +02:00
Kamil Braun
b42dfbc0aa test: raft: randomized_nemesis_test: additional logging
Add some more logging to `randomized_nemesis_test` such as logging the
start and end of a reconfiguration operation in a way that makes it easy
to find one given the other in the logs.
2022-08-23 13:14:30 +02:00
Kamil Braun
efad6fe9b4 raft: server: handle aborts when waiting for config entry to commit
Changing configuration involves two entries in the log: a 'joint
configuration entry' and a 'non-joint configuration entry'. We use
`wait_for_entry` to wait on the joint one. To wait on the non-joint one,
we use a separate promise field in `server`. This promise wasn't
connected to the `abort_source` passed into `set_configuration`.

The call could get stuck if the server got removed from the
configuration and lost leadership after committing the joint entry but
before committing the non-joint one, waiting on the promise. Aborting
wouldn't help. Fix this by subscribing to the `abort_source` and
resolving the promise exceptionally on abort.

Furthermore, make sure that two `set_configuration` calls don't step on
each other's toes by one setting the other's promise. To do that, reset
the promise field at the end of `set_configuration` and check that it's
not engaged at the beginning.

Fixes #11288.
2022-08-23 13:14:29 +02:00
Calle Wilund
54aca8e814 distributed_loader: Restore separate processing of keyspace init prio/normal
Fixes #11349

In 7396de7 (and refactorings before it) the set of prioritized keyspaces (and processing thereof)
was removed due to apparent non-usage (which is true for the open-source version).

This functionality is, however, required for certain features of the enterprise version (ear).
As such it needs to be restored and re-enabled. This patch, together with the revert before it, does so, adapted
to the recent version of this file.
2022-08-23 10:39:19 +00:00
Calle Wilund
d9c391e366 Revert "distributed_loader: Remove unused load-prio manipulations"
This reverts commit 7396de72b1.

In 7396de7 (and refactorings before it) the set of prioritized keyspaces (and processing thereof)
was removed due to apparent non-usage (which is true for the open-source version).

This functionality is, however, required for certain features of the enterprise version (ear).
As such it needs to be restored and re-enabled. This reverts the actual commit; the patch after it
ensures we use the prio set.
2022-08-23 10:34:05 +00:00
Avi Kivity
5d1ff17ddf Merge 'Streaming: define plan_id as a strong tagged_uuid type' from Benny Halevy
This series turns plan_id from a generic UUID into a strong type so it can't be used interchangeably with other UUIDs.
While at it, streaming/stream_fwd.hh was added for forward declarations and the definition of plan_id.
Also, `stream_manager::update_progress` parameter name was renamed to plan_id to represent its assumed content, before changing its type to `streaming::plan_id`.

Closes #11338

* github.com:scylladb/scylladb:
  streaming: define plan_id as a strong tagged_uuid type
  stream_manager: update_progress: rename cf_id param to plan_id
  streaming: add forward declarations in stream_fwd.hh
2022-08-23 10:48:34 +02:00
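
A minimal sketch of the strong-typedef idea behind streaming::plan_id (the tagged_uuid template here is a simplified stand-in for the real utils::tagged_uuid): wrapping the UUID in a tag-parameterized type makes it a compile error to pass one kind of id where another is expected.

    #include <compare>
    #include <cstdint>

    struct uuid {
        std::uint64_t msb = 0, lsb = 0;
        auto operator<=>(const uuid&) const = default;
    };

    template <typename Tag>
    struct tagged_uuid {
        uuid id;
        auto operator<=>(const tagged_uuid&) const = default;
    };

    struct plan_id_tag {};
    using plan_id = tagged_uuid<plan_id_tag>;     // streaming::plan_id

    struct table_id_tag {};
    using table_id = tagged_uuid<table_id_tag>;

    void update_progress(plan_id) {}
    // update_progress(table_id{});   // would no longer compile: distinct types
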
Petr Gusev
aa88d58539 raft, use max_command_size to satisfy commitlog limit
Commitlog imposes a limit on the size of mutations
and throws an exception if it's exceeded. Before Raft, for schema
changes this exception was delivered directly to the client. Now it is
thrown while saving the raft command in io_fiber, in
persistence->store_log_entries, and all the client gets is a timeout
exception, which doesn't say much about the cause of the problem.
This patch introduces an explicit command size limit
and provides a clear error message in this case.
2022-08-23 12:09:32 +04:00
Tomasz Grabiec
0e5b86d3da Merge 'Optimize mutation consume of range tombstones in reverse' from Benny Halevy
Reversing the whole range_tombstone_list
into reversed_range_tombstones is inefficient
and can lead to reactor stalls with a large number of
range tombstones.

Instead, iterate over the range_tombstone_list in the reverse
direction and reverse each range_tombstone as we go,
keeping the result in the optional cookie.reversed_rt member.

While at it, this series contains some other cleanups on this path
to improve code readability and perhaps make the compiler's job of
optimizing the cleaned-up code easier.

Closes #11271

* github.com:scylladb/scylladb:
  mutation: consume_clustering_fragments: get rid of reversed_range_tombstones;
  mutation: consume_clustering_fragments: reindent
  mutation: consume_clustering_fragments: shuffle emit_rt logic around
  mutation: consume, consume_gently: simplify partition_start logic
  mutation: consume_clustering_fragments: pass iterators to mutation_consume_cookie ctor
  mutation: consume_clustering_fragments: keep the reversed schema in cookie
  mutation: clustering_iterators: get rid of current_rt
  mutation_test: test_mutation_consume_position_monotonicity: test also consume_gently
2022-08-23 10:05:39 +02:00
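
A simplified sketch of the reverse-iteration approach (stand-in types, not the real fragment classes): walk the tombstone list with reverse iterators and reverse one tombstone at a time into a reusable optional kept in the cookie, instead of materializing a fully reversed copy up front.

    #include <optional>
    #include <utility>
    #include <vector>

    struct range_tombstone { int start, end; };

    // Stand-in for flipping a tombstone's bounds with respect to the reversed
    // clustering order.
    range_tombstone reversed(range_tombstone rt) {
        std::swap(rt.start, rt.end);
        return rt;
    }

    struct consume_cookie {
        std::vector<range_tombstone>::const_reverse_iterator it, rend;
        std::optional<range_tombstone> reversed_rt;   // current tombstone, reversed lazily
    };

    // Advance to the next tombstone in reverse order; O(1) work per step,
    // so no stall proportional to the size of the whole list.
    bool advance(consume_cookie& c) {
        if (c.it == c.rend) {
            return false;
        }
        c.reversed_rt = reversed(*c.it);
        ++c.it;
        return true;
    }
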
Botond Dénes
5bc499080d utils/logalloc: remove reclaim_timer:: globals
One of them (_active_timer) is moved to the shard tracker; the other is made
a simple local in reclaim_timer.
2022-08-23 10:38:58 +03:00
Botond Dénes
5f8971173e utils/logalloc: make s_sanitizer_report_backtrace global a member of tracker
We want to consolidate all the logalloc state into a single object: the
shard tracker. Replacing this global with a member in said object is
part of this effort.
2022-08-23 10:38:58 +03:00
Botond Dénes
499b9a3a7c utils/logalloc: tracker_reclaimer_lock: get shard tracker via constructor arg 2022-08-23 10:38:58 +03:00
Botond Dénes
7d17d675af utils/logalloc: move global stat accessors to tracker
These are pretend free functions that access globals in the background.
Make them members of the tracker instead, which has everything needed
locally to compute them. Callers still have to access these stats
through the global tracker instance, but this can be changed to happen
through a local instance. Soon....
2022-08-23 10:38:58 +03:00
Botond Dénes
f406151a86 utils/logalloc: allocating_section: don't use the global tracker
Instead, get the tracker instance from the region. This requires adding
a `region&` parameter to `with_reserve()`.
This brings us one step closer to eliminating the global tracker.
2022-08-23 10:38:58 +03:00
Botond Dénes
e968866fa1 utils/logalloc: pass down tracker::impl reference to segment_pool
To get rid of some usages of `shard_tracker()`.
2022-08-23 10:38:58 +03:00
Botond Dénes
3bd94e41bf utils/logalloc: move segment pool into tracker
Instead of a separate global segment pool instance, make it a member of
the already global tracker. Most users are inside the tracker instance
anyway. Outside users can access the pool through the global tracker
instance.
2022-08-23 10:38:58 +03:00
Botond Dénes
5b86dfc35a utils/logalloc: add tracker member to basic_region_impl
For now this member is initialized from the global tracker instance. But
it allows the members of region impl to be detached from said global,
making a step towards removing it.
2022-08-23 10:38:58 +03:00
Botond Dénes
f4056bd344 utils/logalloc: make segment independent of segment pool
segment has some members, which simply forward the call to a
segment_pool method, via the global segment_pool instance. Remove these
and make the callers use the segment pool directly instead.
2022-08-23 10:38:58 +03:00
Nadav Har'El
9c15659194 Merge 'test.py: bump timeout of async requests for topology' from Alecco
Topology tests do async requests using the Python driver. The driver's
API for async doesn't use the session timeout.

Pass a 60-second timeout (the default is 10) to match the session's.

Fixes https://github.com/scylladb/scylladb/issues/11289

Closes #11348

* github.com:scylladb/scylladb:
  test.py: bump schema agreement timeout for topology tests
  test.py: bump timeout of async requests for topology
  test.py: fix bad indent
2022-08-23 10:30:59 +03:00
Raya Kurlyand
bc7539cff0 Update auditing.rst
https://github.com/scylladb/scylladb/issues/11341

Closes #11347
2022-08-23 06:59:41 +03:00
Botond Dénes
331033adae Merge 'Fix frozen mutation consume ordering' from Benny Halevy
Currently, frozen_mutation is not consumed in position_in_partition
order as all range tombstones are consumed before all rows.

This violates the range_tombstone_generator invariants
as its lower_bound needs to be monotonically increasing.

Fix this by adding mutation_partition_view::accept_ordered
and rewriting do_accept_gently to do the same,
both making sure to consume the range tombstones
and clustering rows in position_in_partition order,
similar to the mutation consume_clustering_fragments function.

Add a unit test that verifies that.

Fixes #11198

Closes #11269

* github.com:scylladb/scylladb:
  mutation_partition_view: make mutation_partition_view_virtual_visitor stoppable
  frozen_mutation: consume and consume_gently in-order
  frozen_mutation: frozen_mutation_consumer_adaptor: rename rt to rtc
  frozen_mutation: frozen_mutation_consumer_adaptor: return early when flush returns stop_iteration::yes
  frozen_mutation: frozen_mutation_consumer_adaptor: consume static row unconditionally
  frozen_mutation: frozen_mutation_consumer_adaptor: flush current_row before rt_gen
2022-08-23 06:37:04 +03:00
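
To make the ordering fix concrete, a reduced sketch of consuming in position order (positions collapsed to integers, consumer left generic): rows and range tombstone changes are merged by position rather than draining all tombstones first, so the consumer's lower bound only ever moves forward.

    #include <vector>

    struct clustering_row { int pos; };
    struct rt_change { int pos; };

    template <typename Consumer>
    void consume_in_order(const std::vector<clustering_row>& rows,
                          const std::vector<rt_change>& rts,   // both sorted by pos
                          Consumer&& c) {
        auto r = rows.begin();
        auto t = rts.begin();
        while (r != rows.end() || t != rts.end()) {
            if (t == rts.end() || (r != rows.end() && r->pos <= t->pos)) {
                c.consume(*r++);        // next fragment in position order is a row
            } else {
                c.consume(*t++);        // next fragment is a range tombstone change
            }
        }
    }
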
Alejo Sanchez
01cac33472 test.py: bump schema agreement timeout for topology tests
Increase the schema agreement timeout to match other timeouts.

Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
2022-08-22 21:07:55 +02:00
Alejo Sanchez
f9d31112cf test.py: bump timeout of async requests for topology
Topology tests do async requests using the Python driver. The driver's
API for async doesn't use the session timeout.

Pass a 60-second timeout (the default is 10) to match the session's.

This will hopefully fix timeout failures in debug mode.

Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
2022-08-22 21:07:03 +02:00
Benny Halevy
357e805e1f mutation_partition_view: make mutation_partition_view_virtual_visitor stoppable
So that the frozen_mutation consumer can return
stop_iteration::yes if it wishes to stop consuming at
some clustering position.

In this case, on_end_of_partition must still be called
so a closing range_tombstone_change can be emitted to the consumer.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-08-22 20:12:58 +03:00
Mikołaj Sielużycki
b5380baf8a frozen_mutation: consume and consume_gently in-order
Currently, frozen_mutation is not consumed in position_in_partition
order as all range tombstones are consumed before all rows.

This violates the range_tombstone_generator invariants
as its lower_bound needs to be monotonically increasing.

Fix this by adding mutation_partition_view::accept_ordered
and rewriting do_accept_gently to do the same,
both making sure to consume the range tombstones
and clustering rows in position_in_partition order,
similar to the mutation consume_clustering_fragments function.

Add a unit test that verifies that.

Fixes #11198

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-08-22 20:12:20 +03:00
Kamil Braun
e0c6153adf test: raft: randomized_nemesis_test: more chaos in remove_leader_with_forwarding_finishes
Improve the randomness of this test, making it a bit easier to
reproduce the scenarios that the test aims to catch.

Increase timeouts a bit to account for this additional randomness.
2022-08-22 18:53:48 +02:00