Commit Graph

232 Commits

Gleb Natapov
7f26a8eef5 raft: actively search for a leader if it is not known for a tick duration
For a follower to forward requests to a leader, the leader must be known.
But there may be a situation where a follower does not learn about
a leader for a while. This may happen when a node becomes a follower while its
log is up-to-date and no new entries are submitted to raft. In such a
case the leader sends nothing to the follower, and the only way to
learn about the current leader is to get a message from it. Until a new
entry is added to the raft log, a follower that does not know who the
leader is cannot add entries. Kind of a deadlock. Note that
the problem is specific to our implementation, where failure detection is
done by an outside module. In vanilla raft a leader sends messages to
all followers periodically, so it is essentially never idle.

The patch solves this by broadcasting a specially crafted append reject to all
nodes in the cluster on a tick if the leader is not known. The leader
responds to this message with an empty append request, which causes the
node to learn about the leader. As an optimisation, the patch
sends the broadcast only if there is actually an operation
waiting for the leader to be known.

Fixes #10379
2022-04-25 14:51:22 +02:00
Kamil Braun
5308a7d7a3 raft: server: return immediately from wait_for_leader if leader is known
`wait_for_leader` may be called when the leader is already known. There's
nothing to wait for in this case.
2022-04-25 12:59:55 +02:00
Kamil Braun
ad3141d3e0 raft: server: translate abort_requested_exception to raft::request_aborted
The `wait_for_leader` function would throw a low-level
`abort_requested_exception` from seastar::shared_promise.
Translate it to the high-level raft::request_aborted so we can reduce
the number of different exception types which cross the Raft API
boundary.

Also, add comments on Raft API functions about the exception thrown when
requests are aborted.
2022-04-05 19:18:53 +02:00
Kamil Braun
7da586b912 raft: fsm: when stopping, become follower to reject new requests
After enabling add_entry forwarding in randomized_nemesis_test, the test
would sometimes hang on the _rpc->abort() call: add_entry messages
from followers waiting on log_limiter_semaphore on the leader
prevented _rpc from finishing the abort, and the log_limiter_semaphore would
not get unblocked because that part of the server was already stopped.

Prevent log_limiter_semaphore from being waited on when stopping the
server by becoming a follower in fsm::stop.
2022-04-05 19:11:44 +02:00
Kamil Braun
0f0d75fd66 raft: server: translate semaphore_aborted to request_aborted 2022-03-29 15:10:29 +02:00
Gleb Natapov
a1604aa388 raft: make raft requests abortable
This patch adds the ability to pass an abort_source to the raft request APIs
(add_entry, modify_config) to make them abortable. A request issuer does not
always want to wait for a request to complete, for instance because a
client disconnected or because it is no longer interested in waiting
due to a timeout. After this patch it can abort the wait for such
requests through an abort source. Note that aborting a request only
aborts the wait for it to complete; it does not mean that the request
will not eventually be executed.

Message-Id: <YjHivLfIB9Xj5F4g@scylladb.com>
2022-03-16 18:38:01 +01:00
Gleb Natapov
108e7fcc4e raft: enter candidate state immediately when starting a singleton cluster
When a node starts, it does not immediately become a candidate: it
waits to learn about an already existing leader and randomizes the time at
which it becomes a candidate, to prevent dueling candidates when several
nodes are started simultaneously.

If a cluster consists of only one node, there is no point in waiting
before becoming a candidate, because neither of the two cases above can
happen. This patch checks whether the node belongs to a singleton cluster,
where the node itself is the only voting member, and makes it a candidate
immediately. This reduces the startup time of single-node clusters,
which are often used in testing.
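The check can be sketched as a simple predicate (illustrative names only, not the actual Scylla raft API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using server_id = uint64_t;

// Hypothetical configuration sketch: just the list of voting members.
struct configuration_sketch {
    std::vector<server_id> voters;
};

// On start, a node may skip the randomized election timeout only when it
// is the sole voting member: there can be no pre-existing leader and no
// dueling candidate.
bool should_become_candidate_immediately(const configuration_sketch& cfg, server_id self) {
    return cfg.voters.size() == 1 && cfg.voters[0] == self;
}
```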

Message-Id: <YiCbQXx8LPlRQssC@scylladb.com>
2022-03-04 20:30:52 +01:00
Piotr Grabowski
e99f487d31 raft: add missing include
Add missing include of "<experimental/source_location>" which caused
compile errors on GCC:

In file included from raft/fsm.hh:12,
                 from raft/fsm.cc:8:
raft/raft.hh:251:30: error: ‘std::experimental’ has not been declared
  251 |     state_machine_error(std::experimental::source_location l = std::experimental::source_location::current())
      |                              ^~~~~~~~~~~~
raft/raft.hh:251:59: error: expected ‘)’ before ‘l’
  251 |     state_machine_error(std::experimental::source_location l = std::experimental::source_location::current())
      |                        ~                                  ^~

Note that there are some GCC compilation problems still left apart from
this one.

Closes #10155
2022-03-02 16:33:43 +01:00
Alejo Sanchez
627275945f raft: modify_config: support voting state change
Handle requests to change voting for servers already present in the
current configuration.

Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
2022-02-08 08:00:07 -04:00
Alejo Sanchez
a40417df08 raft: minor: fix log format string
Fix format string for log line.

Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
2022-02-08 08:00:07 -04:00
Tomasz Grabiec
00a9326ae7 Merge "raft: let modify_config finish on a follower that removes itself" from Kamil
When forwarding a reconfiguration request from a follower to the leader in
`modify_config`, there is no reason to wait for the follower's commit
index to be updated. The only useful information is that the leader
committed the configuration change - so `modify_config` should return as
soon as we know that.

There is a reason *not* to wait for the follower's commit index to be
updated: if the configuration change removes the follower, the follower
will never learn about it, so a local waiter will never be resolved.

`execute_modify_config` - the part of `modify_config` executed on the
leader - is thus modified to finish when the configuration change is
fully complete (including the dummy entry appended at the end), and
`modify_config` - which does the forwarding - no longer creates a local
waiter, but returns as soon as the RPC call to the leader confirms that
the entry was committed on the leader.

We still return an `entry_id` from `execute_modify_config` but that's
just an artifact of the implementation.

Fixes #9981.

A regression test was also added in randomized_nemesis_test.

* kbr/modify-config-finishes-v1:
  test: raft: randomized_nemesis_test: regression test for #9981
  raft: server: don't create local waiter in `modify_config`
2022-01-31 20:14:50 +01:00
Tomasz Grabiec
b78bab7286 Merge "raft: fixes and improvements to the library and nemesis test" from Kamil
The Raft randomized nemesis test was improved by adding some more
chaos: randomizing the network delay, server configuration, and
ticking speed of servers.

This allowed us to catch a serious bug, which is fixed in the first patch.

The patchset also fixes bugs in the test itself and adds quality-of-life
improvements such as better diagnostics when an inconsistency is detected.

* kbr/nemesis-random-v1:
  test: raft: randomized_nemesis_test: print state of each state machine when detecting inconsistency
  test: raft: randomized_nemesis_test: print details when detecting inconsistency
  test: raft: randomized_nemesis_test: print snapshot details when taking/loading snapshots in `impure_state_machine`
  test: raft: randomized_nemesis_test: keep server id in impure_state_machine
  test: raft: randomized_nemesis_test: frequent snapshotting configuration
  test: raft: randomized_nemesis_test: tick servers at different speeds in generator test
  test: raft: randomized_nemesis_test: simplify ticker
  test: raft: randomized_nemesis_test: randomize network delay
  test: raft: randomized_nemesis_test: fix use-after-free in `environment::crash()`
  test: raft: randomized_nemesis_test: fix use-after-free in two-way rpc functions
  test: raft: randomized_nemesis_test: rpc: don't propagate `gate_closed_exception` outside
  test: raft: randomized_nemesis_test: fix obsolete comment
  raft: fsm: print configuration entries appearing in the log
  raft: `operator<<(ostream&, ...)` implementation for `server_address` and `configuration`
  raft: server: abort snapshot applications before waiting for rpc abort
  raft: server: logging fix
  raft: fsm: don't advance commit index beyond matched entries
2022-01-31 13:25:27 +01:00
Kamil Braun
28b5792481 raft: server: don't create local waiter in modify_config
When forwarding a reconfiguration request from a follower to the leader in
`modify_config`, there is no reason to wait for the follower's commit
index to be updated. The only useful information is that the leader
committed the configuration change - so `modify_config` should return as
soon as we know that.

There is a reason *not* to wait for the follower's commit index to be
updated: if the configuration change removes the follower, the follower
will never learn about it, so a local waiter will never be resolved.

`execute_modify_config` - the part of `modify_config` executed on the
leader - is thus modified to finish when the configuration change is
fully complete (including the dummy entry appended at the end), and
`modify_config` - which does the forwarding - no longer creates a local
waiter, but returns as soon as the RPC call to the leader confirms that
the entry was committed on the leader.

We still return an `entry_id` from `execute_modify_config` but that's
just an artifact of the implementation.

Fixes #9981.
2022-01-27 17:49:40 +01:00
Kamil Braun
22092d110a raft: fsm: print configuration entries appearing in the log
When appending or committing configuration entries, print them (on TRACE
level). Useful for debugging.
2022-01-26 16:09:41 +01:00
Kamil Braun
44a1a8a8b0 raft: operator<<(ostream&, ...) implementation for server_address and configuration
Useful for debugging.

Had to make `configuration` constructor explicit. Otherwise the
`operator<<` implementation for `configuration` would implicitly convert
the `server_address` to `configuration` when trying to output it, causing
infinite recursion.

Removed implicit uses of the constructor.
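A simplified illustration of the recursion hazard (the real types live in raft/raft.hh; this sketch is not the actual code). If `configuration` were implicitly constructible from `server_address`, streaming a `server_address` could pick the `configuration` overload via the implicit conversion and recurse forever; `explicit` forces the intended overload:

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <vector>

struct server_address {
    int id;
};

struct configuration {
    std::vector<server_address> current;
    // Without `explicit`, `os << a` inside the configuration operator<<
    // could convert `a` back into a configuration and recurse.
    explicit configuration(server_address addr) : current{addr} {}
};

std::ostream& operator<<(std::ostream& os, const server_address& a) {
    return os << "{id=" << a.id << "}";
}

std::ostream& operator<<(std::ostream& os, const configuration& c) {
    os << "config:";
    for (const auto& a : c.current) {
        os << ' ' << a; // unambiguously the server_address overload
    }
    return os;
}
```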
2022-01-26 16:09:41 +01:00
Kamil Braun
46f6a0cca5 raft: server: abort snapshot applications before waiting for rpc abort
The implementation of `rpc` may wait for all snapshot applications to
finish before it can finish aborting. This is what the
randomized_nemesis_test implementation did. This caused rpc abort to
hang in some scenarios.

In this commit, the order of abort calls is modified a bit. Instead of
waiting for rpc abort to finish and then aborting existing snapshot
applications, we call `rpc::abort()` and keep the future, then abort
snapshot applications, then wait on the future. Calling `rpc::abort()`
first is supposed to prevent new snapshot applications from starting;
a comment was added at the interface definition. The nemesis test
implementation had this property, and `raft_rpc` in group registry
was adjusted appropriately. Aborting the snapshot applications then
allows `rpc::abort()` to finish.
2022-01-26 16:06:45 +01:00
Kamil Braun
5577ad6c34 raft: server: logging fix 2022-01-26 15:54:14 +01:00
Kamil Braun
1216f39977 raft: fsm: don't advance commit index beyond matched entries
Otherwise it was possible to incorrectly mark obsolete entries from
earlier terms as committed, leading to inconsistencies between state
machine replicas.

Fixes #9965.
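The restored invariant can be sketched as follows (illustrative names, not the actual fsm code): a follower may advance its commit index up to the leader's commit index, but never beyond the last entry it has actually matched against the leader's log, otherwise stale entries from older terms could be marked committed.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// current_commit: the follower's present commit index
// leader_commit:  commit index reported by the leader
// last_matched:   last log index known to match the leader's log
uint64_t advance_commit_idx(uint64_t current_commit,
                            uint64_t leader_commit,
                            uint64_t last_matched) {
    // Never go beyond matched entries, and never move backwards.
    return std::max(current_commit, std::min(leader_commit, last_matched));
}
```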
2022-01-26 15:53:13 +01:00
Gleb Natapov
579dcf187a raft: allow an option to persist commit index
Raft does not need to persist the commit index, since a restarted node will
either learn it from an append message from a leader or (if the entire cluster
is restarted and hence there is no leader) the new leader will figure it out
after contacting a quorum. But some users may want to bring
their local state machine to a state as up-to-date as it was before the restart
as soon as possible, without any external communication.

For them this patch introduces a new persistence API that allows saving
and restoring the last seen commit index.
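A hypothetical sketch of such a persistence hook; the actual API lives in the raft persistence interface and these names are illustrative only:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative store: a real implementation would write to disk.
struct commit_idx_store {
    uint64_t stored = 0; // 0 means "never persisted"

    void store_commit_idx(uint64_t idx) {
        stored = idx;
    }

    uint64_t load_commit_idx() const {
        return stored;
    }
};

// On restart, the local state machine can be advanced up to the
// persisted commit index before any network communication happens.
uint64_t initial_apply_target(const commit_idx_store& s) {
    return s.load_commit_idx();
}
```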

Message-Id: <YfFD53oS2j1My0p/@scylladb.com>
2022-01-26 14:06:39 +01:00
Gleb Natapov
e56e96ac5a raft: do not add new wait entries after abort
Abort signals stopped_error on all awaited entries, but if an entry is
added after this it will be destroyed without signaling, causing its
waiter to get a broken_promise.
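A minimal sketch of the fix (illustrative names, not the actual server code): after abort() has signalled all queued waiters, a newly added waiter must fail immediately instead of being queued, since nothing will ever signal it.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

enum class wait_status { pending, stopped };

struct wait_list {
    bool aborted = false;
    std::map<uint64_t, wait_status> waiters;

    wait_status wait_for(uint64_t idx) {
        if (aborted) {
            return wait_status::stopped; // fail fast, don't queue
        }
        waiters.emplace(idx, wait_status::pending);
        return wait_status::pending;
    }

    void abort() {
        aborted = true;
        // Signal stopped_error on everything already queued.
        for (auto& [idx, st] : waiters) {
            st = wait_status::stopped;
        }
    }
};
```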

Fixes #9688

Message-Id: <Ye6xJjTDooKSuZ87@scylladb.com>
2022-01-25 09:52:30 +02:00
Avi Kivity
fcb8d040e8 treewide: use Software Package Data Exchange (SPDX) license identifiers
Instead of lengthy blurbs, switch to single-line, machine-readable
standardized (https://spdx.dev) license identifiers. The Linux kernel
switched long ago, so there is strong precedent.

Three cases are handled: AGPL-only, Apache-only, and dual licensed.
For the latter case, I chose (AGPL-3.0-or-later and Apache-2.0),
reasoning that our changes are extensive enough to apply our license.

The changes were applied mechanically with a script, except to
licenses/README.md.

Closes #9937
2022-01-18 12:15:18 +01:00
Avi Kivity
4118f2d8be treewide: replace deprecated seastar::later() with seastar::yield()
seastar::later() was recently deprecated and replaced with two
alternatives: a cheap seastar::yield() and an expensive (but more
powerful) seastar::check_for_io_immediately(), that corresponds to
the original later().

This patch replaces all later() calls with the weaker yield(). In
all cases except one, it's unambiguously correct. In one case
(test/perf scheduling_latency_measurer::stop()) it's not so unambiguous,
since check_for_io_immediately() will additionally force a poll and
so will cause more work to be done (but no additional tasks to be
executed). However, I think that any measurement that relies on
measuring the work of the last tick is inaccurate anyway (you need
thousands of ticks to get any amount of confidence in the
measurement), so in the end it doesn't matter which we pick.

Tests: unit (dev)

Closes #9904
2022-01-12 12:19:19 +01:00
Kamil Braun
75bab2beec raft: server: print the ID of aborted server 2021-12-07 11:23:34 +01:00
Kamil Braun
485c0b1819 raft: server: don't register metrics in start()
Instead, expose `register_metrics()` at the `server` interface
(previously it was a private method of `server_impl`).

Metrics are global so `register_metrics()` cannot be called on
two servers that have the same ID, which is useful e.g. in tests when we
want to simulate server stops and restarts.
2021-12-07 11:23:33 +01:00
Konstantin Osipov
eea82f1262 raft: (server) improve tracing 2021-11-25 12:35:43 +03:00
Konstantin Osipov
0d830d4c11 raft: (metrics) fix spelling of waiters_awaken
The usage of awake and awaken is quite messy, but
awoken is more common for passive voice, so use waiters_awoken.
2021-11-25 12:35:43 +03:00
Konstantin Osipov
6d28927550 raft: make forwarding optional
In the absence of an abort_source or timeouts in the Raft API, automatic
bouncing can create too much noise during testing, especially
during network failures. Add an option to disable the follower
bouncing feature, since randomized_nemesis_test has its own
bouncing which handles timeouts correctly.

Optionally disable forwarding in basic_generator_test.
2021-11-25 12:35:43 +03:00
Konstantin Osipov
c22f945f11 raft: (service) manage Raft configuration during topology changes
Operations of adding or removing a node to Raft configuration
are made idempotent: they do nothing if already done, and
they are safe to resume after a failure.

However, since topology changes are not transactional, if a
bootstrap or removal procedure fails midway, Raft group 0
configuration may go out of sync with topology state as seen by
gossip.

In future we must change gossip to avoid making any persistent
changes to the cluster: all changes to persistent topology state
will be done exclusively through Raft Group 0.

Specifically, instead of persisting the tokens by advertising
them through gossip, the bootstrap will commit a change to a system
table using Raft group 0. nodetool will switch from looking at
gossip-managed tables to consulting with Raft Group 0 configuration
or Raft-managed tables.
Once this transformation is done, naturally, adding a node to Raft
configuration (perhaps as a non-voting member at first) will become the
first persistent change to ring state applied when a node joins;
removing a node from the Raft Group 0 configuration will become the last
action when removing a node.

Until this is done, do our best to avoid a cluster state when
a removed node or a node which addition failed is stuck in Raft
configuration, but the node is no longer present in gossip-managed
system tables. In other words, keep the gossip the primary source of
truth. For this purpose, carefully choose the timing when we
join and leave Raft group 0:

Join the Raft group 0 only after we've advertised our tokens, so the
cluster is aware of this node, it's visible in nodetool status,
but before node state jumps to "normal", i.e. before it accepts
queries. Since the operation is idempotent, invoke it on each
restart.

Remove the node from Group 0 *before* its tokens are removed
from gossip-managed system tables. This guarantees
that if removal from Raft group 0 fails for whatever reason,
the node stays in the ring, so nodetool removenode and
friends are re-tried.

Add tracing.
2021-11-25 12:35:42 +03:00
Konstantin Osipov
e3751068fe raft: (server) allow adding entries/modify config on a follower
Implement an RPC to forward add_entry calls from a follower
to the leader. Bounce & retry in case of not_a_leader.
Do not retry in case of uncertainty, as this can lead to adding
duplicate entries.

The feature is added to core Raft since it's needed by
all current clients - both topology and schema changes.

When forwarding an entry to a remote leader we may get back
a term/index pair that conflicts (has the same index, but is with
a higher term) with a local entry we're still waiting on.

This can happen, e.g. because there was a leader change and the
log was truncated, but we still haven't got the append_entries
RPC from the new leader, still haven't truncated the log locally,
still haven't aborted all the local waits for truncated entries.

Only remove the offending entry from the wait list and abort it.
There may be entries labeled with an older term to the right (with
higher commit index) of the conflicting entry. However, finding them,
would require a linear scan. If we allow it, we may end up doing this
linear scan for *every* conflicting entry during the transition
period, which brings us to N^2 complexity of this step. At the
same time, as soon as append_entries that commits a higher-term
entry with the same index reaches the follower, the waits
for the respective truncated entry will be aborted anyway (see
notify_waiters() which sets dropped_entry exception), so the scan
is unnecessary.

Similarly to adding entries, allow modifying the
Raft group configuration on a follower. The implementation
works the same way as adding entries: it forwards the command
to the leader.

Now that add_entry() and modify_config() never throw not_a_leader,
they are more likely to throw timed_out_error, e.g. in case the
network is partitioned. Previously it was only possible due to a
semaphore wait timeout, and this scenario was not tested.
Handle timed_out_error on RPC level to let the existing tests
(specifically the randomized nemesis test) pass.
2021-11-25 11:50:38 +03:00
Konstantin Osipov
8f303844df raft: (server) fix a typo in exception message 2021-11-25 11:50:38 +03:00
Konstantin Osipov
9cde1cdf71 raft: (server) implement id() helper
There is no easy way to get server id otherwise.
2021-11-25 11:50:38 +03:00
Konstantin Osipov
b9faf41513 raft: (server) remove apply_dummy_entry()
It's currently unused, and going forward we'd like to make
it work on the follower, which requires a new implementation.
2021-11-25 11:50:38 +03:00
Gleb Natapov
814dea3600 raft: handle non voting members correctly in stepdown procedure
For leader stepdown purposes a non-voting member is no different
from a node outside of the config. The patch makes the relevant code
paths check for both conditions.
2021-11-18 11:35:29 +02:00
Gleb Natapov
6a9b3cdb49 raft: exclude non voting nodes from the stable leader check
If a node is a non-voting member it cannot be a leader, so the stable
leader rule should not apply to it. This patch aligns non-voting
node behaviour with that of a node removed from the cluster: both
step down from the leader position if they happen to be the leader when
the state change occurs.
2021-11-18 11:18:13 +02:00
Gleb Natapov
8f64a6d2d2 raft: fix configuration::can_vote() to work correctly with joint config
Fix configuration::can_vote() to return true if a node is a voting
member in any of the configs.
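The fixed predicate can be sketched like this (simplified from raft's configuration type; illustrative names): in a joint configuration a node can vote if it is a voting member of either the current or the previous config.

```cpp
#include <cassert>
#include <cstdint>
#include <set>

using server_id = uint64_t;

struct joint_configuration {
    std::set<server_id> current_voters;
    std::set<server_id> previous_voters; // empty unless the config is joint

    // True if the node votes in *either* half of the joint configuration.
    bool can_vote(server_id id) const {
        return current_voters.count(id) > 0 || previous_voters.count(id) > 0;
    }
};
```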
2021-11-17 11:06:42 +02:00
Gleb Natapov
7aac6c2086 raft: rename rpc_configuration to configuration in fsm output
The field is generic and is no longer used only for rpc configuration.
2021-11-09 15:16:57 +02:00
Gleb Natapov
a59779155f raft: fix matching of a snapshotted log on a follower
There can be a situation where a leader sends a follower entries
that the latter has already snapshotted. Currently the follower considers those
to be outdated appends and rejects them, but this may cause the
follower's progress to get stuck:

- A is the leader, B is a follower; there are other followers which A used to commit entries
- A remembers that the last matched entry for B is 10, so the next entry to send is 11. A managed to commit entry 11 using the other followers
- A sends entry 11 to B
- B receives it, accepts it, and updates its commit index to 11. It sends a success reply to A, but the reply never reaches A due to a network partition
- B takes a snapshot at index 11
- A sends entry 11 to B again
- B rejects it since it is inside the snapshot
- A receives the reject and retries from the same entry
- The same thing happens again

We should not reject such outdated entries: if they fall inside a
snapshot, they match (by the log matching property).
Accepting them restores liveness in the case above.

Fixes #9552
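The follower-side decision after the fix can be sketched as follows (illustrative, not the actual fsm code): entries at or below the snapshot index are covered by the snapshot and, by the log matching property, are known to match, so they are accepted rather than rejected as outdated.

```cpp
#include <cassert>
#include <cstdint>

enum class append_result { accepted, rejected };

// snapshot_idx:        index of the follower's latest snapshot
// prev_log_idx:        index the leader asks the follower to match against
// prev_entry_matches:  whether that entry exists locally with the right term
append_result check_append(uint64_t snapshot_idx,
                           uint64_t prev_log_idx,
                           bool prev_entry_matches) {
    if (prev_log_idx <= snapshot_idx) {
        // Inside the snapshot: matches by the log matching property.
        return append_result::accepted;
    }
    return prev_entry_matches ? append_result::accepted : append_result::rejected;
}
```

In the scenario above, B's re-received entry 11 falls at its snapshot index and is now accepted, letting A advance past it.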
2021-11-09 14:51:40 +02:00
Gleb Natapov
9d505c48de raft: abort snapshot transfer to a server that was removed from the configuration
If a node is removed from a config we should stop transferring snapshot
to it. Do exactly that.

Fixes #9547
2021-11-09 14:51:40 +02:00
Gleb Natapov
88a6e2446d raft: fix race between snapshot application and committing of new entries
The completion notification code assumes that a previous snapshot is applied
before new entries are committed; otherwise it asserts that some
notifications were missing. But currently commit notifications and
snapshot application run in different fibers, so there can be a race between
them.

Fix that by moving commit notification into the applier fiber as well.

Fixes #9550
2021-11-09 14:51:40 +02:00
Gleb Natapov
a04eb2d51f raft: do not maintain _last_conf_idx and _prev_conf_idx past snapshot index
The log maintains the _last_conf_idx and _prev_conf_idx indexes, which
point to where the latest and previous configurations can be found in the log.
If they are zero, the latest config is in the snapshot.
When a snapshot with trailing entries is applied, we can safely reset
to zero those indexes that are smaller than the snapshot index, because the
snapshot contains the latest config anyway. This simplifies maintenance
of those indexes since their value no longer depends on the user-configured
snapshot_trailing parameter.
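The maintenance rule can be sketched as (illustrative names, not the actual log code):

```cpp
#include <cassert>
#include <cstdint>

struct conf_indexes {
    uint64_t last_conf_idx = 0; // 0 means "latest config is in the snapshot"
    uint64_t prev_conf_idx = 0;

    // After applying a snapshot, any configuration index at or below the
    // snapshot index is covered by the snapshot's own config: reset it.
    void apply_snapshot(uint64_t snapshot_idx) {
        if (prev_conf_idx <= snapshot_idx) {
            prev_conf_idx = 0;
        }
        if (last_conf_idx <= snapshot_idx) {
            last_conf_idx = 0;
        }
    }
};
```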
2021-11-09 14:03:36 +02:00
Gleb Natapov
bdf7d1a411 raft: correctly truncate the log in a persistence module during snapshot application
When a remote snapshot is applied, the log is completely cleared, because
snapshot transfer happens only when a common log prefix cannot be found,
so we cannot be sure that the existing entries in the log are correct. But
currently this only happens for the in-memory log, by calling apply_snapshot
with trailing set to zero; when the persistence module is called to
store the snapshot, _config.snapshot_trailing is used, which can be
non-zero. This may cause the log to contain incorrect entries after a restart.

The patch fixes this by using zero trailing for non-local snapshots.

Fixes #9551
2021-11-04 15:11:19 +02:00
Avi Kivity
cd4af0c722 raft: disambiguate promise name in raft::active_read
gcc complains that the name 'promise' changes meaning (from type
to variable) within active_read. Help it by disambiguating the
use as a type.
2021-10-10 18:16:50 +03:00
Kamil Braun
36f3e26374 raft: server: handle rpc::send_snapshot returning instantly
If `rpc::send_snapshot` returned immediately with a ready future, or if
it threw, the code in `server_impl::send_snapshot` would not update
`_snapshot_transfers` correctly.

The code assumed that the continuation attached to `rpc::send_snapshot`
(with `then_wrapped`) was executed after `_snapshot_transfers` below the
`rpc::send_snapshot` call was updated. That is not necessarily true
(the continuation may not be executed at all if
`rpc::send_snapshot` threw).

Fix that by wrapping the `rpc::send_snapshot` call into a continuation
attached to `later()`.

Originally authored by Gleb <gleb@scylladb.com>, I added a comment.
2021-10-05 11:04:11 +02:00
Tomasz Grabiec
6bf873b663 Merge "raft: misc documentation edits" from Kostja
* scylla-dev/raft-misc-v4-docedit:
  raft: document pre-voting and protection against disruptive leaders
  raft: style edits of README.md.
  raft: document snapshot API
2021-09-28 12:12:46 +02:00
Konstantin Osipov
0adff23c21 raft: document pre-voting and protection against disruptive leaders 2021-09-27 22:04:18 +03:00
Konstantin Osipov
0e63e99b5a raft: style edits of README.md. 2021-09-27 22:04:04 +03:00
Konstantin Osipov
de2beac6ca raft: document snapshot API 2021-09-27 22:03:38 +03:00
Gleb Natapov
78774a485a raft: drop local snapshot if it cannot be installed
If a locally taken snapshot cannot be installed because a newer one was
received in the meantime, it should be dropped; otherwise it will take up
space needlessly.

Message-Id: <YUrWXxVfBjEio1Ol@scylladb.com>
2021-09-27 13:03:23 +02:00
Kamil Braun
bf823e34a4 raft: disable sticky leadership rule
The Raft PhD presents the following scenario.
When we remove a server from the cluster configuration, it does not
receive the configuration entry which removes it (because the leader
appending this entry uses that entry's configuration to decide to which
servers to send the entry to, and the entry does not contain the removed
server). Therefore the server keeps believing it is a member but does
not receive heartbeats from leaders in the new configuration. Therefore
it will keep becoming a candidate, causing existing leaders to step
down, harming availability. With many such candidates the cluster may
even stop being able to proceed at all. We call such servers
"disruptive".

More concretely, consider the following example, adapted from the PhD for
joint configuration changes (the original PhD considered a different
algorithm which can only add/remove one server at once):
Let C_old = {A, B, C, D}, C_new = {B, C, D}, and C_joint be the joint
configuration (C_old, C_new). D is the leader. D managed to append
C_joint to every server and commit it. D appends C_new. At this point, D
stops sending heartbeats to A because C_new does not contain A, but A's
last entry is still C_joint, so it still has the ability to become a
candidate. A can now become a candidate and cause D, or any other leader
in C_new, to step down. Even if D manages to commit C_new, A can keep
disrupting the cluster until it is shut down.

Prevoting changes the situation, which the authors admit. The "even if"
above no longer applies: if D manages to commit C_new, or just append it
to a majority of C_new, then A won't be able to succeed in the prevote
phase because a majority of servers in C_new has a longer log than A
(and A must obtain a prevote from a majority of servers in C_new because
A is in C_joint which contains C_new). But the authors continue to argue
that disruptions can still occur during the small period where C_new is
only appended on D but not yet on a majority of C_new. As they say:
"we also did not want to assume that a leader will reliably replicate
entries fast enough to move past the scenario (...) quickly; that might
have worked in practice, but it depends on stronger assumptions that we
prefer to avoid about the performance (...) of replicating log entries".
One could probably try debunking this by saying that if entries take
longer to replicate than the election timeout we're in much bigger
trouble, but never mind.

In any case, the authors propose a solution which we call "sticky
leadership". A server will not grant a vote to a candidate if it has
recently received a heartbeat from the currently known leader, even if
the candidate's term is higher. In the above example, servers in C_new
would not grant votes to A as long as D keeps sending them heartbeats,
thus A is no longer disruptive.

In our case the situation is a bit
different: in original Raft, "heartbeats" have a very specific meaning
- they are append_entries requests (possibly empty) sent by leaders.
Thus if a node stops being a leader it stops sending heartbeats;
similarly, if a node leaves the configuration, it stops receiving
heartbeats from others still in the configuration. We instead use a
"shared failure detector" interface, where nodes may still consider
other nodes alive regardless of their configuration/leadership
situation, as part of the general "MultiRaft" framework.

This pretty much invalidates the original argument, as seen on
the above example: A will still consider D alive, thus it won't become
a candidate.

Shared failure detector combined with sticky leadership actually makes
the situation worse - it may cause cluster unavailability in certain
scenarios (fortunately not a permanent one, it can be solved with server
restarts, for example). Randomized nemesis testing with reconfigurations
found the following scenario:
Let C1 = {A, B, C}, C2 = {A}, C3 = {B, C}. We start from configuration
C1, B is the leader. B commits joint (C1, C2), then new C2
configuration. Note that C does not learn about the last entry
(since it's not part of C2) but it keeps believing that B is alive,
so it keeps believing that B is the leader.
We then partition {A} from {B, C}. A appends (C2, C3) joint
configuration to its log. It's not able to append it to B or C due to
the partition. The partition holds long enough for A to revert to
candidate state (or we may restart A at this point). Eventually the
partition resolves. The only node which can become a candidate now is A:
C does not become a candidate because it keeps believing that B is the
leader, and B does not become a candidate because it saw the C2
non-joint entry being committed. However, A won't become a leader
because C won't grant it a vote due to the sticky leadership rule.
The cluster will remain unavailable until e.g. C is restarted.

Note that this scenario requires allowing configuration changes which
remove and then readd the same servers to the configuration. One may
wonder if such reconfigurations should be allowed, but there doesn't
seem to be any example of them breaking safety of Raft (and the PhD
doesn't seem to mention them at all; perhaps it implicitly accepts
them). It is unknown whether a similar scenario may be produced without
such reconfigurations.

In any case, disabling sticky leadership resolves the problem, and it is
the last currently known availability problem found in randomized
nemesis testing. There is no reason to keep this extension, both because
the original Raft authors' argument does not apply for shared failure
detector, and because one may even argue with the authors in vanilla
Raft given that prevoting is enabled (see end of third paragraph of this
commit message).
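The vote check with and without the sticky leadership rule can be sketched as follows (illustrative names, not the actual fsm code). With the rule enabled, a server refuses to grant a vote while the shared failure detector still reports the current leader as alive, which is exactly what makes C block A's election in the scenario above.

```cpp
#include <cassert>

// sticky_leadership:        whether the (now disabled) rule is in effect
// current_leader_alive:     shared failure detector's view of the leader
// candidate_log_up_to_date: standard Raft log-recency check
bool grant_vote(bool sticky_leadership,
                bool current_leader_alive,
                bool candidate_log_up_to_date) {
    if (sticky_leadership && current_leader_alive) {
        return false; // the rule this commit disables
    }
    return candidate_log_up_to_date;
}
```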
Message-Id: <20210921153741.65084-1-kbraun@scylladb.com>
2021-09-26 11:09:01 +03:00
Avi Kivity
daf028210b build: enable -Winconsistent-missing-override warning
This warning can catch a virtual function that thinks it
overrides another, but doesn't, because the two functions
have different signatures. This isn't very likely since most
of our virtual functions override pure virtuals, but it's
still worth having.

Enable the warning and fix numerous violations.

Closes #9347
2021-09-15 12:55:54 +03:00