We introduce `PureStateMachine`, which is the most direct translation of the mathematical definition of a state machine to C++ that I could come up with. Represented by a C++ concept, it consists of: a set of inputs (represented by the `input_t` type), outputs (`output_t` type), states (`state_t`), an initial state (`init`) and a transition function (`delta`) which, given a state and an input, returns a new state and an output. The rest of the testing infrastructure is going to be generic w.r.t. `PureStateMachine`. This will allow tests to be implemented easily with both simple and complex state machines by substituting the proper definition for this concept.

Next comes `logical_timer`: it is a wrapper around `raft::logical_clock` that allows scheduling events to happen after a certain number of logical clock ticks. For example, `logical_timer::sleep(20_t)` returns a future that resolves after 20 calls to `logical_timer::tick()`. It will be used to introduce timeouts in the tests, among other things.

To replicate a state machine, our Raft implementation requires it to be represented with the `raft::state_machine` interface. `impure_state_machine` is an implementation of `raft::state_machine` that wraps a `PureStateMachine`. It keeps a variable of type `state_t` representing the current state. In `apply` it deserializes the given command into `input_t`, uses the transition function (`delta`) to produce the next state and an output, replaces its current state with the obtained state and returns the output (more on that below); it does so sequentially for every given command. We can think of `PureStateMachine` as the actual state machine - the business logic - and `impure_state_machine` as the ``boilerplate'' that allows the pure machine to be replicated by Raft and to communicate with the external world.

The interface also requires maintenance of snapshots. We introduce the `snapshots_t` type representing the set of snapshots known by a state machine. `impure_state_machine` keeps a reference to `snapshots_t` because it will share it with an implementation of `persistence`.

Returning outputs is a bit tricky because `apply` is ``write-only'' - it returns `future<>`. We use the following technique:
1. Before sending a command to a Raft leader through `server::add_entry`, one must first directly contact the instance of `impure_state_machine` replicated by the leader, asking it to allocate an ``output channel''.
2. On such a request, `impure_state_machine` creates a channel (represented by a promise-future pair) and a unique ID; it stores the input side of the channel (the promise) under this ID internally and returns the ID and the output side of the channel (the future) to the requester.
3. After obtaining the ID, one serializes the ID together with the input and sends it as a command to Raft. Thus commands are (ID, machine input) pairs.
4. When `impure_state_machine` applies a command, it looks for a promise with the given ID. If it finds one, it sends the output through this channel.
5. The command sender waits for the output on the obtained future.

The allocation and deallocation of channels is done using the `impure_state_machine::with_output_channel` function. The `call` function is an implementation of the above technique. Note that only the leader will attempt to send the output - other replicas won't find the ID in their internal data structures. The set of IDs and channels is not part of the replicated state.
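To make the shape of the concept described at the beginning of this message concrete, here is a minimal sketch of what a `PureStateMachine` concept with the members listed above could look like. The exact requires-clauses are illustrative and need not match the actual definition in the test:

```cpp
#include <concepts>
#include <utility>

// Illustrative sketch only: a concept requiring the members described above
// (input_t, output_t, state_t, init, delta), not the actual test definition.
template <typename M>
concept PureStateMachine = requires (typename M::state_t s, typename M::input_t i) {
    typename M::input_t;
    typename M::output_t;
    typename M::state_t;
    // An initial state convertible to state_t...
    { M::init } -> std::convertible_to<typename M::state_t>;
    // ...and a transition function: (state, input) -> (new state, output).
    { M::delta(s, i) } -> std::same_as<std::pair<typename M::state_t, typename M::output_t>>;
};
```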
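The channel bookkeeping itself boils down to a map from IDs to promises. The following is a hedged sketch of that idea; the helper class name `output_channels`, the integer ID type and the method names are invented for illustration (the real logic lives inside `impure_state_machine` and differs in detail):

```cpp
#include <seastar/core/future.hh>
#include <cstdint>
#include <unordered_map>
#include <utility>

// Hypothetical helper, sketching the output-channel technique.
template <typename Output>
class output_channels {
    std::unordered_map<uint64_t, seastar::promise<Output>> _channels;
    uint64_t _next_id = 0;
public:
    // Step 2: create a promise-future pair under a fresh ID; the caller keeps
    // the future and sends the ID together with the input as the Raft command.
    std::pair<uint64_t, seastar::future<Output>> allocate() {
        auto id = _next_id++;
        auto it = _channels.try_emplace(id).first;
        return {id, it->second.get_future()};
    }

    // Step 4: called from `apply`; only the server that allocated the ID finds
    // a channel here, so only that server resolves the sender's future.
    void deliver(uint64_t id, Output output) {
        if (auto it = _channels.find(id); it != _channels.end()) {
            it->second.set_value(std::move(output));
            _channels.erase(it);
        }
    }
};
```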
A failure may cause the output to never arrive (or even the command to never be applied), so `call` waits only for a limited time. The caller may also mistakenly `call` a server which is not currently the leader, but `call` is prepared to handle this error.

We implement the `raft::rpc` interface, allowing Raft servers to communicate with other Raft servers. The implementation is mostly boilerplate. It assumes that there exists a method of message passing, given by a `send_message_t` function passed in the constructor. It also handles the receipt of messages in the `receive` function. It defines the message type (`message_t`) that will be used by the message-passing method. The actual message passing is implemented with `network` and `delivery_queue`.

The only slightly complex thing in `rpc` is the implementation of `send_snapshot`, which is the only function in the `raft::rpc` interface that actually expects a response. To implement this, before sending the snapshot message we allocate a promise-future pair and assign a unique ID to it; we store the promise and the ID in a data structure. We then send the snapshot together with the ID and wait on the future. On the other side, the message-receiving function, when it receives the snapshot message, applies the snapshot and sends back a snapshot reply message containing the same ID. When we receive a snapshot reply message, we look up the ID in the data structure and, if we find a promise, we push the reply through that promise.

`rpc` also keeps a reference to `snapshots_t` - it will refer to the same set of snapshots as the `impure_state_machine` on the same server. It accesses the set when it receives or sends a snapshot message.

`persistence` represents the data that does not get lost between server crashes and restarts. We store a log of commands in `_stored_entries`. The log is invariably ``contiguous'', meaning that the index of each entry except the first is equal to the index of the previous entry plus one at all times (i.e. after each yield). We assume that the caller provides log entries in strictly increasing index order and without gaps. In addition to storing log entries, `persistence` can be asked to store or load a snapshot. To implement this it takes a reference to a set of snapshots (`snapshots_t&`) which it will share with `impure_state_machine` and an implementation of `rpc`. We ensure that the stored log either ``touches'' the stored snapshot on the right side or intersects it.

In order to simulate a production environment as closely as possible, we implement a failure detector which uses heartbeats to decide whether to convict a server as failed. We convict a server if we don't receive a heartbeat from it for a long enough time. Similarly to `rpc`, `failure_detector` assumes a message-passing method given by a `send_heartbeat_t` function passed through the constructor. `failure_detector` uses its knowledge of existing servers to decide whom to send heartbeats to. Updating this knowledge happens through the `add_server` and `remove_server` functions.

`network` is a simple priority queue of "events", where an event is a message associated with a delivery time. Each message contains a source, a destination, and a payload. The queue uses a logical clock to decide when to deliver messages; it delivers all messages whose associated times are smaller than the current time. The exact delivery method is unknown to `network`; it is passed as a `deliver_t` function in the constructor. The type of payload is generic.
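As a rough illustration of the event-queue idea (not the actual implementation; the names, the plain integer clock and the copy-before-deliver handling are simplifying assumptions):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Illustrative sketch of a network that delivers messages once the logical
// clock passes their delivery time.
template <typename Payload>
class network {
public:
    using id_t = uint64_t;  // server identifier (assumed representation)
    using deliver_t = std::function<void(id_t src, id_t dst, const Payload&)>;
private:
    struct event {
        uint64_t deliver_at;  // logical time at which the message becomes due
        id_t src, dst;
        Payload payload;
        bool operator>(const event& o) const { return deliver_at > o.deliver_at; }
    };
    // Min-heap ordered by delivery time.
    std::priority_queue<event, std::vector<event>, std::greater<event>> _events;
    uint64_t _now = 0;
    deliver_t _deliver;
public:
    explicit network(deliver_t deliver) : _deliver(std::move(deliver)) {}

    void send(id_t src, id_t dst, Payload p, uint64_t delay) {
        _events.push(event{_now + delay, src, dst, std::move(p)});
    }

    // Advance the logical clock by one tick and deliver every message whose
    // delivery time is smaller than the current time.
    void tick() {
        ++_now;
        while (!_events.empty() && _events.top().deliver_at < _now) {
            event e = _events.top();  // copy out in case _deliver sends new messages
            _events.pop();
            _deliver(e.src, e.dst, e.payload);
        }
    }
};
```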
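Stepping back to the failure detector: conviction is essentially a time-since-last-heartbeat check. A hedged sketch of just that part follows; the threshold value and the member names `receive_heartbeat` / `is_alive` are made up, and the real class additionally sends heartbeats via `send_heartbeat_t` and tracks servers through `add_server` / `remove_server`:

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative sketch of heartbeat-based conviction only.
class failure_detector {
    static constexpr uint64_t convict_threshold = 50;   // logical ticks (assumed value)
    uint64_t _now = 0;
    std::unordered_map<uint64_t, uint64_t> _last_heard;  // server ID -> time of last heartbeat
public:
    void tick() { ++_now; }
    void receive_heartbeat(uint64_t from) { _last_heard[from] = _now; }
    // A server is convicted (considered failed) if we have not received a
    // heartbeat from it for longer than the threshold.
    bool is_alive(uint64_t server) const {
        auto it = _last_heard.find(server);
        return it != _last_heard.end() && _now - it->second <= convict_threshold;
    }
};
```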
The fact that `network` has delivered a message does not mean the message was processed by the receiver. In fact, `network` assumes that delivery is instantaneous, while processing a message may be a long, complex computation, or even require IO. Thus, after a message is delivered, something else must ensure that it is processed by the destination server. That something in our framework is `delivery_queue`. It will be the bridge between `network` and `rpc`. While `network` is shared by all servers - it represents the ``environment'' in which the servers live - each server has its own private `delivery_queue`. When `network` delivers an RPC message, it will end up inside `delivery_queue`. A separate fiber, `delivery_queue::receive_fiber()`, will process those messages by calling `rpc::receive` (which is a potentially long operation, thus returns a `future<>`) on the `rpc` of the destination server.

`raft_server` is a package that contains `raft::server` and other facilities needed for the server to communicate with its environment: the delivery queue, the set of snapshots (shared by `impure_state_machine`, `rpc` and `persistence`) and references to the `impure_state_machine` and `rpc` instances of this server.

`environment` represents a set of `raft_server`s connected by a `network`. The `network` inside is initialized with a message delivery function which notifies the destination server's failure detector on each message and, if the message contains an RPC payload, pushes it into the destination's `delivery_queue`. The environment needs to be periodically `tick()`ed, which ticks the network and the underlying servers.

`ticker` calls the given function as fast as the Seastar reactor allows, yielding between calls. It may be provided a limit on the number of calls; it crashes the test if the limit is reached before the ticker is `abort()`ed.

Finally, we add a simple test that serves as an example of using the implemented framework. We introduce `ExReg`, an implementation of `PureStateMachine` that stores an `int32_t` and handles ``exchange'' and ``read'' inputs; an exchange replaces the state with the given value and returns the previous state, while a read does not modify the state and returns the current state. In order to pass the inputs to Raft we must serialize them into commands, so we implement instances of `ser::serializer` for `ExReg`'s input types (a rough sketch of such a state machine follows the shortlog below).

* kbr/randomized-nemesis-test-v5:
  raft: randomized_nemesis_test: basic test
  raft: randomized_nemesis_test: ticker
  raft: randomized_nemesis_test: environment
  raft: randomized_nemesis_test: server
  raft: randomized_nemesis_test: delivery queue
  raft: randomized_nemesis_test: network
  raft: randomized_nemesis_test: heartbeat-based failure detector
  raft: randomized_nemesis_test: memory backed persistence
  raft: randomized_nemesis_test: rpc
  raft: randomized_nemesis_test: impure_state_machine
  raft: randomized_nemesis_test: introduce logical_timer
  raft: randomized_nemesis_test: `PureStateMachine` concept
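For reference, here is a sketch of what an `ExReg`-like implementation of the `PureStateMachine` concept could look like. The nested input structs, the variant-based `input_t` and the zero initial value are illustrative assumptions, not necessarily the actual definitions in the test:

```cpp
#include <cstdint>
#include <utility>
#include <variant>

// Illustrative sketch of an exchange/read register state machine.
struct ExReg {
    struct exchange { int32_t new_value; };  // replace the state, return the old one
    struct read {};                          // return the state unchanged

    using state_t = int32_t;
    using input_t = std::variant<exchange, read>;
    using output_t = int32_t;

    static constexpr state_t init = 0;       // assumed initial value

    static std::pair<state_t, output_t> delta(state_t state, input_t input) {
        if (auto* x = std::get_if<exchange>(&input)) {
            return {x->new_value, state};    // new state, previous value as output
        }
        return {state, state};               // read: state unchanged, output = state
    }
};
```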