We revive the `join_ring` option. We support it only in the
Raft-based topology, as we plan to remove the gossip-based topology
once we fix its last blocker: the implementation of the manual
recovery tool. In the Raft-based topology, a node can be assigned
tokens only once, when it joins the cluster. Hence, we disallow
joining the ring later, which is possible in Cassandra.
The main idea behind the solution is simple. We make the unsupported
special case of zero tokens a supported normal case. Nodes with zero
tokens assigned are called "zero-token nodes" from now on.
From the topology point of view, zero-token nodes are the same as
token-owning nodes: they can be in the same states, and so on. From
the data point of view, they are different: they are not members of
the token ring, so they are not present in
`token_metadata::_normal_token_owners`. Hence, they are ignored by
all non-local replication strategies. The tablet load balancer also
ignores them.
Zero-token nodes can be used as coordinator-only nodes, just like in
Cassandra. They can handle requests just like token-owning nodes.
The main motivation behind zero-token nodes is that they can prevent
Raft majority loss efficiently. Zero-token nodes are group 0
voters, but they can run on much weaker and cheaper machines because
they do not replicate data and, by default, do not handle client
requests (drivers ignore them). For example, if there are two DCs,
one with 4 nodes and one with 5, adding a third DC with 2 zero-token
nodes means every DC contains fewer than half of the nodes, so we do
not lose the majority when any single DC dies.
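The arithmetic above can be checked with a minimal sketch (the
function name is illustrative, not a ScyllaDB API); it assumes every
node is a group 0 voter:

```python
def quorum_survives_any_dc_loss(dc_sizes):
    """True if a Raft quorum remains after losing any single DC."""
    total = sum(dc_sizes)
    quorum = total // 2 + 1
    return all(total - size >= quorum for size in dc_sizes)

# Two DCs of 4 and 5 nodes: losing the 5-node DC leaves 4 voters,
# below the quorum of 5 out of 9.
assert not quorum_survives_any_dc_loss([4, 5])

# Adding a DC with 2 zero-token voters: 11 voters, quorum 6, and
# losing any single DC leaves at least 6 voters.
assert quorum_survives_any_dc_loss([4, 5, 2])
```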
Another way of preventing Raft majority loss is changing the
voter set, which is tracked by scylladb/scylladb#18793. That approach
can be combined with zero-token nodes. In the example above, if
we chose equal numbers of voters in both DCs, a DC with a single
zero-token node would suffice. However, in the typical setup of
2 DCs with the same number of nodes, it is enough to add a DC with
only one zero-token node without changing the voter set.
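For the voter-set variant, the same check can be done inline (the
voter layout below is an assumption for illustration, not something
this PR configures):

```python
# Assumed layout: 4 voters in each data DC (the extra node in the
# 5-node DC made a non-voter) plus 1 zero-token voter as tiebreaker.
voters = [4, 4, 1]
total = sum(voters)           # 9 voters
quorum = total // 2 + 1       # quorum of 5
# Losing any single DC leaves at least 5 voters, so quorum holds.
assert all(total - v >= quorum for v in voters)
```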
Zero-token nodes could also be used as load balancers in
Alternator.
Additionally, this PR fixes scylladb/scylladb#11087, which turned
out to be a blocker.
This PR introduces a new feature, so there is no need to backport it.
Fixes scylladb/scylladb#6527
Fixes scylladb/scylladb#11087
Fixes scylladb/scylladb#15360
Closes scylladb/scylladb#19684
* github.com:scylladb/scylladb:
docs: raft: document using zero-token nodes to prevent majority loss
test: test recovery mode in the presence of zero-token nodes
test: topology: util.py: add cqls parameter to check_system_topology_and_cdc_generations_v3_consistency
test: topology: util.py: accept zero tokens in check_system_topology_and_cdc_generations_v3_consistency
treewide: support zero-token nodes in the recovery mode
storage_proxy: make TRUNCATE work locally for local tables
test: topology: util.py: document that check_token_ring_and_group0_consistency fails with zero-token nodes
test: test zero-token nodes
test: test_topology_ops: move helpers to topology/util.py
feature_service: introduce the ZERO_TOKEN_NODES feature
storage_service: rename join_token_ring to join_topology
storage_service: raft_topology_cmd_handler: improve warnings
topology_coordinator: fix indentation after the previous patch
treewide: introduce support for zero-token nodes in Raft topology
system_keyspace: load_topology_state: remove assertion impossible to hit
treewide: distinguish all nodes from all token owners
gossip topology: make a replacing node remove the replaced node from topology
locator: topology: add_or_update_endpoint: use none as the default node state
test: boost: tablets tests: ensure all nodes are normal token owners
token_metadata: rename get_all_endpoints and get_all_ips
network_topology_strategy: reallocate_tablets: remove unused dc_rack_nodes
virtual_tables: cluster_status_table: execute: set dc regardless of the token ownership