Recent changes in topology restricted the get_dc/get_rack calls. Older
code tried to locate the endpoint in the gossiper, then in the system
keyspace cache, and if the endpoint was not found in either, returned
the "default" location.
The new code generates an internal error in this case. This approach has
already helped spot several bugs in code that have since been fixed, but
echoes of that change still pop up.
This patch relaxes the "missing endpoint" case by printing a warning to
the logs and returning the "default" location, like the old code did.
tests: update_cluster_layout_tests.py::*
hintedhandoff_additional_test.py::TestHintedHandoff::test_hintedhandoff_rebalance
bootstrap_test.py::TestBootstrap::test_decommissioned_wiped_node_can_join
bootstrap_test.py::TestBootstrap::test_failed_bootstap_wiped_node_can_join
materialized_views_test.py::TestMaterializedViews::test_decommission_node_during_mv_insert_4_nodes
refs: #11900
refs: #12054
fixes: #11870
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #12067
Returns an unordered set of datacenter names
to be used by network_topology_replication_strategy
and for ks_prop_defs.
The set is kept in sync with _dc_endpoints.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #12023
Since commit a980f94 (token_metadata: impl: keep the set of normal token
owners as a member), we have a set, _normal_token_owners, which contains
all the nodes in the ring.
We can use _normal_token_owners to check if a node is part of the ring
directly, instead of going through the _topology indirectly.
Fixes #11935
update_normal_tokens is the way to add a new node into the ring. We
should not require a new node to already be in the ring to be able to
add it to the ring. The current code works accidentally because
is_member checks whether a node is in the topology.
We should use _topology.has_endpoint to check if a node is part of the
topology explicitly.
`get_rpc_client` calculates a `topology_ignored` field when creating a
client which says whether the client's endpoint had topology information
when this client was created. This is later used to check if that client
needs to be dropped and replaced with a new client which uses the
correct topology information.
The `topology_ignored` field was incorrectly calculated as `true` for
pending endpoints even though we had topology information for them. This
would lead to unnecessary drops of RPC clients later. Fix this.
Remove the default parameter for `with_pending` from
`topology::has_endpoint` to avoid similar bugs in the future.
Apparently this fixes #11780. The verbs used by decommission operation
use RPC client index 1 (see `do_get_rpc_client_idx` in
message/messaging_service.cc). From local testing with additional
logging I found that by the time this client is created (i.e. the first
verb in this group is used), we already know the topology. The node is
pending at that point - hence the bug would cause us to assume we don't
know the topology, leading us to dropping the RPC client later, possibly
in the middle of a decommission operation.
Fixes: #11780
Closes #11942
* github.com:scylladb/scylladb:
message: messaging_service: check for known topology before calling is_same_dc/rack
test: reenable test_topology::test_decommission_node_add_column
test/pylib: util: configurable period in wait_for
message: messaging_service: fix topology_ignored for pending endpoints in get_rpc_client
message: messaging_service: topology independent connection settings for GOSSIP verbs
As described in #11993, per-shard repair_info instances get the effective_replication_map on their own, with no centralized synchronization.
This series ensures that the effective replication maps used by repair (and other associated structures, like the token metadata and topology) are all in sync with the one used to initiate the repair operation.
While at it, the series includes other cleanups in this area in repair and view that are not fixes, as the calls happen in synchronous functions that do not yield.
Fixes #11993
Closes #11994
* github.com:scylladb/scylladb:
repair: pass erm down to get_hosts_participating_in_repair and get_neighbors
repair: pass effective_replication_map down to repair_info
repair: coroutinize sync_data_using_repair
repair: futurize do_repair_start
effective_replication_map: add global_effective_replication_map
shared_token_metadata: get_lock is const
repair: sync_data_using_repair: require to run on shard 0
repair: require all node operations to be called on shard 0
repair: repair_info: keep effective_replication_map
repair: do_repair_start: use keyspace erm to get keyspace local ranges
repair: do_repair_start: use keyspace erm for get_primary_ranges
repair: do_repair_start: use keyspace erm for get_primary_ranges_within_dc
repair: do_repair_start: check_in_shutdown first
repair: get_db().local() where needed
repair: get topology from erm/token_metadata_ptr
view: get_view_natural_endpoint: get topology from erm
This series moves the topology code from locator/token_metadata.{cc,hh} out to locator/topology.{cc,hh}
and introduces a shared header file: locator/types.hh contains shared, low-level definitions, in anticipation of https://github.com/scylladb/scylladb/pull/11987
While at it, the token_metadata functions are turned into coroutines
and topology copy constructor is deleted. The copy functionality is moved into an async `clone_gently` function that allows yielding while copying the topology.
Closes #12001
* github.com:scylladb/scylladb:
locator: refactor topology out of token_metadata
locator: add types.hh
topology: delete copy constructor
token_metadata: coroutinize clone functions
An unordered_set is more efficient, and there is no need
to return an ordered set for this purpose.
This change facilitates a follow-up change that adds
topology::get_datacenters(), returning an unordered_set
of datacenter names.
Refs #11987
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Closes #12003
A class to hold a coherent view of a keyspace's
effective replication map on all shards.
To be used in a following patch to pass the sharded
keyspace e_r_m instances to repair.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Topology is copied only from token_metadata_impl::clone_only_token_map
which copies the token_metadata_impl with yielding to prevent reactor
stalls. This should apply to topology as well, so
add a clone_gently function for cloning the topology
from token_metadata_impl::clone_only_token_map.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Although the docs discourage using INADDR_ANY as the listen address, this is not disabled in code. Worse, some snitch drivers may gossip it around as the INTERNAL_IP state. This set prevents that from happening and also adds a sanity check so the value is not used if it somehow sneaks in.
Closes #11846
* github.com:scylladb/scylladb:
messaging_service: Deny putting INADDR_ANY as preferred ip
messaging_service: Toss preferred ip cache management
gossiping_property_file_snitch: Dont gossip INADDR_ANY preferred IP
gossiping_property_file_snitch: Make _listen_address optional
To be used for specifying nodes either by their
host_id or ip address and using the token_metadata
to resolve the mapping.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The node to be removed must be identified by its host_id.
Validate that at the api layer and pass the parsed host_id
down to storage_service::removenode.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Gossiping 0.0.0.0 as the preferred IP may break the peer, as it will
"interpret" this address as <myself>, which is not what the peer expects.
However, g.p.f.s. uses the --listen-address argument as the internal IP,
and it is not prohibited to configure it as 0.0.0.0.
It's better not to gossip the INTERNAL_IP property at all if the listen
address is such.
fixes: #11502
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
All uses of snitch now have their own local reference. The global
instance can now be replaced with the one living in main (and tests).
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The method replaces the snitch instance on the existing sharded<snitch_ptr>,
and the "existing" one is nowadays the global instance. This patch changes
it to use a local reference passed from API code.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
"
There's an ongoing effort to move the endpoint -> {dc/rack} mappings
from snitch onto topology object and this set finalizes it. After it the
snitch service stops depending on gossiper and system keyspace and is
ready for de-globalization. As a nice side-effect the system keyspace no
longer needs to maintain the dc/rack info cache and its starting code gets
relaxed.
refs: #2737
refs: #2795
"
* 'br-snitch-dont-mess-with-topology-data-2' of https://github.com/xemul/scylla: (23 commits)
system_keyspace: Dont maintain dc/rack cache
system_keyspace: Indentation fix after previous patch
system_keyspace: Coroutinize build_dc_rack_info()
topology: Move all post-configuration to topology::config
snitch: Start early
gossiper: Do not export system keyspace
snitch: Remove gossiper reference
snitch: Mark get_datacenter/_rack methods const
snitch: Drop some dead dependency knots
snitch, code: Make get_datacenter() report local dc only
snitch, code: Make get_rack() report local rack only
storage_service: Populate pending endpoint in on_alive()
code: Populate pending locations
topology: Put local dc/rack on topology early
topology: Add pending locations collection
topology: Make get_location() errors more verbose
token_metadata: Add config, spread everywhere
token_metadata: Hide token_metadata_impl copy constructor
gossiper: Remove messaging service getter
snitch: Get local address to gossip via config
...
Following up on 69aea59d97, which added fencing support
for simple reads and writes, this series does the same for the
complex ops:
- partition scan
- counter mutation
- paxos
With this done, the coordinator knows about all in-flight requests and
can delay topology changes until they are retired.
Closes #11296
* github.com:scylladb/scylladb:
storage_proxy: hold effective_replication_map for the duration of a paxos transaction
storage_proxy: move paxos_response_handler class to .cc file
storage_proxy: deinline paxos_response_handler constructor/destructor
storage_proxy: use consistent effective_replication_map for counter coordinator
storage_proxy: improve consistency in query_partition_key_range{,_concurrent}
storage_proxy: query_partition_key_range_concurrent: reduce smart pointer use
storage_proxy: query_partition_key_range_concurrent: improve token_metadata consistency
storage_proxy: query_singular: use fewer smart pointers
storage_proxy: query_singular: simplify lambda captures
locator: effective_replication_map: provide non-smart-pointer accessor to token_metadata
storage_proxy: use consistent token_metadata with rest of singular read
token_metadata is protected by holders of an effective_replication_map_ptr,
so it's just as safe and less expensive for them to obtain a reference
to token_metadata rather than a smart pointer, so give them that option with
a new accessor.
The EC2 instance metadata service can be busy; let's retry connecting with
an interval, just like we do in scylla-machine-image.
Fixes #10250
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Closes #11688
Because of the snitch's ex-dependencies, some bits of topology were
initialized with nasty post-start calls. Now it can all be removed, and
the initial topology information can be provided by topology::config.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It doesn't need the gossiper any longer. This change will allow starting
the snitch early in the next patch, and eventually improving the
token-metadata start-up sequence.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
They are in fact const, but were not marked as such before, because they
wanted to talk to non-const gossiper and system_keyspace methods and
updated the snitch's internal caches.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
After the previous patches and merged branches, the snitch no longer needs
its method that gets dc/rack for endpoints from the gossiper, the system
keyspace, and its internal caches.
This cuts the last, and biggest, snitch->gossiper dependency.
It also removes the implicit snitch->system_keyspace dependency loop.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
A continuation of the previous patch: all the code uses
topology::get_datacenter(endpoint) to get peers' dc string. The topology
still uses the snitch for that, but it already contains the needed data.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
All the code out there now calls snitch::get_rack() to get the rack for the
local node. For other nodes, topology::get_rack(endpoint) is used.
Since the topology is now properly populated with endpoints, it can
finally be patched to stop using the snitch and get the rack from its
internal collections.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Previous patches added the concept of pending endpoints to the topology;
this patch populates endpoints in this state.
Also, set_pending_ranges() is patched to make sure that the tokens
added for the endpoint(s) are added for something that's known by the
topology. The same check exists in update_normal_tokens().
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Startup code needs to know the dc/rack of the local node early, well
before the node starts any communication with the ring. This information is
available when the snitch activates, but it starts _after_ token-metadata,
so the only way to put the local dc/rack in the topology is via a
startup-time special API call. This new init_local_endpoint() is temporary
and will be removed later in this set.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Nowadays the topology object only keeps info about nodes that are normal
members of the ring. Nodes that are joining, bootstrapping, or leaving
are out of it. However, one of the goals of this patchset is to make the
topology object provide dc/rack info for _all_ nodes, even those in a
transient state.
The introduced _pending_locations will hold the dc/rack info for
transient endpoints. When a node becomes a member of the ring it is moved
from pending (if it's there) to current locations; when it leaves the
ring it's moved back to pending.
For now the new collection is just added and the add/remove/get API is
extended to maintain it, but it's not really populated. That will come in
the next patch.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Currently, if topology.get_location() doesn't find an entry in its
collection(s), it throws a standard out-of-range exception, which is very
hard to debug.
Also, the next patches will extend this method; the
if (_current_locations.contains()) check introduced here makes that future
change look nicer.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The next patches will need to provide some early-start data for the topology.
The standard way of doing this is via a service config, so this patch adds
one. The new config is empty in this patch, to be filled later.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Copying token_metadata_impl is a heavy operation, performed
internally with the help of the dedicated clone_async() method. This
method, in turn, doesn't copy the whole object in its copy-ctor, but
rather default-initializes it and carries over the remaining fields later.
That said, the standard copy-ctor is better made private
and, for the sake of being more explicit, marked as a shallow-copy-ctor.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The property-file snitch gossips the listen_address as the internal-IP
state, getting this value via the snitch->gossiper->messaging_service
chain. This change provides the needed value via config, thus cutting yet
another snitch->gossiper dependency and allowing the gossiper not to
export the messaging service in the future.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The goal is not to default-initialize an object when its fields are about to be immediately overwritten by the code that follows.
Closes #11619
* github.com:scylladb/scylladb:
replication_strategy: Construct temp tokens in place
topology: Define copy-constructor with init-lists
Nowadays it's done inside the snitch, and the snitch needs to carry a
gossiper reference for that. There's an ongoing effort to de-globalize
the snitch and fix its dependencies. This patch cuts this
snitch->gossiper link to facilitate the mentioned effort.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Otherwise, the token_metadata object is default-initialized, then it's
move-assigned from another object.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Otherwise the topology is default-constructed, then its fields
are copy-assigned with the data from the copy-from reference.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
token_metadata::calculate_pending_ranges_for_bootstrap() makes a
clone of itself and adds bootstrapping nodes to the clone to calculate
ranges. Currently, the added nodes lack the dc/rack, which affects the
calculations in a bad way.
Unfortunately, the dc/rack for those nodes is not available in the topology
(yet) and would need pretty heavy patching to have. Fortunately, the only
caller of this method has the gossiper at hand to provide the dc/rack.
fixes: #11531
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #11596
This std::function causes allocations, both on construction
and in other operations. This costs ~2200 instructions
for a DC-local query. Fix that.
Closes #11494