scylladb/test/cluster/test_decommission.py
Patryk Jędrzejczak bb9fb7848a test: cluster: deflake consistency checks after decommission
In the Raft-based topology, a decommissioning node is removed from group
0 after the decommission request is considered finished (and the token
ring is updated). Therefore, `check_token_ring_and_group0_consistency`
called just after decommission might fail when the decommissioned node
is still in group 0 (as a non-voter). This commit deflakes all tests that
call `check_token_ring_and_group0_consistency` after decommission.

Fixes #25809
2025-09-09 19:01:12 +02:00


#
# Copyright (C) 2024-present ScyllaDB
#
# SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
#
import logging
import time

import pytest

from test.pylib.manager_client import ManagerClient
from test.cluster.util import wait_for_token_ring_and_group0_consistency

logger = logging.getLogger(__name__)

@pytest.mark.asyncio
async def test_decommissioned_node_cant_rejoin(request, manager: ManagerClient):
    # This is a regression test for #17282.
    logger.info("Bootstrapping the leader node")
    servers = [await manager.server_add()]
    logger.info("Bootstrapping the second node")
    servers += [await manager.server_add()]
    # It's important that we decommission a node which is not the leader.
    # We want to check the case where, after a restart, the node needs
    # to communicate with other nodes to discover a leader.
    logger.info(f"Decommissioning node {servers[1]}")
    await manager.decommission_node(servers[1].server_id)
    await wait_for_token_ring_and_group0_consistency(manager, time.time() + 30)
    logger.info(f"Attempting to start node {servers[1]} after it was decommissioned")
    await manager.server_start(servers[1].server_id,
                               expected_error='This node was decommissioned and will not rejoin the ring')
    logger.info("Got the expected error")
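The deflaking fix replaces a one-shot consistency check with a deadline-bounded wait, since the decommissioned node may linger in group 0 as a non-voter for a short while. A minimal sketch of such a retry helper, assuming a check that raises `AssertionError` while the cluster is still inconsistent (the names `wait_until_consistent` and `check_fn` are illustrative, not the real `wait_for_token_ring_and_group0_consistency` implementation):

```python
import asyncio
import time


async def wait_until_consistent(check_fn, deadline: float, period: float = 0.1):
    """Retry an async consistency check until it passes or the deadline expires.

    check_fn: coroutine function that raises AssertionError on inconsistency.
    deadline: absolute time (as returned by time.time()) to give up at.
    period: sleep between attempts, in seconds.
    """
    while True:
        try:
            await check_fn()
            return
        except AssertionError:
            # If another attempt would overrun the deadline, surface the failure.
            if time.time() + period > deadline:
                raise
            await asyncio.sleep(period)
```

A caller would pass `time.time() + 30` as the deadline, mirroring the test above; a transiently failing check then succeeds on a later attempt instead of flaking the test.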