This commit removes the Open Source vs. Enterprise matrix
from the Open Source documentation.
In addition, a redirect is added to prevent 404s in the OSS docs,
and the removed page is replaced with a link to the same page
in the Enterprise docs.
This commit must be reverted in enterprise.git, because
we want to keep the matrix in the Enterprise docs.
Fixes https://github.com/scylladb/scylladb/issues/17289
Closes scylladb/scylladb#17295
Instead of adding an asterisk next to "liveness" linking to the glossary, we will temporarily replace it with a hyperlink, pending the implementation of tooltip functionality.
Closes scylladb/scylladb#17244
This PR implements a procedure that upgrades existing clusters to use
raft-based topology operations. The procedure does not start
automatically, it must be triggered manually by the administrator after
making sure that no topology operations are currently running.
Upgrade is triggered by sending a `POST
/storage_service/raft_topology/upgrade` request. This causes the
topology coordinator to start, which drives the rest of the process: it
builds the `system.topology` state based on information observed in
gossip and tells all nodes to switch to raft mode. Then, the topology
coordinator runs normally.
Upgrade progress is tracked in a new static column `upgrade_state` in
`system.topology`.
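For illustration, the trigger request can be sent with a short script like the one below. The POST path comes from the description above; the host, port, and helper name are assumptions for the sketch, not part of this change:

```python
# Sketch: build (and optionally send) the upgrade-trigger request.
# Only the path /storage_service/raft_topology/upgrade comes from the
# change description; host, port, and function name are illustrative.
import urllib.request

def build_upgrade_request(host: str = "127.0.0.1",
                          port: int = 10000) -> urllib.request.Request:
    """Build the POST request that asks the cluster to start the
    topology-on-raft upgrade. Sending it is left to the caller, e.g.
    urllib.request.urlopen(build_upgrade_request())."""
    url = f"http://{host}:{port}/storage_service/raft_topology/upgrade"
    return urllib.request.Request(url, method="POST")

req = build_upgrade_request()
print(req.get_method(), req.full_url)
```

The administrator would send this once, after checking that no topology operations are running, and then watch the `upgrade_state` column mentioned above for progress.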
The procedure also serves as an extension to the current recovery
procedure on raft. That procedure requires restarting
nodes in a special mode which disables raft, performing `nodetool
removenode` on the dead nodes, cleaning up some state on the nodes, and
restarting them so that they automatically rebuild group 0. Raft
topology fits into the existing procedure by falling back to legacy
topology operations after raft is disabled. After group 0 is rebuilt,
upgrade needs to be triggered again.
Because the upgrade is manual and it might not be convenient for
administrators to run it right after upgrading the cluster, we allow the
cluster to operate in legacy topology operations mode until the upgrade
is performed, which includes allowing new nodes to join. To support
this, nodes now ask the cluster which mode they should use to join
before proceeding, using a new `JOIN_NODE_QUERY` RPC.
The procedure is explained in more detail in `topology-over-raft.md`.
Fixes: https://github.com/scylladb/scylladb/issues/15008
Closes scylladb/scylladb#17077
* github.com:scylladb/scylladb:
test/topology_custom: upgrade/recovery tests for topology on raft
cdc/generation_service: in legacy mode, fall back to raft tables
system_keyspace: add read_cdc_generation_opt
cdc/generation_service: turn off gossip notifications in raft topo mode
cql_test_env: move raft_topology_change_enabled var earlier
group0_state_machine: pull snapshot after raft topology feature enabled
storage_service: disable persistent feature enabler on upgrade
storage_service: replicate raft features to system.peers
storage_service: gossip tokens and cdc generation in raft topology mode
API: add api for triggering and monitoring topology-on-raft upgrade
storage_service: infer which topology operations to use on startup
storage_service: set the topology kind value based on group 0 state
raft_group0: expose link to the upgrade doc in the header
feature_service: fall back to checking legacy features on startup
storage_service: add fiber for tracking the topology upgrade progress
gms: feature_service: add SUPPORTS_CONSISTENT_TOPOLOGY_CHANGES
topology_coordinator: implement core upgrade logic
topology_coordinator: extract top-level error handling logic
storage_service: initialize discovery leader's state earlier
topology_coordinator: allow for custom sharding info in prepare_and_broadcast_cdc_generation_data
topology_coordinator: allow for custom sharding info in prepare_new_cdc_generation_data
topology_coordinator: remove outdated fixme in prepare_new_cdc_generation_data
topology_state_machine: introduce upgrade_state
storage_service: disallow topology ops when upgrade is in progress
raft_group0_client: add in_recovery method
storage_service: introduce join_node_query verb
raft_group0: make discover_group0 public
raft_group0: filter current node's IP in discover_group0
raft_group0: remove my_id arg from discover_group0
storage_service: make _raft_topology_change_enabled more advanced
docs: document raft topology upgrade and recovery
Adds a missing logging import in the scylladb_common_images extension, which was breaking the enterprise build.
Additionally, it standardizes logging handling across the extensions and removes "ami" references in the Azure and GCP extensions.
Closes scylladb/scylladb#17137
This PR removes the following pages:
- ScyllaDB Open Source Features
- ScyllaDB Enterprise Features
They were outdated, incomplete, and misleading. They were also redundant, as the per-release updates are added as Release Notes.
With this update, the features listed on the removed pages are added under the common page: ScyllaDB Features.
In addition, a reference to the Enterprise-only Features section is added.
Note: No redirections are added because no file paths or URLs are changed with this PR.
Fixes https://github.com/scylladb/scylladb/issues/13485
Refs https://github.com/scylladb/scylladb/issues/16496
(nobackport)
Closes scylladb/scylladb#17150
* github.com:scylladb/scylladb:
Update docs/using-scylla/features.rst
doc: remove the OSS and Enterprise Features pages
This PR:
- Adds the upgrade guide from ScyllaDB Open Source 5.4 to ScyllaDB Enterprise 2024.1. Note: The need to include the "Restore system tables" step in rollback has been confirmed; see https://github.com/scylladb/scylladb/issues/11907#issuecomment-1842657959.
- Removes the 5.1-to-2022.2 upgrade guide (unsupported versions).
Fixes https://github.com/scylladb/scylladb/issues/16445
Closes scylladb/scylladb#16887
* github.com:scylladb/scylladb:
doc: fix the OSS version number
doc: metric updates between 2024.1. and 5.4
doc: remove the 5.1-to-2022.2 upgrade guide
doc: add the 5.4-to-2024.1 upgrade guide
After changing `left_token_ring` from a node state to a transition
state in scylladb/scylladb#17009, we do the same for
`rollback_to_normal`. `rollback_to_normal` was created as a node
state because `left_token_ring` was a node state.
This change will allow us to distinguish a failed removenode from
a failed decommission in the `rollback_to_normal` handler.
Currently, we use the same logic for both of them, so it's not
required. However, this might change, as it has happened with the
decommission and the failed bootstrap/replace in the
`left_token_ring` state (scylladb/scylladb#16797). We are making
this change now because it would be much harder after branching.
Fixes scylladb/scylladb#17032
Closes scylladb/scylladb#17136
* github.com:scylladb/scylladb:
docs: dev: topology-over-raft: align indentation
docs: dev: topology-over-raft: document the rollback_to_normal state
topology_coordinator: improve logs in rollback_to_normal handler
raft topology: make rollback_to_normal a transition state
This commit removes the following pages:
- ScyllaDB Open Source Features
- ScyllaDB Enterprise Features
They were outdated, incomplete, and misleading.
They were also redundant, as the per-release
updates are added as Release Notes.
With this update, the features listed on the removed
pages are added under the common page: ScyllaDB Features.
Note: No redirections are added, because no file paths
or URLs are changed with this commit.
Fixes https://github.com/scylladb/scylladb/issues/13485
Refs https://github.com/scylladb/scylladb/issues/16496
In one of the previous patches, we changed the `rollback_to_normal`
state from a node state to a transition state. We document it
in this patch. The node state wasn't documented, so there is
nothing to remove.
The motivation for tablet resizing is to keep the average tablet size reasonable, so that load rebalancing can remain efficient. Tablets that are too large make migration inefficient, therefore slowing down the balancer.
If the average size grows beyond the upper bound (the split threshold), the balancer decides to split. A split spans all tablets of a table, due to the power-of-two constraint.
Likewise, if the average size decreases below the lower bound (the merge threshold), a merge takes place in order to grow the average size. Merge is not implemented yet, although this series lays the foundation for implementing it later on.
A resize decision can be revoked if the average size changes and the decision is no longer needed. For example, suppose a table is being split and the average size drops below the target size (which is 50% of the split threshold and 100% of the merge threshold). That means that after the split, the average size would drop below the merge threshold, causing a merge right after the split, which is wasteful, so it's better to just cancel the split.
Tablet metadata gains two new fields for managing this:
- resize_type: the resize decision type, one of "merge", "split", or "none".
- resize_seq_number: a sequence number that works as the global identifier of the decision (monotonically increasing, increased by 1 on every new decision emitted by the coordinator).
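The threshold logic described above can be sketched as follows. This is an illustrative model, not the actual ScyllaDB code; the names and the concrete threshold value are assumptions, and the real thresholds are configuration-dependent:

```python
# Illustrative sketch of the resize decision: split above the split
# threshold, merge below the merge threshold (50% of split), and revoke
# an ongoing split once the average falls below the target size, since
# finishing it would only trigger a wasteful merge right after.
SPLIT_THRESHOLD = 100.0          # hypothetical units, e.g. GiB per tablet
MERGE_THRESHOLD = SPLIT_THRESHOLD / 2
TARGET_SIZE = SPLIT_THRESHOLD / 2  # 50% of split, 100% of merge threshold

def decide_resize(avg_tablet_size: float, ongoing: str = "none") -> str:
    """Return the resize_type the balancer would emit for a table."""
    if ongoing == "split" and avg_tablet_size < TARGET_SIZE:
        return "none"  # revoke: post-split avg would fall below merge threshold
    if avg_tablet_size > SPLIT_THRESHOLD:
        return "split"
    if avg_tablet_size < MERGE_THRESHOLD:
        return "merge"
    return ongoing  # within bounds: keep whatever decision is in flight
```

For example, a table whose average tablet size climbs to 150 gets a split decision, and if the average then drops to 40 before the split finalizes, the decision is revoked.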
A new RPC was implemented to pull stats from each table replica, so that the load balancer can calculate the average tablet size and know the "split status" for a given table. The average size is aggregated carefully, taking the RF of each DC (which might differ) into account.
When a table is done splitting its storage, it loads (mirrors) the resize_seq_number from tablet metadata into its local state (in other words: "my split status is ready"). If a table is split-ready, the coordinator will see that the table's seq number is the same as the one in tablet metadata. This helps to distinguish stale decisions from the latest one (in case decisions are revoked and re-emitted later on). The seq number is also aggregated carefully, by taking the minimum among all replicas, so the coordinator will only update topology when all replicas are ready.
When the load balancer emits a split decision, replicas detect the need to split with a "split monitor" that is awakened once a table's replication metadata is updated and indicates the need for a split (i.e. the resize_type field is "split").
The split monitor will start splitting the table's compaction groups (using the mechanism introduced in 081f30d149). Once the splitting work is completed, the table updates its local state as having completed the split.
When the coordinator pulls the split status of all replicas for a table via the RPC, the balancer can see whether that table is ready for "finalizing" the decision, which means updating tablet metadata to split each tablet into two. Once table replicas have their replication metadata updated with the new tablet count, they can appropriately update their set of compaction groups (which were previously split in the preparation step).
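The readiness check sketched below shows why the minimum seq number across replicas is the right aggregate: any replica that is stale or still splitting reports a lower number, so the coordinator finalizes only when every replica has caught up. Function and variable names are hypothetical:

```python
def ready_to_finalize(current_seq: int, replica_seqs: list[int]) -> bool:
    """True iff every replica mirrors the latest decision's seq number.
    A replica that is still splitting, or that mirrors a revoked older
    decision, reports a lower seq, so min() holds back finalization."""
    return bool(replica_seqs) and min(replica_seqs) == current_seq
```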
Fixes #16536.
Closes scylladb/scylladb#16580
* github.com:scylladb/scylladb:
test/topology_experimental_raft: Add tablet split test
replica: Bypass reshape on boot with tablets temporarily
replica: Fix table::compaction_group_for_sstable() for tablet streaming
test/topology_experimental_raft: Disable load balancer in test fencing
replica: Remap compaction groups when tablet split is finalized
service: Split tablet map when split request is finalized
replica: Update table split status if completed split compaction work
storage_service: Implement split monitor
topology_cordinator: Generate updates for resize decisions made by balancer
load_balancer: Introduce metrics for resize decisions
db: Make target tablet size a live-updateable config option
load_balancer: Implement resize decisions
service: Wire table_resize_plan into migration_plan
service: Introduce table_resize_plan
tablet_mutation_builder: Add set_resize_decision()
topology_coordinator: Wire load stats into load balancer
storage_service: Allow tablet split and migration to happen concurrently
topology_coordinator: Periodically retrieve table_load_stats
locator: Introduce topology::get_datacenter_nodes()
storage_service: Implement table_load_stats RPC
replica: Expose table_load_stats in table
replica: Introduce storage_group::live_disk_space_used()
locator: Introduce table_load_stats
tablets: Add resize decision metadata to tablet metadata
locator: Introduce resize_decision
When a node is in the `left_token_ring` state, we don't know how
it has ended up in this state. We cannot distinguish a node that
has finished decommissioning from a node that has failed bootstrap.
The main problem it causes is that we incorrectly send the
`barrier_and_drain` command to a node that has failed
bootstrapping or replacing. We must do it for a node that has
finished decommissioning because it could still coordinate
requests. However, since we cannot distinguish nodes in the
`left_token_ring` state, we must send the command to all of them.
This issue appeared in scylladb/scylladb#16797 and this PR is
a follow-up that fixes it.
The solution is changing `left_token_ring` from a node state
to a transition state.
Fixes scylladb/scylladb#16944
Closes scylladb/scylladb#17009
* github.com:scylladb/scylladb:
docs: dev: topology-over-raft: document the left_token_ring state
topology_coordinator: adjust reason string in left_token_ring handler
raft topology: make left_token_ring a transition state
topology_coordinator: rollback_current_topology_op: remove unused exclude_nodes
In one of the previous patches, we changed the `left_token_ring`
state from a node state to a transition state. We document it
in this patch. The node state wasn't documented, so there is
nothing to remove.
In this mode, the node is not reachable from the outside, i.e.
* it refuses all incoming RPC connections,
* it does not join the cluster, thus
* all group0 operations are disabled (e.g. schema changes),
* all cluster-wide operations are disabled for this node (e.g. repair),
* other nodes see this node as dead,
* it cannot read or write data from/to other nodes,
* it does not open Alternator and Redis transport ports and the TCP CQL port.
The only way to make CQL queries is to use the maintenance socket. The node serves only local data.
To start the node in maintenance mode, use the `--maintenance-mode true` flag or set `maintenance_mode: true` in the configuration file.
The REST API works as usual, but some routes are disabled:
* authorization_cache
* failure_detector
* hinted_hand_off_manager
This PR also updates the maintenance socket documentation:
* add cqlsh usage to the documentation
* update the documentation to use `WhiteListRoundRobinPolicy`
Fixes #5489.
Closes scylladb/scylladb#15346
* github.com:scylladb/scylladb:
test.py: add test for maintenance mode
test.py: generalize usage of cluster_con
test.py: when connecting to node in maintenance mode use maintenance socket
docs: add maintenance mode documentation
main: add maintenance mode
main: move some REST routes initialization before joining group0
message_service: add sanity check that rpc connections are not created in the maintenance mode
raft_group0_client: disable group0 operations in the maintenance mode
service/storage_service: add start_maintenance_mode() method
storage_service: add MAINTENANCE option to mode enum
service/maintenance_mode: add maintenance_mode_enabled bool class
service/maintenance_mode: move maintenance_socket_enabled definition to seperate file
db/config: add maintenance mode flag
docs: add cqlsh usage to maintenance socket documentation
docs: update maintenance socket documentation to use WhiteListRoundRobinPolicy
This implements the ability of the load balancer to emit split or merge
requests, cancel ongoing ones if they're no longer needed, and
also finalize those that are ready for the topology changes.
That's all based on the average tablet size, collected by the coordinator
from all nodes, and the split and merge thresholds.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
The new metadata describes the ongoing resize operation (one of
merge, split, or none) that spans the tablets of a given table.
That's managed by group0, so down nodes will be able to see the
decision when they come back up and see the changes to the
metadata.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
New tablet replicas are allocated and rebuilt synchronously with node
operations. They are safely rebuilt from all existing replicas.
The list of ignored nodes passed to node operations is respected.
The tablet scheduler is responsible for scheduling the tablet rebuild transition, which
changes the replica set. The infrastructure for handling decommission
in the tablet scheduler is reused for this.
Scheduling is done incrementally, respecting per-shard load
limits. Rebuilding transitions are recognized by load calculation to
affect all tablet replicas.
A new kind of tablet transition is introduced, called "rebuild", which
adds a new tablet replica and rebuilds it from existing replicas. Other
than that, the transition goes through the same stages as a regular
migration to ensure safe synchronization with request coordinators.
In this PR we simply stream from all tablet replicas. Later we should
switch to calling repair to avoid sending excessive amounts of data.
Fixes https://github.com/scylladb/scylladb/issues/16690.
Closes scylladb/scylladb#16894
* github.com:scylladb/scylladb:
tests: tablets: Add tests for removenode and replace
tablets: Add support for removenode and replace handling
topology_coordinator: tablets: Do not fail in a tight loop
topology_coordinator: tablets: Avoid warnings about ignored failured future
storage_service, topology: Track excluded state in locator::topology
raft topology: Introduce param-less topology::get_excluded_nodes()
raft topology: Move get_excluded_nodes() to topology
tablets: load_balancer: Generalize load tracking
tablets: Introduce get_migration_streaming_info() which works on migration request
tablets: Move migration_to_transition_info() to tablets.hh
tablets: Extract get_new_replicas() which works on migraiton request
tablets: Move tablet_migration_info to tablets.hh
tablets: Store transition kind per tablet
Add empty line before list of different checksums in
validate-checksums's description. Otherwise the list is not rendered.
Closes scylladb/scylladb#16401
This commit improves the developer-oriented section
of the core documentation:
- Added links to the developer sections in the new
Get Started guide (Develop with ScyllaDB and
Tutorials and Example Projects) for ease of access.
- Replaced the outdated Learn to Use ScyllaDB page with
a link to the up-to-date page in the Get Started guide.
This involves removing the learn.rst file and adding
an appropriate redirection.
- Removed the Apache Copyrights, as this page does not
need it.
- Removed the Features panel box as there was only one
feature listed, which looked weird. Also, we are in
the process of removing the Features section.
Closes scylladb/scylladb#16800
This enhancement formats descriptions in config.cc using the standard markup language reStructuredText (RST).
By doing so, it improves the rendering of these descriptions in the documentation, allowing you to use various directives like admonitions, code blocks, ordered lists, and more.
Closes scylladb/scylladb#16311
This commit adds the information that
ScyllaDB Enterprise 2024.1 is based
on ScyllaDB Open Source 5.4
to the OSS vs. Enterprise matrix.
Closes scylladb/scylladb#16880
This PR:
- Removes the redundant information about previous versions from the Create Cluster page.
- Fixes language mistakes on that page, and replaces "Scylla" with "ScyllaDB".
(nobackport)
Closes scylladb/scylladb#16885
* github.com:scylladb/scylladb:
doc: fix the language on the Create Cluster page
doc: remove reduntant info about old versions
This commit removes the 5.1-to-2022.2 upgrade
guide - the upgrade guide for versions we
no longer support.
We should remove it while adding the 5.4-to-2024.1
upgrade guide (the previous commit).
This commit removes the upgrade guides
from ScyllaDB Open Source to Enterprise
for versions we no longer support.
In addition, it removes a link to
one of the removed pages from
the Troubleshooting section (the link is
redundant).
Closes scylladb/scylladb#16249
Add `docs/dev/code-coverage.md` with explanations of how to work with
the different tools added for coverage reporting, and of the CLI options
added to `configure.py` and `test.py`.
Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
To enable tablets replication, one needs to turn on the (experimental) feature and specify the `initial_tablets: N` option when creating a keyspace. We want tablets to become the default in the future and to allow users to explicitly opt out if they want to.
This PR solves this by changing the CREATE KEYSPACE syntax with respect to tablets options. Now there's a new TABLETS options map and the usage is:
* `CREATE KEYSPACE ...` will turn tablets on or off based on cluster feature being enabled/disabled
* `CREATE KEYSPACE ... WITH TABLETS = { 'enabled': false }` will turn tablets off regardless of whether the cluster feature is enabled
* `CREATE KEYSPACE ... WITH TABLETS = { 'enabled': true }` will try to enable tablets with default configuration
* `CREATE KEYSPACE ... WITH TABLETS = { 'initial': <int> }` replaces the previous `REPLICATION = { ... 'initial_tablets': <int> }` syntax
fixes: #16319
Closes scylladb/scylladb#16364
* github.com:scylladb/scylladb:
code: Enable tablets if cluster feature is enabled
test: Turn off tablets feature by default
test: Move test_tablet_drain_failure_during_decommission to another suite
test/tablets: Enable tables for real on test keyspace
test/tablets: Make timestamp local
cql3: Add feature service to as_ks_metadata_update()
cql3: Add feature service to ks_prop_defs::as_ks_metadata()
cql3: Add feature service to get_keyspace_metadata()
cql: Add tablets on/off switch to CREATE KEYSPACE
cql: Move initial_tablets from REPLICATION to TABLETS in DDL
network_topology_strategy: Estimate initial_tablets if 0 is set
This commit removes support for CentOS 7
from the docs.
The change applies to version 5.4, so it
must be backported to branch-5.4.
Refs https://github.com/scylladb/scylla-enterprise/issues/3502
In addition, this commit removes the information
about Amazon Linux and Oracle Linux, which was added
unnecessarily without a request and with no clarity
about which versions should be documented.
Closes scylladb/scylladb#16279