Merge 'doc: remove the Manager documentation from the core ScyllaDB docs' from Anna Stuchlik

In this PR, I have:

- removed the docs for Manager (including the sources for Manager 2.1 and the upgrade guides).
- added redirects to https://manager.docs.scylladb.com/.
- replaced the internal links with external links to https://manager.docs.scylladb.com/.

Closes #11162

* github.com:scylladb/scylladb:
  doc: update the link to fix the warning about duplicate targets
  Update docs/kb/gc-grace-seconds.rst
  Update docs/_utils/redirects.yaml
  doc: update the links to Manager
  doc: add the link to manager.docs.scylladb.com to the toctree
  doc: remove the docs for Manager - the Manager page, the guide for Manager 2.1, Manager upgrade guides
  doc: add redirections from Manager 2.1 to the Manager docs
  doc: add redirections to manager.docs.scylladb.com
This commit is contained in:
Botond Dénes
2022-08-02 12:29:37 +03:00
79 changed files with 77 additions and 6326 deletions


@@ -14,6 +14,44 @@
/stable/operating-scylla/scylla-operator/index.html: https://operator.docs.scylladb.com/stable/
### removing the old Scylla Manager documentation from the ScyllaDB docs
/stable/operating-scylla/manager/index.html: https://manager.docs.scylladb.com/
/stable/upgrade/upgrade-manager/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-maintenance-1.x.y-to-1.x.z/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-maintenance-1.x.y-to-1.x.z/upgrade-guide-from-manager-1.x.y-to-1.x.z-CentOS.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-maintenance-1.x.y-to-1.x.z/upgrade-guide-from-manager-1.x.y-to-1.x.z-ubuntu.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-manager-1.0.x-to-1.1.x.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-manager-1.1.x-to-1.2.x.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/upgrade-guide-from-manager-1.2.x-to-1.3.x-CentOS.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/upgrade-guide-from-manager-1.2.x-to-1.3.x-ubuntu.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/manager-metric-update-1.2-to-1.3.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/upgrade-guide-from-manager-1.3.x-to-1.4.x-CentOS.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/upgrade-guide-from-manager-1.3.x-to-1.4.x-ubuntu.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/manager-metric-update-1.3-to-1.4.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.4-to-2.0/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.4-to-2.0/upgrade-guide-from-manager-1.4.x-to-2.0.x.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-1.4-to-2.0/manager-metric-update-1.4-to-2.0.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/upgrade-2.x.a-to-2.y.b.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
/stable/upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/upgrade-row-level-repair.html: https://www.scylladb.com/2019/08/13/scylla-open-source-3-1-efficiently-maintaining-consistency-with-row-level-repair/
/stable/operating-scylla/manager/2.1/index.html: https://manager.docs.scylladb.com/
/stable/operating-scylla/manager/2.1/architecture.html: https://manager.docs.scylladb.com/
/stable/operating-scylla/manager/2.1/install.html: https://manager.docs.scylladb.com/stable/install-scylla-manager.html
/stable/operating-scylla/manager/2.1/install-agent.html: https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html
/stable/operating-scylla/manager/2.1/add-a-cluster.html: https://manager.docs.scylladb.com/stable/add-a-cluster.html
/stable/operating-scylla/manager/2.1/repair.html: https://manager.docs.scylladb.com/stable/repair/index.html
/stable/operating-scylla/manager/2.1/backup.html: https://manager.docs.scylladb.com/stable/backup/index.html
/stable/operating-scylla/manager/2.1/extract-schema-from-backup.html: https://manager.docs.scylladb.com/stable/sctool/backup.html
/stable/operating-scylla/manager/2.1/restore-a-backup.html: https://manager.docs.scylladb.com/stable/restore/index.html
/stable/operating-scylla/manager/2.1/health-check.html: https://manager.docs.scylladb.com/stable/health-check.html
/stable/operating-scylla/manager/2.1/sctool.html: https://manager.docs.scylladb.com/stable/sctool/index.html
/stable/operating-scylla/manager/2.1/monitoring-manager-integration.html: https://manager.docs.scylladb.com/stable/scylla-monitoring.html
/stable/operating-scylla/manager/2.1/use-a-remote-db.html: https://manager.docs.scylladb.com/
/stable/operating-scylla/manager/2.1/configuration-file.html: https://manager.docs.scylladb.com/stable/config/scylla-manager-config.html
/stable/operating-scylla/manager/2.1/agent-configuration-file.html: https://manager.docs.scylladb.com/stable/config/scylla-manager-agent-config.html
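Entries like the ones above are easy to sanity-check before publishing. A minimal sketch (not part of this PR; it assumes the flat ``source: target`` layout shown here) that parses such a fragment and validates that every source is site-absolute and every target is an external URL:

```python
SAMPLE = """\
### removing the old Scylla Manager documentation from the ScyllaDB docs
/stable/operating-scylla/manager/index.html: https://manager.docs.scylladb.com/
/stable/upgrade/upgrade-manager/index.html: https://manager.docs.scylladb.com/stable/upgrade/index.html
"""

def parse_redirects(text):
    """Parse flat 'source: target' redirect entries, skipping comments."""
    redirects = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        source, target = line.split(": ", 1)
        redirects[source] = target
    return redirects

redirects = parse_redirects(SAMPLE)
assert all(src.startswith("/") for src in redirects)              # site-absolute sources
assert all(t.startswith("https://") for t in redirects.values())  # external targets
```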
### moving the CQL reference files to the new cql folder


@@ -431,7 +431,7 @@ Is ``Nodetool Repair`` a Local (One Node) Operation or a Global (Full Cluster) O
When running :doc:`nodetool repair </operating-scylla/nodetool-commands/repair/>` on a node, it performs a repair on every token range this node owns; this will also repair other nodes that share the same range.
-If you wish to repair the entire cluster, it is recommended to run ``nodetool repair -pr`` on each node in the cluster, sequentially, or use the :doc:`Scylla Manager </operating-scylla/manager/index/>`.
+If you wish to repair the entire cluster, it is recommended to run ``nodetool repair -pr`` on each node in the cluster, sequentially, or use the `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_.
How can I change the maximum number of IN restrictions?


@@ -34,7 +34,7 @@ There are two types of compactions:
View Compaction Statistics
--------------------------
-Scylla has tools you can use to see the status of your compactions. These include nodetool (:doc:`compactionhistory </operating-scylla/nodetool-commands/compactionhistory>` and :doc:`compactionstats </operating-scylla/nodetool-commands/compactionstats>`) and the Grafana dashboards which are part of the `Scylla Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ which display the compaction statistics on a per cluster and per node basis. Compaction errors can be seen in the :ref:`logs <manager-2.1-logging-settings>`.
+Scylla has tools you can use to see the status of your compactions. These include nodetool (:doc:`compactionhistory </operating-scylla/nodetool-commands/compactionhistory>` and :doc:`compactionstats </operating-scylla/nodetool-commands/compactionstats>`) and the Grafana dashboards which are part of the `Scylla Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ which display the compaction statistics on a per cluster and per node basis. Compaction errors can be seen in the `logs <https://manager.docs.scylladb.com/stable/config/scylla-manager-config.html>`_.
Compaction strategy
-------------------


@@ -23,11 +23,11 @@ In addition, you should follow the procedure below in order to avoid data resurr
Resolution
----------
-#. Run a :doc:`full repair </operating-scylla/manager/2.1/repair>` for the table in question.
+#. Run a `full repair <https://manager.docs.scylladb.com/stable/repair>`_ for the table in question.
#. Change the ``gc_grace_seconds`` value for the table using the :ref:`ALTER table <alter-table-statement>` command.
#. Verify that the schema is in sync after the change by issuing :doc:`nodetool describecluster </operating-scylla/nodetool-commands/describecluster>` command from all nodes.
Verify that only a single schema version is reported. Read the :doc:`Schema Mismatch Troubleshooting Guide </troubleshooting/error-messages/schema-mismatch>` if it's not the case.
-#. Make sure that you run at least one :doc:`full repair </operating-scylla/manager/2.1/repair>` for the table in question during the ``gc_grace_seconds`` time window.
+#. Make sure that you run at least one `full repair <https://manager.docs.scylladb.com/stable/repair>`_ for the table in question during the ``gc_grace_seconds`` time window.
For example, if the ``gc_grace_seconds`` is set to 10 days, you should run a full repair on your tables every 8-9 days to make sure your tables are repaired before the ``gc_grace_seconds`` threshold is reached.


@@ -40,4 +40,4 @@ Please check if the aborted repair stays in RUNNING forever before forcing a sto
curl -X POST "http://127.0.0.2:10000/storage_service/force_terminate_repair"
-**NOTE:** If you are using :doc:`Scylla Manager </operating-scylla/manager/index>` for repairs, a simple stop command via sctool already implements all the needed logic to gracefully stop a repair.
+**NOTE:** If you are using `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ for repairs, a simple stop command via sctool already implements all the needed logic to gracefully stop a repair.


@@ -12,6 +12,9 @@ Scylla for Administrators
manager/index
ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/>
ScyllaDB Operator <https://operator.docs.scylladb.com/>
ScyllaDB Manager <https://manager.docs.scylladb.com/>
Scylla Monitoring Stack <monitoring/index>
Scylla Operator <scylla-operator/index>
Upgrade Procedures </upgrade/index>
System Configuration <system-configuration/index>
benchmarking-scylla
@@ -33,9 +36,15 @@ Scylla for Administrators
:class: my-panel
* :doc:`Scylla Tools </operating-scylla/admin-tools/index>` - Tools for Administrating and integrating with Scylla
* `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ - Tool for cluster administration and automation
* `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ - Tool for cluster monitoring and alerting
* `ScyllaDB Operator <https://operator.docs.scylladb.com>`_ - Tool to run Scylla on Kubernetes
* :doc:`Scylla Logs </getting-started/logging/>`
.. panel-box::


@@ -1,40 +0,0 @@
``--host <node IP>``
Specifies the hostname or IP of the node that will be used to discover other nodes belonging to the cluster.
Note that this will be persisted and used every time Scylla Manager starts. You can use either an IPv4 or IPv6 address.
=====
``-n, --name <alias>``
When a cluster is added, it is assigned a unique identifier.
Use this parameter to identify the cluster by an alias name which is more meaningful.
This alias name can be used with all commands that accept ``-c, --cluster`` parameter.
=====
``--auth-token <token>``
Specifies the :ref:`authentication token <manager-2.1-generate-auth-token>` you identified in ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
=====
``-u, --username <cql username>``
Optional CQL username, for security reasons this user should NOT have access to your data.
If you specify the CQL username and password, the CQL health check you see in `status`_ would try to login and execute a query against system keyspace.
Otherwise CQL health check is based on sending `CQL OPTIONS frame <https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec#L302>`_ and does not start a CQL session.
=====
``-p, --password <password>``
CQL password associated with username.
=====
``--without-repair``
When a cluster is added, Scylla Manager schedules a repair to repeat every 7 days. To create a cluster without a scheduled repair, use this flag.
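The credentials-agnostic health check mentioned under ``--username`` probes a node by sending a bare OPTIONS frame. As an illustration (a sketch based on the native protocol v4 spec linked above, not Scylla Manager's actual code), the 9-byte frame looks like this:

```python
import struct

def cql_options_frame(stream_id=0):
    """Build a CQL native protocol v4 OPTIONS frame.

    Header layout: version (1 byte), flags (1 byte), stream id (int16),
    opcode (1 byte, OPTIONS = 0x05), body length (int32). The body is
    empty, so no credentials are needed and no CQL session is started.
    """
    return struct.pack(">BBhBi", 0x04, 0x00, stream_id, 0x05, 0)
```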


@@ -1,10 +0,0 @@
The following syntax is supported:
* ``*`` - matches any number of any characters including none
* ``?`` - matches any single character
* ``[abc]`` - matches one character given in the bracket
* ``[a-z]`` - matches one character from the range given in the bracket
Patterns are evaluated from left to right.
If a pattern starts with ``!`` it unselects items that were selected by previous patterns
i.e. ``a?,!aa`` selects *ab* but not *aa*.
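The selection rules above can be reproduced with Python's ``fnmatch`` (a sketch for illustration; Scylla Manager's actual matcher may differ in edge cases):

```python
from fnmatch import fnmatchcase

def select(items, patterns):
    """Evaluate comma-separated glob patterns left to right;
    a leading '!' unselects items matched so far."""
    selected = []
    for pat in patterns.split(","):
        pat = pat.strip()
        if pat.startswith("!"):
            # Unselect items matched by this negated pattern.
            selected = [i for i in selected if not fnmatchcase(i, pat[1:])]
        else:
            selected += [i for i in items if fnmatchcase(i, pat) and i not in selected]
    return selected

print(select(["aa", "ab", "ba"], "a?,!aa"))  # → ['ab']
```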


@@ -1,6 +0,0 @@
Scylla Manager is a product for database operations automation, it can schedule tasks such as repairs and backups.
Scylla Manager can manage multiple Scylla clusters and run cluster-wide tasks in a controlled and predictable way.
Scylla Manager is available for Scylla Enterprise customers and Scylla Open Source users.
With Scylla Open Source, Scylla Manager is limited to 5 nodes.
See the Scylla Manager Proprietary Software `License Agreement <https://www.scylladb.com/scylla-manager-software-license-agreement/>`_ for details.


@@ -1,3 +0,0 @@
``-c`` , ``--cluster``
The cluster name is the name you assigned when you created the cluster (`cluster add`_). You can see the cluster name and ID by running the command `cluster list`_.


@@ -1,3 +0,0 @@
A task ID with a type (repair, for example) is **required** for this command.
This is a unique ID assigned when the task was created.
To display the ID, run the command ``sctool task list`` (see `task list`_).


@@ -1,41 +0,0 @@
``--interval <time between task runs>``
The amount of time after which a successfully completed task is run again.
Supported time units include:
* ``d`` - days,
* ``h`` - hours,
* ``m`` - minutes,
* ``s`` - seconds.
**Default** 0 (no interval)
.. note:: The task run date is aligned with ``--start date`` value. For example, if you select ``--interval 7d`` task would run weekly at the ``--start-date`` time.
=====
``-s, --start-date <now+duration|RFC3339>``
The date can be expressed relatively to now or as a RFC3339 formatted string.
To run the task in 2 hours use ``now+2h``, supported units are:
* ``h`` - hours,
* ``m`` - minutes,
* ``s`` - seconds,
* ``ms`` - milliseconds.
If you want the task to start at a specified date use RFC3339 formatted string i.e. ``2018-01-02T15:04:05-07:00``.
If you want the repair to start immediately, use the value ``now`` or skip this flag.
**Default:** now (start immediately)
=====
``-r, --num-retries <times to rerun a failed task>``
Number of times a task reruns following a failure. The task reruns 10 minutes following a failure.
If the task fails after the retry times have been used, it will not retry again until its next run which was scheduled according to the ``--interval`` parameter.
.. note:: If this is an ad hoc repair, the task will not run again.
**Default:** 3
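A sketch of how the ``--start-date`` values described above could be parsed (illustrative only, not sctool's implementation; it covers the ``now+duration`` and RFC3339 forms, with ``d`` included to also accept ``--interval``-style durations):

```python
import re
from datetime import datetime, timedelta

UNITS = {"d": "days", "h": "hours", "m": "minutes",
         "s": "seconds", "ms": "milliseconds"}

def parse_start_date(value, now=None):
    """Parse 'now', 'now+<duration>' (e.g. now+2h), or an RFC3339 string."""
    now = now or datetime.now()
    if value == "now":
        return now
    m = re.fullmatch(r"now\+(\d+)(ms|[dhms])", value)
    if m:
        return now + timedelta(**{UNITS[m.group(2)]: int(m.group(1))})
    return datetime.fromisoformat(value)  # e.g. 2018-01-02T15:04:05-07:00
```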


@@ -1,162 +0,0 @@
=========================================
Add a cluster or a node to Scylla Manager
=========================================
.. include:: /operating-scylla/manager/_common/note-versions.rst
Scylla Manager manages clusters. A cluster contains one or more nodes / datacenters. When you add a cluster to Scylla Manager, it adds all of the nodes that are:
* associated with the cluster,
* running Scylla Manager Agent,
* accessible over the network.
Port Settings
=============
Confirm all ports required for Scylla Manager and Scylla Manager Agent are open. This includes:
* 9042 CQL
* 9142 SSL CQL
* 10001 Scylla Manager Agent REST API
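Before adding the cluster, you can quickly probe the ports above from the Scylla Manager Server host. This is a sketch for illustration, not an official Scylla tool; the ``report`` helper is hypothetical:

```python
import socket

# Ports from the list above.
REQUIRED_PORTS = {9042: "CQL", 9142: "SSL CQL",
                  10001: "Scylla Manager Agent REST API"}

def check_port(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(host):
    # Hypothetical helper: print reachability of each required port.
    for port, name in sorted(REQUIRED_PORTS.items()):
        state = "open" if check_port(host, port) else "unreachable"
        print(f"{host}:{port} ({name}): {state}")
```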
Add a Cluster
=============
This procedure adds the nodes to Scylla Manager so the cluster can be a managed cluster under Scylla Manager.
Prerequisites
-------------
For each node in the cluster, the **same** :ref:`authentication token <manager-2.1-generate-auth-token>` needs to be identified in ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
Create a Managed Cluster
------------------------
.. _name:
**Procedure**
#. From the Scylla Manager Server, run ``sctool cluster add``, providing the broadcast_address of one of the nodes, the generated auth_token (if used), and a custom name if desired.
Where:
* ``--host`` is hostname or IP of one of the cluster nodes. You can use an IPv6 or an IPv4 address.
* ``--name`` is an alias you can give to your cluster. Using an alias means you do not need to use the ID of the cluster in all other operations.
* ``--auth-token`` is the authentication :ref:`token <manager-2.1-generate-auth-token>` you identified in ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
* ``--without-repair`` - when a cluster is added, the Manager schedules repair to repeat every 7 days. To create a cluster without a scheduled repair, use this flag.
* ``--username`` and ``--password`` - optionally, you can provide CQL credentials to the cluster.
For security reasons, the user should NOT have access to your data.
This enables :ref:`CQL query-based health check <manager-2.1-cql-query-health-check>` compared to :ref:`credentials agnostic health check <manager-2.1-credentials-agnostic-health-check>` if you do not specify the credentials.
This also enables CQL schema backup, which isn't performed if credentials aren't provided.
Example (IPv4):
.. code-block:: none
sctool cluster add --host 34.203.122.52 --auth-token "6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM" --name prod-cluster
c1bbabf3-cad1-4a59-ab8f-84e2a73b623f
__
/ \ Cluster added! You can set it as default by exporting env variable.
@ @ $ export SCYLLA_MANAGER_CLUSTER=c1bbabf3-cad1-4a59-ab8f-84e2a73b623f
| | $ export SCYLLA_MANAGER_CLUSTER=prod-cluster
|| |/
|| || Now run:
|\_/| $ sctool status -c prod-cluster
\___/ $ sctool task list -c prod-cluster
Example (IPv6):
.. code-block:: none
sctool cluster add --host 2a05:d018:223:f00:971d:14af:6418:fe2d --auth-token "6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM" --name prod-cluster
Each cluster has a unique ID.
You will use this ID in all commands where the cluster ID is required.
Each cluster is automatically registered with a repair task that runs once a week.
This can be canceled using ``--without-repair``.
To use a different repair schedule, see :ref:`Schedule a Repair <manager-2.1-schedule-a-repair>`.
#. Verify that the cluster you added has a registered repair task by running the ``sctool task list -c <cluster-name>`` command, adding the name_ of the cluster you created in step 1 (with the ``--name`` flag).
.. code-block:: none
sctool task list -c prod-cluster
╭───────────────────────────────────────────────────────┬───────────┬────────────────────────────────┬────────╮
│ Task │ Arguments │ Next run │ Status │
├───────────────────────────────────────────────────────┼───────────┼────────────────────────────────┼────────┤
│ healthcheck/8988932e-de2f-4c42-a2f8-ae3b97fd7126 │ │ 02 Apr 20 12:28:10 CEST (+15s) │ NEW │
│ healthcheck_rest/9b7e694d-a1e3-42f1-8ca6-d3dfd9f0d94f │ │ 02 Apr 20 12:28:40 CEST (+1h) │ NEW │
│ repair/0fd8a43b-eacf-4df8-9376-2a31b0dee6cc │ │ 03 Apr 20 00:00:00 CEST (+7d) │ NEW │
╰───────────────────────────────────────────────────────┴───────────┴────────────────────────────────┴────────╯
You will see 3 tasks which are created by adding the cluster:
* Healthcheck - which checks the Scylla CQL, starting immediately, repeating every 15 seconds. See :doc:`Scylla Health Check <health-check>`
* Healthcheck REST - which checks the Scylla REST API, starting immediately, repeating every hour. See :doc:`Scylla Health Check <health-check>`
* Repair - an automated repair task, starting at midnight tonight, repeating every seven days at midnight. See :doc:`Run a Repair <repair>`
.. note:: If you want to change the schedule for the repair, see :ref:`Reschedule a repair <manager-2.1-reschedule-a-repair>`.
Connect Managed Cluster to Scylla Monitoring
============================================
Connecting your cluster to Scylla Monitoring allows you to see metrics about your cluster and Scylla Manager all within Scylla Monitoring.
To connect your cluster to Scylla Monitoring, it is **required** to use the same cluster name_ as you used when you created the cluster. See `Add a Cluster`_.
**Procedure**
Follow the procedure |mon_root| as directed, remembering to update the Scylla Node IPs and Cluster name_ as well as the Scylla Manager IP in the relevant Prometheus configuration files.
If you have any issues connecting to Scylla Monitoring Stack, consult the :doc:`Troubleshooting Guide </troubleshooting/manager-monitoring-integration>`.
Add a Node to a Managed Cluster
===============================
Although Scylla Manager is aware of all topology changes made within every cluster it manages, it cannot properly manage nodes/datacenters without establishing connections with every node/datacenter in the cluster, including the Scylla Manager Agent, which is on each managed node.
**Before You Begin**
* Confirm you have a managed cluster running under Scylla Manager. If you do not have a managed cluster, see `Add a cluster`_.
* Confirm the :ref:`node <add-node-to-cluster-procedure>` or :doc:`Datacenter </operating-scylla/procedures/cluster-management/add-dc-to-existing-dc>` is added to the Scylla Cluster.
**Procedure**
#. :doc:`Add Scylla Manager Agent <install-agent>` to the new node. Use the **same** authentication token as you did for the other nodes in this cluster. Do not generate a new token.
#. Confirm the node / datacenter was added by checking its :ref:`status <sctool_status>`. From the node running the Scylla Manager server, run the ``sctool status`` command, using the name of the managed cluster.
.. code-block:: none
sctool status -c prod-cluster
Datacenter: eu-west
╭────┬───────────────┬────────────┬──────────────┬──────────────────────────────────────╮
│ │ CQL │ REST │ Host │ Host ID │
├────┼───────────────┼────────────┼──────────────┼──────────────────────────────────────┤
│ UN │ UP SSL (42ms) │ UP (52ms) │ 10.0.114.68 │ 45a7390d-d162-4daa-8bff-6469c9956f8b │
│ UN │ UP SSL (38ms) │ UP (88ms) │ 10.0.138.46 │ 8dad7fc7-5a82-4fbb-8901-f6f60c12342a │
│ UN │ UP SSL (38ms) │ UP (298ms) │ 10.0.196.204 │ 44eebe5b-e0cb-4e45-961f-4ad175592977 │
│ UN │ UP SSL (43ms) │ UP (159ms) │ 10.0.66.115 │ 918a52aa-cc42-43a4-a499-f7b1ccb53b18 │
╰────┴───────────────┴────────────┴──────────────┴──────────────────────────────────────╯
#. If you are using the Scylla Monitoring Stack, continue to `Connect Managed Cluster to Scylla Monitoring`_ for more information.
Remove a Node/Datacenter from Scylla Manager
--------------------------------------------
There is no need to perform any action in Scylla Manager after removing a node or datacenter from a Scylla cluster.
.. note:: If you are removing the cluster from Scylla Manager and you are using Scylla Monitoring, refer to |mon_root| for more information.
See Also
========
* :doc:`sctool Reference <sctool>`
* :doc:`Remove a node from a Scylla Cluster </operating-scylla/procedures/cluster-management/remove-node>`
* `Scylla Monitoring <https://monitoring.docs.scylladb.com/stable/>`_


@@ -1,205 +0,0 @@
========================
Agent Configuration File
========================
.. include:: /operating-scylla/manager/_common/note-versions.rst
This document covers the configuration settings you need to consider for the Scylla Manager Agent.
The Scylla Manager Agent has a single configuration file, ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``.
.. _manager-2.1-agent-configuration-file-auth-token:
Authentication token
====================
.. note:: Completing this section in the scylla-manager-agent.yaml file is mandatory
Scylla Agent uses token authentication in API calls so that the Scylla Manager Server can authenticate itself with the Scylla Manager Agent.
Once you have :ref:`created a token <manager-2.1-generate-auth-token>`, configure the :ref:`agent configuration file <manager-2.1-configure-auth-token>` as described.
.. code-block:: none
# Specify authentication token
auth_token: 6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM
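The token itself is just a long shared random secret (generation is covered by the authentication token reference above). Purely as an illustration, a comparable 128-character alphanumeric secret can be produced like this:

```python
import secrets
import string

# Illustrative only: a 128-character alphanumeric secret comparable to
# the example token above. Use the procedure from the Manager docs to
# generate and distribute the real token to every node.
alphabet = string.ascii_letters + string.digits
token = "".join(secrets.choice(alphabet) for _ in range(128))
print(f"auth_token: {token}")
```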
HTTPS server settings
=====================
In this section, you can specify which address Scylla Agent should listen to.
By default, Scylla Manager Agent pulls these values from the Scylla itself.
Port 10001 is the default port, and traffic to this port should be allowed on the firewall.
You can change the port by uncommenting the ``https`` line and specifying a different IP address and port.
To generate the TLS certificate and key files, use the ``scyllamgr_ssl_cert_gen`` script.
.. code-block:: none
# Bind REST API to the specified TCP address using HTTPS protocol. By default
# Scylla Manager Agent uses Scylla listen/broadcast address that is read from
# the Scylla API (see the scylla section).
#https: 0.0.0.0:10001
# TLS certificate and key files to use with HTTPS. To regenerate the files use
# scyllamgr_ssl_cert_gen script shipped with the Scylla Manager Agent.
#tls_cert_file: /var/lib/scylla-manager/scylla_manager.crt
#tls_key_file: /var/lib/scylla-manager/scylla_manager.key
Prometheus settings
===================
In this section, you can set the Prometheus settings for the Scylla Manager Agent so that the Scylla Manager Agent metrics (from each Scylla node) can be viewed and monitored with Scylla Monitoring.
.. code-block:: none
# Bind Prometheus API to the specified TCP address using HTTP protocol.
# By default it binds to all network interfaces, but you can restrict it
# by specifying it like this 127.0.0.1:56090 or any other combination
# of ip and port.
#prometheus: ':56090'
If you change the Prometheus IP or port, you must adjust the rules in the Prometheus server.
.. code-block:: none
- targets:
- IP:56090
Debug endpoint settings
=======================
In this section, you can specify the pprof debug server address.
It allows you to run profiling on demand on a live application.
By default, the server is running on port ``56112``.
.. code-block:: none
debug: 127.0.0.1:56112
CPU pinning settings
====================
In this section, you can set the ``cpu`` setting, which dictates the CPU to run Scylla Manager Agent on.
By default, the agent reads the Scylla configuration from ``/etc/scylla.d/cpuset.conf`` and tries to find a core that is not used by Scylla.
If that's not possible, you can specify the core on which to run the Scylla Manager Agent.
.. code-block:: none
cpu: 0
Log level settings
==================
In this section, you can set the Log level settings which specify log output and level. Available log levels are ``error``, ``info`` and ``debug``.
.. code-block:: none
logger:
level: info
Scylla API settings
===================
In this section, you can set the Scylla API settings. Scylla Manager Agent pulls all needed configuration options from the ``scylla.yaml`` file. In order to do this, Scylla Manager Agent needs to know where the Scylla API is exposed. You should copy the ``api_address`` and ``api_port`` values from ``/etc/scylla/scylla.yaml`` and add them here:
.. code-block:: none
#scylla:
# api_address: 0.0.0.0
# api_port: 10000
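As an illustration of what the Agent needs here (a simplified sketch assuming flat ``key: value`` lines rather than full YAML; the fallback values are assumed defaults), the two settings could be pulled out like this:

```python
def read_scylla_api(path="/etc/scylla/scylla.yaml"):
    """Extract api_address and api_port from a scylla.yaml-style file."""
    values = {"api_address": "127.0.0.1", "api_port": "10000"}  # assumed defaults
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()  # drop comments
            for key in values:
                if line.startswith(key + ":"):
                    values[key] = line.split(":", 1)[1].strip()
    return values["api_address"], int(values["api_port"])
```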
Backup S3 settings
==================
In this section, you configure the AWS credentials (if required) for the backup location.
IAM Role
--------
.. note:: If you are setting an IAM role in AWS, you do not need to change this section.
.. _manager-2.1-aws-credentials:
AWS credentials
---------------
.. note:: Completing this section in the scylla-manager-agent.yaml file is mandatory if you are not using an IAM role. Make sure you understand the security ramifications of placing AWS credentials into the yaml file.
Fill in the information below with your AWS Credentials information.
If you do not know where your keys are located, read the `AWS Security Blogs <https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/>`_ or `documentation <https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys>`_ for information.
.. code-block:: none
s3:
# S3 credentials, it's recommended to use IAM roles if possible, otherwise set
# your AWS Access Key ID and AWS Secret Access Key (password) here.
access_key_id: <your access key id>
secret_access_key: <your secret access key>
MinIO and other AWS S3 alternatives
-----------------------------------
Backup can work with MinIO and other AWS S3 compatible providers.
The available options are:
* Alibaba,
* Ceph,
* DigitalOcean,
* IBMCOS,
* Minio,
* Wasabi,
* Dreamhost,
* Netease.
To configure S3 with a third-party provider, in addition to credentials, you need to specify the ``provider`` parameter with one of the above options.
If the service is self-hosted, you also need to specify ``endpoint`` with its base URL address.
.. code-block:: none
s3:
# S3 credentials, it's recommended to use IAM roles if possible, otherwise set
# your AWS Access Key ID and AWS Secret Access Key (password) here.
access_key_id: <your access key id>
secret_access_key: <your secret access key>
# Provider of the S3 service. By default, this is AWS. There are multiple S3
# API compatible providers that can be used instead. Due to minor differences
# between them we require that exact provider is specified here for full
# compatibility. Supported and tested options are: AWS and Minio.
# The available providers are: Alibaba, AWS, Ceph, DigitalOcean, IBMCOS, Minio,
# Wasabi, Dreamhost, Netease.
provider: Minio
#
# Endpoint for S3 API, only relevant when using S3 compatible API.
endpoint: <your MinIO instance URL>
Advanced settings
-----------------
.. code-block:: none
#s3:
# The server-side encryption algorithm used when storing this object in S3.
# If using KMS ID you must provide the ARN of Key.
# server_side_encryption:
# sse_kms_key_id:
#
# Number of files uploaded concurrently, by default it's 2.
# upload_concurrency: 2
#
# Maximum size (in bytes) of the body of a single request to S3 when uploading big files.
# Big files are cut into chunks. This value allows specifying how much data
# single request to S3 can carry. Bigger value allows reducing the number of requests
# needed to upload files, increasing it may help with 5xx responses returned by S3.
# Default value is 50M, and a string representation of the value can be provided,
# e.g. 1M, 1G, off.
# chunk_size: 50M
#
# AWS S3 Transfer acceleration
# https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html
# use_accelerate_endpoint: false
Additional resources
====================
Scylla Manager :doc:`Configuration file <configuration-file>`

Binary file not shown.

Binary file not shown.


@@ -1,57 +0,0 @@
============
Architecture
============
.. include:: /operating-scylla/manager/_common/note-versions.rst
Scylla Manager is a product for database operations automation.
It can manage multiple Scylla clusters and run cluster-wide tasks in a controlled and predictable way.
Scylla Manager discovers cluster topology and is aware of nodes belonging to different datacenters.
Deployment
==========
Scylla Manager consists of three components:
* Server - a daemon that exposes REST API
* sctool - a command-line interface (CLI) for interacting with the Server over the REST API
* Agent - a small executable, installed on each Scylla node. The Server communicates with the Agent over REST HTTPS. The Agent communicates with the local Scylla node over the REST HTTP.
The Server persists its data to a Scylla cluster that can run locally or run on an external cluster
(see :doc:`Use a remote database for Scylla Manager <use-a-remote-db>` for details).
Optionally (but recommended), you can add Scylla Monitoring Stack to enable reporting of Scylla Manager metrics and alerts.
The diagram below presents a logical view of Scylla Manager with a remote backend datastore managing multiple Scylla Clusters situated in datacenters.
Each node has two connections with the Scylla Manager Server:
* REST API connection - used for Scylla Manager and Scylla Manager Agent activities
* CQL connection - used for the Scylla :doc:`Health Check <health-check>`
Scylla Manager uses the following ports:
====== ============================================ ========
Port Description Protocol
====== ============================================ ========
10001 Scylla Manager Agent REST API TCP
------ -------------------------------------------- --------
56080 Scylla Manager HTTP (default) HTTP
------ -------------------------------------------- --------
56443 Scylla Manager HTTPS (default) HTTPS
------ -------------------------------------------- --------
56090 Scylla Manager Prometheus API HTTP
------ -------------------------------------------- --------
56090 Scylla Manager Agent Prometheus API TCP
====== ============================================ ========
.. image:: architecture.png
Additional Resources
====================
* :doc:`Install Scylla Manager <install>`
* :doc:`Install Scylla Manager Agent <install-agent>`
* :doc:`sctool Reference <sctool>`

======
Backup
======
.. include:: /operating-scylla/manager/_common/note-versions.rst
Using sctool, you can backup and restore your managed Scylla clusters under Scylla Manager.
Backups are scheduled in the same manner as repairs. You can start, stop, and track backup operations on demand.
Scylla Manager can backup to Amazon S3 and S3 compatible API storage providers such as Ceph or MinIO.
Benefits of using Scylla Manager backups
========================================
Scylla Manager automates the backup process and allows you to configure how and when a backup occurs.
The advantages of using Scylla Manager for backup operations are:
* Data selection - backup a single table or an entire cluster, the choice is up to you
* Data deduplication - prevents multiple uploads of the same SSTable
* Data retention - purge old data automatically when all goes right, or failover when something goes wrong
* Data throttling - control how fast you upload, or pause/resume the backup
* Lower disruption to the workflow of the Scylla Manager Agent due to cgroups and/or CPU pinning
* No cross-region traffic - configurable upload destination per datacenter
The backup process
==================
The backup procedure consists of multiple steps executed sequentially.
It runs in parallel on all nodes unless you limit it with the ``--snapshot-parallel`` or ``--upload-parallel`` :ref:`flag <sctool-backup-parameters>`.
#. **Snapshot** - Take a :term:`snapshot <Snapshot>` of data on each node (according to backup configuration settings).
#. **Schema** - (Optional) Upload the schema CQL to the backup storage destination, this requires that you added the cluster with ``--username`` and ``--password`` flags. See :doc:`Add Cluster <add-a-cluster>` for reference.
#. **Upload** - Upload the snapshot to the backup storage destination.
#. **Manifest** - Upload the manifest file containing metadata about the backup.
#. **Purge** - If the retention threshold has been reached, remove the oldest backup from the storage location.
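The sequential steps above can be sketched as a simple per-node pipeline; a minimal illustration (function and step names are for explanation only, not Scylla Manager internals):

```python
def run_backup(node: str, upload_schema: bool = False) -> list:
    """Illustrative order of the per-node backup steps described above."""
    steps = ["snapshot"]          # take a snapshot of the selected tables
    if upload_schema:             # only when the cluster was added with --username/--password
        steps.append("schema")
    steps += ["upload",           # copy snapshot files to the storage location
              "manifest",         # write backup metadata
              "purge"]            # drop backups beyond the retention threshold
    return steps

print(run_backup("node-1", upload_schema=True))
# ['snapshot', 'schema', 'upload', 'manifest', 'purge']
```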
Prepare nodes for backup
========================
#. Create a storage location for the backup.
Currently, Scylla Manager supports `Amazon S3 buckets <https://aws.amazon.com/s3/>`_ .
You can use an S3 bucket that you already created.
We recommend using an S3 bucket in the same region where your nodes are to minimize cross-region data transfer costs.
In multi-dc deployments, you should create a bucket per datacenter, each located in the datacenter's region.
#. Choose how you want to configure access to the S3 Bucket.
You can use an IAM role (recommended), or you can add your AWS credentials to the agent configuration file.
The latter method is less secure, as you will be propagating this security information to each node, and whenever you need to change the key, you will have to replace it on each node.
**To use an IAM Role**
#. Create an `IAM role <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide//iam-roles-for-amazon-ec2.html>`_ for the S3 bucket which adheres to your company security policy.
#. `Attach the IAM role <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide//iam-roles-for-amazon-ec2.html#attach-iam-role>`_ to **each EC2 instance (node)** in the cluster.
Sample IAM policy for *scylla-manager-backup* bucket:
.. code-block:: none
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::scylla-manager-backup"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::scylla-manager-backup/*"
]
}
]
}
**To add your AWS credentials to the Scylla Manager Agent configuration file**
Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
#. Uncomment the ``s3:`` line. For the parameters below it, keep the two leading spaces; this is a YAML file.
#. Uncomment and set ``access_key_id`` and ``secret_access_key``, refer to :ref:`AWS Credentials Configuration <manager-2.1-aws-credentials>` for details.
#. If NOT running on an AWS EC2 instance, uncomment and set ``region`` to the region where you created the S3 bucket.
Troubleshooting
---------------
To troubleshoot Node to S3 connectivity issues, you can run:
.. code-block:: none
scylla-manager-agent check-location --debug --location s3:<your S3 bucket name>
Schedule a backup
=================
The recommended way to run a backup is across an entire cluster.
Backups can be scheduled to run on single or multiple datacenters, keyspaces, or tables.
The backup procedure can be customized, allowing you to plan your backups according to your IT policy.
All parameters can be found in the :ref:`sctool reference <sctool-backup>`.
If you want to check if all of your nodes can connect to the backup storage location, see `Perform a Dry Run of a Backup`_.
**Prerequisites**
#. Backup locations (S3 buckets) created.
#. Access rights to backup locations granted to Nodes, see `Prepare Nodes for Backup`_.
Create a scheduled backup
-------------------------
Use the example below to run the sctool backup command.
.. code-block:: none
sctool backup -c <id|name> -L <list of locations> [-s <date>] [-i <time-unit>]
where:
* ``-c`` - the :ref:`name <sctool-cluster-add>` you used when you created the cluster
* ``-L`` - points to backup storage location in ``s3:<your S3 bucket name>`` format or ``<your DC name>:s3:<your S3 bucket name>`` if you want to specify a location for a datacenter
* ``-s`` - the time you want the backup to begin
* ``-i`` - the time interval you want to use in between consecutive backups
If you want to run the backup only once, see `Create an ad-hoc backup`_.
If you want the backup to start immediately but repeat at a set interval, leave out the start flag (``-s``) and set the interval flag (``-i``) to the interval at which you want the backup to recur.
Schedule a daily backup
.......................
This command schedules a backup starting on December 9th, 2019 at 15:16:05 UTC; the backup repeats every day, and all the data is stored in S3 under the ``my-backups`` bucket.
.. code-block:: none
sctool backup -c prod-cluster -L 's3:my-backups' -s '2019-12-09T15:16:05Z' -i 24h
backup/3208ff15-6e8f-48b2-875c-d3c73f545410
The command returns the task ID (backup/3208ff15-6e8f-48b2-875c-d3c73f545410, in this case).
This ID can be used to query the status of the backup task, to defer the task to another time, or to cancel the task. See :ref:`Managing Tasks <sctool-managing-tasks>`.
Schedule a daily, weekly, and monthly backup
............................................
This command series schedules backups starting on December 9th, 2019 at 15:16:05 UTC, repeating every day (keeping the last 7 days), every week (keeping the previous week), and every month (keeping the previous month).
All the data will be stored in S3 under the ``my-backups`` bucket.
.. code-block:: none
sctool backup -c prod-cluster -L 's3:my-backups' --retention 7 -s '2019-12-09T15:16:05Z' -i 24h
sctool backup -c prod-cluster -L 's3:my-backups' --retention 2 -s '2019-12-09T15:16:05Z' -i 7d
sctool backup -c prod-cluster -L 's3:my-backups' --retention 1 -s '2019-12-09T15:16:05Z' -i 30d
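Each of the three tasks above prunes its own snapshots independently; a rough sketch of the retention effect (illustrative only, assuming snapshots are listed oldest-first):

```python
def prune(snapshots: list, retention: int) -> list:
    """Keep only the newest `retention` snapshots of one task.
    `snapshots` is a list of snapshot tags, oldest first."""
    return snapshots[-retention:]

daily = [f"sm_day{i}" for i in range(1, 10)]  # 9 daily runs so far
print(prune(daily, 7))
# the newest 7 survive; the 2 oldest are purged
```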
Schedule a backup for a specific DC, keyspace, or table
--------------------------------------------------------
In order to schedule a backup of a particular datacenter, you have to specify the ``--dc`` parameter.
You can specify more than one DC, or use a glob pattern to match multiple DCs or exclude some of them.
For example, you have the following DCs in your cluster: dc1, dc2, dc3
Backup one specific DC
......................
In this example, you back up only dc1 every 2 days.
.. code-block:: none
sctool backup -c prod-cluster --dc 'dc1' -L 's3:dc1-backups' -i 2d
Backup all DCs except for those specified
.........................................
.. code-block:: none
sctool backup -c prod-cluster -i 30d --dc '*,!dc2' -L 's3:my-backups'
Backup to a specific location per DC
....................................
If your data centers are located in different regions, you can also specify different locations.
If your buckets are created in the same regions as your data centers, you may save some bandwidth costs.
.. code-block:: none
sctool backup -c prod-cluster -i 30d --dc 'eu-dc,us-dc' -L 'eu-dc:s3:eu-backups,us-dc:s3:us-backups'
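The ``-L`` value is a comma-separated list of ``[dc:]s3:bucket`` entries, as described earlier in this section; a minimal parser sketch of that format (an illustration of the flag's shape, not Manager's actual code):

```python
def parse_locations(flag: str) -> list:
    """Split an -L flag into (dc, provider, bucket) tuples.
    dc is None when the location applies to every datacenter."""
    result = []
    for entry in flag.split(","):
        parts = entry.split(":")
        if len(parts) == 3:        # "<dc>:s3:<bucket>"
            dc, provider, bucket = parts
        elif len(parts) == 2:      # "s3:<bucket>"
            dc = None
            provider, bucket = parts
        else:
            raise ValueError(f"bad location: {entry!r}")
        result.append((dc, provider, bucket))
    return result

print(parse_locations("eu-dc:s3:eu-backups,us-dc:s3:us-backups"))
# [('eu-dc', 's3', 'eu-backups'), ('us-dc', 's3', 'us-backups')]
```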
Backup a specific keyspace or table
...................................
In order to schedule a backup of a particular keyspace or table, you have to provide the ``-K`` parameter.
You can specify more than one keyspace/table or use a glob pattern to match multiple keyspaces/tables or exclude them.
.. code-block:: none
sctool backup -c prod-cluster -i 30d -K 'auth_service.*,!auth_service.lru_cache' --dc 'dc1' -L 's3:dc1-backups'
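The glob list with ``!`` exclusions can be modeled with Python's ``fnmatch``; a sketch of the assumed semantics (patterns applied in order, with ``!`` entries excluding earlier matches):

```python
from fnmatch import fnmatch

def select(names: list, patterns: str) -> list:
    """Filter `names` with ordered glob patterns; a leading '!' excludes."""
    selected = set()
    for pattern in patterns.split(","):
        if pattern.startswith("!"):
            selected -= {n for n in names if fnmatch(n, pattern[1:])}
        else:
            selected |= {n for n in names if fnmatch(n, pattern)}
    return sorted(selected)

tables = ["auth_service.roles", "auth_service.tokens", "auth_service.lru_cache"]
print(select(tables, "auth_service.*,!auth_service.lru_cache"))
# ['auth_service.roles', 'auth_service.tokens']
```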
Create an ad-hoc backup
-----------------------
An ad-hoc backup runs immediately and does not repeat.
This procedure shows the most frequently used backup commands.
Additional parameters can be used. Refer to :ref:`backup parameters <sctool-backup-parameters>`.
**Procedure**
To run an immediate backup on the prod-cluster cluster, saving the backup in my-backups, run the following command
replacing the ``-c`` cluster flag with your cluster's cluster name or ID and replace the ``-L`` flag with your backup's location:
.. code-block:: none
sctool backup -c prod-cluster -L 's3:my-backups'
Perform a dry run of a backup
-----------------------------
We recommend using the ``--dry-run`` parameter before scheduling a backup.
It's a useful way to verify whether all necessary prerequisites are fulfilled.
Add the parameter to the end of your backup command; if the dry run succeeds, simply remove the parameter and run the same command to schedule the backup.
A dry run verifies whether the nodes are able to access the provided backup location.
If it is not accessible, an error message is displayed, and the backup is not scheduled.
.. code-block:: none
sctool backup -c prod-cluster -L 's3:test-bucket' --dry-run
NOTICE: dry run mode, backup is not scheduled
Error: failed to get backup target: location is not accessible
192.168.100.23: failed to access s3:test-bucket make sure that the location is correct and credentials are set
192.168.100.22: failed to access s3:test-bucket make sure that the location is correct and credentials are set
192.168.100.21: failed to access s3:test-bucket make sure that the location is correct and credentials are set
The dry run gives you the chance to resolve all configuration or access issues before executing an actual backup.
If the dry run completes successfully, a summary of the backup is displayed. For example:
.. code-block:: none
sctool backup -c prod-cluster -L 's3:backups' --dry-run
NOTICE: dry run mode, backup is not scheduled
Data Centers:
- dc1
- dc2
Keyspaces:
- system_auth all (2 tables)
- system_distributed all (1 table)
- system_traces all (5 tables)
- auth_service all (3 tables)
Disk size: ~10.4GB
Locations:
- s3:backups
Bandwidth Limits:
- Unlimited
Snapshot Parallel Limits:
- All hosts in parallel
Upload Parallel Limits:
- All hosts in parallel
Retention: Last 3 backups
List the contents of a specific backup
=======================================
List all backups in s3
----------------------
Lists all backups currently in storage that are managed by Scylla Manager.
.. code-block:: none
sctool backup list -c prod-cluster
Snapshots:
- sm_20191210145143UTC
- sm_20191210145027UTC
- sm_20191210144833UTC
Keyspaces:
- system_auth (2 tables)
- system_distributed (1 table)
- system_traces (5 tables)
- auth_service (3 tables)
List files that were uploaded during a specific backup
-------------------------------------------------------
You can list all files that were uploaded during a particular backup.
To list the files use:
.. code-block:: none
sctool backup files -c prod-cluster --snapshot-tag sm_20191210145027UTC
s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090/mc-2-big-CompressionInfo.db auth_service/roles
s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090/mc-2-big-Data.db auth_service/roles
s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090/mc-2-big-Digest.crc32 auth_service/roles
[...]
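The uploaded paths follow a fixed layout, so the datacenter, node, keyspace, and table can be recovered from each entry; a parsing sketch (layout inferred from the listing above):

```python
def parse_backup_path(path: str) -> dict:
    """Extract dc, node, keyspace, table, and filename from an sst path."""
    parts = path.split("/")
    fields = {}
    for key in ("dc", "node", "keyspace", "table"):
        fields[key] = parts[parts.index(key) + 1]  # value follows its label
    fields["filename"] = parts[-1]
    return fields

p = ("s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656"
     "/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607"
     "/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090"
     "/mc-2-big-Data.db")
print(parse_backup_path(p)["keyspace"], parse_backup_path(p)["table"])
# auth_service roles
```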
Additional resources
--------------------
:doc:`Scylla Snapshots </kb/snapshots>`
Delete backup snapshot
=========================
If you decide that you don't want to wait until a particular snapshot expires according to its retention policy, there is a command which allows you to delete a single snapshot from a provided location.
This operation is aware of the Manager deduplication policy and will not delete any SSTable file referenced by another snapshot.
.. warning:: This operation is irreversible! Use it with great caution!
.. code-block:: none
sctool backup delete -c prod-cluster -L s3:backups --snapshot-tag sm_20191210145027UTC
Once a snapshot is deleted, it won't show up in the backup listing anymore.

==================
Configuration file
==================
.. include:: /operating-scylla/manager/_common/note-versions.rst
Scylla Manager has a single configuration file ``/etc/scylla-manager/scylla-manager.yaml``.
Note that the file will open as read-only unless you edit it as the root user or by using sudo.
Usually, there is no need to edit the configuration file.
HTTP/HTTPS server settings
==========================
With server settings, you may specify if Scylla Manager should be available over HTTP, HTTPS, or both.
.. code-block:: yaml
# Bind REST API to the specified TCP address using HTTP protocol.
# http: 127.0.0.1:56080
# Bind REST API to the specified TCP address using HTTPS protocol.
https: 127.0.0.1:56443
Prometheus settings
===================
.. code-block:: yaml
# Bind prometheus API to the specified TCP address using HTTP protocol.
# By default it binds to all network interfaces, but you can restrict it
# by specifying it like this 127:0.0.1:56090 or any other combination
# of ip and port.
prometheus: ':56090'
If you change the Prometheus IP or port, remember to adjust the rules in the `prometheus server <https://monitoring.docs.scylladb.com/stable/>`_.
.. code-block:: yaml
- targets:
- IP:56090
Debug endpoint settings
=======================
In this section, you can specify the pprof debug server address.
It allows you to run profiling on demand on a live application.
By default, the server is running on port ``56112``.
.. code-block:: none
debug: 127.0.0.1:56112
.. _manager-2.1-logging-settings:
Logging settings
================
Logging settings specify log output and level.
.. code-block:: yaml
# Logging configuration.
logger:
# Where to output logs, syslog or stderr.
mode: syslog
# Available log levels are error, info, and debug.
level: info
Database settings
=================
Database settings allow for :doc:`using a remote cluster <use-a-remote-db>` to store Scylla Manager data.
.. code-block:: yaml
# Scylla Manager database, used to store management data.
database:
hosts:
- 127.0.0.1
# Enable or disable client/server encryption.
# ssl: false
#
# Database credentials.
# user: user
# password: password
#
# Local datacenter name, specify if using a remote, multi-dc cluster.
# local_dc:
#
# Database connection timeout.
# timeout: 600ms
#
# Keyspace for management data, for create statement see /etc/scylla-manager/create_keyspace.cql.tpl.
# keyspace: scylla_manager
# replication_factor: 1
# Optional custom client/server encryption options.
#ssl:
# CA certificate used to validate the server cert. If not set, the host's root CA set will be used.
# cert_file:
#
# Verify the hostname and server cert.
# validate: true
#
# Client certificate and key in PEM format. It has to be provided when
# client_encryption_options.require_client_auth=true is set on server.
# user_cert_file:
# user_key_file:
Health check settings
=====================
Health check settings let you specify the timeout threshold.
If there is no response from a node after this time period is reached, the :ref:`status <sctool_status>` report (``sctool status``) shows the node as ``DOWN``.
.. code-block:: yaml
# Healthcheck service configuration.
#healthcheck:
# Timeout for CQL status checks.
# timeout: 250ms
# ssl_timeout: 750ms
Backup settings
===============
Backup settings let you specify backup parameters.
.. code-block:: yaml
# Backup service configuration.
#backup:
# Minimal amount of free disk space required to take a snapshot.
# disk_space_free_min_percent: 10
#
# Maximal time for a backup run to be considered fresh, so it can be continued from
# the same snapshot. If exceeded, a new run with a new snapshot will be created.
# Zero means no limit.
# age_max: 12h
.. _repair-settings:
Repair settings
===============
Repair settings let you specify repair parameters.
.. code-block:: yaml
# Repair service configuration.
#repair:
# Number of segments repaired by Scylla in a single repair command. Increase
# this value to make repairs faster, note that this may result in increased load
# on the cluster.
# segments_per_repair: 1
#
# Maximal number of shards on a host repaired at the same time. By default all
# shards are repaired in parallel.
# shard_parallel_max: 0
#
# Maximal allowed number of failed segments per shard. In case of a failure
# to repair a segment Scylla Manager will try to repair it multiple times
# depending on the specified number of retries (default 3). If the
# shard_failed_segments_max limit is exceeded, the repair task will immediately
# fail, and the next repair run will start the repair procedure from the beginning.
# shard_failed_segments_max: 25
#
# In case of an error, hold back repair for the specified amount of time.
# error_backoff: 5m
#
# Frequency at which Scylla Manager polls the Scylla node for repair command status.
# poll_interval: 200ms
#
# Maximal time a paused repair is considered fresh and can be continued;
# if exceeded, the repair will start from the beginning. Zero means no limit.
# age_max: 0
#
# Distribution of data among cores (shards) within a node.
# Copy value from Scylla configuration file.
# murmur3_partitioner_ignore_msb_bits: 12

==============================
Extract schema from the backup
==============================
.. include:: /operating-scylla/manager/_common/note-versions.rst
.. versionadded:: 2.1 Scylla Manager
The first step to restoring a Scylla Manager backup is to restore the CQL schema from a text file.
Scylla Manager version 2.1 creates a backup of the matching schema along with the snapshot.
If you created the backup with Scylla Manager version 2.0, or you didn't provide credentials for a schema backup in Scylla Manager version 2.1, follow the instructions on how to restore your schema from the system table (deleted document).
If not, follow these steps to restore the schema from the Scylla Manager backup that has the schema stored along with the snapshot:
**Procedure**
#. List available backups:
.. code-block:: none
sctool backup list --cluster my-cluster --location s3:backup-bucket
#. List the files located in the snapshot you want to restore. The first line contains the path to the schema, so pipe it to ``aws s3 cp`` to download it to the current directory. For example:
.. code-block:: none
sctool backup files --cluster my-cluster -L s3:backup-bucket -T sm_20200513104924UTC --with-version | head -n 1 | xargs -n2 aws s3 cp
download: s3://backup-bucket/backup/schema/cluster/7313fda0-6ebd-4513-8af0-67ac8e30077b/task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz to ./task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz
#. Create a directory to store the schema files and extract the archive containing the schema.
.. code-block:: none
mkdir ./schema
tar -xf task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz -C ./schema
ls ./schema
system_auth.cql system_distributed.cql system_schema.cql system_traces.cql user_data.cql
The listed files are schema files for each keyspace in the backup. You can use each CQL file to restore the needed keyspace and continue the :ref:`restore procedure <restore-backup-restore-schema>`.

====================
Cluster Health Check
====================
.. include:: /operating-scylla/manager/_common/note-versions.rst
Scylla Manager automatically adds a health check task to all new nodes when the cluster is added to the Scylla Manager and to all existing nodes
during the upgrade procedure. You can see the tasks created by the healthcheck when you run
the :ref:`sctool task list <sctool-task-list>` command.
For example:
.. code-block:: none
sctool task list -c manager-testcluster
returns:
.. code-block:: none
Cluster: manager-testcluster
╭──────────────────────────────────────────────────────┬───────────────────────────────┬──────┬───────────┬────────╮
│ task │ next run │ ret. │ arguments │ status │
├──────────────────────────────────────────────────────┼───────────────────────────────┼──────┼───────────┼────────┤
│ healthcheck/018da854-b9ff-4e0a-bae7-ca65c677c559 │ 02 Apr 19 18:06:31 UTC (+15s) │ 0 │ │ NEW │
│ healthcheck_api/597f237f-103d-4994-8167-3ff591150b7e │ 02 Apr 19 18:07:01 UTC (+1h) │ 0 │ │ NEW │
│ repair/21006f88-0c8c-4e11-9e84-83c319f80d0c │ 03 Apr 19 00:00:00 UTC (+7d) │ 3/3 │ │ NEW │
╰──────────────────────────────────────────────────────┴───────────────────────────────┴──────┴───────────┴────────╯
The health check task ensures that the CQL native port is accessible on all the nodes. For each node, in parallel,
Scylla Manager opens a connection to the CQL port and asks for server options. If there is no response, or the response takes longer than 250 milliseconds, the node is considered DOWN; otherwise, the node is considered UP.
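The UP/DOWN decision is a simple timeout comparison; an illustrative sketch (the 250 ms default comes from the healthcheck ``timeout`` setting):

```python
def node_status(latency_ms, timeout_ms: int = 250) -> str:
    """Classify a node from its CQL OPTIONS round-trip time.
    None means no response was received at all."""
    if latency_ms is None or latency_ms > timeout_ms:
        return "DOWN"
    return "UP"

print(node_status(2))     # UP
print(node_status(None))  # DOWN
print(node_status(400))   # DOWN
```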
The results are available using the :ref:`sctool status <sctool_status>` command.
For example:
.. code-block:: none
sctool status -c prod-cluster2
returns:
.. code-block:: none
Datacenter: dc1
╭──────────┬─────┬──────────┬────────────────╮
│ CQL │ SSL │ REST │ Host │
├──────────┼─────┼──────────┼────────────────┤
│ UP (2ms) │ OFF │ UP (1ms) │ 192.168.100.11 │
│ UP (1ms) │ OFF │ UP (0ms) │ 192.168.100.12 │
│ UP (2ms) │ OFF │ UP (0ms) │ 192.168.100.13 │
╰──────────┴─────┴──────────┴────────────────╯
Datacenter: dc2
╭──────────┬─────┬──────────┬────────────────╮
│ CQL │ SSL │ REST │ Host │
├──────────┼─────┼──────────┼────────────────┤
│ UP (2ms) │ OFF │ UP (1ms) │ 192.168.100.21 │
│ UP (1ms) │ OFF │ UP (1ms) │ 192.168.100.22 │
│ UP (1ms) │ OFF │ UP (1ms) │ 192.168.100.23 │
╰──────────┴─────┴──────────┴────────────────╯
If you have enabled the Scylla Monitoring stack, the Scylla Manager dashboard includes the same cluster status report.
In addition, the Prometheus Alert Manager has an alert to report when a Scylla node health check fails.
Scylla Manager just works!
It reads CQL IP address and port from node configuration and can automatically detect TLS/SSL connection.
There are two types of CQL health check `Credentials agnostic health check`_ and `CQL query health check`_.
.. _manager-2.1-credentials-agnostic-health-check:
Credentials agnostic health check
---------------------------------
Scylla Manager does not require database credentials to work.
CQL health check is based on sending `CQL OPTIONS frame <https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec#L302>`_ and does not start a CQL session.
This is simple and effective but does not test CQL all the way down.
For that, you may consider upgrading to `CQL query health check`_.
.. _manager-2.1-cql-query-health-check:
CQL query health check
----------------------
You may specify CQL ``username`` and ``password`` flags when adding a cluster to Scylla Manager using :ref:`sctool cluster add command <sctool-cluster-add>`.
It's also possible to add or change that using :ref:`sctool cluster update command <sctool-cluster-update>`.
Once Scylla Manager has CQL credentials for the cluster, when performing a health check, it will connect to each node and execute the ``SELECT now() FROM system.local`` query.

:orphan:
Scylla Manager 2.1
==================
.. toctree::
:hidden:
architecture
Install Scylla Manager <install>
Install Scylla Manager Agent <install-agent>
add-a-cluster
repair
backup
extract-schema-from-backup
restore-a-backup
health-check
sctool
monitoring-manager-integration
Troubleshoot Integration with Scylla Manager </troubleshooting/manager-monitoring-integration/>
use-a-remote-db
configuration-file
agent-configuration-file
.. include:: /operating-scylla/manager/_common/note-versions.rst
.. panel-box::
:title: Scylla Manager
:id: "getting-started"
:class: my-panel
.. include:: /operating-scylla/manager/2.1/_common/manager-description.rst
* :doc:`Architecture <architecture>`
* :doc:`Install Scylla Manager <install>`
* :doc:`Install Scylla Manager Agent <install-agent>`
* :doc:`Add a cluster or a node to Scylla Manager <add-a-cluster>`
* :doc:`Repair <repair>`
* :doc:`Backup <backup>`
* :doc:`Extract schema from the backup <extract-schema-from-backup>`
* :doc:`Restore a backup <restore-a-backup>`
* :doc:`Health Check <health-check>`
* :doc:`sctool CLI Reference <sctool>`
* :doc:`Integration with Scylla Monitoring Stack <monitoring-manager-integration>`
* :doc:`Troubleshooting guide for Scylla Manager and Scylla Monitoring integration </troubleshooting/manager-monitoring-integration/>`
* :doc:`Use a remote database for Scylla Manager <use-a-remote-db>`
* :doc:`Configuration file <configuration-file>`
* :doc:`Scylla Manager Agent Configuration file <agent-configuration-file>`

=================================
Scylla Manager Agent Installation
=================================
.. include:: /operating-scylla/manager/_common/note-versions.rst
Scylla Manager Agent is an executable, installed on each Scylla node.
The Server communicates with the Agent over REST/HTTPS.
The Agent communicates with the local Scylla node over the REST/HTTP.
Install Scylla Manager Agent
----------------------------
Prerequisites
=============
* Scylla cluster running on any :doc:`OS supported by Scylla Manager 2.0 </getting-started/os-support>`
* Traffic on port 10001 unblocked to Scylla nodes from the dedicated host
.. note:: Scylla Manager only works with Scylla clusters that are using the Murmur3 partitioner (Scylla default partitioner). To check your cluster's partitioner, run the cqlsh command ``DESCRIBE CLUSTER``.
Download packages
=================
**Procedure**
Download and install Scylla Manager Agent (from the Scylla Manager Download Page) according to the desired version:
* `Scylla Manager for Open Source <https://www.scylladb.com/download/open-source/scylla-manager/>`_ - Registration Required
* Scylla Enterprise - Login to the `Customer Portal <https://www.scylladb.com/customer-portal/>`_
Configure Scylla Manager Agent
------------------------------
There are three steps you need to complete:
#. `Generate an authentication token`_
#. Place the token parameters from `Configure authentication token parameters`_ in the Agent configuration file
#. `Start Scylla Manager Agent service`_ or restart if already running. Confirm the service starts / restarts and runs without errors
.. _manager-2.1-generate-auth-token:
Generate an authentication token
================================
**Procedure**
#. Generate an authentication token to be used to authenticate Scylla Manager with Scylla nodes.
This procedure is done **once** for each cluster. It is recommended to use a different token for each cluster.
.. note:: Use the same token on all nodes in the same cluster.
From **one node only**, run the token generator script.
For example:
.. code-block:: none
$ scyllamgr_auth_token_gen
6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM
If you want to change the token, you will need to repeat this procedure and place the new token on all nodes.
This procedure sets up the Scylla agent on each node.
Repeat the procedure for **every** Scylla node in the cluster that you want to be managed under Scylla Manager.
Run `scyllamgr_agent_setup` script
==================================
**Procedure**
#. Run the setup script to set up the environment for the agent:
.. note:: Script requires sudo rights
.. code-block:: none
$ sudo scyllamgr_agent_setup
Do you want to create scylla-helper.slice if it does not exist?
Yes - limit Scylla Manager Agent and other helper programs memory. No - skip this step.
[YES/no] YES
Do you want the Scylla Manager Agent service to automatically start when the node boots?
Yes - automatically start Scylla Manager Agent when the node boots. No - skip this step.
[YES/no] YES
The first step limits the resources available to the agent, and the second instructs systemd to start the agent when the node restarts.
.. _manager-2.1-configure-auth-token:
Configure authentication token parameters
=========================================
**Procedure**
#. Take the authentication token you generated from `Generate an authentication token`_, and place it into ``/etc/scylla-manager-agent/scylla-manager-agent.yaml`` as part of the ``auth_token`` :ref:`section <manger-2.1-agent-configuration-file-auth-token>`.
For Example:
.. code-block:: none
$ cat /etc/scylla-manager-agent/scylla-manager-agent.yaml
# Scylla Manager Agent config YAML
# Specify authentication token, the auth_token needs to be the same for all the
# nodes in a cluster. Use scyllamgr_auth_token_gen to generate the auth_token
# value.
auth_token: 6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM
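Because the same ``auth_token`` value must be written on every node, the edit is easy to script. The sketch below updates the ``auth_token`` line in a temporary copy of the config; on a real node you would edit ``/etc/scylla-manager-agent/scylla-manager-agent.yaml`` in place (with sudo):

```shell
# Stand-in for /etc/scylla-manager-agent/scylla-manager-agent.yaml;
# on a real node, point sed at the actual file instead.
cfg=$(mktemp)
printf '# Scylla Manager Agent config YAML\nauth_token:\n' > "$cfg"
token="6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM"
# Set (or overwrite) the auth_token value.
sed -i "s|^auth_token:.*|auth_token: $token|" "$cfg"
grep '^auth_token:' "$cfg"
```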
Start Scylla Manager Agent service
==================================
**Procedure**
#. Start Scylla Manager Agent service
.. code-block:: none
$ sudo systemctl start scylla-manager-agent
#. Validate Scylla Manager Agent is running
.. code-block:: none
$ sudo systemctl status scylla-manager-agent
● scylla-manager-agent.service - Scylla Manager Agent
Loaded: loaded (/usr/lib/systemd/system/scylla-manager-agent.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2019-10-30 10:46:51 UTC; 7s ago
Main PID: 14670 (scylla-manager-)
CGroup: /system.slice/scylla-manager-agent.service
└─14670 /usr/bin/scylla-manager-agent
#. Enable the Scylla Manager Agent to run when the node starts.
.. code-block:: none
$ sudo systemctl enable scylla-manager-agent
#. Repeat the procedure for **every** Scylla node in the cluster that you want to be managed under Scylla Manager, starting with `Configure authentication token parameters`_.
.. _manager-2.1-prepare-nodes-for-backup:
Prepare nodes for backup
------------------------
Adding the cluster to Scylla Manager automatically creates a backup task. Before adding the cluster, validate that the backup location is accessible from Scylla Manager to avoid errors.
**Procedure**
#. Create a storage location for the backup.
Currently, Scylla Manager 2.1 supports `S3 buckets <https://aws.amazon.com/s3/>`_ created on AWS.
You can use an S3 bucket that you already created.
#. Choose how you want to configure access to the S3 Bucket.
You can use an IAM role (recommended), or you can add your AWS credentials to the agent configuration file.
This method is less secure, as you distribute the security information to each node, and if you ever need to change the key, you will have to replace it on every node.
* To use an IAM Role:
#. Create an `IAM role <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide//iam-roles-for-amazon-ec2.html>`_ for the S3 bucket which adheres to your company security policy. You can use the role you already created.
#. `Attach the IAM role <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide//iam-roles-for-amazon-ec2.html#attach-iam-role>`_ to **each EC2 instance (node)** in the cluster.
* To add your AWS credentials to the Scylla Manager Agent configuration file:
#. Edit ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``, adding your S3 bucket authentication information to the ``S3`` section.
Refer to :ref:`AWS Credentials Configuration <manager-2.1-aws-credentials>` for details.
#. Validate that the manager has access to the backup location.
If the command returns no output, the S3 bucket is accessible. Otherwise, an error is displayed.
.. code-block:: none
$ scylla-manager-agent check-location --location s3:<your S3 bucket name>
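When a cluster backs up to more than one location, each can be validated in a single pass. In this sketch, the loop and reporting are illustrative additions; ``scylla-manager-agent check-location`` is the real command shown above, and the bucket names are placeholders:

```shell
# Validate each backup location; report and track failures.
failed=0
for loc in s3:backup-bucket s3:backup-bucket-dr; do   # placeholder locations
  if scylla-manager-agent check-location --location "$loc" >/dev/null 2>&1; then
    echo "OK   $loc"
  else
    echo "FAIL $loc"
    failed=1
  fi
done
echo "failed=$failed"
```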
Register a cluster
------------------
Continue with :doc:`Add a Cluster <add-a-cluster>`.


@@ -1,132 +0,0 @@
===========================
Scylla Manager Installation
===========================
.. include:: /operating-scylla/manager/_common/note-versions.rst
System requirements
===================
Scylla Manager Server has modest system requirements.
While a minimal server can run on a system with 2 cores and 1GB RAM, the following configuration is recommended:
* **CPU** - 2vCPUs
* **Memory** - 8GB+ DRAM
.. note:: If you are running `Scylla Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ on the same server as Scylla Manager, your system should also meet the minimal `Monitoring requirements <https://monitoring.docs.scylladb.com/stable/>`_.
Installation workflow
=====================
#. `Install Scylla Manager`_
#. `Run the scyllamgr_setup script`_
#. `Enable Bash Script Completion`_
#. `Start Scylla Manager Service`_ and verify that Scylla Manager and sctool are running
Install Scylla Manager
----------------------
Choose one of the following installation methods:
**Scylla Manager for Scylla Enterprise**
#. Download and install Scylla Manager from the `Enterprise Download page <https://www.scylladb.com/download/enterprise/#manager>`_.
#. Follow the entire installation procedure.
#. Continue with `Run the scyllamgr_setup script`_.
**Scylla Manager for Scylla Open Source**
#. On the same node as you are installing Scylla Manager, download and install Scylla as a local database from the `Scylla Open Source Download page <https://www.scylladb.com/download/open-source/>`_.
There is no need to run the Scylla setup as it is taken care of by the ``scyllamgr_setup`` script.
#. Download and Install Scylla Manager from the `Scylla Manager Open Source Download page <https://www.scylladb.com/download/open-source/scylla-manager/>`_.
#. Follow the entire installation procedure.
#. Continue with `Run the scyllamgr_setup script`_.
.. _install-run-the-scylla-manager-setup-script:
Run the scyllamgr_setup script
------------------------------
The Scylla Manager setup script automates the configuration of Scylla Manager by asking you some simple questions.
It can be run in non-interactive mode if you'd like to script it.
There are three decisions you need to make:
* Do you want to enable the service to start automatically? If not, you will have to start the service manually each time you want to use it.
* Do you want to set up and enable a local Scylla backend? If not, you will need to set up a :doc:`remote DB <use-a-remote-db>`.
* Do you want Scylla Manager to check periodically if updates are available? If not, you will need to check yourself.
.. code-block:: none
scyllamgr_setup -h
Usage: scyllamgr_setup [-y][--no-scylla-setup][--no-enable-service][--no-check-for-updates]
Options:
-y, --assume-yes assume that the answer to any question which would be asked is yes
--no-scylla-setup skip setting up and enabling local Scylla instance as a storage backend for Scylla Manager
--no-enable-service skip enabling service
--no-check-for-updates skip enabling periodic check for updates
-h, --help print this help
Interactive mode is enabled when no flags are provided.
**Procedure**
#. Run the ``scyllamgr_setup`` script to configure the service. You can run the script in interactive mode (no flags) or automate your decision making by using flags.
Enable bash script completion
-----------------------------
Enable bash completion for sctool, the Scylla Manager CLI. Alternatively, you can simply open a new terminal.
.. code-block:: none
source /etc/bash_completion.d/sctool.bash
Start Scylla Manager service
============================
Scylla Manager integrates with ``systemd`` and can be started and stopped using the ``systemctl`` command.
**Procedure**
#. Start the Scylla Manager server service.
.. code-block:: none
sudo systemctl start scylla-manager.service
#. Verify the Scylla Manager server service is running.
.. code-block:: none
sudo systemctl status scylla-manager.service
● scylla-manager.service - Scylla Manager Server
Loaded: loaded (/usr/lib/systemd/system/scylla-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-10-30 11:00:01 UTC; 20s ago
Main PID: 5805 (scylla-manager)
CGroup: /system.slice/scylla-manager.service
└─5805 /usr/bin/scylla-manager
...
Hint: Some lines were ellipsized, use -l to show in full.
#. Confirm sctool is running by displaying the sctool version.
.. code-block:: none
sctool version
Client version: 2.1-0.20200401.ce91f2ad
Server version: 2.1-0.20200401.ce91f2ad
.. note:: The first time you run this command, Scylla Manager may take a few seconds to start because it must create the database schema.
Install Scylla Manager Agent
============================
Continue with :doc:`Setup Scylla Manager Agent <install-agent>`


@@ -1,11 +0,0 @@
========================================
Integration with Scylla Monitoring Stack
========================================
.. include:: /operating-scylla/manager/_common/note-versions.rst
When used with Scylla Manager, metrics and alerts for all of your managed clusters can be viewed using Scylla Monitoring.
The Monitoring Stack 2.1 Manager dashboard displays progress for all tasks, including repairs and backups, along with node status, Scylla Manager status, and other metrics and alerts.
For more information, refer to the `Scylla Monitoring <https://monitoring.docs.scylladb.com/stable/>`_ documentation.


@@ -1,186 +0,0 @@
======
Repair
======
.. include:: /operating-scylla/manager/_common/note-versions.rst
.. note:: If, after upgrading to the latest Scylla, you experience repairs that are slower than usual please consider :doc:`upgrading Scylla Manager to the appropriate version </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/upgrade-row-level-repair>`.
When you create a cluster, a repair job is automatically scheduled.
This task is set to occur each week by default, but you can change it to another time or add additional repair tasks.
It is important to make sure that data across the nodes is consistent when maintaining your clusters.
Why repair with Scylla Manager
-------------------------------
Scylla Manager automates the repair process and allows you to manage how and when the repair occurs.
The advantages of repairing the cluster with Scylla Manager are:
* Clusters are repaired node by node, ensuring that each database shard performs exactly one repair task at a time.
This gives the best repair parallelism on a node, shortens the overall repair time, and does not introduce unnecessary load.
* If there is an error, Scylla Manager's retry mechanism will run the repair again, up to the number of retries that you set.
* It has a restart (pause) mechanism that allows for restarting a repair from where it left off.
* Repair what you want, when you want, and how often you want. Manager gives you that flexibility.
* The most apparent advantage is that with Manager you do not have to manually SSH into every node as you do with nodetool.
What can you repair with Scylla Manager
----------------------------------------
Scylla Manager can repair any item which it manages, specifically:
* Specific tables, keyspaces, clusters, or data centers.
* A group of tables, keyspaces, clusters or data centers.
* All tables, keyspaces, clusters, or data centers.
What sort of repairs can I run with Scylla Manager
---------------------------------------------------
You can run two types of repairs:
* Ad-hoc - this is a one time repair
* Scheduled - this repair is scheduled in advance and can repeat
.. _manager-2.1-schedule-a-repair:
Schedule a Repair
-----------------
By default, a cluster successfully added to Scylla Manager has a repair task created for it, which repairs the entire cluster.
This is a repeating task that runs every week.
You can change this repair, add additional repairs, or delete this repair.
You can schedule repairs to run in the future on a regular basis, schedule repairs to run once, or schedule repairs to run immediately on an as-needed basis.
Any repair can be rescheduled, paused, resumed, or deleted.
For information on what is repaired and the types of repairs available, see `What can you repair with Scylla Manager`_.
Create a scheduled repair
.........................
While the recommended way to run a repair is across an entire cluster, repairs can be scheduled to run on one or more datacenters, keyspaces, or tables.
Scheduled repairs run every X days, depending on the frequency you set.
The procedure here shows the most frequently used repair command.
Additional parameters are located in the :ref:`sctool Reference <sctool-repair-parameters>`.
**Procedure**
1. Run the following sctool repair command, replacing the parameters with your own:
* ``-c`` - cluster name - replace `prod-cluster` with the name of your cluster
* ``-s`` - start-time - replace 2018-01-02T15:04:05-07:00 with the time you want the repair to begin
* ``-i`` - interval - replace -i 7d with your own time interval
For example:
.. code-block:: none
sctool repair -c prod-cluster -s 2018-01-02T15:04:05-07:00 -i 7d
2. The command returns the task ID. You will need this ID for additional actions.
3. If you want to run the repair only once, remove the `-i` argument.
4. If you want to run this command immediately, but still want the repair to repeat, keep the interval argument (``-i``), but remove the start-date (``-s``).
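The ``-s`` value is an RFC 3339 timestamp with a UTC offset, like ``2018-01-02T15:04:05-07:00``. A sketch for producing one programmatically, assuming GNU ``date`` (standard on Linux):

```shell
# Start the repair 24 hours from now, in the RFC 3339 format -s expects.
start=$(date -d '+24 hours' '+%Y-%m-%dT%H:%M:%S%:z')
echo "sctool repair -c prod-cluster -s $start -i 7d"
```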
Schedule an ad-hoc repair
.........................
An ad-hoc repair runs immediately and does not repeat.
This procedure shows the most frequently used repair command.
Additional parameters can be used. Refer to the :ref:`sctool Reference <sctool-repair-parameters>`.
**Procedure**
1. Run the following command, replacing the -c argument with your cluster name:
.. code-block:: none
sctool repair -c prod-cluster
2. The command returns the task ID. You will need this ID for additional actions.
Repair faster or slower
.......................
When scheduling a repair, you can specify the ``--intensity`` flag. Intensity is interpreted as follows:
* For values greater than 1, intensity specifies the number of segments repaired by Scylla in a single repair command. Higher values result in higher speed and may increase cluster load.
* For values less than 1, intensity specifies the percentage of a node's shards repaired in parallel.
* For intensity equal to 1, one segment is repaired in each repair command on all shards in parallel.
* For intensity 0, the limits specified in the Scylla Manager :ref:`configuration <repair-settings>` are used.
Please note that this only works with versions that are **not** :doc:`row-level-repair enabled </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/upgrade-row-level-repair>`.
**Example**
.. code-block:: none
sctool repair -c prod-cluster --intensity 16
.. _manager-2.1-reschedule-a-repair:
Reschedule a Repair
-------------------
You can change the run time of a scheduled repair using the ``sctool task update`` command.
The new time you set replaces the time which was previously set.
This command requires the task ID, which was generated when you set the repair.
This can be retrieved using the command sctool :ref:`task list <sctool-task-list>`.
This example updates a task to run in 3 hours instead of its originally scheduled time.
.. code-block:: none
sctool task update -c prod-cluster repair/143d160f-e53c-4890-a9e7-149561376cfd -s now+3h
To start a scheduled repair immediately, run the following command inserting the task id and cluster name:
.. code-block:: none
sctool task start repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
Pause a Repair
--------------
This command pauses the specified task, provided it is running.
You will need the task ID for this action.
This can be retrieved using the command ``sctool task list``. To start the task again, see `Resume a Repair`_.
.. code-block:: none
sctool task stop repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
Resume a Repair
---------------
Restart a repair that is currently in the paused state.
To start running a repair which is scheduled, but is currently not running, use the task update command.
See `Reschedule a Repair`_.
You will need the task ID for this action. This can be retrieved using the command ``sctool task list``.
.. code-block:: none
sctool task start repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
Delete a Repair
---------------
This action removes the repair from the task list.
Once removed, you cannot resume the repair.
You will have to create a new one.
You will need the task ID for this action.
This can be retrieved using the command ``sctool task list``.
.. code-block:: none
sctool task delete repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster


@@ -1,348 +0,0 @@
================
Restore a Backup
================
.. include:: /operating-scylla/manager/_common/note-versions.rst
This document provides information on how to restore data from backups that were taken using the Scylla Manager.
There are two restore scenarios:
#. Backup to the same topology cluster.
For example, restore data to the same source cluster.
#. Backup to a different topology cluster.
For example, restore data to a smaller or bigger cluster, a cluster with a different rack or DC topology, or different token distribution.
**Workflow**
#. `Prepare for restore`_
#. `Upload data to Scylla`_
Prepare for restore
===================
No matter which backup scenario you are using, the procedures in this workflow apply.
**Workflow**
#. `Make sure Scylla cluster is up`_
#. `Install Scylla Manager`_
#. `Register the cluster with the Scylla Manager`_
#. `Identify relevant snapshot`_
#. `Restore the schema`_
Make sure Scylla cluster is up
------------------------------
Make sure that your Scylla cluster is up and that there are no issues with networking, disk space, or memory.
If you need help, you can check official documentation on :doc:`operational procedures for cluster management </operating-scylla/procedures/cluster-management/index>`.
Install Scylla Manager
----------------------
You need a working Scylla Manager setup to list backups. If you don't have it installed, please follow official instructions on :doc:`how to install Scylla Manager <install>` first.
Nodes must have access to the locations of the backups as per instructions in the official documentation for :ref:`installing Scylla Manager Agent <manager-2.1-prepare-nodes-for-backup>`.
Register the cluster with the Scylla Manager
--------------------------------------------
This section only applies to situations where a registered cluster that was originally used for the backup is missing from the Scylla Manager.
In that case, a new cluster must be registered before you can access the backups created with the old one.
This example demonstrates adding a cluster named "cluster1" with the initial node IP 18.185.31.99; it instructs Scylla Manager not to schedule a default repair and forces the UUID of the new cluster to ebec29cd-e768-4b66-aac3-8e8943bcaa76:
.. code-block:: none
sctool cluster add --host 18.185.31.99 --name cluster1 --without-repair -id ebec29cd-e768-4b66-aac3-8e8943bcaa76
ebec29cd-e768-4b66-aac3-8e8943bcaa76
__
/ \ Cluster added! You can set it as default, by exporting its name or ID as env variable:
@ @ $ export SCYLLA_MANAGER_CLUSTER=ebec29cd-e768-4b66-aac3-8e8943bcaa76
| | $ export SCYLLA_MANAGER_CLUSTER=cluster1
|| |/
|| || Now run:
|\_/| $ sctool status -c cluster1
\___/ $ sctool task list -c cluster1
The cluster is created, and we can proceed to list the old backups.
If the UUID of the old cluster is lost, there is a workaround with the ``--all-clusters`` parameter.
In that case, just register the cluster and proceed to the next step.
Identify relevant snapshot
--------------------------
**Procedure**
#. List all available backups and choose the one you would like to restore.
Run :ref:`sctool backup list <sctool-backup-list>` to list all backups for the cluster.
This command lists only backups created with the provided cluster (``-c cluster1``).
If you don't have the UUID of the old cluster, you can use ``--all-clusters`` to list all backups from all clusters available in the target location:
.. code-block:: none
sctool backup list -c cluster1 --all-clusters -L s3:backup-bucket
Cluster: 7313fda0-6ebd-4513-8af0-67ac8e30077b
Snapshots:
- sm_20200513131519UTC (563.07GiB)
- sm_20200513080459UTC (563.07GiB)
- sm_20200513072744UTC (563.07GiB)
- sm_20200513071719UTC (563.07GiB)
- sm_20200513070907UTC (563.07GiB)
- sm_20200513065522UTC (563.07GiB)
- sm_20200513063046UTC (563.16GiB)
- sm_20200513060818UTC (534.00GiB)
Keyspaces:
- system_auth (4 tables)
- system_distributed (2 tables)
- system_schema (12 tables)
- system_traces (5 tables)
- user_data (100 tables)
Here, for example, we have eight different snapshots to choose from.
Snapshot tags encode the date they were taken in UTC time zone.
For example, ``sm_20200513131519UTC`` was taken on 13/05/2020 at 13:15 and 19 seconds UTC.
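Because the tag embeds its timestamp, it can also be decoded mechanically; a small sketch using plain bash string slicing:

```shell
# Decode a Scylla Manager snapshot tag (sm_<YYYYMMDDhhmmss>UTC) into a
# human-readable timestamp.
tag=sm_20200513131519UTC
ts=${tag#sm_}; ts=${ts%UTC}                     # -> 20200513131519
when="${ts:0:4}-${ts:4:2}-${ts:6:2} ${ts:8:2}:${ts:10:2}:${ts:12:2} UTC"
echo "$when"   # 2020-05-13 13:15:19 UTC
```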
The data source for the listing is the cluster backup locations.
Listing may take some time, depending on how big the cluster is and how many backups there are.
.. _restore-backup-restore-schema:
Restore the schema
------------------
Scylla Manager 2.1 can store schema with your backup.
To extract schema files for each keyspace from the backup, please refer to the official documentation for :doc:`extracting schema from the backup <extract-schema-from-backup>`. For convenience, here is the continuation of our example with the list of steps for restoring schema:
#. Download schema from the backup store to the current dir. It's in the first line of the ``backup_files.out`` output:
.. code-block:: none
sctool backup files --cluster my-cluster -L s3:backup-bucket -T sm_20200513104924UTC --with-version | head -n 1 | xargs -n2 aws s3 cp
download: s3://backup-bucket/backup/schema/cluster/7313fda0-6ebd-4513-8af0-67ac8e30077b/task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz to ./task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz
#. Extract schema files by decompressing archive:
.. code-block:: none
mkdir ./schema
tar -xf task_001ce624-9ac2-4076-a502-ec99d01effe4_tag_sm_20200513104924UTC_schema.tar.gz -C ./schema
ls ./schema
system_auth.cql system_distributed.cql system_schema.cql system_traces.cql user_data.cql
* If you do *not* have the schema file available, you can `extract the schema from system table <https://manager.docs.scylladb.com/branch-2.2/restore/extract-schema-from-metadata.html>`_.
Full schema restore procedure can be found at :ref:`steps 1 to 5 <restore-procedure>`.
For convenience, here is the list of steps for our example (WARNING: these can be destructive operations):
#. Run the ``nodetool drain`` command to ensure the data is flushed to the SSTables.
#. Shut down the node:
.. code-block:: none
sudo systemctl stop scylla-server
#. Delete all files in the commitlog:
.. code-block:: none
sudo rm -rf /var/lib/scylla/commitlog/*
#. Delete all the files in the user_data.data_* tables (only files, not directories):
.. code-block:: none
sudo rm -f /var/lib/scylla/data/user_data/data_0-6e856600017f11e790f4000000000000/*
If the cluster was added with CQL credentials (see :doc:`Add Cluster <add-a-cluster>` for reference), Scylla Manager backs up the schema in CQL format.
To obtain CQL schema from a particular backup, use ``sctool backup files`` command, for example:
.. code-block:: none
sctool backup files -c my-cluster -L s3:backups -T sm_20191210145143UTC
The first output line is a path to schemas archive, for example:
.. code-block:: none
s3://backups/backup/schema/cluster/ed63b474-2c05-4f4f-b084-94541dd86e7a/task_287791d9-c257-4850-aef5-7537d6e69d90_tag_sm_20200506115612UTC_schema.tar.gz ./
This archive contains a single CQL file for each keyspace in the backup.
.. code-block:: none
tar -ztvf task_287791d9-c257-4850-aef5-7537d6e69d90_tag_sm_20200506115612UTC_schema.tar.gz
-rw------- 0/0 2366 2020-05-08 14:38 system_auth.cql
-rw------- 0/0 931 2020-05-08 14:38 system_distributed.cql
-rw------- 0/0 11557 2020-05-08 14:38 system_schema.cql
-rw------- 0/0 4483 2020-05-08 14:38 system_traces.cql
To restore the schema, you need to execute the files with the ``cqlsh`` command.
**Procedure**
#. Download schema archive
.. code-block:: none
aws s3 cp s3://backups/backup/schema/cluster/ed63b474-2c05-4f4f-b084-94541dd86e7a/task_287791d9-c257-4850-aef5-7537d6e69d90_tag_sm_20200506115612UTC_schema.tar.gz ./
#. Extract CQL files from archive
.. code-block:: none
tar -xzvf task_287791d9-c257-4850-aef5-7537d6e69d90_tag_sm_20200506115612UTC_schema.tar.gz
#. Copy the CQL files for the desired keyspaces to a cluster node
#. On the node, execute the CQL files using cqlsh:
.. code-block:: none
cqlsh -f my_keyspace.cql
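When several keyspaces were extracted, the ``cqlsh`` invocations can be looped over the archive contents. In this sketch, ``echo`` prints the commands instead of running them, and the directory and file names are stand-ins for the extracted archive — drop the ``echo`` on a real node:

```shell
# Stand-in for the directory holding the extracted .cql schema files.
dir=$(mktemp -d)
touch "$dir/system_auth.cql" "$dir/user_data.cql"
# 'echo' prints each command rather than executing cqlsh.
cmds=$(for f in "$dir"/*.cql; do echo cqlsh -f "$f"; done)
printf '%s\n' "$cmds"
```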
Upload data to Scylla
=====================
You can either upload the data:
* `To the same cluster`_: with the same nodes, topology, and the same token distribution **OR**
* `To a new cluster`_: of any number of nodes
To the same cluster
-------------------
List the backup files
.....................
List the backup files needed on each node and save the list to a file.
If you are listing old backups from the new cluster use ``--all-clusters`` parameter.
.. code-block:: none
sctool backup files -c cluster1 --snapshot-tag sm_20200513131519UTC \
--with-version \
--location s3:backup-bucket \
> backup_files.out
Snapshot information is now stored in ``backup_files.out`` file.
Each line of the ``backup_files.out`` file contains a mapping between the path to an SSTable file in the backup bucket and its keyspace/table.
If Scylla Manager is configured to store database schemas with the backups, then the first line in the file listing is the path to the schema archive.
For example:
.. code-block:: none
s3://backup-bucket/backup/sst/cluster/7313fda0-6ebd-4513-8af0-67ac8e30077b/dc/AWS_EU_CENTRAL_1/node/92de78b1-6c77-4788-b513-2fff5a178fe5/keyspace/user_data/table/data_65/a2667040944811eaaf9d000000000000/la-72-big-Index.db user_data/data_65-a2667040944811eaaf9d000000000000
The path contains metadata, for example:
* Cluster ID - 7313fda0-6ebd-4513-8af0-67ac8e30077b
* Data Center - AWS_EU_CENTRAL_1
* Directory - /var/lib/scylla/data/user_data/data_65-a2667040944811eaaf9d000000000000/
* Keyspace - user_data
.. code-block:: none
sctool backup files -c prod-cluster --snapshot-tag sm_20191210145027UTC \
--with-version > backup_files.out
Each line describes a backed-up file and where it should be downloaded. For example:
.. code-block:: none
s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090/mc-2-big-CompressionInfo.db auth_service/roles-5bc52802de2535edaeab188eecebb090
This file has to be copied to:
* Cluster - 1d781354-9f9f-47cc-ad45-f8f890569656
* Data Center - dc1
* Node - ece658c2-e587-49a5-9fea-7b0992e19607
* Directory - /var/lib/scylla/data/auth_service/roles-5bc52802de2535edaeab188eecebb090/upload
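The destination directory can be derived from the second column of each line; a sketch, using the example line above and the path layout it shows:

```shell
# One line from backup_files.out: "<s3 object> <keyspace>/<table directory>"
line='s3://backups/backup/sst/cluster/1d781354-9f9f-47cc-ad45-f8f890569656/dc/dc1/node/ece658c2-e587-49a5-9fea-7b0992e19607/keyspace/auth_service/table/roles/5bc52802de2535edaeab188eecebb090/mc-2-big-CompressionInfo.db auth_service/roles-5bc52802de2535edaeab188eecebb090'
src=${line%% *}                     # S3 object to download
rel=${line##* }                     # keyspace/table directory
dest="/var/lib/scylla/data/${rel}/upload"
echo "$dest"
```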
Download the backup files
.........................
This step must be executed on **each node** in the cluster.
#. Copy ``backup_files.out`` file as ``/tmp/backup_files.out`` on the node.
#. Run ``nodetool status`` to find the node ID.
#. Download data into table directories.
As the files are kept in S3, we can use the AWS S3 CLI to download them (this step may differ with other storage providers).
Grep can be used to filter the specific files to restore.
With the node UUID, we can filter files for a single node.
With a keyspace name, we can filter files for a single keyspace.
.. code-block:: none
cd /var/lib/scylla/data
# Filter only files for a single node.
grep ece658c2-e587-49a5-9fea-7b0992e19607 /tmp/backup_files.out | xargs -n2 aws s3 cp
#. Make sure that all files are owned by the Scylla user and group.
Ensure that the permissions are correct after the copy:
.. code-block:: none
sudo chown -R scylla:scylla /var/lib/scylla/data/user_data/
#. Start the Scylla nodes:
.. code-block:: none
sudo systemctl start scylla-server
Repair
......
After performing the above on all nodes, repair the cluster with Scylla Manager Repair.
This makes sure that the data is consistent on all nodes and between each node.
To a new cluster
----------------
In order to restore a backup to a cluster that has a different topology, you have to use an external tool called :doc:`sstableloader </operating-scylla/procedures/cassandra-to-scylla-migration-process>`.
This procedure is much slower than restoring to the same topology cluster.
**Procedure**
#. Start up the nodes if they are not running after schema restore:
.. code-block:: none
sudo systemctl start scylla-server
#. List all the backup files and save the list to a file.
Use ``--all-clusters`` if you are restoring from the cluster that no longer exists.
.. code-block:: none
sctool backup files -c cluster1 --snapshot-tag sm_20200513131519UTC --location s3:backup-bucket > backup_files.out
#. Copy ``backup_files.out`` file as ``/tmp/backup_files.out`` on the host where ``sstableloader`` is installed.
#. Download all files created during backup into temporary location:
.. code-block:: none
mkdir snapshot
cd snapshot
# Create temporary directory structure.
cat /tmp/backup_files.out | awk '{print $2}' | xargs mkdir -p
# Download snapshot files.
cat /tmp/backup_files.out | xargs -n2 aws s3 cp
#. Execute the following command for each table, providing the list of node IP addresses and the path to the SSTable files, on the node that has sstableloader installed:
.. code-block:: none
# Loads table user_data.data_0 into four node cluster.
sstableloader -d '35.158.14.221,18.157.98.72,3.122.196.197,3.126.2.205' ./user_data/data_0 --username scylla --password <password>
After the tables are restored, verify the validity of your data by running queries on your database.
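With many tables, the per-table ``sstableloader`` invocations can be generated from the snapshot directory layout. In this sketch, ``echo`` prints the commands instead of running them, and the directory and node list are placeholders — drop the ``echo`` and substitute your own values to load for real:

```shell
nodes='35.158.14.221,18.157.98.72,3.122.196.197,3.126.2.205'   # placeholder IPs
snap=$(mktemp -d)                        # stand-in for the ./snapshot directory
mkdir -p "$snap/user_data/data_0" "$snap/user_data/data_1"
# Generate one sstableloader command per <keyspace>/<table> directory.
cmds=$(cd "$snap" && for d in */*/; do
  echo sstableloader -d "$nodes" "./${d%/}" --username scylla --password '<password>'
done)
printf '%s\n' "$cmds"
```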



@@ -1,123 +0,0 @@
========================================
Use a remote database for Scylla Manager
========================================
.. include:: /operating-scylla/manager/_common/note-versions.rst
When you install Scylla Manager, it installs a local instance of Scylla to use as its database.
You are not required to use the local instance and can use Scylla Manager with a remote database.
**Requirements**
* Scylla cluster to be used as Scylla Manager data store.
* Package ``scylla-manager`` installed.
Remove local Scylla instance
============================
The ``scylla-manager`` package is a meta package that pulls in both the Scylla and Scylla Manager packages.
If you do not intend to use the local Scylla instance, you may remove it.
**Procedure**
1. Remove ``scylla-enterprise`` package.
.. code-block:: none
sudo yum remove scylla-enterprise -y
2. Remove related packages. This also removes Scylla Manager.
.. code-block:: none
sudo yum autoremove -y
3. Install the Scylla Manager client and server packages.
.. code-block:: none
sudo yum install scylla-manager-client scylla-manager-server -y
Edit Scylla Manager configuration
=================================
Scylla Manager configuration file ``/etc/scylla-manager/scylla-manager.yaml`` contains a database configuration section.
.. code-block:: yaml
# Scylla Manager database, used to store management data.
database:
hosts:
- 127.0.0.1
# Enable or disable client/server encryption.
# ssl: false
#
# Database credentials.
# user: user
# password: password
#
# Local datacenter name, specify if using a remote, multi-dc cluster.
# local_dc:
#
# Database connection timeout.
# timeout: 600ms
#
# Keyspace for management data, for create statement see /etc/scylla-manager/create_keyspace.cql.tpl.
# keyspace: scylla_manager
# replication_factor: 1
Using an editor, open the file and change the relevant parameters.
**Procedure**
1. Edit the ``hosts`` parameter, change the IP address to the IP address or addresses of the remote cluster.
2. If client/server encryption is enabled, uncomment and set the ``ssl`` parameter to ``true``.
Additional SSL configuration options can be set in the ``ssl`` configuration section.
.. code-block:: yaml
# Optional custom client/server encryption options.
#ssl:
# CA certificate used to validate the server cert. If not set, the host's root CA set is used.
# cert_file:
#
# Verify the hostname and server cert.
# validate: true
#
# Client certificate and key in PEM format. It has to be provided when
# client_encryption_options.require_client_auth=true is set on server.
# user_cert_file:
# user_key_file:
3. If authentication is needed, uncomment and edit the ``user`` and ``password`` parameters.
4. If the remote cluster contains more than one node:
* If it's a single-DC deployment, uncomment and edit the ``replication_factor`` parameter to match the required replication factor.
Note that this would use a simple replication strategy (SimpleStrategy).
If you want to use a different replication strategy, create the ``scylla_manager`` keyspace (or another keyspace matching the ``keyspace`` parameter) yourself.
Refer to :doc:`Scylla Architecture - Fault Tolerance </architecture/architecture-fault-tolerance>` for more information on replication.
* If it's a multi-DC deployment, create the ``scylla_manager`` keyspace (or another keyspace matching the ``keyspace`` parameter) yourself.
Uncomment and edit the ``local_dc`` parameter to specify the local datacenter.
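In a multi-DC deployment, the keyspace can be created manually before starting Scylla Manager. A minimal sketch, assuming datacenter names ``dc1`` and ``dc2`` with a replication factor of 3 in each (adjust the names and factors to your topology):

.. code-block:: cql

   CREATE KEYSPACE scylla_manager
   WITH replication = {
     'class': 'NetworkTopologyStrategy',
     'dc1': 3,
     'dc2': 3
   };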
A sample configuration of Scylla Manager working with a remote cluster, with authentication and a replication factor of 3, could look like this:
.. code-block:: yaml
database:
hosts:
- 198.100.51.11
- 198.100.51.12
user: user
password: password
replication_factor: 3
Set up Scylla Manager
=====================
Continue with :ref:`setup script <install-run-the-scylla-manager-setup-script>`.


@@ -1,4 +0,0 @@
Scylla Manager is a centralized cluster administration and recurring task automation tool. Scylla Manager can schedule tasks such as repairs and backups.
Scylla Manager is available for Scylla Enterprise customers and Scylla Open Source users. With Scylla Open Source, Scylla Manager is limited to 5 nodes.
See the Scylla Manager Proprietary Software `License Agreement <https://www.scylladb.com/scylla-manager-software-license-agreement/>`_ for details.
Scylla Manager runs with any version of Scylla Enterprise or Open source.


@@ -1,4 +0,0 @@
Older Scylla Manager releases:
* :doc:`Scylla Manager 2.1 </operating-scylla/manager/2.1/index>`


@@ -1,2 +0,0 @@
.. note:: You are not reading the most recent version of this documentation.
Go to the **latest** version of `Scylla Manager Documentation <http://scylladb.github.io/scylla-manager/>`_.


@@ -1,27 +0,0 @@
Scylla Manager
==============
.. toctree::
:hidden:
:maxdepth: 2
Scylla Manager Docs <https://manager.docs.scylladb.com>
Upgrade Scylla Manager </upgrade/upgrade-manager/index>
Monitoring Support Matrix <https://monitoring.docs.scylladb.com/stable/reference/matrix.html>
.. include:: /operating-scylla/manager/_common/manager-description.rst
.. image:: scylla-manager@2x.png
:width: 250
:alt: Scylla Manager Logo
To get started, read the `Scylla Manager Documentation <https://manager.docs.scylladb.com/stable/index.html>`_ (Scylla Manager release 2.2 and later).
Additional information:
* :doc:`Upgrade Scylla Manager </upgrade/upgrade-manager/index>`
* `Scylla Monitoring Support Matrix <https://monitoring.docs.scylladb.com/stable/reference/matrix.html>`_ - refer to this document before installing Manager if you plan to use Scylla Manager with the Scylla Monitoring stack.
* `Cluster Management, Repair and Scylla Manager lesson <https://university.scylladb.com/courses/scylla-operations/lessons/cluster-management-repair-and-scylla-manager/>`_ on Scylla University, includes theory and some hands-on labs
.. include:: /operating-scylla/manager/_common/manager-index.rst


@@ -6,7 +6,7 @@ Nodetool repair
When running ``nodetool repair`` on a **single node**, it acts as the **repair master**. Only the data contained in the master node and its replications will be repaired.
Typically, this subset of data is replicated on many nodes in the cluster, often all, and the repair process syncs between all the replicas until the master data subset is in-sync.
To repair **all** of the data in the cluster, you need to run a repair on **all** of the nodes in the cluster, or let :doc:`Scylla Manager</operating-scylla/manager/index/>` do it for you.
To repair **all** of the data in the cluster, you need to run a repair on **all** of the nodes in the cluster, or let `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ do it for you.
.. note:: Run the :doc:`nodetool repair </operating-scylla/nodetool-commands/repair/>` command regularly. If you delete data frequently, it should be more often than the value of ``gc_grace_seconds`` (by default: 10 days), for example, every week. Use the **nodetool repair -pr** on each node in the cluster, sequentially.
@@ -108,8 +108,6 @@ Scylla nodetool repair command supports the following options:
nodetool repair <my_keyspace> <my_table>
See also
:doc:`Scylla Manager</operating-scylla/manager/index/>`
See also `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_.
.. include:: nodetool-index.rst


@@ -1,4 +1,4 @@
.. note::
For cluster wide backup and restore, see :doc:`Scylla Manager </operating-scylla/manager/index>`
For cluster wide backup and restore, see `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_.


@@ -41,7 +41,7 @@ Once all the nodes have been upgraded to the new version 2018.1, run a **serial*
**During** the rolling upgrade it is highly recommended:
* Not to use new 2019.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes
.. include:: /upgrade/_common/upgrade_to_2019_warning.rst


@@ -39,7 +39,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2020.1 features.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2020_warning.rst


@@ -31,7 +31,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2021.1 features.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2020_warning.rst


@@ -31,7 +31,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2022.1 features.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending ScyllaDB Manager's scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild, or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending ScyllaDB Manager's scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2022_warning.rst


@@ -34,7 +34,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2019.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes


@@ -34,7 +34,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2020.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes


@@ -34,7 +34,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2021.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes


@@ -33,7 +33,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.


@@ -32,7 +32,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.

View File

@@ -33,7 +33,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.


@@ -32,7 +32,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.

View File

@@ -31,7 +31,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.

View File

@@ -32,7 +32,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use |SCYLLA_MONITOR|_ or newer, for the Dashboards.

View File

@@ -29,7 +29,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use the latest `Scylla Monitoring <https://monitoring.docs.scylladb.com/>`_ stack.


@@ -31,7 +31,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new |NEW_VERSION| features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager (only available Scylla Enterprise) scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager (only available with Scylla Enterprise) scheduled or running repairs.
* Not to apply schema changes
.. note:: Before upgrading, make sure to use the latest `Scylla Monitoring <https://monitoring.docs.scylladb.com/>`_ stack.


@@ -9,7 +9,6 @@ Upgrade Scylla
Scylla Enterprise <upgrade-enterprise/index>
Scylla Open Source <upgrade-opensource/index>
Scylla Open Source to Scylla Enterprise <upgrade-to-enterprise/index>
Scylla Manager <upgrade-manager/index>
Scylla AMI <ami-upgrade>
.. raw:: html
@@ -30,8 +29,6 @@ Procedures for upgrading Scylla.
* :doc:`Upgrade from Scylla Open Source to Scylla Enterprise <upgrade-to-enterprise/index>`
* :doc:`Upgrade Scylla Manager <upgrade-manager/index>`
* :doc:`Upgrade Scylla AMI <ami-upgrade>`


@@ -39,7 +39,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2019.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes
.. include:: /upgrade/_common/upgrade_to_2019_warning.rst


@@ -39,7 +39,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2020.1 features.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2020_warning.rst


@@ -31,7 +31,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2021.1 features.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2021_warning.rst


@@ -32,7 +32,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade, it is highly recommended:
* Not to use new 2022.1 features.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending ScyllaDB Manager's scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending ScyllaDB Manager's scheduled or running repairs.
* Not to apply schema changes.
.. include:: /upgrade/_common/upgrade_to_2022_warning.rst


@@ -1,39 +0,0 @@
=======================
Upgrade Scylla Manager
=======================
.. toctree::
:hidden:
Scylla Manager 2.x.a to Scylla Manager 2.y.b </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index>
Scylla Manager 2.x to Scylla Manager 2.3 </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/upgrade-2.x.a-to-2.y.b>
Scylla Manager 1.4 to Scylla Manager 2.0 </upgrade/upgrade-manager/upgrade-guide-from-1.4-to-2.0/index>
Scylla Manager 1.3 to Scylla Manager 1.4 </upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/index>
Scylla Manager 1.2 to Scylla Manager 1.3 </upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/index>
Scylla Manager 1.1 to Scylla Manager 1.2</upgrade/upgrade-manager/upgrade-guide-from-manager-1.1.x-to-1.2.x>
Scylla Manager 1.0 to Scylla Manager 1.1</upgrade/upgrade-manager/upgrade-guide-from-manager-1.0.x-to-1.1.x>
Scylla Manager 1.x Maintenance Release </upgrade/upgrade-manager/upgrade-guide-maintenance-1.x.y-to-1.x.z/index>
.. panel-box::
:title: Manager Upgrade - Latest
:id: "getting-started"
:class: my-panel
* `Upgrade Guide - Latest Release <https://manager.docs.scylladb.com/latest/upgrade>`_
.. panel-box::
:title: Manager Upgrade - Older Versions
:id: "getting-started"
:class: my-panel
* :doc:`Upgrade Guide - Scylla Manager 2.x.a to Scylla Manager 2.y.b </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index>`
* :doc:`Upgrade Guide - Scylla Manager 2.2 to Scylla Manager 2.3 </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index>`
* :doc:`Upgrade Guide - Scylla Manager 2.1 to Scylla Manager 2.2 </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index>`
* :doc:`Upgrade Guide - Scylla Manager 2.0 to Scylla Manager 2.1 </upgrade/upgrade-manager/upgrade-guide-from-2.x.a-to-2.y.b/index>`
* :doc:`Upgrade Guide - Scylla Manager 1.4 to Scylla Manager 2.0 </upgrade/upgrade-manager/upgrade-guide-from-1.4-to-2.0/index>`
* :doc:`Upgrade Guide - Scylla Manager 1.3 to Scylla Manager 1.4 </upgrade/upgrade-manager/upgrade-guide-from-1.3-to-1.4/index>`
* :doc:`Upgrade Guide - Scylla Manager 1.2 to Scylla Manager 1.3 </upgrade/upgrade-manager/upgrade-guide-from-1.2-to-1.3/index>`
* :doc:`Upgrade Guide - Scylla Manager 1.1 to Scylla Manager 1.2 </upgrade/upgrade-manager/upgrade-guide-from-manager-1.1.x-to-1.2.x>`
* :doc:`Upgrade Guide - Scylla Manager 1.0 to Scylla Manager 1.1 </upgrade/upgrade-manager/upgrade-guide-from-manager-1.0.x-to-1.1.x>`
* :doc:`Upgrade Guide Scylla Manager 1.x.y to Scylla Manager 1.x.z </upgrade/upgrade-manager/upgrade-guide-maintenance-1.x.y-to-1.x.z/index>`


@@ -1,34 +0,0 @@
=================================
Upgrade Scylla Manager 1.2 to 1.3
=================================
.. toctree::
:hidden:
Scylla Manager 1.2 to Scylla Manager 1.3 for Centos 7<upgrade-guide-from-manager-1.2.x-to-1.3.x-CentOS>
Scylla Manager 1.2 to Scylla Manager 1.3 for Ubuntu 16<upgrade-guide-from-manager-1.2.x-to-1.3.x-ubuntu>
Metrics Update - Scylla Manager 1.2 to 1.3 <manager-metric-update-1.2-to-1.3>
.. raw:: html
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
<h5 id="getting-started">Upgrade Scylla Manager 1.2 to 1.3</h5>
</div>
<div class="medium-9 columns">
Procedures for upgrading Scylla Manager from 1.2 to 1.3.
* :doc:`Upgrade Guide - Scylla Manager 1.2.x to 1.3.x on CentOS 7<upgrade-guide-from-manager-1.2.x-to-1.3.x-CentOS>`
* :doc:`Upgrade Guide - Scylla Manager 1.2.x to 1.3.x on Ubuntu 16<upgrade-guide-from-manager-1.2.x-to-1.3.x-ubuntu>`
* :doc:`Scylla Manager Metrics Update - Scylla Manager 1.2 to 1.3<manager-metric-update-1.2-to-1.3>`
.. raw:: html
</div>
</div>
</div>


@@ -1,32 +0,0 @@
========================================================
Scylla Manager Metric Update - Scylla Manager 1.2 to 1.3
========================================================
.. toctree::
:maxdepth: 2
:hidden:
Scylla Manager 1.3 Dashboards are available for use with the Scylla Monitoring Stack.
The following metrics are new in Scylla Manager 1.3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``scylla_manager.healthcheck.cql_status`` - This metric measures the CQL status and is used in the Scylla Manager Health Check. The metric is labeled with the cluster and host and reports the following values:
- ``0`` - not checked
- ``1`` - success
- ``-1`` - failure
* ``scylla_manager.healthcheck.cql_rtt_ms`` - This metric measures the CQL latency and is used in the Scylla Manager Health Check. The metric is labeled with the cluster and host and reports the latency in milliseconds.
The following metric was updated from Scylla Manager 1.2 to Scylla Manager 1.3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``scylla_manager.repair.progress`` now aggregates progress as a percentage and has the following additional labels:
- cluster
- task
- keyspace
- host
Previously, this metric reported the aggregated keyspace and host progress. Now it reports the aggregated keyspace progress and the aggregated total progress. This is implemented by leaving the appropriate labels blank. In the total progress metric, the only set labels are ``cluster`` and ``task``.
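For example, the total progress for a task could be selected with a Prometheus-style query in which only ``cluster`` and ``task`` are set and the other labels are left blank. The underscore-separated metric name and the label values below are illustrative assumptions, not taken from the Manager docs:

.. code-block:: none

   scylla_manager_repair_progress{cluster="prod", task="repair/4d79ee63", keyspace="", host=""}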


@@ -1,179 +0,0 @@
========================================================
Upgrade Guide - Scylla Manager 1.2.x to 1.3.x on CentOS
========================================================
Enterprise customers who use Scylla Manager 1.2.x are encouraged to upgrade to 1.3.x.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below explain how to upgrade the Scylla Manager server while keeping the Manager datastore intact.
If you are not running Scylla Manager 1.2.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.2.x to 1.3.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade brings a new command to sctool: ``status``. This command lists the individual nodes in the cluster and reports the CQL availability on each node.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval``, as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. This was removed because it was easy to be misled into thinking that this node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but once all the available nodes are discovered, they are persisted and used for subsequent interactions with ScyllaDB.
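To illustrate the new ``interval`` time units, a repair task could be scheduled to repeat every 3 days, 2 hours, and 10 minutes. The cluster name ``prod-cluster`` is a placeholder, and the exact flag spelling may differ between sctool versions:

.. code-block:: none

   sctool repair -c prod-cluster --interval 3d2h10m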
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.2.x-to-1.3.x-previous-release:
Before upgrading, check which version you are currently running using ``rpm -q scylla-manager``. You should use the same version that you had previously installed in case you want to :ref:`roll back <upgrade-manager-1.2.x-to-1.3.x-rollback-procedure-centos>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.3.x**
2. Run:
.. code:: sh
sudo yum update scylla-manager -y
3. Reload your shell, or execute the command below, to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If it is not, start it with ``systemctl start scylla-manager.service``.
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.3.x. For example:
.. code-block:: none
sctool version
Client version: 1.3.0-0.20181130.03ae248
Server version: 1.3.0-0.20181130.03ae248
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``:
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a health check task for each existing cluster. Run ``sctool task list`` to list the tasks.
.. code-block:: none
sctool task list -c cluster --all
╭──────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ properties │ status │
├──────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 10 Dec 18 20:21 UTC (+15s) │ 0 │ │ RUNNING│
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 12 Dec 18 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.2.x-to-1.3.x-rollback-procedure-centos:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.3 to 1.2. Apply this procedure if an upgrade from 1.2 to 1.3 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to the :ref:`previous release <upgrade-manager-1.2.x-to-1.3.x-previous-release>`
* Start Scylla Manager
* Validate the Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
sudo rm -rf /var/cache/yum
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.2.x**
5. Install previous version
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done and can continue with "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. Restore the database backup if you performed a backup by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
You can skip step 1, since Scylla Manager has already done this for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you had :ref:`previously <upgrade-manager-1.2.x-to-1.3.x-previous-release>`.
@@ -1,188 +0,0 @@
==========================================================
Upgrade Guide - Scylla Manager 1.2.x to 1.3.x on Ubuntu 16
==========================================================
Enterprise customers who use Scylla Manager 1.2.x are encouraged to upgrade to 1.3.x.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.2.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.2.x to 1.3.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade adds a new sctool command: ``status``. This command shows a listing of the individual nodes in the cluster and records the CQL availability on the nodes.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval`` as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. It was removed because it could mislead users into thinking that this node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but when all the available nodes are discovered they are persisted and used for subsequent interactions with ScyllaDB.
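As an illustration of the new ``interval`` format, a schedule that previously repeated every whole number of days can now combine time units. The task type and ID below are hypothetical and shown for illustration only:

.. code-block:: none

   # Previously: repeat every 3 days
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval-days 3

   # Now: repeat every 3 days, 2 hours, and 10 minutes
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval 3d2h10m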
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.2.x-to-1.3.x-previous-release:
Before upgrading, check which version you are currently running using ``dpkg -s scylla-manager``. You should use the same version that you had previously installed in case you want to :ref:`rollback <upgrade-manager-1.2.x-to-1.3.x-rollback-procedure-ubuntu>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.3.x**
2. Run:
.. code:: sh
sudo apt-get update
sudo apt-get dist-upgrade scylla-manager*
3. Restart the service
.. code:: sh
sudo systemctl restart scylla-manager.service
4. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If not, start it with ``sudo systemctl start scylla-manager.service``.
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.3.x. For example:
.. code-block:: none
sctool version
Client version: 1.3.0-0.20181130.03ae248
Server version: 1.3.0-0.20181130.03ae248
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a health check task for each existing cluster. Run ``sctool task list`` to list the tasks.
.. code-block:: none
sctool task list -c cluster --all
╭──────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ properties │ status │
├──────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 10 Dec 18 20:21 UTC (+15s) │ 0 │ │ RUNNING│
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 12 Dec 18 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.2.x-to-1.3.x-rollback-procedure-ubuntu:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.3 to 1.2. Apply this procedure if an upgrade from 1.2 to 1.3 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.2.x-to-1.3.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/apt/sources.list.d/scylla-manager.list
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.2.x**
5. Install previous version
.. code:: sh
sudo apt-get update
sudo apt-get remove scylla-manager\* -y
sudo apt-get install scylla-manager scylla-manager-server scylla-manager-client
sudo systemctl unmask scylla-manager.service
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done and can continue with "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. If you performed a backup, restore it by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
   You can skip step 1 of that procedure, since Scylla Manager has already done it for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you had :ref:`previously <upgrade-manager-1.2.x-to-1.3.x-previous-release>`.
@@ -1,34 +0,0 @@
=================================
Upgrade Scylla Manager 1.3 to 1.4
=================================
.. toctree::
:hidden:
Scylla Manager 1.3 to Scylla Manager 1.4 for CentOS 7 <upgrade-guide-from-manager-1.3.x-to-1.4.x-CentOS>
Scylla Manager 1.3 to Scylla Manager 1.4 for Ubuntu 16 <upgrade-guide-from-manager-1.3.x-to-1.4.x-ubuntu>
Metrics Update - Scylla Manager 1.3 to 1.4 <manager-metric-update-1.3-to-1.4>
.. raw:: html
<div class="panel callout radius animated">
<div class="row">
<div class="medium-4 columns">
<h5 id="getting-started">Upgrade Scylla Manager 1.3 to 1.4</h5>
</div>
<div class="medium-9 columns">
Procedures for upgrading Scylla Manager from 1.3 to 1.4.
* :doc:`Upgrade Guide - Scylla Manager 1.3.x to 1.4.x on CentOS 7<upgrade-guide-from-manager-1.3.x-to-1.4.x-CentOS>`
* :doc:`Upgrade Guide - Scylla Manager 1.3.x to 1.4.x on Ubuntu 16<upgrade-guide-from-manager-1.3.x-to-1.4.x-ubuntu>`
* :doc:`Scylla Manager Metrics Update - Scylla Manager 1.3 to 1.4<manager-metric-update-1.3-to-1.4>`
.. raw:: html
</div>
</div>
</div>
@@ -1,21 +0,0 @@
========================================================
Scylla Manager Metric Update - Scylla Manager 1.3 to 1.4
========================================================
.. toctree::
:maxdepth: 2
:hidden:
Scylla Manager 1.4 Dashboards are available for use with Scylla Monitoring Stack
The following metrics are new in Scylla Manager 1.4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``scylla_manager_healthcheck_rest_status`` - This metric measures the CQL status and is used in the Scylla Manager Health Check. The metric is labeled with the cluster and host and reports the following values:

  - ``0`` - not checked
  - ``1`` - success
  - ``-1`` - failure

* ``scylla_manager_healthcheck_rest_rtt_ms`` - This metric measures the CQL latency and is used in the Scylla Manager Health Check. The metric is labeled with the cluster and host and reports the latency in milliseconds.
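These health check metrics are exposed on Scylla Manager's Prometheus endpoint and can be inspected directly. A quick way to check them (the address and port below are assumptions and depend on your configuration):

.. code:: sh

   # List the health check metrics exposed by Scylla Manager
   # (the endpoint address and port are deployment-specific):
   curl -s http://127.0.0.1:56090/metrics | grep scylla_manager_healthcheck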
@@ -1,181 +0,0 @@
========================================================
Upgrade Guide - Scylla Manager 1.3.x to 1.4.x on CentOS
========================================================
Enterprise customers who use Scylla Manager 1.3.x are encouraged to upgrade to 1.4.x.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.3.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.3.x to 1.4.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade adds a new sctool command: ``status``. This command shows a listing of the individual nodes in the cluster and records the CQL availability on the nodes.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval`` as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. It was removed because it could mislead users into thinking that this node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but when all the available nodes are discovered they are persisted and used for subsequent interactions with ScyllaDB.
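As an illustration of the new ``interval`` format, a schedule that previously repeated every whole number of days can now combine time units. The task type and ID below are hypothetical and shown for illustration only:

.. code-block:: none

   # Previously: repeat every 3 days
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval-days 3

   # Now: repeat every 3 days, 2 hours, and 10 minutes
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval 3d2h10m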
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.3.x-to-1.4.x-previous-release:
Before upgrading, check which version you are currently running using ``rpm -q scylla-manager``. You should use the same version that you had previously installed in case you want to :ref:`rollback <upgrade-manager-1.3.x-to-1.4.x-rollback-procedure-centos>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.4.x**
2. Run:
.. code:: sh
sudo yum update scylla-manager -y
3. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If not, start it with ``sudo systemctl start scylla-manager.service``.
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.4.x. For example:
.. code-block:: none
sctool version
Client version: 1.4-0.20190324.247a5585
Server version: 1.4-0.20190324.247a5585
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a health check task for each existing cluster. Run ``sctool task list`` to list the tasks.
.. code-block:: none
sctool task list -c cluster
╭──────────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ arguments │ status │
├──────────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 01 May 19 20:31 UTC (+15s) │ 0 │ │ RUNNING│
│ healthcheck_api/597f237f-103d-4994-8167-3ff591150b7e │ 01 May 19 21:31:01 UTC (+1h) │ 0 │ │ NEW │
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 01 May 19 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.3.x-to-1.4.x-rollback-procedure-centos:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.4 to 1.3. Apply this procedure if an upgrade from 1.3 to 1.4 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.3.x-to-1.4.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
sudo rm -rf /var/cache/yum
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.3.x**
5. Install previous version
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done and can continue with "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. If you performed a backup, restore it by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
   You can skip step 1 of that procedure, since Scylla Manager has already done it for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you had :ref:`previously <upgrade-manager-1.3.x-to-1.4.x-previous-release>`.
@@ -1,189 +0,0 @@
==========================================================
Upgrade Guide - Scylla Manager 1.3.x to 1.4.x on Ubuntu 16
==========================================================
Enterprise customers who use Scylla Manager 1.3.x are encouraged to upgrade to 1.4.x.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.3.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.3.x to 1.4.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade adds a new sctool command: ``status``. This command shows a listing of the individual nodes in the cluster and records the CQL availability on the nodes.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval`` as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. It was removed because it could mislead users into thinking that this node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but when all the available nodes are discovered they are persisted and used for subsequent interactions with ScyllaDB.
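As an illustration of the new ``interval`` format, a schedule that previously repeated every whole number of days can now combine time units. The task type and ID below are hypothetical and shown for illustration only:

.. code-block:: none

   # Previously: repeat every 3 days
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval-days 3

   # Now: repeat every 3 days, 2 hours, and 10 minutes
   sctool task update repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 --interval 3d2h10m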
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.3.x-to-1.4.x-previous-release:
Before upgrading, check which version you are currently running using ``dpkg -s scylla-manager``. You should use the same version that you had previously installed in case you want to :ref:`rollback <upgrade-manager-1.3.x-to-1.4.x-rollback-procedure-ubuntu>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.4.x**
2. Run:
.. code:: sh
sudo apt-get update
sudo apt-get dist-upgrade scylla-manager*
3. Restart the service
.. code:: sh
sudo systemctl restart scylla-manager.service
4. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If not, start it with ``sudo systemctl start scylla-manager.service``.
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.4.x. For example:
.. code-block:: none
sctool version
Client version: 1.4.0-0.20181130.03ae248
Server version: 1.4.0-0.20181130.03ae248
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a health check task for each existing cluster. Run ``sctool task list`` to list the tasks.
.. code-block:: none
sctool task list -c cluster
╭──────────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ arguments │ status │
├──────────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 01 May 19 20:31 UTC (+15s) │ 0 │ │ RUNNING│
│ healthcheck_api/597f237f-103d-4994-8167-3ff591150b7e │ 01 May 19 21:31:01 UTC (+1h) │ 0 │ │ NEW │
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 01 May 19 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.3.x-to-1.4.x-rollback-procedure-ubuntu:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.4 to 1.3. Apply this procedure if an upgrade from 1.3 to 1.4 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.3.x-to-1.4.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/apt/sources.list.d/scylla-manager.list
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.3.x**
5. Install previous version
.. code:: sh
sudo apt-get update
sudo apt-get remove scylla-manager\* -y
sudo apt-get install scylla-manager scylla-manager-server scylla-manager-client
sudo systemctl unmask scylla-manager.service
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done and can continue with "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. If you performed a backup, restore it by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
   You can skip step 1 of that procedure, since Scylla Manager has already done it for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you had :ref:`previously <upgrade-manager-1.3.x-to-1.4.x-previous-release>`.
@@ -1,31 +0,0 @@
=================================
Upgrade Scylla Manager 1.4 to 2.0
=================================
.. toctree::
:hidden:
Scylla Manager 1.4 to Scylla Manager 2.0 <upgrade-guide-from-manager-1.4.x-to-2.0.x>
Metrics Update - Scylla Manager 1.4 to 2.0 <manager-metric-update-1.4-to-2.0>
.. raw:: html
<div class="panel callout radius animated">
<div class="row">
<div class="medium-4 columns">
<h5 id="getting-started">Upgrade Scylla Manager 1.4 to 2.0</h5>
</div>
<div class="medium-9 columns">
Procedures for upgrading Scylla Manager from 1.4 to 2.0.
* :doc:`Upgrade Guide - Scylla Manager 1.4.x to 2.0.x on CentOS 7, RHEL 7, Ubuntu 16, and Debian 9 <upgrade-guide-from-manager-1.4.x-to-2.0.x>`
* :doc:`Scylla Manager Metrics Update - Scylla Manager 1.4 to 2.0 <manager-metric-update-1.4-to-2.0>`
.. raw:: html
</div>
</div>
</div>
@@ -1,17 +0,0 @@
========================================================
Scylla Manager Metric Update - Scylla Manager 1.4 to 2.0
========================================================
.. toctree::
:maxdepth: 2
:hidden:
Scylla Manager 2.0 Dashboards are available for use with Scylla Monitoring Stack
The following metrics are new in Scylla Manager 2.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``scylla_manager_backup_avg_upload_bandwidth`` - reports the average upload speed in bytes/sec since the start of the upload.
* ``scylla_manager_backup_bytes_done`` - reports the number of bytes that have been uploaded so far.
* ``scylla_manager_backup_bytes_left`` - reports the number of remaining bytes to be backed up.
* ``scylla_manager_backup_percent_progress`` - reports the current backup progress as a percentage.
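The percent progress is derived from the two bytes counters. A minimal sketch of the arithmetic with illustrative values (not taken from a real backup):

.. code:: sh

   # Illustrative values for bytes uploaded and bytes remaining:
   done_bytes=750000000
   left_bytes=250000000

   # percent progress = done / (done + left) * 100
   awk -v d="$done_bytes" -v l="$left_bytes" 'BEGIN { printf "%.1f%%\n", d / (d + l) * 100 }'
   # prints 75.0%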
@@ -1,243 +0,0 @@
=============================================
Upgrade Guide - Scylla Manager 1.4.x to 2.0.x
=============================================
**Supported Operating Systems:** CentOS/RHEL 7, Ubuntu 16 and 18, Debian 9
Enterprise customers who use Scylla Manager 1.4.x are encouraged to upgrade to 2.0.x. For new installations, please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_. The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact. If you are not running Scylla Manager 1.4.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.4.x to 2.0.x. If you want to upgrade to 2.0 from an earlier version, please upgrade to 1.4.x first.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions. For release information, see the `Release Notes <https://www.scylladb.com/product/release-notes/>`_.
Upgrade Procedure
=================
**Workflow:**
#. `Stop the Scylla Manager Server`_
#. `Backup the Scylla Manager data`_
#. Install the :doc:`Scylla Manager server 2.0 </operating-scylla/manager/2.1/install>`
#. Install the latest :doc:`Scylla Manager Agent </operating-scylla/manager/2.1/install-agent>` on each node and confirm that the agents are started.
#. `Start the Scylla Manager server`_
#. `Validate`_ that the upgrade was successful
#. (optional) Remove SSH artifacts from the nodes
#. (optional) Configure auth token in the agent configuration, and update cluster in Scylla Manager.
Stop the Scylla Manager Server
------------------------------
**Procedure**
#. Make sure that no task is running (all tasks have DONE status) before stopping the server:
.. code-block:: none
sctool task list -c <cluster_id|cluster_name>
#. Stop the Scylla Manager:
.. code-block:: none
sudo systemctl stop scylla-manager
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Install new Scylla Manager server 2.0
-------------------------------------
.. _upgrade-manager-1.4.x-to-2.0.x-previous-release:
Before upgrading, check what version you are **currently** running:
**CentOS / RHEL**
.. code-block:: bash
rpm -q scylla-manager
**Ubuntu / Debian**
.. code-block:: bash
dpkg -s scylla-manager
Save the details of this version so you can :ref:`rollback <upgrade-manager-1.4.x-to-2.0.x-rollback-procedure-centos>` to it.
To upgrade:
Follow the procedure described in: :doc:`Install new Scylla Manager server 2.0 </operating-scylla/manager/2.1/install>`
Download, Install and Start Scylla Manager Agent
------------------------------------------------
Follow the instructions described in :doc:`Install the Scylla Manager Agent </operating-scylla/manager/2.1/install-agent>` for installing the Scylla Manager Agent on every node in the cluster.
Start the Scylla Manager Server
-------------------------------
From the Scylla Manager Server, run:
.. code-block:: none
sudo systemctl start scylla-manager
Configure Scylla Manager to work with the authentication token
--------------------------------------------------------------
Copy the authentication :doc:`token </operating-scylla/manager/2.1/install-agent>` you created when installing the ``scylla-manager-agent``:
.. code-block:: none
sctool cluster update --auth-token=6Es3dm24U72NzAu9ANWmU3C4ALyVZhwwPZZPWtK10eYGHJ24wMoh9SQxRZEluWMc0qDrsWCCshvfhk9uewOimQS2x5yNTYUEoIkO1VpSmTFu5fsFyoDgEkmNrCJpXtfM -c cluster-name
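If you did not save the token created during the agent installation, a new one can be generated and distributed to the agents. The helper below is assumed to ship with the ``scylla-manager-agent`` package; verify it is available in your installation:

.. code:: sh

   # Generate a new random authentication token
   # (helper provided by the scylla-manager-agent package; availability may vary by version):
   scyllamgr_auth_token_gen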
Validate
--------
#. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If not, start it with ``sudo systemctl start scylla-manager.service``.
#. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 2.0.x. For example:
.. code-block:: none
sctool version
Client version: 2.0-0.20191220.5407198e
Server version: 2.0-0.20191220.5407198e
#. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────╮
│ cluster id │ name │
├──────────────────────────────────────┼──────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │
╰──────────────────────────────────────┴──────────╯
#. Confirm that, following the upgrade, the cluster status is up and running
.. code-block:: none
sctool status -c <cluster_id|cluster_name>
Datacenter: AWS_1
╭───────────┬─────┬───────────┬───────────────╮
│ CQL │ SSL │ REST │ Host │
├───────────┼─────┼───────────┼───────────────┤
│ UP (56ms) │ OFF │ UP (37ms) │ 127.0.0.1 │
│ UP (56ms) │ OFF │ UP (25ms) │ 127.0.0.2 │
│ UP (56ms) │ OFF │ UP (25ms) │ 127.0.0.3 │
╰───────────┴─────┴───────────┴───────────────╯
.. _upgrade-manager-1.4.x-to-2.0.x-rollback-procedure-centos:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 2.0 to 1.4. Apply this procedure if an upgrade from 1.4 to 2.0 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.4.x-to-2.0.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
#. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
#. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
#. Remove Scylla Manager repo
**CentOS / RHEL**
.. code:: sh
sudo rm -f /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
sudo rm -rf /var/cache/yum
**Ubuntu / Debian**
.. code:: sh
sudo rm -f /etc/apt/sources.list.d/scylla-manager.list
#. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.4.x**
#. Install previous version
**CentOS / RHEL**
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
**Ubuntu / Debian**
.. code:: sh
sudo apt-get update
sudo apt-get remove scylla-manager\* -y
sudo apt-get install scylla-manager scylla-manager-server scylla-manager-client
sudo systemctl unmask scylla-manager.service
Rollback the Scylla Manager database
------------------------------------
#. Start Scylla Manager to reinitialize the data base schema.
.. code:: sh
sudo systemctl start scylla-manager
#. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform any backup before upgrading then you are done now and can continue at "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
#. Restore the database backup if you performed a backup by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`
You can skip step 1 since the Scylla Manager has done this for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the version you were running :ref:`previously <upgrade-manager-1.4.x-to-2.0.x-previous-release>`.
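A minimal, purely illustrative sketch of this comparison (both version strings are placeholders; in practice ``current`` would be parsed from ``sctool version``):

```shell
# Hypothetical values: record the version before upgrading, then compare it
# with the version reported after the rollback.
expected="1.4.3"
current="1.4.3"   # in practice: parsed from `sctool version` output
if [ "$current" = "$expected" ]; then
  echo "rollback verified: $current"
else
  echo "version mismatch: expected $expected, got $current"
fi
```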
@@ -1,21 +0,0 @@
======================================
Upgrade Scylla Manager 2.x.a to 2.y.b
======================================
.. toctree::
:hidden:
:titlesonly:
Scylla Manager 2.x.a to Scylla Manager 2.y.b<upgrade-2.x.a-to-2.y.b>
Fix for row-level repairs<upgrade-row-level-repair>
.. panel-box::
:title: Upgrade Scylla Manager
:id: "getting-started"
:class: my-panel
Procedures for upgrading to a new version of Scylla Manager.
* :doc:`Scylla Manager Upgrade - Scylla Manager 2.x.a to 2.y.b <upgrade-2.x.a-to-2.y.b/>`
* :doc:`Fix for row-level repairs <upgrade-row-level-repair/>`
@@ -1,492 +0,0 @@
=======================================================
Scylla Manager Upgrade - Scylla Manager 2.x.a to 2.y.b
=======================================================
This document is an upgrade guide between two consecutive *Minor* or *Patch* releases of Scylla Manager 2.x.y.
.. toctree::
:maxdepth: 2
Applicable versions
===================
This guide covers upgrading Scylla Manager version 2.x.a to version 2.y.b, on the following platforms:
- Red Hat Enterprise Linux, version 7
- CentOS, version 7
- Debian, version 9
- Ubuntu, versions 16.04, 18.04
Upgrade Procedure
=================
.. note:: Scylla Manager 2.x.a introduces a new component, Scylla Manager Agent, which runs as a sidecar on each Scylla node in the cluster. Upgrading this component means that commands have to be executed on each node separately.
The upgrade procedure for Scylla Manager covers three components: the server, the client, and the agent. A full cluster shutdown is NOT needed; Scylla keeps running while the Manager components are upgraded. Overview of the required steps:
- Stop all Scylla Manager tasks (or wait for them to finish)
- Stop the Scylla Manager Server 2.x.a
- Stop the Scylla Manager Agent 2.x.a on all nodes
- Upgrade the Scylla Manager Server and Client to 2.y.b
- Upgrade the Scylla Manager Agent to 2.y.b on all nodes
- Run `scyllamgr_agent_setup` script on all nodes
- Reconcile configuration files
- Start the Scylla Manager Agent 2.y.b on all nodes
- Start the Scylla Manager Server 2.y.b
- Validate status of the cluster
Upgrade steps
=============
Stop all Scylla Manager tasks (or wait for them to finish)
----------------------------------------------------------
**On the Manager Server** check current status of the manager tasks:
.. code:: sh
sctool task list -c <cluster>
None of the listed tasks should be in the RUNNING state.
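This check can be sketched as follows (illustrative only; the sample text stands in for real ``sctool task list`` output, and the task IDs are fabricated):

```shell
# Illustrative only: task_list stands in for the output of
# `sctool task list -c <cluster>`.
task_list="repair/3ed1b7f2  DONE
backup/88b23c6e  DONE"
# Refuse to proceed if any task is still RUNNING.
if printf '%s\n' "$task_list" | grep -q 'RUNNING'; then
  result="tasks still running - wait or stop them before upgrading"
else
  result="no running tasks"
fi
echo "$result"
```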
Stop the Scylla Manager Server 2.x.a
------------------------------------
**On the Manager Server** instruct Systemd to stop the server process:
.. code:: sh
sudo systemctl stop scylla-manager
Ensure that it is stopped with:
.. code:: sh
sudo systemctl status scylla-manager
It should have a status of *“Active: inactive (dead)”*.
Stop the Scylla Manager Agent 2.x.a on all nodes
------------------------------------------------
**On each scylla node** in the cluster run:
.. code:: sh
sudo systemctl stop scylla-manager-agent
Ensure that it is stopped with:
.. code:: sh
sudo systemctl status scylla-manager-agent
It should have a status of *“Active: inactive (dead)”*.
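Stopping the agent on every node can be scripted from one machine; a hedged sketch, in which the node list is a placeholder and the actual ``ssh`` call is left commented out so the loop structure is visible:

```shell
# Illustrative sketch: replace node1..node3 with your real hostnames and
# uncomment the ssh line to actually stop the agents.
nodes="node1 node2 node3"
count=0
for host in $nodes; do
  # ssh "$host" 'sudo systemctl stop scylla-manager-agent'
  echo "stop requested on $host"
  count=$((count + 1))
done
echo "requested stop on $count nodes"
```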
Upgrade the Scylla Manager Server and Client to 2.y.b
-----------------------------------------------------
**Before you Begin**
Confirm that the settings for ``scylla-manager.repo`` are correct.
**On the Manager Server** display the contents of the scylla-manager.repo and confirm the version displayed is the version you want to upgrade to. This example uses Scylla Manager 2.1, but your display should show the repo you have installed.
CentOS, Red Hat:
.. code:: sh
cat /etc/yum.repos.d/scylla-manager.repo
[scylla-manager-2.1]
name=Scylla Manager for Centos - $basearch
baseurl=http://downloads.scylladb.com/downloads/scylla-manager/rpm/centos/scylladb-manager-2.1/$basearch/
enabled=1
gpgcheck=0
Debian, Ubuntu:
.. code:: sh
cat /etc/apt/sources.list.d/scylla-manager.list
Confirm that the listed repository matches the version you want to upgrade to.
**On the Manager Server** instruct package manager to update server and the client:
CentOS, Red Hat:
.. code:: sh
sudo yum update scylla-manager-server scylla-manager-client -y
Debian, Ubuntu:
.. code:: sh
sudo apt-get update
sudo apt-get install scylla-manager-server scylla-manager-client -y
.. note:: When using apt-get, if a previous version of the Scylla Manager package had a modified configuration file, you will be asked what to do with this file during the installation process. In order to keep both files for reconciliation (covered later in the procedure), select the "keep your currently-installed version" option when prompted.
Upgrade the Scylla Manager Agent to 2.y.b on all nodes
------------------------------------------------------
**On each scylla node** instruct package manager to update the agent:
CentOS, Red Hat:
.. code:: sh
sudo yum update scylla-manager-agent -y
Debian, Ubuntu:
.. code:: sh
sudo apt-get update
sudo apt-get install scylla-manager-agent -y
.. note:: With apt-get, if a previous version of the package had a modified configuration file, you will be asked during installation what to do with it. Please select the "keep your currently-installed version" option to keep both the previous and the new default configuration files for later reconciliation.
Run `scyllamgr_agent_setup` script on all nodes
-----------------------------------------------
.. note:: The script mentioned in this section was added in version 2.0.2, so it is not available in earlier versions.
This step requires sudo rights:
.. code:: sh
$ sudo scyllamgr_agent_setup
Do you want to create scylla-helper.slice if it does not exist?
Yes - limit Scylla Manager Agent and other helper programs memory. No - skip this step.
[YES/no] YES
Do you want the Scylla Manager Agent service to automatically start when the node boots?
Yes - automatically start Scylla Manager Agent when the node boots. No - skip this step.
[YES/no] YES
The first step limits the resources available to the agent, and the second instructs systemd to start the agent when the node boots.
Reconcile configuration files
-----------------------------
Upgrades can change the structure and the default values of the yaml configuration files. If the previous version's configuration file was modified with custom values, this results in a conflict that the upgrade procedure cannot resolve without help from an administrator. If you followed the instructions in the package upgrade sections of this document and elected to keep both the new and the old configuration files, the new version of the configuration file is saved in the same directory as the old one, with an added extension suffix, for both the server (``/etc/scylla-manager``) and the agent (``/etc/scylla-manager-agent``).
On a CentOS configuration, a conflict looks like:
.. code:: sh
# On the Scylla Manager node
/etc/scylla-manager/scylla-manager.yaml # old file containing custom values
/etc/scylla-manager/scylla-manager.yaml.rpmnew # new default file from new version
# On all Scylla nodes
/etc/scylla-manager-agent/scylla-manager-agent.yaml # old file containing custom values
/etc/scylla-manager-agent/scylla-manager-agent.yaml.rpmnew # new default file from new version
On an Ubuntu configuration, a conflict looks like:
.. code:: sh
# On the Scylla Manager node
/etc/scylla-manager/scylla-manager.yaml # old file containing custom values
/etc/scylla-manager/scylla-manager.yaml.dpkg-dist # new default file from new version
# On all Scylla nodes
/etc/scylla-manager-agent/scylla-manager-agent.yaml # old file containing custom values
/etc/scylla-manager-agent/scylla-manager-agent.yaml.dpkg-dist # new default file from new version
It is required to manually inspect both files and reconcile old values with the new configuration. Remember to carry over any custom values like database credentials, backup, repair, and any other configuration. This can be done by manually updating values in the new config file and then renaming files:
For CentOS:
.. code:: sh
# On the Scylla Manager node
cd /etc/scylla-manager/
mv scylla-manager.yaml scylla-manager.yaml.old #renames the old config file as old
mv scylla-manager.yaml.rpmnew scylla-manager.yaml
# On all Scylla nodes
cd /etc/scylla-manager-agent/
mv scylla-manager-agent.yaml scylla-manager-agent.yaml.old
mv scylla-manager-agent.yaml.rpmnew scylla-manager-agent.yaml
For Ubuntu:
.. code:: sh
# On the Scylla Manager node
cd /etc/scylla-manager/
mv scylla-manager.yaml scylla-manager.yaml.old
mv scylla-manager.yaml.dpkg-dist scylla-manager.yaml
# On all Scylla nodes
cd /etc/scylla-manager-agent/
mv scylla-manager-agent.yaml scylla-manager-agent.yaml.old
mv scylla-manager-agent.yaml.dpkg-dist scylla-manager-agent.yaml
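Before renaming, it helps to diff the old and new files to see which custom values need carrying over. A self-contained sketch (the two files below are fabricated stand-ins for ``scylla-manager.yaml`` and its ``.rpmnew`` counterpart; in practice you would diff the real files in ``/etc/scylla-manager``):

```shell
# Create stand-in config files in a temp directory so the diff step can be
# demonstrated end to end; real usage would point at /etc/scylla-manager.
workdir=$(mktemp -d)
printf 'http: 127.0.0.1:56080\n' > "$workdir/scylla-manager.yaml"
printf 'http: 127.0.0.1:5080\n'  > "$workdir/scylla-manager.yaml.rpmnew"
# diff exits non-zero when files differ, which is expected here.
diff -u "$workdir/scylla-manager.yaml" "$workdir/scylla-manager.yaml.rpmnew" \
  > "$workdir/config.diff" || true
cat "$workdir/config.diff"
```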
**Guide to important configuration changes across versions:**
Scylla Manager 2.2.x
- The default ports changed. They were moved to a lower port range because the higher range is used by the OS for ephemeral ports, which could cause conflicts.
- ``http`` old port was ``56080`` new port is ``5080``
- ``https`` old port was ``56443`` new port is ``5443``
- ``prometheus`` old port was ``56090`` new port is ``5090``
- ``debug`` old port was ``56112`` new port is ``5112``
- Repair changes:
- ``poll_interval`` was changed from ``200ms`` to ``50ms``.
- ``segments_per_repair``, ``shard_parallel_max``, ``shard_failed_segments_max``, and ``error_backoff`` were removed.
- ``graceful_stop_timeout`` and ``force_repair_type`` were added.
Scylla Manager Agent 2.2.x
- The default ports changed. They were moved to a lower port range because the higher range is used by the OS for ephemeral ports, which could cause conflicts.
- ``prometheus`` old port was ``56090`` new port is ``5090``
- ``debug`` old port was ``56112`` new port is ``5112``
Start the Scylla Manager Agent 2.y.b on all nodes
-------------------------------------------------
**On each scylla node** instruct Systemd to start the agent process:
.. code:: sh
sudo systemctl start scylla-manager-agent
Ensure that it is running with:
.. code:: sh
sudo systemctl status scylla-manager-agent
It should have a status of *“Active: active (running)”*.
Start the Scylla Manager Server 2.y.b
-------------------------------------
**On the Manager Server** instruct Systemd to start the server process:
.. code:: sh
sudo systemctl daemon-reload
sudo systemctl start scylla-manager
Ensure that it is started with:
.. code:: sh
sudo systemctl status scylla-manager
It should have a status of *“Active: active (running)”*.
Validate status of the cluster
------------------------------
**On the Manager Server** check the version of the client and the server:
.. code:: sh
sctool version
Client version: 2.y.b-0.20200123.7cf18f6b
Server version: 2.y.b-0.20200123.7cf18f6b
Check that cluster is up:
.. code:: sh
sctool status -c <cluster>
All running nodes should be up.
.. note:: In **Scylla Manager 2.2** the meaning of the repair command's ``--intensity`` flag was changed. After starting the upgraded server, all previously scheduled repairs will keep the original value of ``--intensity``, but that value will be interpreted differently. This parameter is now broken into two new components:
- ``--intensity``, which now controls the number of segments repaired in a single repair request to the cluster. With the default value (0), Scylla Manager tries to determine the maximum number of segments possible based on the cluster setup.
- ``--parallel``, which now controls how many repairs are requested in parallel (only nodes that are not busy with a repair can be repaired in parallel). With the default value (0), the maximum possible parallelism is applied.
Based on this breakdown, you can update existing repair tasks to achieve the previous result:
.. code:: sh
sctool repair update --intensity <value> --parallel <value> <type/task-id>
Rollback Procedure
==================
.. note:: Rolling back to 2.x.a is not recommended because 2.y.b contains bug fixes and performance optimizations, so you would be going back to an older version. Use this procedure only as a last resort.
The rollback procedure contains the same steps as the upgrade, but downgrades the components to the older version:
- Stop all Scylla Manager tasks (or wait for them to finish)
- Stop the Scylla Manager Server 2.y.b
- Stop the Scylla Manager Agent 2.y.b on all nodes
- Downgrade the Scylla Manager Server and Client to 2.x.a
- Downgrade the Scylla Manager Agent to 2.x.a on all nodes
- Bring back old configuration (if there was conflict)
- Start the Scylla Manager Agent 2.x.a on all nodes
- Start the Scylla Manager Server 2.x.a
- Validate status of the cluster
Rollback steps
==============
Stop all Scylla Manager tasks (or wait for them to finish)
----------------------------------------------------------
**On the Manager Server** check current status of the manager tasks:
.. code:: sh
sctool task list -c <cluster>
None of the listed tasks should be in the RUNNING state.
Stop the Scylla Manager Server 2.y.b
------------------------------------
**On the Manager Server** instruct Systemd to stop the server process:
.. code:: sh
sudo systemctl stop scylla-manager
Ensure that it is stopped with:
.. code:: sh
sudo systemctl status scylla-manager
It should have a status of *“Active: inactive (dead)”*.
Stop the Scylla Manager Agent 2.y.b on all nodes
------------------------------------------------
**On each scylla node** in the cluster run:
.. code:: sh
sudo systemctl stop scylla-manager-agent
Ensure that it is stopped with:
.. code:: sh
sudo systemctl status scylla-manager-agent
It should have a status of *“Active: inactive (dead)”*.
Downgrade the Scylla Manager Server and Client to 2.x.a
-------------------------------------------------------
**On the Manager Server** instruct package manager to downgrade server and the client:
CentOS, Red Hat:
.. code:: sh
sudo yum downgrade scylla-manager-server-2.x.a* scylla-manager-client-2.x.a* -y
Debian, Ubuntu:
.. code:: sh
sudo apt-get install scylla-manager-server=2.x.a scylla-manager-client=2.x.a -y
Downgrade the Scylla Manager Agent to 2.x.a on all nodes
--------------------------------------------------------
**On each scylla node** instruct package manager to downgrade the agent:
CentOS, Red Hat:
.. code:: sh
sudo yum downgrade scylla-manager-agent-2.x.a* -y
Debian, Ubuntu:
.. code:: sh
sudo apt-get install scylla-manager-agent=2.x.a -y
Revert to the old configuration
----------------------------------------------------
If you followed the instructions in the Upgrade Steps section and had a configuration conflict when upgrading, listing the configuration directory shows both the new and the old configuration:
.. code:: sh
/etc/scylla-manager/scylla-manager.yaml # New version that you want to disable
/etc/scylla-manager/scylla-manager.yaml.old # Previous version that you want to rollback
To restore the old configuration:
.. code:: sh
cd /etc/scylla-manager/
mv scylla-manager.yaml scylla-manager.yaml.new
mv scylla-manager.yaml.old scylla-manager.yaml
The procedure is the same for the Scylla Manager Agent (on all nodes):
.. code:: sh
cd /etc/scylla-manager-agent/
mv scylla-manager-agent.yaml scylla-manager-agent.yaml.new
mv scylla-manager-agent.yaml.old scylla-manager-agent.yaml
Start the Scylla Manager Agent 2.x.a on all nodes
-------------------------------------------------
On all nodes instruct Systemd to start the agent process:
.. code:: sh
sudo systemctl start scylla-manager-agent
Ensure that it is running with:
.. code:: sh
sudo systemctl status scylla-manager-agent
It should have a status of *“Active: active (running)”*.
Start the Scylla Manager Server 2.x.a
-------------------------------------
**On the Manager Server** instruct Systemd to start the server process:
.. code:: sh
sudo systemctl start scylla-manager
Ensure that it is started with:
.. code:: sh
sudo systemctl status scylla-manager
It should have a status of *“Active: active (running)”*.
.. note:: In **Scylla Manager 2.2** the meaning of the repair command's ``--intensity`` flag was changed. If you want to roll back a task whose parameters were broken down into ``--intensity`` and ``--parallel``, you need to remove or disable the task in question and create a new one with the correct values.
For example:
.. code:: sh
sctool task update --enable false <type/task_id>
sctool repair [repair parameters]
Validate status of the cluster
------------------------------
**On the Manager Server** check the version of the client and the server:
.. code:: sh
sctool version
Client version: 2.x.a
Server version: 2.x.a
Check that cluster is up:
.. code:: sh
sctool status -c <cluster>
All running nodes should be up.
@@ -1,26 +0,0 @@
=========================
Fix for Row-level Repairs
=========================
.. toctree::
:maxdepth: 2
Upgrade to Manager 2.0.2 for Improved Repair Speeds in Scylla 3.1 and Higher
=============================================================================
One of the useful features of the Manager is how it handles repairs.
The Manager breaks token ranges down into smaller segments in order to distribute load over all available shards.
The result of this approach is more efficient repair execution.
With the release of `Scylla 3.1 <https://www.scylladb.com/2019/10/15/introducing-scylla-open-source-3-1/>`_ a new improvement called `row-level repair <https://www.scylladb.com/2019/08/13/scylla-open-source-3-1-efficiently-maintaining-consistency-with-row-level-repair/>`_ was introduced.
This change approaches repair at a more granular level, which makes the optimizations done by the Manager obsolete.
In practice, we noticed degraded repair execution times when repairs were run by the Manager on Scylla clusters with the row-level repair feature enabled.
Manager 2.0.2 introduces a fix for handling repairs on clusters with the row-level repair feature enabled.
When Scylla Manager detects the feature is enabled, it delegates sharding to the Scylla node thus avoiding any split shards.
If you experience slow repairs, please upgrade to Manager 2.0.2 or newer.
Slowdowns are still possible if Scylla Manager is not configured correctly.
If the ``segments_per_repair`` configuration option (in ``scylla-manager.yaml``) is set to a low value, repair can still take a long time to finish.
For clusters with row-level repair, it is therefore recommended to set ``segments_per_repair`` to at least 16.
The row-level repair feature is available from Scylla Open Source 3.1 and higher.
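The recommendation above corresponds to a fragment like the following in ``scylla-manager.yaml`` (illustrative sketch; only the ``segments_per_repair`` option and its suggested value are taken from the text, the surrounding structure is assumed):

```yaml
# Illustrative fragment of scylla-manager.yaml for clusters with
# row-level repair enabled (value from the recommendation above).
repair:
  segments_per_repair: 16
```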
@@ -1,159 +0,0 @@
=============================================
Upgrade Guide - Scylla Manager 1.0.x to 1.1.x
=============================================
Enterprise customers who use Scylla Manager 1.0.x are encouraged to upgrade to 1.1.x.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.0.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.0.x to 1.1.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any question.
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the data
------------------------------
Before any major procedure, like an upgrade, it is recommended to backup all the data to an external device. It is recommended to backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.0.x-to-1.1.x-previous-release:
Before upgrading, check what version you are running now using ``rpm -qa | grep scylla-manager``. You should use the same version in case you want to :ref:`rollback <upgrade-manager-1.0.x-to-1.1.x-rollback-procedure>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.1.x**
2. Run:
.. code:: sh
sudo yum update scylla-manager -y
Validate
--------
1. Check Scylla Manager status with ``systemctl status scylla-manager.service``. Confirm the service is active (running).
2. Confirm that the upgrade changed the Client and Server version. Run ``sctool version`` and make sure both are 1.1.x version
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
4. Confirm that following the upgrade, there are no repairs in a stopped state. Run ``sctool task list`` to list the repair tasks in progress. If any are in a stopped state, run ``sctool repair unit schedule <repair-unit-id> --start-date=now`` to resume.
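The steps above can be sketched as a small script (illustrative only; the sample text stands in for real ``sctool task list`` output, the unit IDs are fabricated, and the actual ``sctool`` call is left commented out):

```shell
# Illustrative only: task_list stands in for `sctool task list` output.
task_list="repair unit 0dff4a32  STOPPED
repair unit 9c2d11ab  DONE"
# Collect the IDs of repair units in a stopped state.
stopped=$(printf '%s\n' "$task_list" | grep 'STOPPED' | awk '{print $3}')
for id in $stopped; do
  # sctool repair unit schedule "$id" --start-date=now
  echo "would resume repair unit $id"
done
```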
Update scylla-manager.yaml (optional)
-------------------------------------
As part of the upgrade procedure, parameters were added to ``/etc/scylla-manager/scylla-manager.yaml``. See the new values below. There is no need to update the ``scylla-manager.yaml`` file as part of the upgrade.
.. code-block:: yaml
# Repair service configuration.
repair:
# Granularity of repair. Repair works on segments, segment is a continuous
# token range.
#
# Set the maximal number of tokens in a segment (zero is no limit).
segment_size_limit: 0
# Set number of segments to be repaired in one Scylla command.
segments_per_repair: 1
# Error tolerance.
#
# Set how many segments may fail to repair. Note that the manager would retry
# to repair the failed segments. If the limit is exceeded, however, repair
# will stop and the next repair will start from the beginning.
segment_error_limit: 100
# Fail-fast, set to true if you want repair to stop on first error. Unlike
# segment_error_limit this allows for resuming the stopped repair.
stop_on_error: false
# Set wait time if Scylla failed to execute a repair command. Note that if
# stop_on_error is true this has no effect.
error_backoff: 10s
# Set how often to poll Scylla node for command status.
poll_interval: 200ms
# Set time offset between the automated scheduler run and the scheduled
# repairs. If scheduler runs at midnight the repairs would start at
# midnight + this value. This gives you the opportunity to audit and modify
# the scheduled repairs.
auto_schedule_delay: 2h
# Set maximal time after which a restarted repair is forced to start from the
# beginning.
max_run_age: 36h
# Distribution of data among cores (shards) within a node.
# Copy value from Scylla configuration file.
murmur3_partitioner_ignore_msb_bits: 12
.. _upgrade-manager-1.0.x-to-1.1.x-rollback-procedure:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.1 to 1.0. Apply this procedure if an upgrade from 1.0 to 1.1 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.0.x-to-1.1.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop scylla_manager keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.0.x**
5. Install previous version
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the version you recorded :ref:`previously <upgrade-manager-1.0.x-to-1.1.x-previous-release>`.
@@ -1,170 +0,0 @@
=============================================
Upgrade Guide - Scylla Manager 1.1.x to 1.2.x
=============================================
Enterprise customers who use Scylla Manager 1.1.x are encouraged to upgrade to 1.2.x.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.1.x, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.1.x to 1.2.x.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any question.
Upgrade Notes
=================
* This upgrade brings internal changes incompatible with previous versions of Scylla Manager.
We have removed the repair unit concept and replaced it with a repair task that can be tuned using simple glob based pattern matching.
The pattern matching that is performed can be applied to filter out multiple keyspaces and tables as well as entire datacenters.
A consequence of this is that custom repairs that previously were scheduled are removed during the upgrade and you may need to replace them.
A weekly repair task will be scheduled for each existing cluster so repairs will be performed automatically.
* The Scylla Manager API now uses HTTPS by default. The default ports have changed to 56443 for HTTPS and 56080 for plain HTTP.
The port can be changed as needed in ``/etc/scylla-manager/scylla-manager.yaml``
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the data
------------------------------
Before any major procedure, like an upgrade, it is recommended to backup all the data to an external device. It is recommended to backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.1.x-to-1.2.x-previous-release:
Before upgrading, check which version you are currently running using ``rpm -q scylla-manager``. You will need the same version in case you want to :ref:`rollback <upgrade-manager-1.1.x-to-1.2.x-rollback-procedure>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.2.x**
2. Run:
.. code:: sh
sudo yum update scylla-manager -y
3. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Confirm that the upgrade changed the Client and Server version. Run ``sctool version`` and make sure both are 1.2.x version.
2. If you get an error from the version check then make sure that Scylla Manager is running with ``systemctl status scylla-manager.service``. Confirm the service is active (running). If not, then start it with ``systemctl start scylla-manager.service``.
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``
4. Confirm that following the upgrade, there is one repair task in a NEW state for each existing cluster. Run ``sctool task list`` to list the repair tasks.
Update scylla-manager.yaml (optional)
-------------------------------------
As part of the upgrade procedure, parameters were changed in ``/etc/scylla-manager/scylla-manager.yaml``. See the new values below. There is no need to update the ``scylla-manager.yaml`` file as part of the upgrade.
.. code-block:: yaml
# Bind REST API to the specified TCP address using HTTP protocol.
# http: 127.0.0.1:56080
# Bind REST API to the specified TCP address using HTTPS protocol.
https: 127.0.0.1:56443
# TLS certificate file to use for HTTPS.
tls_cert_file: /var/lib/scylla-manager/scylla_manager.crt
# TLS key file to use for HTTPS.
tls_key_file: /var/lib/scylla-manager/scylla_manager.key
# Bind prometheus API to the specified TCP address using HTTP protocol.
# By default it binds to all network interfaces but you can restrict it
# by specifying it like this 127:0.0.1:56090 or any other combination
# of ip and port.
prometheus: ':56090'
.. _upgrade-manager-1.1.x-to-1.2.x-rollback-procedure:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.2 to 1.1. Apply this procedure if an upgrade from 1.1 to 1.2 failed for any reason.
**Warning:** note that you may lose the managed clusters after the downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.1.x-to-1.2.x-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.1.x**
5. Install previous version
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the data base schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform any backup before upgrading then you are done now and can continue at "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. Restore the database backup if you performed a backup by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
You can skip step 1 since the Scylla Manager has done this for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the version you recorded :ref:`previously <upgrade-manager-1.1.x-to-1.2.x-previous-release>`.
@@ -1,35 +0,0 @@
=======================================================
Upgrade Guide - Scylla Manager 1.x Maintenance Release
=======================================================
.. toctree::
:maxdepth: 2
:hidden:
Ubuntu <upgrade-guide-from-manager-1.x.y-to-1.x.z-ubuntu>
CentOS <upgrade-guide-from-manager-1.x.y-to-1.x.z-CentOS>
.. raw:: html
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
<h5 id="getting-started">Upgrade Scylla Manager</h5>
</div>
<div class="medium-9 columns">
Upgrade guides are available for:
* :doc:`Upgrade Guide - Scylla Manager 1.x.y to 1.x.z on CentOS <upgrade-guide-from-manager-1.x.y-to-1.x.z-CentOS>`
* :doc:`Upgrade guide - Scylla Manager 1.x.y to 1.x.z on Ubuntu 16 <upgrade-guide-from-manager-1.x.y-to-1.x.z-ubuntu>`
.. raw:: html
</div>
</div>
</div>
@@ -1,178 +0,0 @@
========================================================
Upgrade Guide - Scylla Manager 1.x.y to 1.x.z on CentOS
========================================================
Enterprise customers who use Scylla Manager 1.x.y are encouraged to upgrade to 1.x.z.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.x.y, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.x.y to 1.x.z.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade brings to sctool a new command: ``status``. This command shows a listing of the individual nodes in the cluster and records the CQL availability on the nodes.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval`` as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. It was removed because it could mislead users into thinking that node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but once all the available nodes are discovered, they are persisted and used for subsequent interactions with ScyllaDB.
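The granular ``interval`` format described above can be illustrated with a short sketch. This is not sctool's actual parser; it is only a hypothetical illustration of how a value such as ``3d2h10m`` decomposes into seconds using the ``d``, ``h``, ``m``, and ``s`` units:

```python
import re

# Seconds per unit supported by the interval format: d, h, m, s.
UNITS = {"d": 86400, "h": 3600, "m": 60, "s": 1}

def interval_to_seconds(value: str) -> int:
    """Decompose a duration such as '3d2h10m' into total seconds."""
    matches = re.findall(r"(\d+)([dhms])", value)
    # Reject strings that contain anything besides number/unit pairs.
    if not matches or "".join(n + u for n, u in matches) != value:
        raise ValueError(f"malformed interval: {value!r}")
    return sum(int(n) * UNITS[u] for n, u in matches)

print(interval_to_seconds("3d2h10m"))  # 3*86400 + 2*3600 + 10*60 = 267000
```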
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
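As a minimal sketch of that backup, assuming you have ``nodetool`` access to the backend cluster and a snapshot-based backup is acceptable, a snapshot of just the ``scylla_manager`` keyspace could look like this (run on each node of the backing cluster; the snapshot tag name is arbitrary):

```shell
# Take a named snapshot of the scylla_manager keyspace on this backend node.
nodetool snapshot -t manager-pre-upgrade scylla_manager

# List snapshots to confirm the snapshot was created.
nodetool listsnapshots
```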
Download and install the new release
------------------------------------
.. _upgrade-manager-1.x.y-to-1.x.z-previous-release:
Before upgrading, check which version you are currently running using ``rpm -q scylla-manager``. You should use the same version you had previously installed in case you want to :ref:`rollback <upgrade-manager-1.x.z-to-1.x.y-rollback-procedure-centos>` the upgrade.
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.x.z**
2. Run:
.. code:: sh
sudo yum update scylla-manager -y
3. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If it is not, start it with ``sudo systemctl start scylla-manager.service``.
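This check can be scripted as a minimal sketch, assuming a systemd-based host:

```shell
# Start the scylla-manager service only if it is not already active.
systemctl is-active --quiet scylla-manager.service \
    || sudo systemctl start scylla-manager.service
```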
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.x.z. For example:
.. code-block:: none
sctool version
Client version: 1.3.0-0.20181130.03ae248
Server version: 1.3.0-0.20181130.03ae248
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``:
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a healthcheck task for each existing cluster. Run ``sctool task list`` to list the tasks:
.. code-block:: none
sctool task list -c cluster --all
╭──────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ properties │ status │
├──────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 10 Dec 18 20:21 UTC (+15s) │ 0 │ │ RUNNING│
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 12 Dec 18 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.x.z-to-1.x.y-rollback-procedure-centos:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.x.z to 1.x.y. Apply this procedure if an upgrade from 1.x.y to 1.x.z failed for any reason.
**Warning:** note that you may lose the managed clusters after a downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.x.y-to-1.x.z-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/yum.repos.d/scylla-manager.repo
sudo yum clean all
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.x.y**
5. Install previous version
.. code:: sh
sudo yum downgrade scylla-manager scylla-manager-server scylla-manager-client -y
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done now and can continue at "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. If you performed a backup, restore it by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
You can skip step 1 of that procedure, as Scylla Manager has already done it for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you recorded :ref:`previously <upgrade-manager-1.x.y-to-1.x.z-previous-release>`.
@@ -1,188 +0,0 @@
==========================================================
Upgrade guide - Scylla Manager 1.x.y to 1.x.z on Ubuntu 16
==========================================================
Enterprise customers who use Scylla Manager 1.x.y are encouraged to upgrade to 1.x.z.
For new installations please see `Scylla Manager - Download and Install <https://www.scylladb.com/enterprise-download/#manager>`_.
The steps below instruct you how to upgrade the Scylla Manager server while keeping the manager datastore intact.
If you are not running Scylla Manager 1.x.y, do not perform this upgrade procedure. This procedure only covers upgrades from Scylla Manager 1.x.y to 1.x.z.
Please contact `Scylla Enterprise Support <https://www.scylladb.com/product/support/>`_ team with any questions.
Upgrade Notes
=================
* This upgrade brings to sctool a new command: ``status``. This command shows a listing of the individual nodes in the cluster and records the CQL availability on the nodes.
* Health Check - This upgrade introduces a new feature where each node is monitored by Scylla Manager. When Scylla Manager detects that a node is down, an alert message is sent to Scylla Monitoring. Alternatively, you can use the ``sctool status`` command to show the live cluster status.
* Automated health check - When a cluster is added, a new health check task is automatically added to the cluster. Following an upgrade, all existing clusters will have a health check task as well.
* The sctool argument ``interval-days`` has been renamed to ``interval`` as it now supports more granular time units. For example: ``3d2h10m``. The available time units are ``d``, ``h``, ``m``, and ``s``.
* The sctool command ``cluster list`` no longer displays the **host** column in the results table. It was removed because it could mislead users into thinking that node was the only node being used. Adding a cluster (``cluster add``) still takes a ``--host`` argument, but once all the available nodes are discovered, they are persisted and used for subsequent interactions with ScyllaDB.
Upgrade Procedure
=================
* Backup the data
* Download and install new packages
* Validate that the upgrade was successful
Backup the Scylla Manager data
-------------------------------
Scylla Manager server persists its data to a Scylla cluster (data store). Before upgrading, backup the ``scylla_manager`` keyspace from Scylla Manager's backend, following this :doc:`backup procedure </operating-scylla/procedures/backup-restore/backup>`.
Download and install the new release
------------------------------------
.. _upgrade-manager-1.x.y-to-1.x.z-previous-release:
Before upgrading, check which version you are currently running using ``dpkg -s scylla-manager``. You should use the same version you had previously installed in case you want to :ref:`rollback <upgrade-manager-1.x.z-to-1.x.y-rollback-procedure-ubuntu>` the upgrade.
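On a Debian-based system, the installed version can be captured in a shell variable for later reference. This is a minimal sketch using ``dpkg-query``:

```shell
# Record the currently installed scylla-manager version before upgrading.
PREV_VERSION="$(dpkg-query -W -f='${Version}' scylla-manager)"
echo "Previously installed: ${PREV_VERSION}"
```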
To upgrade:
1. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.x.z**
2. Run:
.. code:: sh
sudo apt-get update
sudo apt-get install --only-upgrade scylla-manager scylla-manager-server scylla-manager-client
3. Restart the service
.. code:: sh
sudo systemctl restart scylla-manager.service
4. Reload your shell or execute the command below to reload ``sctool`` code completion.
.. code:: sh
source /etc/bash_completion.d/sctool.bash
Validate
--------
1. Check that the Scylla Manager service is running with ``sudo systemctl status scylla-manager.service``. Confirm the service is active (running). If it is not, start it with ``sudo systemctl start scylla-manager.service``.
2. Confirm that the upgrade changed the Client and Server versions. Run ``sctool version`` and make sure both are version 1.x.z. For example:
.. code-block:: none
sctool version
Client version: 1.3.0-0.20181130.03ae248
Server version: 1.3.0-0.20181130.03ae248
3. Confirm that, following the update, your managed clusters are still present. Run ``sctool cluster list``:
.. code-block:: none
sctool cluster list
╭──────────────────────────────────────┬──────────┬───────────────╮
│ cluster id │ name │ssh user │
├──────────────────────────────────────┼──────────┼───────────────┤
│ db7faf98-7cc4-4a08-b707-2bc59d65551e │ cluster │scylla-manager │
╰──────────────────────────────────────┴──────────┴───────────────╯
4. Confirm that, following the upgrade, there is a healthcheck task for each existing cluster. Run ``sctool task list`` to list the tasks:
.. code-block:: none
sctool task list -c cluster --all
╭──────────────────────────────────────────────────┬───────────────────────────────┬──────┬────────────┬────────╮
│ task │ next run │ ret. │ properties │ status │
├──────────────────────────────────────────────────┼───────────────────────────────┼──────┼────────────┼────────┤
│ healthcheck/afe9a610-e4c7-4d05-860e-5a0ddf14d7aa │ 10 Dec 18 20:21 UTC (+15s) │ 0 │ │ RUNNING│
│ repair/4d79ee63-7721-4105-8c6a-5b98c65c3e21 │ 12 Dec 18 00:00 UTC (+7d) │ 3 │ │ NEW │
╰──────────────────────────────────────────────────┴───────────────────────────────┴──────┴────────────┴────────╯
.. _upgrade-manager-1.x.z-to-1.x.y-rollback-procedure-ubuntu:
Rollback Procedure
==================
The following procedure describes a rollback from Scylla Manager 1.x.z to 1.x.y. Apply this procedure if an upgrade from 1.x.y to 1.x.z failed for any reason.
**Warning:** note that you may lose the managed clusters after a downgrade. Should this happen, you will need to add the managed clusters manually.
* Downgrade to :ref:`previous release <upgrade-manager-1.x.y-to-1.x.z-previous-release>`
* Start Scylla Manager
* Validate Scylla Manager version
Downgrade to previous release
-----------------------------
1. Stop Scylla Manager
.. code:: sh
sudo systemctl stop scylla-manager
2. Drop the ``scylla_manager`` keyspace from the remote datastore
.. code:: sh
cqlsh -e "DROP KEYSPACE scylla_manager"
3. Remove Scylla Manager repo
.. code:: sh
sudo rm -rf /etc/apt/sources.list.d/scylla-manager.list
4. Update the `Scylla Manager repo <https://www.scylladb.com/enterprise-download/#manager>`_ to **1.x.y**
5. Install previous version
.. code:: sh
sudo apt-get update
sudo apt-get remove scylla-manager\* -y
sudo apt-get install scylla-manager scylla-manager-server scylla-manager-client
sudo systemctl unmask scylla-manager.service
Rollback the Scylla Manager database
------------------------------------
1. Start Scylla Manager to reinitialize the database schema.
.. code:: sh
sudo systemctl start scylla-manager
2. Stop Scylla Manager to avoid issues while restoring the backup. If you did not perform a backup before upgrading, you are done now and can continue at "Start Scylla Manager".
.. code:: sh
sudo systemctl stop scylla-manager
3. If you performed a backup, restore it by following the instructions in :doc:`Restore from a Backup </operating-scylla/procedures/backup-restore/restore>`.
You can skip step 1 of that procedure, as Scylla Manager has already done it for you.
Start Scylla Manager
--------------------
.. code:: sh
sudo systemctl start scylla-manager
Validate Scylla Manager Version
-------------------------------
Validate Scylla Manager version:
.. code:: sh
sctool version
The version should match the results you recorded :ref:`previously <upgrade-manager-1.x.y-to-1.x.z-previous-release>`.
@@ -35,7 +35,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2019.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes
Upgrade steps
@@ -35,7 +35,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2020.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes
Upgrade steps
@@ -35,7 +35,7 @@ Apply the following procedure **serially** on each node. Do not move to the next
**During** the rolling upgrade it is highly recommended:
* Not to use new 2021.1 features
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See :doc:`here </operating-scylla/manager/2.1/sctool>` for suspending Scylla Manager scheduled or running repairs.
* Not to run administration functions, like repairs, refresh, rebuild or add or remove nodes. See `sctool <https://manager.docs.scylladb.com/stable/sctool/index.html>`_ for suspending Scylla Manager scheduled or running repairs.
* Not to apply schema changes
Upgrade steps