Add Kubernetes version compatibility matrix

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
Abigail McCarthy
2022-01-06 14:20:41 -05:00
parent b6992101a4
commit b82559fe7c
16 changed files with 34 additions and 23 deletions

View File

@@ -8,7 +8,7 @@ Refer [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.12 or later, with DNS and container networking enabled.
- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
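To confirm that your cluster meets the version requirement, you can check what the API server reports; this is a minimal sketch using standard `kubectl` commands, not a Velero-specific check:

```bash
# Print the Kubernetes client and server versions; compare the server's
# minor version against the Velero compatibility matrix before installing.
kubectl version --short
```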
@@ -70,4 +70,4 @@ Please refer to [this part of the documentation][5].
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations

View File

@@ -11,7 +11,7 @@ You can deploy Velero on Tencent [TKE](https://cloud.tencent.com/document/produc
- Registered [Tencent Cloud Account](https://cloud.tencent.com/register).
- [Tencent Cloud COS](https://console.cloud.tencent.com/cos) service, referred to as COS, has been launched
- A Kubernetes cluster has been created, cluster version v1.12 or later, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
- A Kubernetes cluster has been created, cluster version v1.16 or later, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
## Create a Tencent Cloud COS bucket
@@ -21,7 +21,7 @@ Set access to the bucket through the object storage console, the bucket needs to
## Get bucket access credentials
Velero uses an AWS S3-compatible API to access Tencent Cloud COS storage, which requires authentication using an access key ID and a secret key that is used to create request signatures.
In the S3 API parameters, the `access_key_id` field is the access key ID and the `secret_access_key` field is the secret key.
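As an illustration of how these credentials are typically handed to Velero, the sketch below writes them into a file in the AWS profile format read by Velero's S3-compatible object store plugin; the file name `credentials-velero` and the placeholder values are assumptions, not something prescribed above:

```bash
# Store the Tencent Cloud SecretId/SecretKey pair in the AWS-style profile
# format that Velero's S3-compatible object store plugin expects.
cat > credentials-velero <<'EOF'
[default]
aws_access_key_id = <your-tencent-secret-id>
aws_secret_access_key = <your-tencent-secret-key>
EOF
```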
@@ -116,7 +116,7 @@ After deleting the MinIO resource, use your backup to restore the deleted MinIO
kubectl patch backupstoragelocation default --namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
The access mode of Velero's backup storage location is now set to "ReadOnly", as shown in the following image:
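You can also verify the change directly from the command line; the `velero` namespace and the location name `default` below are assumptions matching a default install:

```bash
# Confirm that the backup storage location is now read-only.
kubectl get backupstoragelocation default -n velero -o jsonpath='{.spec.accessMode}{"\n"}'
```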

View File

@@ -24,7 +24,7 @@ cross-volume-type data migrations.
- Understand how Velero performs [backups with the Restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.12.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
- Kubernetes v1.16.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6].
### Install Restic

View File

@@ -19,7 +19,7 @@ files in this directory are gitignored so you may configure your setup according
## Prerequisites
1. [Docker](https://docs.docker.com/install/) v19.03 or newer
1. A Kubernetes cluster v1.12 or greater (does not have to be Kind)
1. A Kubernetes cluster v1.16 or greater (does not have to be Kind)
1. [Tilt](https://docs.tilt.dev/install.html) v0.12.0 or newer
1. Clone the [Velero project](https://github.com/vmware-tanzu/velero) repository
locally
@@ -133,7 +133,7 @@ Here are two ways to use MinIO as the storage:
In the `tilt-settings.json` file, set `"setup-minio": true`. This will configure a Kubernetes deployment containing a running
instance of MinIO inside your cluster. There are [extra steps](contributions/minio/#expose-minio-outside-your-cluster-with-a-service)
necessary to expose MinIO outside the cluster.
To access this storage, you will need to expose MinIO outside the cluster by forwarding the MinIO port to the local machine using `kubectl port-forward -n <velero-namespace> svc/minio 9000`. Update the BSL configuration to use that as its "public URL" by adding `publicUrl: http://localhost:9000` to the BSL config. This is necessary to do things like download a backup file.
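For example, the port-forward step might look like the following; the `velero` namespace is an assumption about a typical Tilt setup:

```bash
# Forward the in-cluster MinIO service to localhost:9000 so its API is
# reachable from the local machine.
kubectl port-forward -n velero svc/minio 9000
```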

View File

@@ -6,6 +6,7 @@ layout: docs
## Prerequisites
- Velero [v1.6.x][6] installed.
- Kubernetes cluster version 1.16 or later.
If you're not yet running at least Velero v1.6, see the following:
@@ -16,6 +17,8 @@ If you're not yet running at least Velero v1.6, see the following:
- [Upgrading to v1.5][5]
- [Upgrading to v1.6][6]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
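A quick way to gather the versions to compare, assuming both CLIs are already installed:

```bash
# Show the Velero client/server versions and the Kubernetes server version
# so they can be checked against the compatibility matrix before upgrading.
velero version
kubectl version --short
```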
## Instructions
1. Install the Velero v1.7 command-line interface (CLI) by following the [instructions here][0].
@@ -76,7 +79,7 @@ If you're not yet running at least Velero v1.6, see the following:
## Notes
### Default backup storage location
We have deprecated the way the default backup storage location is indicated. Previously, it was determined by the backup storage location name set on the Velero server side via the `velero server --default-backup-storage-location` flag. Now the default backup storage location is configured on the Velero client side. Please refer to [About locations][9] for how to indicate which backup storage location is the default one.
After upgrading, if there is a previously created backup storage location with the name that matches what was defined on the server side as the default, it will be automatically set as the `default`.
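To see which backup storage location ended up flagged as the default after the upgrade, you can list the locations with the Velero CLI (the exact output columns vary by version):

```bash
# List backup storage locations; the default one is the location the client
# uses when no --storage-location flag is given.
velero backup-location get
```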

View File

@@ -8,7 +8,7 @@ Refer [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- Access to a Kubernetes cluster, v1.10-v1.21, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].

View File

@@ -15,6 +15,8 @@ If you're not yet running at least Velero v1.3, see the following:
- [Upgrading to v1.2][2]
- [Upgrading to v1.3][3]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
1. Install the Velero v1.4 command-line interface (CLI) by following the [instructions here][0].

View File

@@ -8,7 +8,7 @@ Refer [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- Access to a Kubernetes cluster, v1.12-v1.21, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].

View File

@@ -11,7 +11,7 @@ You can deploy Velero on Tencent [TKE](https://cloud.tencent.com/document/produc
- Registered [Tencent Cloud Account](https://cloud.tencent.com/register).
- [Tencent Cloud COS](https://console.cloud.tencent.com/cos) service, referred to as COS, has been launched
- A Kubernetes cluster has been created, cluster version v1.10 or later, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
- A Kubernetes cluster has been created, cluster version v1.12-v1.21, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
## Create a Tencent Cloud COS bucket
@@ -21,7 +21,7 @@ Set access to the bucket through the object storage console, the bucket needs to
## Get bucket access credentials
Velero uses an AWS S3-compatible API to access Tencent Cloud COS storage, which requires authentication using an access key ID and a secret key that is used to create request signatures.
In the S3 API parameters, the `access_key_id` field is the access key ID and the `secret_access_key` field is the secret key.
@@ -116,7 +116,7 @@ After deleting the MinIO resource, use your backup to restore the deleted MinIO
kubectl patch backupstoragelocation default --namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
The access mode of Velero's backup storage location is now set to "ReadOnly", as shown in the following image:

View File

@@ -24,7 +24,7 @@ cross-volume-type data migrations.
- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.10.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
- Kubernetes v1.12.0-v1.21. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
### Install restic

View File

@@ -19,7 +19,7 @@ files in this directory are gitignored so you may configure your setup according
## Prerequisites
1. [Docker](https://docs.docker.com/install/) v19.03 or newer
1. A Kubernetes cluster v1.10 or greater (does not have to be Kind)
1. A Kubernetes cluster v1.12-v1.21 (does not have to be Kind)
1. [Tilt](https://docs.tilt.dev/install.html) v0.12.0 or newer
1. Clone the [Velero project](https://github.com/vmware-tanzu/velero) repository
locally
@@ -133,7 +133,7 @@ Here are two ways to use MinIO as the storage:
In the `tilt-settings.json` file, set `"setup-minio": true`. This will configure a Kubernetes deployment containing a running
instance of MinIO inside your cluster. There are [extra steps](contributions/minio/#expose-minio-outside-your-cluster-with-a-service)
necessary to expose MinIO outside the cluster.
To access this storage, you will need to expose MinIO outside the cluster by forwarding the MinIO port to the local machine using `kubectl port-forward -n <velero-namespace> svc/minio 9000`. Update the BSL configuration to use that as its "public URL" by adding `publicUrl: http://localhost:9000` to the BSL config. This is necessary to do things like download a backup file.
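As a sketch of the BSL change described above (the `velero` namespace and the location name `default` are assumptions), the `publicUrl` can be added with a merge patch:

```bash
# Add a publicUrl so the Velero CLI on the local machine can download
# backup contents through the forwarded MinIO port.
kubectl patch backupstoragelocation default -n velero --type merge \
  --patch '{"spec":{"config":{"publicUrl":"http://localhost:9000"}}}'
```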

View File

@@ -14,6 +14,8 @@ If you're not yet running at least Velero v1.4, see the following:
- [Upgrading to v1.3][3]
- [Upgrading to v1.4][4]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
1. Install the Velero v1.5 command-line interface (CLI) by following the [instructions here][0].

View File

@@ -8,7 +8,7 @@ Refer [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.12 or later, with DNS and container networking enabled.
- Access to a Kubernetes cluster, v1.12 or later, with DNS and container networking enabled. Note that Velero versions 1.6.0-1.6.2 only support Kubernetes versions v1.12-1.21. If you are installing on a newer version of Kubernetes, it's recommended that you install a newer version of Velero. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].

View File

@@ -15,6 +15,8 @@ If you're not yet running at least Velero v1.5, see the following:
- [Upgrading to v1.4][4]
- [Upgrading to v1.5][5]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
1. Install the Velero v1.6 command-line interface (CLI) by following the [instructions here][0].
@@ -73,7 +75,7 @@ If you're not yet running at least Velero v1.5, see the following:
## Notes
### Default backup storage location
We have deprecated the way the default backup storage location is indicated. Previously, it was determined by the backup storage location name set on the Velero server side via the `velero server --default-backup-storage-location` flag. Now the default backup storage location is configured on the Velero client side. Please refer to [About locations][9] for how to indicate which backup storage location is the default one.
After upgrading, if there is a previously created backup storage location with the name that matches what was defined on the server side as the default, it will be automatically set as the `default`.

View File

@@ -8,7 +8,7 @@ Refer [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.12 or later, with DNS and container networking enabled.
- Access to a Kubernetes cluster, v1.12 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
@@ -70,4 +70,4 @@ Please refer to [this part of the documentation][5].
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations

View File

@@ -16,6 +16,8 @@ If you're not yet running at least Velero v1.6, see the following:
- [Upgrading to v1.5][5]
- [Upgrading to v1.6][6]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
1. Install the Velero v1.7 command-line interface (CLI) by following the [instructions here][0].
@@ -47,7 +49,7 @@ If you're not yet running at least Velero v1.6, see the following:
1. Update the container image used by the Velero deployment, the plugin and, optionally, the restic daemon set:
```bash
# set the container and image of the init container for the plugin accordingly,
# if you are using another plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.7.0 \
@@ -79,7 +81,7 @@ If you're not yet running at least Velero v1.6, see the following:
## Notes
### Default backup storage location
We have deprecated the way the default backup storage location is indicated. Previously, it was determined by the backup storage location name set on the Velero server side via the `velero server --default-backup-storage-location` flag. Now the default backup storage location is configured on the Velero client side. Please refer to [About locations][9] for how to indicate which backup storage location is the default one.
After upgrading, if there is a previously created backup storage location with the name that matches what was defined on the server side as the default, it will be automatically set as the `default`.