---
title: Cluster migration
layout: docs
---
## Using Backups and Restores
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. This scenario assumes that your clusters are hosted by the same cloud provider. Note that Velero does not natively support the migration of persistent volume snapshots across cloud providers. If you would like to migrate volume data between cloud platforms, please enable restic, which will back up volume contents at the filesystem level.
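For example, restic can be enabled at install time with the `--use-restic` flag. A minimal sketch, assuming an AWS-style object store (the bucket name and credentials file below are placeholders, not values from this guide):

```bash
# Hypothetical example: install Velero with restic enabled so that volume
# contents are backed up at the filesystem level rather than via
# provider-specific snapshots.
velero install \
    --provider aws \
    --bucket <YOUR-BUCKET> \
    --secret-file ./credentials-velero \
    --use-restic
```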
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):

    ```
    velero backup create <BACKUP-NAME>
    ```

    The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See how velero works for more information about backup expiry.

2. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by Cluster 1, using `velero backup-location create` and `velero snapshot-location create` (see the sketch after this list). Make sure to configure the `BackupStorageLocations` as read-only by using the `--access-mode=ReadOnly` flag for `velero backup-location create`.

3. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.

    ```
    velero backup describe <BACKUP-NAME>
    ```

    **Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.

4. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:

    ```
    velero restore create --from-backup <BACKUP-NAME>
    ```
## Verify Both Clusters
Check that the second cluster is behaving as expected:
1. *(Cluster 2)* Run:

    ```
    velero restore get
    ```

2. Then run:

    ```
    velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
    ```
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.
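One quick way to compare the two installations is to look at the Velero pods in each cluster. A hedged check, assuming the default `velero` namespace (adjust if you installed Velero elsewhere, and replace the context placeholders with your kubeconfig contexts):

```bash
# Velero should be running in the same namespace (by default, "velero")
# in both clusters.
kubectl --context <CLUSTER-1-CONTEXT> get pods --namespace velero
kubectl --context <CLUSTER-2-CONTEXT> get pods --namespace velero
```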
## Migrating Workloads Across Different Kubernetes Versions
Migration across clusters that are not running the same version of Kubernetes might be possible, but some factors need to be considered: compatibility of API group versions between clusters for each custom resource, and whether a Kubernetes version upgrade breaks the compatibility of core/native API groups. For more information about API group versions, please see EnableAPIGroupVersions.
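If you need Velero to choose among multiple supported API group versions during restore, the `EnableAPIGroupVersions` feature flag must be turned on. A minimal sketch, assuming a fresh install; `kubectl api-versions` is just a convenient way to see which group versions each cluster serves:

```bash
# Enable the EnableAPIGroupVersions feature flag on the Velero server.
velero install --features=EnableAPIGroupVersions

# Inspect the API group versions served by a cluster to gauge compatibility.
kubectl api-versions
```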