Restore Services before Clusters

Restore Services before Clusters so they can be adopted by AKO-operator and no new Services will be created for the same clusters.

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
@@ -154,7 +154,7 @@
 * Skip completed jobs and pods when restoring (#463, @nrb)
 * Set namespace correctly when syncing backups from object storage (#472, @skriss)
 * When building on macOS, bind-mount volumes with delegated config (#478, @skriss)
-* Add replica sets and daemonsets to cohabitating resources so they're not backed up twice (#482 #485, @skriss)
+* Add replica sets and daemonsets to cohabiting resources so they're not backed up twice (#482 #485, @skriss)
 * Shut down the Ark server gracefully on SIGINT/SIGTERM (#483, @skriss)
 * Only back up resources that support GET and DELETE in addition to LIST and CREATE (#486, @nrb)
 * Show a better error message when trying to get an incomplete restore's logs (#496, @nrb)
@@ -103,7 +103,7 @@ Also added DownloadTargetKindBackupItemSnapshots for retrieving the signed URL t
 * Fix CVE-2020-29652 and CVE-2020-26160 (#4274, @ywk253100)
 * Refine tag-release.sh to align with change in release process (#4185, @reasonerjt)
 * Fix plugins incompatible issue in upgrade test (#4141, @danfengliu)
-* Verify group before treating resource as cohabitating (#4126, @sseago)
+* Verify group before treating resource as cohabiting (#4126, @sseago)
 * Added ItemSnapshotter plugin definition and plugin framework - addresses #3533.
   Part of the Upload Progress enhancement (#3533) (#4077, @dsmithuchida)
 * Add upgrade test in E2E test (#4058, @danfengliu)
changelogs/unreleased/6058-ywk253100 (new file, 1 line)
@@ -0,0 +1 @@
+Restore Services before Clusters
@@ -1002,7 +1002,7 @@ func TestBackupResourceCohabitation(t *testing.T) {
 			},
 		},
 		{
-			name:   "when deployments exist that are not in the cohabitating groups those are backed up along with apps/deployments",
+			name:   "when deployments exist that are not in the cohabiting groups those are backed up along with apps/deployments",
 			backup: defaultBackup().Result(),
 			apiResources: []*test.APIResource{
 				test.VeleroDeployments(
@@ -1047,7 +1047,7 @@ func TestBackupResourceCohabitation(t *testing.T) {
 }
 
 // TestBackupUsesNewCohabitatingResourcesForEachBackup ensures that when two backups are
-// run that each include cohabitating resources, one copy of the relevant resources is
+// run that each include cohabiting resources, one copy of the relevant resources is
 // backed up in each backup. Verification is done by looking at the contents of the backup
 // tarball. This covers a specific issue that was fixed by https://github.com/vmware-tanzu/velero/pull/485.
 func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
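To make the intent of the comment above concrete, here is a minimal, self-contained Go sketch of the cohabitation bookkeeping it describes. The type and function names are illustrative only, not Velero's actual API; the point is that the per-resource "seen" state must start fresh for every backup, otherwise the second backup would silently skip the resource, which is the issue https://github.com/vmware-tanzu/velero/pull/485 addressed.

package main

import "fmt"

// cohabitatingResource is an illustrative stand-in (not Velero's actual type) for a
// resource served by two API groups, e.g. deployments in both "extensions" and "apps".
type cohabitatingResource struct {
	name   string
	groups [2]string
	seen   bool // set once the resource has been written to the current backup tarball
}

// backUp writes the resource the first time it is encountered and skips it afterwards,
// so the same object does not end up in the backup tarball twice.
func backUp(r *cohabitatingResource, group string) bool {
	if r.seen {
		return false // already backed up under the other group
	}
	r.seen = true
	fmt.Printf("  writing %s (group %s) to tarball\n", r.name, group)
	return true
}

func main() {
	// A fresh cohabitatingResource must be used for each backup; sharing one across
	// backups would leave seen == true and drop the resource from the second backup.
	for i := 1; i <= 2; i++ {
		fmt.Printf("backup %d:\n", i)
		deployments := &cohabitatingResource{name: "deployments", groups: [2]string{"extensions", "apps"}}
		backUp(deployments, deployments.groups[0])
		backUp(deployments, deployments.groups[1])
	}
}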
@@ -508,6 +508,8 @@ func (s *server) veleroResourcesExist() error {
 // - Replica sets go before deployments/other controllers so they can be explicitly
 //   restored and be adopted by controllers.
 // - CAPI ClusterClasses go before Clusters.
+// - Services go before Clusters so they can be adopted by AKO-operator and no new Services will be created
+//   for the same clusters
 //
 // Low priorities:
 // - Tanzu ClusterBootstraps go last as it can reference any other kind of resources.
@@ -536,6 +538,7 @@ var defaultRestorePriorities = restore.Priorities{
 		// in the backup.
 		"replicasets.apps",
 		"clusterclasses.cluster.x-k8s.io",
+		"services",
 	},
 	LowPriorities: []string{
 		"clusterbootstraps.run.tanzu.vmware.com",
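The one-line addition above is easier to reason about with the grouping spelled out: resources named in HighPriorities are restored first, in list order; resources named in LowPriorities are restored last; everything else, including the CAPI Clusters mentioned in the comments, falls into an unprioritized middle group. The sketch below models that ordering with local types so it runs standalone; it is an assumption-level illustration, not Velero's restore implementation, and only the list entries visible in the diff are included.

package main

import "fmt"

// Priorities mirrors the shape used in the hunk above: HighPriorities restore first,
// LowPriorities restore last, and unlisted resources restore in between.
type Priorities struct {
	HighPriorities []string
	LowPriorities  []string
}

// restoreOrder returns resources in the order a restore would process them under the
// high/middle/low grouping described in the server comments. Sketch only.
func restoreOrder(p Priorities, discovered []string) []string {
	listed := map[string]bool{}
	for _, r := range p.HighPriorities {
		listed[r] = true
	}
	for _, r := range p.LowPriorities {
		listed[r] = true
	}

	order := append([]string{}, p.HighPriorities...)
	for _, r := range discovered {
		if !listed[r] { // e.g. clusters.cluster.x-k8s.io: not listed, so it lands in the middle group
			order = append(order, r)
		}
	}
	return append(order, p.LowPriorities...)
}

func main() {
	p := Priorities{
		HighPriorities: []string{"replicasets.apps", "clusterclasses.cluster.x-k8s.io", "services"},
		LowPriorities:  []string{"clusterbootstraps.run.tanzu.vmware.com"},
	}
	discovered := []string{"services", "clusters.cluster.x-k8s.io", "deployments.apps"}
	fmt.Println(restoreOrder(p, discovered))
	// "services" now precedes "clusters.cluster.x-k8s.io", so existing Services can be
	// adopted by AKO-operator instead of new ones being created for the same clusters.
}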
@@ -81,7 +81,7 @@ Server:
 
 Below we've done 6 groups of tests, for each single group of test, we used limited resources (1 core CPU 2 GB memory or 4 cores CPU 4 GB memory) to do Velero file system backup under Restic path and Kopia path, and then compare the results.
 
-Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio strorage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
+Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio storage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
 
 Compression is either disabled or not unavailable for both uploader.
 
@@ -81,7 +81,7 @@ Server:
 
 Below we've done 6 groups of tests, for each single group of test, we used limited resources (1 core CPU 2 GB memory or 4 cores CPU 4 GB memory) to do Velero file system backup under Restic path and Kopia path, and then compare the results.
 
-Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio strorage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
+Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio storage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
 
 Compression is either disabled or not unavailable for both uploader.
 
@@ -115,7 +115,7 @@ func (p *PVBackupFiltering) CreateResources() error {
 			})
 		}
 	})
-	By(fmt.Sprintf("Polulate all pods %s with file %s", p.podsList, FILE_NAME), func() {
+	By(fmt.Sprintf("Populate all pods %s with file %s", p.podsList, FILE_NAME), func() {
 		for index, ns := range *p.NSIncluded {
 			By(fmt.Sprintf("Creating file in all pods to start %d in namespace %s", index, ns), func() {
 				WaitForPods(p.Ctx, p.Client, ns, p.podsList[index])