Restore Services before Clusters

Restore Services before Clusters so that they can be adopted by the AKO-operator and no new Services are created for the same clusters.

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Wenkai Yin(尹文开)
2023-03-31 14:31:06 +08:00
parent 7416504e3a
commit 866242d950
8 changed files with 11 additions and 7 deletions

View File

@@ -154,7 +154,7 @@
 * Skip completed jobs and pods when restoring (#463, @nrb)
 * Set namespace correctly when syncing backups from object storage (#472, @skriss)
 * When building on macOS, bind-mount volumes with delegated config (#478, @skriss)
-* Add replica sets and daemonsets to cohabitating resources so they're not backed up twice (#482 #485, @skriss)
+* Add replica sets and daemonsets to cohabiting resources so they're not backed up twice (#482 #485, @skriss)
 * Shut down the Ark server gracefully on SIGINT/SIGTERM (#483, @skriss)
 * Only back up resources that support GET and DELETE in addition to LIST and CREATE (#486, @nrb)
 * Show a better error message when trying to get an incomplete restore's logs (#496, @nrb)

View File

@@ -103,7 +103,7 @@ Also added DownloadTargetKindBackupItemSnapshots for retrieving the signed URL t
 * Fix CVE-2020-29652 and CVE-2020-26160 (#4274, @ywk253100)
 * Refine tag-release.sh to align with change in release process (#4185, @reasonerjt)
 * Fix plugins incompatible issue in upgrade test (#4141, @danfengliu)
-* Verify group before treating resource as cohabitating (#4126, @sseago)
+* Verify group before treating resource as cohabiting (#4126, @sseago)
 * Added ItemSnapshotter plugin definition and plugin framework - addresses #3533.
 Part of the Upload Progress enhancement (#3533) (#4077, @dsmithuchida)
 * Add upgrade test in E2E test (#4058, @danfengliu)

View File

@@ -0,0 +1 @@
+Restore Services before Clusters

View File

@@ -1002,7 +1002,7 @@ func TestBackupResourceCohabitation(t *testing.T) {
 },
 },
 {
-name: "when deployments exist that are not in the cohabitating groups those are backed up along with apps/deployments",
+name: "when deployments exist that are not in the cohabiting groups those are backed up along with apps/deployments",
 backup: defaultBackup().Result(),
 apiResources: []*test.APIResource{
 test.VeleroDeployments(
@@ -1047,7 +1047,7 @@ func TestBackupResourceCohabitation(t *testing.T) {
 }

 // TestBackupUsesNewCohabitatingResourcesForEachBackup ensures that when two backups are
-// run that each include cohabitating resources, one copy of the relevant resources is
+// run that each include cohabiting resources, one copy of the relevant resources is
 // backed up in each backup. Verification is done by looking at the contents of the backup
 // tarball. This covers a specific issue that was fixed by https://github.com/vmware-tanzu/velero/pull/485.
 func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {

View File

@@ -508,6 +508,8 @@ func (s *server) veleroResourcesExist() error {
 // - Replica sets go before deployments/other controllers so they can be explicitly
 // restored and be adopted by controllers.
 // - CAPI ClusterClasses go before Clusters.
+// - Services go before Clusters so they can be adopted by AKO-operator and no new Services will be created
+// for the same clusters
 //
 // Low priorities:
 // - Tanzu ClusterBootstraps go last as it can reference any other kind of resources.
@@ -536,6 +538,7 @@ var defaultRestorePriorities = restore.Priorities{
 // in the backup.
 "replicasets.apps",
 "clusterclasses.cluster.x-k8s.io",
+"services",
 },
 LowPriorities: []string{
 "clusterbootstraps.run.tanzu.vmware.com",

View File

@@ -81,7 +81,7 @@ Server:
 Below we've done 6 groups of tests, for each single group of test, we used limited resources (1 core CPU 2 GB memory or 4 cores CPU 4 GB memory) to do Velero file system backup under Restic path and Kopia path, and then compare the results.
-Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio strorage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
+Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio storage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
 Compression is either disabled or not unavailable for both uploader.

View File

@@ -81,7 +81,7 @@ Server:
 Below we've done 6 groups of tests, for each single group of test, we used limited resources (1 core CPU 2 GB memory or 4 cores CPU 4 GB memory) to do Velero file system backup under Restic path and Kopia path, and then compare the results.
-Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio strorage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
+Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio storage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader.
 Compression is either disabled or not unavailable for both uploader.

View File

@@ -115,7 +115,7 @@ func (p *PVBackupFiltering) CreateResources() error {
 })
 }
 })
-By(fmt.Sprintf("Polulate all pods %s with file %s", p.podsList, FILE_NAME), func() {
+By(fmt.Sprintf("Populate all pods %s with file %s", p.podsList, FILE_NAME), func() {
 for index, ns := range *p.NSIncluded {
 By(fmt.Sprintf("Creating file in all pods to start %d in namespace %s", index, ns), func() {
 WaitForPods(p.Ctx, p.Client, ns, p.podsList[index])