Compare commits


1427 Commits

Author SHA1 Message Date
lyndon-li
32402880b6 Merge pull request #9261 from priyansh17/release-1.16
Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
2025-09-19 11:26:24 +08:00
lyndon-li
c686c59360 Merge branch 'release-1.16' into release-1.16 2025-09-19 10:54:51 +08:00
lyndon-li
c53f3fb4fb Merge pull request #9283 from kaovilai/bitnamiminio-1.16
1.16: Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
2025-09-19 10:54:37 +08:00
Priyansh Choudhary
fb0abf8245 Added changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:18:02 +05:30
Priyansh Choudhary
6676647706 Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:18:01 +05:30
Tiger Kaovilai
4c3b7943f3 Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
The Bitnami MinIO image bitnami/minio:2021.6.17-debian-10-r7 is no longer
available on Docker Hub, causing E2E tests to fail.

This change implements a solution to build the MinIO image locally from
Bitnami's public Dockerfile and cache it for subsequent runs:
- Fetches the latest commit hash of the Bitnami MinIO Dockerfile
- Uses GitHub Actions cache to store/retrieve built images
- Only rebuilds when the upstream Dockerfile changes
- Maintains compatibility with existing environment variables

Fixes #9279

🤖 Generated with [Claude Code](https://claude.ai/code)

Update .github/workflows/e2e-test-kind.yaml

Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-09-18 09:39:37 -04:00
lyndon-li
a60808256d Merge pull request #9108 from blackpiglet/bump_e2e_upgrade_versions
Bump the Velero and plugin image versions for the upgrade and migrati…
2025-07-24 17:44:48 +08:00
Xun Jiang
befd9d4b51 Bump the Velero and plugin image versions for the upgrade and migration tests.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-24 16:35:54 +08:00
lyndon-li
5ae1caef9d Merge pull request #9107 from Lyndon-Li/release-1.16
1.16.2 changelog
2025-07-24 13:53:12 +08:00
Lyndon-Li
cc2dc02cbc 1.16.2 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-07-24 13:28:38 +08:00
Xun Jiang/Bruce Jiang
189a5b2836 Bump Golang, Ubuntu, and golang.org/x/oauth2 to fix CVEs. (#9104)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-24 12:35:09 +08:00
Xun Jiang/Bruce Jiang
0fc7e2f98a Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9102)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-23 22:28:28 -05:00
Shubham Pampattiwar
8adfd8d0b1 Merge pull request #9103 from shubham-pampattiwar/fix-backup-desc-cp
[release-1.16] Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056)
2025-07-23 15:51:14 -07:00
Shubham Pampattiwar
78fd58fb43 Update Backup describe string for DefaultVolumesToFSBackup flag (#9105)
add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit aa2e09c69e)
2025-07-23 15:01:21 -07:00
Shubham Pampattiwar
8f51c1c08c Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056)
add changelog file

Show defaultVolumesToFsBackup in describe only when set by the user

minor ut fix

minor fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 60a6c7384f)

update changelog filename

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-07-23 15:01:21 -07:00
lyndon-li
fd9f3fe79f issue 9077: don't block backup deletion on list VS error (#9101)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-07-23 11:04:48 -04:00
Scott Seago
043005c7a4 Mounted cloud credentials should not be world-readable (#8919) (#9094)
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-07-21 11:11:38 +08:00
Wenkai Yin(尹文开)
1017d7aa6a Merge pull request #9060 from sseago/multiple-hook-tracking-1.16
[release-1.16] Allow for proper tracking of multiple hooks per container
2025-07-07 11:29:05 +08:00
Scott Seago
6709a8a24b Allow for proper tracking of multiple hooks per container
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-07-02 14:21:32 -04:00
Adarsh Saxena
3415f39a76 Bump golang to v1.23.10 to fix CVEs for 1.16.2 release (#9058)
* Bump golang to v1.23.10 to fix CVEs

Signed-off-by: Adarsh Saxena <adarsh.saxena@acquia.com>

* Dockerfile restic miss 1.23.10

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* restic cve go1.23.10

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Adarsh Saxena <adarsh.saxena@acquia.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-07-02 11:49:30 -04:00
Wenkai Yin(尹文开)
8aeb8a2e70 Merge pull request #9010 from blackpiglet/7785_1.16
[cherry-pick] [release-1.16] Add BSL status check for backup/restore operations.
2025-06-20 14:31:38 +08:00
Xun Jiang
a8ce0fe3a4 Add BSL status check for backup/restore operations.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-06-09 14:53:53 +08:00
Wenkai Yin(尹文开)
2eb97fa8b1 Merge pull request #8940 from ywk253100/250514_fix
Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic
2025-05-14 15:57:37 +08:00
Wenkai Yin(尹文开)
f64fb36508 Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic
Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-05-14 15:34:24 +08:00
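As a minimal sketch of the guard this fix describes (hypothetical names, not the actual Velero code), Done() is invoked only on the first transition of a PodVolumeBackup into a final phase, so duplicate status updates cannot drive the WaitGroup counter negative:

```go
package main

import (
	"fmt"
	"sync"
)

// pvbTracker sketches the fix: call wg.Done() only the first time a given
// PodVolumeBackup reaches a final phase, even if further updates arrive.
type pvbTracker struct {
	mu   sync.Mutex
	done map[string]bool // PVB name -> already counted as finished
	wg   sync.WaitGroup
}

func (t *pvbTracker) expect(n int) { t.wg.Add(n) }

func (t *pvbTracker) onUpdate(name string, finalPhase bool) {
	if !finalPhase {
		return
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.done[name] {
		return // already counted; a second Done() would panic the WaitGroup
	}
	t.done[name] = true
	t.wg.Done()
}

func main() {
	t := &pvbTracker{done: map[string]bool{}}
	t.expect(1)
	t.onUpdate("pvb-1", true)
	t.onUpdate("pvb-1", true) // duplicate final-status update is ignored
	t.wg.Wait()
	fmt.Println("all tracked PVBs finished exactly once")
}
```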
Xun Jiang/Bruce Jiang
4bd86f1275 Merge pull request #8939 from blackpiglet/modify_image_usage_1.16
[cherry-pick] [1.16] Modify image usage
2025-05-14 12:49:54 +08:00
Xun Jiang
18ef5e61ad Support using image registry proxy in more cases.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-14 11:09:03 +08:00
Xun Jiang
01aa5385b5 Add default backup repository configuration for E2E.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-14 11:02:14 +08:00
Xun Jiang/Bruce Jiang
361717296b Merge pull request #8928 from Lyndon-Li/release-1.16
1.16.1 changelog update
2025-05-12 19:43:36 +08:00
Lyndon-Li
82dce51004 1.16.1 changelog update
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-12 18:38:50 +08:00
Xun Jiang/Bruce Jiang
659a352ed1 Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8926)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-12 17:00:33 +08:00
lyndon-li
9eeea4f211 Merge pull request #8922 from Lyndon-Li/release-1.16
Bump up base image
2025-05-09 17:41:34 +08:00
Lyndon-Li
e1068d6062 bump up base image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-09 17:11:36 +08:00
Xun Jiang/Bruce Jiang
bcd3d513c4 Merge pull request #8921 from Lyndon-Li/release-1.16
1.16.1 changelog
2025-05-09 16:40:01 +08:00
Lyndon-Li
5e87c3d48e 1.16.1 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-09 16:10:31 +08:00
lyndon-li
ed68b43acd Merge pull request #8911 from Lyndon-Li/release-1.16
[1.16] issue-8878: relieve node os deduction error checks
2025-05-09 12:44:03 +08:00
lyndon-li
acc8cc41c3 Merge branch 'release-1.16' into release-1.16 2025-05-09 12:07:51 +08:00
lyndon-li
f1271372e8 Merge pull request #8916 from sseago/warn-managed-fields-patch-1.16
[1.16] Warn managed fields patch 1.16
2025-05-09 12:07:06 +08:00
lyndon-li
4b39481776 Merge branch 'release-1.16' into release-1.16 2025-05-09 11:27:08 +08:00
lyndon-li
80837ee2ac Merge branch 'release-1.16' into warn-managed-fields-patch-1.16 2025-05-09 11:27:03 +08:00
lyndon-li
8de844b8d3 Merge pull request #8920 from blackpiglet/remove_gcr_1.16
[1.16][cherry-pick] Remove pushing images to GCR.
2025-05-09 11:26:15 +08:00
Xun Jiang
2809de9ead Remove pushing images to GCR.
Remove the dependency on GCR.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-09 10:39:01 +08:00
Scott Seago
ea9b4f37f3 For not found errors on managed fields, add restore warning
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-05-07 11:28:16 -04:00
Lyndon-Li
7bad9df51d issue-8878: relieve node os deduction error checks
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-07 10:57:44 +08:00
lyndon-li
0c36cc82c1 Merge pull request #8889 from blackpiglet/fix_cve_for_1.16.1
Bump Golang and golang.org/x/net to fix CVEs.
2025-04-28 15:59:18 +08:00
Xun Jiang
0d4fb1fd5e Bump Golang and golang.org/x/net to fix CVEs.
Also fix CVE for Restic.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-04-27 15:09:57 +08:00
lyndon-li
8f31599fe4 Merge pull request #8849 from Lyndon-Li/release-1.16
[1.16] Issue 8847: inherit pod info from node-agent-windows
2025-04-08 13:38:04 +08:00
lyndon-li
f8ae1495ac Merge branch 'release-1.16' into release-1.16 2025-04-08 12:57:26 +08:00
Xun Jiang/Bruce Jiang
b469d9f427 Bump base image to 0.2.57 to fix CVEs. (#8853)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-04-07 23:27:46 -04:00
Lyndon-Li
87084ce3c7 issue 8847: inherit pod info from node-agent-windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-04-07 19:41:14 +08:00
lyndon-li
3df026ffdb Merge pull request #8834 from Lyndon-Li/release-1.16
Pin velero image for 1.16.0
2025-03-31 15:15:43 +08:00
Lyndon-Li
406a730c2a pin velero image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-31 13:39:27 +08:00
lyndon-li
e5c7c7f2ae Merge pull request #8829 from blackpiglet/align_upgrade_cli_and_image_version
Align the E2E upgrade test's CLI and image version.
2025-03-31 13:18:04 +08:00
Xun Jiang
6002d56735 Align the E2E upgrade test's CLI and image version.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-28 17:12:16 +08:00
Wenkai Yin(尹文开)
6df1424a44 Merge pull request #8828 from blackpiglet/bump_e2e_upgrade_migration_source_version
Bump the migration and upgrade E2E test source version.
2025-03-28 14:13:28 +08:00
Xun Jiang/Bruce Jiang
07fd98e3fe Merge pull request #8824 from Lyndon-Li/1.16-change-log
Add 1.16 changelog and release notes
2025-03-28 13:47:49 +08:00
Xun Jiang
9d0493c2b5 Bump the migration and upgrade E2E test source version.
Add v1.16 related plugin and other image default version.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-28 11:41:39 +08:00
Wenkai Yin(尹文开)
8f8884fbb3 Merge pull request #8826 from blackpiglet/fix_migration_for_non_data_move
[E2E] Fix the non data mover migration failure.
2025-03-28 11:39:32 +08:00
Lyndon-Li
8580ef88fe add 1.16 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-28 11:38:14 +08:00
Xun Jiang
6a0c6d5b75 Fix the non data mover migration failure.
Migration cases use Kibishii as the workload, and the SC mapping ConfigMap is needed for all scenarios, because the standby cluster doesn't have the Kibishii SC after setup.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-27 18:35:30 +08:00
lyndon-li
bea46e334d Merge pull request #8822 from Lyndon-Li/1.16-doc
Add 1.16 doc
2025-03-27 17:20:26 +08:00
Lyndon-Li
b9fd3e40ed add 1.16 doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-27 17:05:16 +08:00
Wenkai Yin(尹文开)
3569ccc653 Merge pull request #8821 from Lyndon-Li/doc-upgrade-to-1.16
Add doc for upgrade to 1.16
2025-03-27 16:46:06 +08:00
lyndon-li
438a6db497 Merge pull request #8819 from blackpiglet/bump_restic_for_1.16
Bump the golang.org/x/net to v0.36.0 to fix Restic CVE.
2025-03-27 16:42:11 +08:00
Lyndon-Li
7114144278 add doc for upgrade to 1.16
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-27 14:31:50 +08:00
Wenkai Yin(尹文开)
9241b61972 Merge pull request #8820 from Lyndon-Li/1.16-read-me-and-implemented-design
Update readme and implemented design for 1.16
2025-03-27 14:03:23 +08:00
Lyndon-Li
9e9bb128a3 update readme and implemented design for 1.16
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-27 13:28:16 +08:00
Xun Jiang
96760885dc Bump the golang.org/x/net to v0.36.0 to fix Restic CVE.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-27 11:02:33 +08:00
lyndon-li
751d782293 Merge pull request #8812 from Lyndon-Li/third-party-annotation-for-maintenance-job
Add third party annotation support for maintenance job
2025-03-26 17:08:03 +08:00
Lyndon-Li
f1dcb7ba11 add third party annotation support for maintenance job
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-25 13:43:38 +08:00
lyndon-li
883e3e4aae Merge pull request #8808 from Lyndon-Li/issue-fix-8803
Issue 8803: use deterministic name to create backupRepository
2025-03-25 10:55:33 +08:00
Lyndon-Li
3c5ebbadd3 issue 8803: use deterministic name to create backupRepository
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-24 18:34:33 +08:00
Tiger Kaovilai
eaa5610904 Document schedule skipImmediately (#8802)
Fixes #8787

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-24 15:33:59 +08:00
Wenkai Yin(尹文开)
76a5866107 Merge pull request #8799 from kaovilai/kind-cv2-images
e2e: Enable KinD containerdv2 images
2025-03-24 15:02:36 +08:00
dependabot[bot]
efad9a0e94 Bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 (#8806)
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 15:01:52 +08:00
lyndon-li
d086cb2fc3 Merge pull request #8797 from blackpiglet/update_vs_vsc_name
Modify how the restore workflow uses the resource name
2025-03-21 10:55:42 +08:00
Tiger Kaovilai
a98c559818 Enable containerdv2 images
Fixes https://github.com/vmware-tanzu/velero/issues/8648

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-20 09:13:25 -05:00
Xun Jiang
1652e6b27f Modify how the restore workflow uses the resource name.
The restore workflow used the name variable to represent both the backed-up resource and the resource to be restored, but the restored resource name may differ from the backup one: PVs and VSCs are global resources that are renamed to avoid conflicts.
Rename the name variable to backupResourceName, and use obj.GetName() for the restore operation.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-20 18:42:09 +08:00
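As an illustration of the rename this commit describes (identifiers below are illustrative, not the actual pkg/restore code), the backed-up name is kept for lookups against the backup contents while obj.GetName() supplies the name actually used for the restored object:

```go
package main

import "fmt"

// restoredObject stands in for a Kubernetes object after restore-time renaming.
type restoredObject struct{ name string }

func (o restoredObject) GetName() string { return o.name }

// restoreItem sketches the distinction: backupResourceName is the name captured
// in the backup (used to look up backed-up data), while obj.GetName() is the name
// created in the target cluster, which can differ for global resources such as
// PVs and VolumeSnapshotContents that are renamed to avoid conflicts.
func restoreItem(obj restoredObject, backupResourceName string) {
	fmt.Printf("restoring %s (backed up as %s)\n", obj.GetName(), backupResourceName)
}

func main() {
	restoreItem(restoredObject{name: "pvc-1234-renamed"}, "pvc-1234")
}
```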
Tiger Kaovilai
71863e017d Bump kind cli to v0.27.0 (#8699)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-20 11:31:52 +08:00
hu-keyu
0d27d5258f issue8720: log doesn't show pv name (#8771)
* fix: log doesn't show pv name

Signed-off-by: hu-keyu <hzldd999@gmail.com>

* fix: add changelog

Signed-off-by: hu-keyu <hzldd999@gmail.com>

* update changelog fileName

Signed-off-by: hu-keyu <hzldd999@gmail.com>

---------

Signed-off-by: hu-keyu <hzldd999@gmail.com>
2025-03-13 18:14:05 -04:00
Roger Zimmermann
38a52980cc Issue #8772 ensure pv removed (#8777)
* ensure pv has been deleted

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

* ensure delete pv unit test

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

* comment, errors

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

* updated changelog
Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

* pass value

Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

* function renamed as suggested

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>

---------

Signed-off-by: Roger Zimmermann <roger.zimmermann@inventx.ch>
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
2025-03-13 10:39:25 -04:00
Shubham Pampattiwar
1b4c17bf9c Merge pull request #8784 from blackpiglet/update_repo_maintanence_doc
Fix the JSON format error in the repository-maintenance.md
2025-03-13 07:01:09 -07:00
Xun Jiang
b83148f626 Fix the JSON format error in the repository-maintenance.md
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-03-13 16:45:53 +08:00
Xun Jiang/Bruce Jiang
0fb63232ba Merge pull request #8782 from vmware-tanzu/dependabot/go_modules/golang.org/x/net-0.36.0
Bump golang.org/x/net from 0.34.0 to 0.36.0
2025-03-13 10:36:46 +08:00
dependabot[bot]
55d1592aaa Bump golang.org/x/net from 0.34.0 to 0.36.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.34.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-13 01:58:14 +00:00
lyndon-li
d1a244e12f Merge pull request #8774 from mpryc/upstream_8649
issue 8649: host_pods should not be mandatory to node-agent
2025-03-12 08:39:00 +08:00
Shubham Pampattiwar
6337c52cfb Merge pull request #8755 from sseago/csi-pvc-annotations
Move pvc annotation removal from CSI RIA to regular PVC RIA
2025-03-11 10:45:58 -07:00
Michal Pryc
b4eee87e18 issue 8649: host_pods should not be mandatory to node-agent
Enables the node-agent to start even if the
/host_pods path does not exist.

If the path is present, the existing logic
remains unchanged, ensuring it is readable.

Signed-off-by: Michal Pryc <mpryc@redhat.com>
2025-03-11 13:11:25 +01:00
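A minimal sketch of the startup behavior described above (hypothetical helper, not the node-agent's actual code): the check is skipped when /host_pods is absent, and readability is still enforced when the path exists:

```go
package main

import (
	"fmt"
	"os"
)

// checkHostPods mirrors the described behavior: if /host_pods is absent the
// node-agent starts anyway; if it exists, it must still be readable.
func checkHostPods(path string) error {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("%s not found, continuing without it\n", path)
		return nil
	}
	if _, err := os.ReadDir(path); err != nil {
		return fmt.Errorf("%s exists but is not readable: %w", path, err)
	}
	return nil
}

func main() {
	if err := checkHostPods("/host_pods"); err != nil {
		fmt.Println("startup check failed:", err)
		os.Exit(1)
	}
}
```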
lyndon-li
eb5634f41e Merge pull request #8770 from Lyndon-Li/issue-fix-8754
Issue 8754: add third party annotation support
2025-03-11 16:41:19 +08:00
Xun Jiang/Bruce Jiang
7a311d6ee0 Merge pull request #8775 from ywk253100/250311_doc
Fix incorrect indent in doc
2025-03-11 16:39:00 +08:00
Wenkai Yin(尹文开)
1eda42a9f2 Fix incorrect indent in doc
Fix incorrect indent in doc

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-03-11 14:17:26 +08:00
Lyndon-Li
b170892e64 issue 8754: add third party annotation support
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-10 10:38:26 +08:00
Tiger Kaovilai
1516e72ccb Merge pull request #8759 from shubham-pampattiwar/add-vp-labels-docs
Add docs for volume policy with labels as a criteria
2025-03-07 09:52:49 -06:00
Shubham Pampattiwar
deb262c1b0 Add docs for volume policy with labels as a criteria
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-03-06 08:26:17 -08:00
Scott Seago
fe14a2c934 Move pvc annotation removal from CSI RIA to regular PVC RIA
Combine existing PVC non-CSI RIAs and move annotation
removal out of the CSI plugin to fix issues with
CSI volumes when using fs-backup

Signed-off-by: Scott Seago <sseago@redhat.com>
2025-03-05 15:55:55 -05:00
Shubham Pampattiwar
512199723f Merge pull request #8693 from shubham-pampattiwar/obj-status-restore-docs
Add docs for object level status restore
2025-03-05 12:05:40 -08:00
Tiger Kaovilai
05112fef29 Merge pull request #8734 from lindhe/patch-1
Fix typo "Defaults is"
2025-03-05 10:55:39 -06:00
Tiger Kaovilai
5dbf002da7 go-mod-upgrade: golang.org/x/oauth2@v0.27.0 (#8752)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-05 09:21:16 -05:00
Wenkai Yin(尹文开)
d18278aa58 Merge pull request #8737 from Lyndon-Li/issue-fix-8733
Issue 8733: add doc for restorePVC
2025-03-05 15:07:33 +08:00
Wenkai Yin(尹文开)
d4e40c01d8 Merge pull request #8736 from Lyndon-Li/issue-fix-8426
Add doc for Windows support
2025-03-05 15:06:26 +08:00
Xun Jiang/Bruce Jiang
5e68beb13f Merge pull request #8743 from kaovilai/crypto-652135
CVE-2025-22869
2025-03-05 11:25:12 +08:00
Tiger Kaovilai
945911ccb5 dockerfile go:1.23
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-04 11:18:14 -06:00
Tiger Kaovilai
bf2b1185bf CVE-2025-22869 + go1.23
Including https://go-review.googlesource.com/c/crypto/+/652135 patch to fix CVE

```sh
go get golang.org/x/crypto@v0.35.0 toolchain@none && go mod tidy
```

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-03-04 09:47:16 -06:00
Matthieu MOREL
aa88d1cfd3 chore: update Go to 1.23 and toolchain to 1.23.6 (#8717)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Co-authored-by: Janne Kataja <janne.kataja@sdx.com>
2025-03-04 10:33:33 -05:00
Matthieu MOREL
6a6a237ba7 Bump golangci-lint from v1.57.2 to v1.64.5 (#8641)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-03-04 13:55:29 +05:30
lyndon-li
3c22de7fe3 Merge pull request #8747 from Lyndon-Li/doc-for-maintenance-history
Add doc for maintenance history
2025-03-04 14:22:42 +08:00
Lyndon-Li
88455b1e83 add doc for maintenance history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-04 11:09:51 +08:00
Lyndon-Li
5ed2401b9d issue 8733: add doc for restorePVC
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-04 10:54:03 +08:00
Lyndon-Li
1746291e59 issue-8426: add doc for Windows support
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-28 17:06:18 +08:00
Lyndon-Li
cb400e1d6b Merge branch 'main' into issue-fix-8426 2025-02-28 17:02:53 +08:00
Lyndon-Li
b334bfc3d7 issue-8426: add doc for Windows support
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-28 16:40:55 +08:00
Andreas Lindhé
7208f94c4f Fix typo "Defaults is"
This change fixes a minor typo in the Backup Hooks documentation, changing "Defaults is" to "Defaults to".

Signed-off-by: Andreas Lindhé <7773090+lindhe@users.noreply.github.com>
2025-02-28 08:42:39 +01:00
lyndon-li
3821906ffa Merge pull request #8729 from Lyndon-Li/iss-fix-8475
Issue 8475: refactor build-from-source doc
2025-02-28 14:37:50 +08:00
lyndon-li
81609484ae Merge pull request #8728 from ywk253100/250227_pvb
Return directly if no pod volume backups are tracked
2025-02-28 11:16:26 +08:00
Lyndon-Li
3c323060c0 issue 8475: refactor build-from-source doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-27 18:48:10 +08:00
Wenkai Yin(尹文开)
ee43d040a6 Return directly if no pod volume backups are tracked
Return directly if no pod volume backups are tracked

Fixes #8723

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-02-27 16:56:03 +08:00
lyndon-li
a7f977f198 Merge pull request #8727 from Lyndon-Li/bump-up-kopia-0.19.0
Bump up kopia to 0.19.0
2025-02-27 15:59:05 +08:00
Lyndon-Li
f12b9c15b2 bump up kopia to 0.19.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-27 13:35:47 +08:00
Shubham Pampattiwar
0eb1040a0a Add labels as a criteria for volume policy (#8713)
* Add labels as a criteria for volume policy

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

handle err

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use labels selector.matches

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

make update

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove fetching pvc from volume policy filtering

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add more ut coverage

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* minor updates

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use VolumeFilterData struct in GetMatchAction func

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

update parsePVC func and add more ut

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

lint fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-02-26 10:02:45 -05:00
Wenkai Yin(尹文开)
a45c9f27e8 Merge pull request #8715 from Lyndon-Li/issue-fix-8706
Issue 8706: for immediate volumes, get node from volumeattachment
2025-02-25 14:25:45 +08:00
Xun Jiang/Bruce Jiang
f79b825cf1 Merge pull request #8684 from blackpiglet/7979_fix
7979 fix
2025-02-25 13:27:01 +08:00
Xun Jiang/Bruce Jiang
ad08c7a3ff Merge pull request #8712 from sseago/pod-initcontainer-securitycontext
Copy SecurityContext from Containers[0] if present for PVR
2025-02-25 11:02:57 +08:00
lyndon-li
564e77465b Merge pull request #8581 from kaovilai/configKopiaMaintInterval
Configurable Kopia Maintenance Interval
2025-02-25 10:56:23 +08:00
Xun Jiang
6b7dd12bf7 Modify VS and VSC restore actions.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-02-25 10:44:45 +08:00
Scott Seago
21db5f8853 Copy SecurityContext from Containers[0] if present for PVR
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-02-24 15:23:29 -05:00
lyndon-li
9295be4cc0 Merge pull request #8714 from kaovilai/gitignore_debug.test
Ignore debug.test* from vscode debug
2025-02-24 14:20:52 +08:00
Tiger Kaovilai
178b6e3db5 add more maintenance interval unit tests
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-21 14:22:11 -06:00
Lyndon-Li
bf0d909524 issue 8706: for immediate volumes, get node from volumeattachment
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-21 13:27:44 +08:00
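A sketch of the lookup this commit names, assuming a client-go clientset (hypothetical helper, not the Velero implementation): for volumes with Immediate binding the PVC carries no selected-node annotation, so the node is recovered from the VolumeAttachment that references the PV:

```go
package nodelookup

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findAttachedNode scans VolumeAttachments for the one referencing pvName and
// returns the node it is attached to.
func findAttachedNode(ctx context.Context, client kubernetes.Interface, pvName string) (string, error) {
	vaList, err := client.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, va := range vaList.Items {
		src := va.Spec.Source.PersistentVolumeName
		if src != nil && *src == pvName {
			return va.Spec.NodeName, nil
		}
	}
	return "", fmt.Errorf("no VolumeAttachment found for PV %s", pvName)
}
```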
Tiger Kaovilai
1e6af39458 Ignore debug.test* from vscode debug
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 19:40:39 -06:00
Tiger Kaovilai
3fb8c72b6c empty string case
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:49 -06:00
Tiger Kaovilai
92617d07c5 log only if not equal
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:49 -06:00
Tiger Kaovilai
1b7d9014a5 add to unmarshal test
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:49 -06:00
Tiger Kaovilai
f93eed56ca doc update, move under kopia repo header
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:49 -06:00
Tiger Kaovilai
271ff180e9 lint
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Tiger Kaovilai
beb392e0db doc updates
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Tiger Kaovilai
21ae1cbe82 Address https://github.com/vmware-tanzu/velero/pull/8581#pullrequestreview-2622445640
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Tiger Kaovilai
3bb39d9331 Address https://github.com/vmware-tanzu/velero/pull/8581#pullrequestreview-2622443771
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Tiger Kaovilai
c153651044 Pass all backupRepoConfig keys to storageVariables, and thus RepoOptions.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Tiger Kaovilai
5a79e70d79 Configurable Kopia Maintenance Interval
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

comment update

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

comment

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-20 16:40:48 -06:00
Shubham Pampattiwar
0f81772e83 Merge pull request #8503 from shubham-pampattiwar/vp-design-label-criteria
Design to add label selector as a criteria for volume policy
2025-02-20 14:21:44 -08:00
Shubham Pampattiwar
62889238ed Design to add label selector as a criteria for volume policy
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use pvc labels for vp criteria

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

update design

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add examples and update non-goals

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-02-20 11:51:47 -08:00
Lyndon-Li
cf58cc8fb2 Merge branch 'main' into issue-fix-8706 2025-02-20 19:20:45 +08:00
Lyndon-Li
e2a7986629 issue 8706: for immediate volumes, get node from volumeattachment
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-20 19:19:28 +08:00
Xun Jiang
eb77151f48 Delete VSC after backup completes.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-02-19 14:36:59 +08:00
Xun Jiang
620a116e7f Modify CSI related DeleteItemActions.
Remove the VS DIA.
Modify the VSC DIA: create then delete the VSC.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-02-19 14:36:59 +08:00
Xun Jiang
3843ae7030 Delete VolumeSnapshotContent from the backup sync process.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-02-19 14:36:59 +08:00
Daniel Jiang
e64806a651 Merge pull request #8695 from blackpiglet/golangci_config_fix
Modify golangci configuration to make it work.
2025-02-19 14:26:16 +08:00
Wenkai Yin(尹文开)
82e3b1190c Merge pull request #8703 from ywk253100/250213_makefile
Update Makefile to support pushing images to an insecure registry
2025-02-19 14:16:49 +08:00
Xun Jiang
e736ef71df Modify golangci configuration to make it work.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-02-19 13:58:04 +08:00
Xun Jiang/Bruce Jiang
2b0c5094bd Merge pull request #8700 from kaovilai/kind-containerdv2-skip
e2e: skip more containerdv2 kind images
2025-02-19 13:55:46 +08:00
Wenkai Yin(尹文开)
bca5e55620 Update Makefile to support pushing images to an insecure registry
Update Makefile to support pushing images to an insecure registry

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-02-19 11:22:47 +08:00
Wenkai Yin(尹文开)
80cea31a84 Merge pull request #8694 from ywk253100/250214_hook
Run backup post hooks inside ItemBlock synchronously
2025-02-18 14:37:27 +08:00
Tiger Kaovilai
4c6fedd563 e2e: skip more containerdv2 kind images
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-17 21:30:56 -06:00
Tiger Kaovilai
a3cee616dc Upgrade go.mod k8s.io/ go.mod to v0.31.3 and set klog.SetLogger() for client-go (#8450)
Also bumped to support upgraded k8s.io/ deps.
- controller-gen to v0.16.5
- sigs.k8s.io/controller-runtime v0.19.2

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-02-17 15:05:10 -05:00
Wenkai Yin(尹文开)
7aa8040c09 Run backup post hooks inside ItemBlock synchronously
Run backup post hooks inside ItemBlock synchronously as the ItemBlocks are handled asynchronously

Fixes #8516

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-02-17 13:27:41 +08:00
Shubham Pampattiwar
e0153e011e Add docs for object level status restore
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-02-14 14:19:54 -08:00
Tiger Kaovilai
9235fe1eb1 Merge pull request #8676 from blackpiglet/7979_design
Add the design of cleaning artifacts generated during CSI B/R
2025-02-14 08:19:07 -06:00
Daniel Jiang
d9721fddb5 Merge pull request #8665 from aj-2000/user/aj-2000/validate-from-schedule-flag
Validate `--from-schedule` flag in create backup command
2025-02-14 18:57:39 +08:00
Xun Jiang/Bruce Jiang
c0c4407657 Merge pull request #8681 from blackpiglet/8238_fix
Don't run maintenance on the ReadOnly BackupRepositories.
2025-02-14 11:32:49 +08:00
Wenkai Yin(尹文开)
e3a64065f1 Merge pull request #8659 from sseago/parallel-itemblocks
Implement parallel ItemBlock processing via backup_controller goroutines
2025-02-14 10:42:14 +08:00
Xun Jiang/Bruce Jiang
a6ae21e7a3 Add the design of cleaning artifacts generated during CSI B/R
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2025-02-13 15:45:43 +08:00
Xun Jiang/Bruce Jiang
fa156c3961 Don't run maintenance on the ReadOnly BackupRepositories.
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2025-02-13 13:46:53 +08:00
Wenkai Yin(尹文开)
e446d92d4c Merge pull request #8464 from shubham-pampattiwar/obj-status-restore-impl
Allowing Object-Level Resource Status Restore
2025-02-13 13:37:58 +08:00
Wenkai Yin(尹文开)
c8e623864f Merge pull request #8679 from ywk253100/250211_waitgroup
Fix WaitGroup panic issue
2025-02-13 11:05:05 +08:00
Shubham Pampattiwar
893621c1ad Allowing Object-Level Resource Status Restore
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

Update impl according to design

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

make update

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

update logging

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-02-12 18:59:25 -08:00
Scott Seago
fcfb2fd9ee Implement parallel ItemBlock processing via backup_controller goroutines
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-02-12 12:03:37 -05:00
Wenkai Yin(尹文开)
cdcd6eb99d Fix WaitGroup panic issue
Make sure WaitGroup.Add() is called before WaitGroup.Done() to avoid WaitGroup panic issue

Fixes #8657

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-02-12 13:56:05 +08:00
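A minimal sketch of the ordering rule this fix enforces (illustrative only): every Add() happens before the goroutine that will call Done() is started, so the counter cannot be decremented ahead of being incremented while Wait() is pending:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	items := []string{"a", "b", "c"}

	for _, item := range items {
		wg.Add(1) // Add before launching the worker, never from inside it,
		go func(it string) { // so Done() can never run ahead of Add().
			defer wg.Done()
			fmt.Println("processed", it)
		}(item)
	}
	wg.Wait()
}
```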
Daniel Jiang
79707aaa60 Merge pull request #8403 from shubham-pampattiwar/status-restore-cr-design
Add Design for Allowing Object-Level Resource Status Restore
2025-02-11 19:46:30 +08:00
Tiger Kaovilai
5d9a4e84cb Merge pull request #8673 from mmorel-35/revive/unnecessary-stmt
chore: enable unnecessary-stmt from revive
2025-02-11 02:50:29 +07:00
Matthieu MOREL
9010d9b13e chore: enable unnecessary-stmt from revive
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-08 12:11:22 +00:00
Xun Jiang/Bruce Jiang
0bf2252e10 Merge pull request #8671 from mmorel-35/revive/increment-decrement
chore: enable increment-decrement from revive
2025-02-08 10:55:34 +08:00
Matthieu MOREL
ae5e94e822 chore: enable increment-decrement from revive
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 20:58:39 +01:00
Ajay Sharma
06fc9da925 refactor code
Signed-off-by: Ajay Sharma <ajaysharma.13122000@gmail.com>
2025-02-07 15:16:34 +00:00
Xun Jiang/Bruce Jiang
f56698e27e Merge pull request #8658 from vmware-tanzu/dependabot/github_actions/actions/stale-9.1.0
Bump actions/stale from 9.0.0 to 9.1.0
2025-02-07 15:49:50 +08:00
Xun Jiang/Bruce Jiang
10a5b7b702 Merge pull request #8624 from mmorel-35/revive/use-any
chore: enable use-any from revive
2025-02-07 15:09:05 +08:00
lyndon-li
ba0636e8de Merge pull request #8664 from Lyndon-Li/refactor-pod-volume-context
Refactor pod volume context
2025-02-07 11:28:01 +08:00
Lyndon-Li
de170043ea rename cancel function
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-06 10:58:04 +08:00
Ajay Sharma
e9bd9f3c8d add changelog
Signed-off-by: Ajay Sharma <ajaysharma.13122000@gmail.com>
2025-02-05 17:01:21 +00:00
Ajay Sharma
3ca547f186 validate --from-schedule flag
Signed-off-by: Ajay Sharma <ajaysharma.13122000@gmail.com>
2025-02-05 14:01:31 +00:00
Lyndon-Li
5fd9df3e2c refactor pod volume context
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-02-05 16:16:44 +08:00
Shubham Pampattiwar
7442147028 Add Design for Allowing Instance-Level Resource Status Restore
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

typo fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

change instance to object

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add precedence notes and false as a valid annotation value

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-01-29 11:04:24 -08:00
dependabot[bot]
6d164f430c Bump actions/stale from 9.0.0 to 9.1.0
Bumps [actions/stale](https://github.com/actions/stale) from 9.0.0 to 9.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v9.0.0...v9.1.0)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 19:34:08 +00:00
Tiger Kaovilai
6ac38cde85 Merge pull request #8651 from kaovilai/temp-ignoreContainerdv2Kind
e2e: Ignore containerdv2 KinD cluster
2025-01-27 08:35:58 +07:00
Tiger Kaovilai
b877f4acae e2e: Ignore containerdv2 KinD cluster
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-01-24 10:50:17 -05:00
lyndon-li
294bbbc69e Merge pull request #8642 from Lyndon-Li/bump-up-kopia
Bump up Kopia
2025-01-24 13:25:59 +08:00
Wenkai Yin(尹文开)
ec1eadc501 Merge pull request #8643 from Lyndon-Li/windows-support-smoking-test
Windows support smoking test
2025-01-24 10:41:58 +08:00
Lyndon-Li
7caa52c1fa bump up kopia
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-23 16:01:12 +08:00
Wenkai Yin(尹文开)
9afad9a2db Merge pull request #8630 from ywk253100/250116_update
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m31s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 35s
Close stale issues and PRs / stale (push) Successful in 9s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 54s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 52s
Handle update conflict when restoring the status
2025-01-23 13:16:46 +08:00
Daniel Jiang
bedea9c74c Merge pull request #8637 from reasonerjt/rm-leaked-vs
Clean up leaked CSI snapshot for incomplete backup
2025-01-23 12:56:12 +08:00
Matthieu MOREL
1e54f1cb15 chore: enable var-declaration from revive (#8636)
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m28s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 33s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 52s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 56s
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-22 15:56:44 -05:00
Daniel Jiang
1c372893ec Clean up leaked CSI snapshot for incomplete backup
This commit makes sure that when a backup is deleted, the controller
deletes the CSI snapshot even when the backup tarball is not uploaded.

fixes #8160

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-01-22 17:17:41 +08:00
Lyndon-Li
43fcaa2706 windows support smoking test
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-22 13:44:45 +08:00
lyndon-li
a9031eb13f Merge pull request #8626 from Lyndon-Li/repo-maintainance-for-windows-2
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m30s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 35s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m1s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 48s
Repo maintenance for windows
2025-01-21 13:47:40 +08:00
Wenkai Yin(尹文开)
f0efe2aaa1 Handle update conflict when restoring the status
Handle update conflict when restoring the status

Fixes #8184

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-01-21 13:06:24 +08:00
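Resolving a status-update conflict generally means retrying the update against a freshly read copy of the object. The sketch below uses client-go's retry helper; the group/resource and object name are illustrative, not the controller's actual call site.

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempt := 0
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempt++
		// In a real controller this closure would re-read the object and
		// retry the status update; here the first attempt fakes a conflict
		// so RetryOnConflict runs the closure again.
		if attempt == 1 {
			return apierrors.NewConflict(
				schema.GroupResource{Group: "velero.io", Resource: "restores"},
				"restore-1", fmt.Errorf("the object has been modified"))
		}
		return nil
	})
	fmt.Println("attempts:", attempt, "err:", err)
}
```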
Lyndon-Li
0a4b05cb6e repo maintenance for windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-17 19:06:57 +08:00
Matthieu MOREL
cbba3bdde7 chore: enable use-any from revive
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-17 07:58:10 +01:00
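revive's use-any rule asks for the `any` alias (identical to `interface{}` since Go 1.18) instead of the spelled-out form. A minimal before/after sketch with a made-up function name:

```go
package main

import "fmt"

// dump uses the `any` alias that revive's use-any rule prefers over the
// equivalent spelled-out `interface{}`.
func dump(v any) {
	fmt.Printf("%v\n", v)
}

func main() {
	dump(42)
	dump("velero")
}
```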
lyndon-li
5b1738abf8 Merge pull request #8580 from Lyndon-Li/recall-repo-maintenance-history-on-restart
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m31s
Run the E2E test on kind / setup-test-matrix (push) Successful in 2s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 32s
Close stale issues and PRs / stale (push) Successful in 9s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 53s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 49s
Recall repo maintenance history on restart
2025-01-17 14:08:27 +08:00
Lyndon-Li
91fcb65118 add maintenance wait backoff log
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-16 13:38:51 +08:00
lyndon-li
223e1fca70 Merge pull request #8621 from sseago/datamover-new-ns
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m31s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 32s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 52s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 49s
Always create DataUpload configmap in restore namespace
2025-01-16 11:11:50 +08:00
Scott Seago
d090d0ad44 Always create DataUpload configmap in restore namespace
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-01-15 16:30:13 -05:00
Lyndon-Li
0045e94072 get maintenance result only for failed jobs
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-15 17:35:12 +08:00
Lyndon-Li
3900f2f117 recall repo maintenance history on restart
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-15 15:05:02 +08:00
lyndon-li
054375093d Merge pull request #8615 from Lyndon-Li/avoid-creating-repo-when-bsl-is-readonly
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m20s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 34s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 54s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 48s
Avoid creating a new repo when BSL is readonly
2025-01-15 14:41:14 +08:00
lyndon-li
1d3af6d160 Merge pull request #8611 from Lyndon-Li/distribute-dd-evenly
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m35s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 34s
Close stale issues and PRs / stale (push) Successful in 9s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 58s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 40s
Distribute dd evenly across nodes
2025-01-14 17:21:45 +08:00
Lyndon-Li
34c26dd476 avoid creating a new repo when BSL is readonly
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-14 17:12:46 +08:00
lyndon-li
2ef7711227 Merge pull request #8608 from Lyndon-Li/update-du-dd-progress-when-terminal-event-is-missing
Update du/dd progress on completion
2025-01-14 15:00:45 +08:00
Lyndon-Li
b52b45012b distribute dd evenly across nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-14 14:37:30 +08:00
Tiger Kaovilai
ddc1bcbdf5 Merge pull request #8609 from mmorel-35/golangci-lint/revive
chore: enable revive default rules
2025-01-14 13:35:46 +07:00
Matthieu MOREL
298b8ad992 chore: enable revive default rules
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-13 11:46:59 +00:00
Lyndon-Li
97ce5662ba Merge branch 'main' into update-du-dd-progress-when-terminal-event-is-missing 2025-01-13 19:17:53 +08:00
Lyndon-Li
411469b90c update du/dd progress on completion
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-13 18:33:32 +08:00
lyndon-li
5f7bf64d06 Merge pull request #8606 from Lyndon-Li/data-mover-pod-misc-enhancement-for-windows
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m26s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 32s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m1s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 50s
Add Windows toleration to data mover pods
2025-01-13 18:22:21 +08:00
lyndon-li
094ba59160 Merge pull request #8602 from Lyndon-Li/change-udmrepo-config-to-tmp
Change udmrepo config file location to tmp
2025-01-13 17:10:08 +08:00
Lyndon-Li
e79dbb8d60 change udmrepo config file location to tmp
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-13 15:53:54 +08:00
Lyndon-Li
5dedaca148 data mover pod misc enhancement for windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-13 15:30:47 +08:00
Tiger Kaovilai
e92069247d Merge pull request #8603 from ywk253100/250113_pvb
[cherry-pick]Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue
2025-01-13 14:22:17 +07:00
Tiger Kaovilai
fb7cf9e4ba Merge pull request #8598 from mmorel-35/partially-fix-dupword
fix: dupword on tests
2025-01-13 13:37:28 +07:00
lyndon-li
3207619f30 Merge pull request #8594 from Lyndon-Li/data-mover-restore-for-windows
Data mover restore for Windows
2025-01-13 13:04:29 +08:00
Wenkai Yin(尹文开)
1f39943291 Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue
Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue

Fixes #8587

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-01-13 12:56:26 +08:00
Lyndon-Li
fc9683688a move maintenance to a separate folder
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-13 10:57:14 +08:00
Matthieu MOREL
80bba2ee9c Update .golangci.yaml
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-10 12:22:16 +01:00
Matthieu MOREL
d8bb82b29e Update .golangci.yaml
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-10 11:52:15 +01:00
Matthieu MOREL
29a77958d5 fix: dupword on tests
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-10 11:44:06 +01:00
Lyndon-Li
a8469126d8 data mover restore for Windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-10 08:58:32 +00:00
Tiger Kaovilai
225db5e8c0 Merge pull request #8385 from mmorel-35/golangci-lint/perfsprint
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m22s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 33s
Close stale issues and PRs / stale (push) Successful in 7s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 52s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 43s
golangci-lint: enable int-conversion and fiximports rule of perfsprint
2025-01-10 15:28:21 +07:00
lyndon-li
46b8a31ef0 Merge pull request #8590 from Lyndon-Li/fix-data-mover-progress-missing-after-25-updates
Issue 8579 - set event burst
2025-01-10 15:12:51 +08:00
Lyndon-Li
32ae4091ac add event burst
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-10 14:18:07 +08:00
lyndon-li
42d2e9bfc4 Merge pull request #8591 from reasonerjt/finalize-async-op
Skip patching the PV in finalization for failed operation
2025-01-10 14:02:42 +08:00
Matthieu MOREL
05765fb2fd golangci-lint: enable int-conversion and fiximports rule of perfsprint
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-01-09 22:31:29 +00:00
Daniel Jiang
dc02caf2b0 Skip patching the PV in finalization for failed operation
This commit makes a change in the restore finalizer controller so that it
checks the status of a PVC's item operation before patching the PV that is
bound to it. If the operation is not successful, it skips patching
the PV.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-01-09 01:42:50 +08:00
lyndon-li
be5f56ab18 Merge pull request #8550 from Lyndon-Li/restore-pvc-ignore-wait-for-first-consumer
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m14s
Run the E2E test on kind / setup-test-matrix (push) Successful in 2s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 32s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m3s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 52s
Issue 8044: generic restore - allow ignoring delay binding for WaitForFirstConsumer
2025-01-08 15:14:20 +08:00
Tiger Kaovilai
dce97770cd Merge pull request #8572 from sseago/exclude-pvs-from-backup
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m16s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 34s
Close stale issues and PRs / stale (push) Successful in 7s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 59s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 43s
Don't include excluded items in ItemBlocks
2025-01-07 13:21:36 +07:00
Lyndon-Li
4ce7361f5a recall repo maintenance history on restart
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-07 12:58:43 +08:00
Scott Seago
4b09b63c2d Don't include excluded items in ItemBlocks
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-01-06 18:11:45 -05:00
Lyndon-Li
ceeab10b6e Merge branch 'main' into recall-repo-maintenance-history-on-restart 2025-01-06 17:21:52 +08:00
Lyndon-Li
6b73a256d5 recall repo maintenance history on restart
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-06 17:11:03 +08:00
Lyndon-Li
db69829fd7 repo maintenance job out of repo manager
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-06 16:25:33 +08:00
Daniel Jiang
3eaa73962b Merge pull request #8574 from ywk253100/241223_restore_helper
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m15s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 29s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 53s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 50s
Merge restore helper image into Velero server image
2025-01-06 13:48:28 +08:00
Wenkai Yin(尹文开)
3120e33ed7 Clear validation errors when schedule is valid (#8575)
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m23s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 36s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m1s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 42s
Clear validation errors when schedule is valid

Fixes #8571

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-01-03 15:13:43 -05:00
Lyndon-Li
912b116bdb always use job's time
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-03 16:50:35 +08:00
Lyndon-Li
cfad06b701 Merge branch 'main' into restore-pvc-ignore-wait-for-first-consumer 2025-01-03 14:14:37 +08:00
Wenkai Yin(尹文开)
eb5230e12f Merge restore helper image into Velero server image
Merge restore helper image into Velero server image

Fixes #8484

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-01-03 14:12:23 +08:00
lyndon-li
6860dabb85 Merge pull request #8569 from Lyndon-Li/uploaders-windows-support
Some checks failed
Run the E2E test on kind / build (push) Failing after 4m59s
Run the E2E test on kind / setup-test-matrix (push) Successful in 2s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 32s
Uploaders windows support
2025-01-03 11:32:32 +08:00
Lyndon-Li
cb22dfc482 fs uploader and block uploader support Windows nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-02 13:25:23 +08:00
Lyndon-Li
d2a25cd446 fs uploader skip system folders on windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-02 11:30:40 +08:00
Lyndon-Li
bc6414672e disable block volume data mover on windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-02 11:28:21 +08:00
Lyndon-Li
6ff0aa32e3 recall existing repo maintenance to history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-01-02 11:16:46 +08:00
Wenkai Yin(尹文开)
03d0bd9d22 Merge pull request #8555 from Lyndon-Li/data-mover-backup-for-windows-nodes
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m7s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 34s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 57s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 49s
Data mover backup for Windows nodes
2025-01-02 11:15:54 +08:00
Lyndon-Li
f5d13aeb17 data mover backup for Windows nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-26 02:46:08 +00:00
Lyndon-Li
a56b06bab1 issue 8044: generic restore - allow ignoring WaitForFirstConsumer
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-26 10:29:15 +08:00
lyndon-li
78c97d93b5 Merge pull request #8518 from Lyndon-Li/fail-fs-backup-on-windows-nodes
Some checks failed
Run the E2E test on kind / build (push) Failing after 4m19s
Run the E2E test on kind / setup-test-matrix (push) Successful in 7s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 43s
Close stale issues and PRs / stale (push) Successful in 8s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m1s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 51s
fs-backup for clusters with windows nodes
2024-12-24 15:15:15 +08:00
Lyndon-Li
4e0a0e0b72 fail fs-backup for windows nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-24 14:26:02 +08:00
Xun Jiang/Bruce Jiang
9dcfe164d8 Merge pull request #8553 from blackpiglet/bump_restic_go_mod
[cherry-pick] Bump Restic go.mod to fix CVEs.
2024-12-24 14:17:16 +08:00
Xun Jiang/Bruce Jiang
fa8f464fb3 Merge pull request #8551 from blackpiglet/migration_init
Some checks failed
Run the E2E test on kind / build (push) Failing after 4m57s
Run the E2E test on kind / setup-test-matrix (push) Successful in 18s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 43s
[cherry-pick] Modify the Init logic to fix the migration case error.
2024-12-24 11:31:01 +08:00
Xun Jiang/Bruce Jiang
20a647b265 Merge pull request #8552 from blackpiglet/skip_deprecation_message_main
[cherry-pick] Skip the deprecation message for the dry-run install CLI JSON output.
2024-12-24 11:30:32 +08:00
Xun Jiang
e68dca0112 Bump Restic go.mod to fix CVEs.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-12-24 11:19:02 +08:00
Xun Jiang
9486bd0acb Skip the deprecation message for the dry-run install CLI JSON output.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-12-24 11:04:23 +08:00
Xun Jiang
938dd3c661 Modify the Init logic to fix the migration case error.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-12-24 11:00:42 +08:00
Daniel Jiang
eeee79e551 Merge pull request #8532 from Lyndon-Li/isolate-message-in-backup-repo
Some checks failed
Run the E2E test on kind / build (push) Failing after 4m51s
Run the E2E test on kind / setup-test-matrix (push) Successful in 12s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 40s
Close stale issues and PRs / stale (push) Successful in 7s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m8s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m2s
Add maintenance history for backupRepository CRs
2024-12-23 19:29:52 +08:00
Lyndon-Li
623e023bb3 wait node-agent for Windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-23 19:04:40 +08:00
Wenkai Yin(尹文开)
e725f89906 Merge pull request #8548 from ywk253100/241223_fix
[cherry-pick] Bug fix: increase the WaitGroup counter before starting the goroutine
2024-12-23 18:22:56 +08:00
Wenkai Yin(尹文开)
14e71fa2cd Bug fix: increase the WaitGroup counter before starting the goroutine
Bug fix: increase the WaitGroup counter before starting the goroutine

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-12-23 17:26:36 +08:00
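The fix follows the standard Go pattern: increment the WaitGroup counter on the parent goroutine before launching the worker, otherwise Wait may observe a zero counter and return too early. A self-contained sketch with made-up item names, not Velero's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	items := []string{"pvb-1", "pvb-2", "pvb-3"}

	for _, item := range items {
		// Increment the counter before the goroutine starts; calling
		// wg.Add inside the goroutine can race with wg.Wait.
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			fmt.Println("processing", name)
		}(item)
	}

	wg.Wait()
}
```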
Lyndon-Li
92390e9af5 add repo maintain result in history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-23 15:37:27 +08:00
Lyndon-Li
77f1141ef5 backup repo crd changes for repo maintenance history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-23 15:24:17 +08:00
Daniel Jiang
703a726cf2 Merge pull request #8541 from kaovilai/CVEs
CVE-2024-45337 CVE-2024-45338
2024-12-23 15:13:17 +08:00
Tiger Kaovilai
8cb04bba33 CVE-2024-45337 CVE-2024-45338
Replaces #8514

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-12-21 00:59:48 +07:00
lyndon-li
e85f18dc59 Merge pull request #8538 from Lyndon-Li/hide-restic-deprecation-warning-for-install-crd-only
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m14s
Run the E2E test on kind / setup-test-matrix (push) Successful in 8s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 50s
Close stale issues and PRs / stale (push) Successful in 6s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m2s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 52s
hide restic deprecation warning for install with crd-only
2024-12-20 16:00:33 +08:00
Lyndon-Li
be97a5c1c6 hide restic deprecation warning for install with crd-only
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-20 14:48:03 +08:00
Lyndon-Li
3504546ba9 Merge branch 'main' into fail-fs-backup-on-windows-nodes 2024-12-20 13:20:01 +08:00
Lyndon-Li
cae7a7a901 Merge branch 'main' into fail-fs-backup-on-windows-nodes 2024-12-20 11:41:45 +08:00
lyndon-li
ea93c00cc2 Merge pull request #8504 from Lyndon-Li/linux-windows-hybrid-deploy
Linux windows hybrid deploy
2024-12-20 11:40:25 +08:00
Lyndon-Li
3b2c50b459 add repo maintain result in history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-19 16:20:15 +08:00
Lyndon-Li
c9bfd33077 isolate repo maintenance history
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-19 15:33:58 +08:00
Wenkai Yin(尹文开)
975e6bdc6c Merge pull request #8525 from Lyndon-Li/fix-gcr-push-problem
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m34s
Run the E2E test on kind / setup-test-matrix (push) Successful in 13s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m9s
Close stale issues and PRs / stale (push) Successful in 17s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 54s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 55s
Fix GCR image missing problem
2024-12-19 10:07:07 +08:00
Lyndon-Li
876a1fc30f fix gcr image missing problem
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-18 20:13:42 +08:00
Lyndon-Li
dfdb1c139d backup repo crd changes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-18 10:56:46 +00:00
Wenkai Yin(尹文开)
a663cc4a76 Merge pull request #8512 from ywk253100/251213_pause
Fix issue: backup schedule pause/unpause doesn't work
2024-12-18 17:24:02 +08:00
Lyndon-Li
4ad9c2485a hybrid deploy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-18 10:50:23 +08:00
Lyndon-Li
a711b1067b fail fs-backup for windows nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-18 10:46:00 +08:00
Lyndon-Li
99ba81e5d1 add use-node-agent-windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-17 13:54:03 +08:00
Lyndon-Li
617411fa5a Merge branch 'main' into linux-windows-hybrid-deploy 2024-12-17 13:46:52 +08:00
Lyndon-Li
fe0a45eac6 restrict velero server to linux nodes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-17 13:38:33 +08:00
Lyndon-Li
a5a6e47e42 add use-node-agent-windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-17 13:27:51 +08:00
Lyndon-Li
11cd6d922b hybrid deploy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-17 13:05:46 +08:00
Wenkai Yin(尹文开)
010fd1cb1d Merge pull request #8509 from ywk253100/241212_hook_fix
Fix backup post hook issue
2024-12-17 13:02:25 +08:00
Wenkai Yin(尹文开)
6e34c09d84 Fix issue: backup schedule pause/unpause doesn't work
The issue is caused by a change in controller-runtime: WithEventFilter() doesn't apply to WatchesRawSource(),
so this commit sets the Predicate for WatchesRawSource() separately

Fixes #8437

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-12-13 16:07:53 +08:00
Wenkai Yin(尹文开)
0224d99889 Merge pull request #8482 from Lyndon-Li/data-mover-exposer-diagnostic
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m17s
Run the E2E test on kind / setup-test-matrix (push) Successful in 5s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m3s
Close stale issues and PRs / stale (push) Has started running
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Has started running
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Has started running
Data mover exposer diagnostic
2024-12-13 14:28:37 +08:00
Wenkai Yin(尹文开)
c43fc42c25 Fix backup post hook issue
Fix backup post hook issue

Fixes #8159

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-12-13 12:25:45 +08:00
lyndon-li
cd01222d8e Merge pull request #8508 from Lyndon-Li/issue-fix-8267-info-when-expose-error
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m21s
Run the E2E test on kind / setup-test-matrix (push) Successful in 14s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m1s
Close stale issues and PRs / stale (push) Successful in 26s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m27s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m8s
Issue 8267: enhance the error message when expose fails
2024-12-12 17:00:44 +08:00
Daniel Jiang
cb7758f72b Merge pull request #8441 from blackpiglet/refactor_migration_e2e
Refactor migration E2E case
2024-12-12 12:14:24 +08:00
Lyndon-Li
8b545532e2 issue 8267: add informative logs when expose fails
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-12 11:19:26 +08:00
Daniel Jiang
eb48cbd60f Merge pull request #8297 from kaovilai/aws-getbucketregion-hint
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m8s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 57s
Close stale issues and PRs / stale (push) Successful in 12s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m16s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m19s
Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go
2024-12-11 14:19:11 +08:00
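GetBucketRegion needs a region to send its probe request to, and that hint comes from the client's configured region. A sketch assuming the AWS SDK for Go v2 (Velero's helper in pkg/repository/config/aws.go may use a different SDK version, and the bucket name here is made up):

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// The region set here acts as the hint: GetBucketRegion sends its probe
	// request to this region and reads the bucket's real region from the
	// response headers.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		panic(err)
	}

	region, err := manager.GetBucketRegion(ctx, s3.NewFromConfig(cfg), "example-bucket")
	fmt.Println(region, err)
}
```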
Wenkai Yin(尹文开)
26661c775f Merge pull request #8498 from Lyndon-Li/move-accept-info-to-du-dd-cr
Move the accepted info from annotations to DU/DD CR
2024-12-11 13:22:39 +08:00
Lyndon-Li
0ea4eb563a hybrid deploy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-10 18:28:18 +08:00
lyndon-li
ff6ea15796 Merge pull request #8476 from Lyndon-Li/build-hybrid-image
Some checks failed
Run the E2E test on kind / build (push) Failing after 5m33s
Run the E2E test on kind / setup-test-matrix (push) Successful in 4s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m20s
Close stale issues and PRs / stale (push) Successful in 20s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m12s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m5s
Build hybrid image
2024-12-10 16:50:06 +08:00
Lyndon-Li
34e417bdac add diagnostic for data mover exposer
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-10 14:00:31 +08:00
lyndon-li
a1cf952b8d Issue 8433: add third party labels to data mover pods when the same labels exist in node-agent pods (#8487)
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m42s
Run the E2E test on kind / setup-test-matrix (push) Successful in 8s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m26s
Close stale issues and PRs / stale (push) Successful in 14s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m7s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m19s
* issue 8433: add ask label to data mover pods

Signed-off-by: Lyndon-Li <lyonghui@vmware.com>

* check existence of the same label from node-agent

Signed-off-by: Lyndon-Li <lyonghui@vmware.com>

---------

Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-09 12:44:39 -05:00
Lyndon-Li
86082eb137 move the accepted info from annotations to DU/DD CR
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-09 16:39:04 +08:00
lyndon-li
11f100fc59 Merge pull request #8486 from Lyndon-Li/fix-issue-8485-prepare-timeout-not-work
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m31s
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m7s
Fix prepare timeout issue
2024-12-09 14:54:03 +08:00
Tiger Kaovilai
b588dc926d Merge pull request #8491 from reasonerjt/restore-help-secctx
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m55s
Run the E2E test on kind / setup-test-matrix (push) Successful in 10s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m15s
Close stale issues and PRs / stale (push) Successful in 18s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m20s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m19s
Add SecurityContext to restore-helper
2024-12-06 10:27:36 -05:00
Daniel Jiang
4b7f93189d Add SecurityContext to restore-helper
This commit adds a SecurityContext that complies with the "restricted" level
of the Pod Security Standards to the "restore-helper" initContainer.
It ensures the restore won't fail when the cluster enforces PSA.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-12-06 17:30:41 +08:00
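A sketch of a SecurityContext that meets the "restricted" Pod Security Standard, expressed with the core/v1 Go types; the field values follow the upstream policy, but the container definition is illustrative rather than Velero's actual restore-helper initContainer:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsNonRoot := true
	allowPrivilegeEscalation := false

	// Settings required by the "restricted" Pod Security Standard:
	// non-root, no privilege escalation, all capabilities dropped,
	// and the RuntimeDefault seccomp profile.
	sc := &corev1.SecurityContext{
		RunAsNonRoot:             &runAsNonRoot,
		AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}

	initContainer := corev1.Container{
		Name:            "restore-helper",
		SecurityContext: sc,
	}
	fmt.Println(initContainer.Name)
}
```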
Lyndon-Li
bcba234035 Merge branch 'main' into build-hybrid-image 2024-12-06 15:57:07 +08:00
Lyndon-Li
ed9af610e5 support specified buildx instance
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-06 14:48:47 +08:00
Tiger Kaovilai
aa7ca15159 Merge pull request #8489 from schen1/fix/aws-link
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m55s
Run the E2E test on kind / setup-test-matrix (push) Successful in 4s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m29s
Close stale issues and PRs / stale (push) Successful in 29s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m44s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m6s
Fix: AWS Go SDK URL
2024-12-05 12:00:02 -05:00
Sylvain Chen
4f634dc3ab Fix: AWS Go SDK URL
Signed-off-by: Sylvain Chen <sylvain.chen1@gmail.com>
2024-12-05 14:30:40 +01:00
Lyndon-Li
cbdbbe26c2 fix prepare timeout issue
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-05 17:24:12 +08:00
Tiger Kaovilai
04d6c79179 Merge pull request #8471 from vmware-tanzu/8440_fix_main
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m42s
Run the E2E test on kind / setup-test-matrix (push) Successful in 4s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m9s
[main] Add nil check for updating DataUpload VolumeInfo in finalizing phase
2024-12-05 01:17:19 -05:00
Shubham Pampattiwar
6c0ed1e5d2 Merge pull request #8366 from sseago/synchronise-backedupitems
Some checks failed
Run the E2E test on kind / build (push) Failing after 6m39s
Run the E2E test on kind / setup-test-matrix (push) Successful in 4s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 1m11s
Close stale issues and PRs / stale (push) Successful in 14s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 1m20s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m11s
Make BackedUpItems thread safe
2024-12-04 07:50:45 -08:00
Lyndon-Li
b607259563 add diagnostic for data mover exposer
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-04 14:49:58 +08:00
Lyndon-Li
abbfac09f4 Merge branch 'main' into data-mover-exposer-diagnostic 2024-12-04 10:33:57 +08:00
Lyndon-Li
baf74d67a7 build hybrid image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-04 10:29:34 +08:00
Lyndon-Li
e4e9b18b37 add diagnostic for data mover exposer
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-04 10:28:50 +08:00
lyndon-li
2e5df858ad Merge pull request #8472 from Lyndon-Li/ping-kopia-to-0.18-branch
Some checks failed
Run the E2E test on kind / build (push) Failing after 21m41s
Run the E2E test on kind / setup-test-matrix (push) Successful in 2m24s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 9m58s
Pin kopia to 0.18.2
2024-12-04 07:49:36 +08:00
Scott Seago
015b1e69f6 Make BackedUpItems thread safe
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-12-03 15:23:45 -05:00
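Making a shared map thread safe usually comes down to guarding it with a mutex. A minimal sketch of such a type; the real BackedUpItems structure and its key format may differ:

```go
package main

import "sync"

// backedUpItems is an illustrative thread-safe set keyed by a
// resource/namespace/name string.
type backedUpItems struct {
	mu    sync.RWMutex
	items map[string]struct{}
}

func newBackedUpItems() *backedUpItems {
	return &backedUpItems{items: map[string]struct{}{}}
}

func (b *backedUpItems) Add(key string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.items[key] = struct{}{}
}

func (b *backedUpItems) Has(key string) bool {
	b.mu.RLock()
	defer b.mu.RUnlock()
	_, ok := b.items[key]
	return ok
}

func main() {
	items := newBackedUpItems()
	items.Add("v1/pods/ns1/pod-a")
	_ = items.Has("v1/pods/ns1/pod-a")
}
```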
Lyndon-Li
dd18cb49e6 Merge branch 'main' into build-hybrid-image 2024-12-03 13:20:37 +08:00
Lyndon-Li
3cd85f5b43 pin kopia to 0.18.2
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-03 13:06:26 +08:00
Xun Jiang
226370d035 Add nil check for updating DataUpload VolumeInfo in finalizing phase.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-12-03 10:50:55 +08:00
lyndon-li
7e80d8f1fd Merge pull request #8459 from Lyndon-Li/design-for-windows-build
Some checks failed
Run the E2E test on kind / run-e2e-test (push) Blocked by required conditions
Run the E2E test on kind / setup-test-matrix (push) Successful in 1m32s
Run the E2E test on kind / build (push) Failing after 14m10s
Main CI / Build (push) Failing after 12m7s
Close stale issues and PRs / stale (push) Failing after 11m56s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 6m24s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 10m15s
Design for multi-arch build and windows build
2024-12-03 10:16:27 +08:00
Lyndon-Li
298b497482 design for multi-arch build and windows build - remove input parameter for GCR
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-02 15:01:00 +08:00
Wenkai Yin(尹文开)
b89270f2c1 Merge pull request #8456 from kaovilai/unused-change-struct
Some checks failed
Run the E2E test on kind / build (push) Failing after 11m29s
Run the E2E test on kind / setup-test-matrix (push) Successful in 1m37s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 6m14s
Close stale issues and PRs / stale (push) Failing after 11m57s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 11m55s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 11m54s
internal/hook/wait_exec_hook_handler_test.go: Remove unused change struct
2024-12-02 14:48:56 +08:00
Lyndon-Li
3723033c4f design for multi-arch build and windows build - add local build to tar
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-12-02 13:42:49 +08:00
Priyansh Choudhary
f338e874a8 Added ResourceModifier to Velero Documentation (#8467)
Some checks failed
Run the E2E test on kind / run-e2e-test (push) Blocked by required conditions
Run the E2E test on kind / build (push) Failing after 14m12s
Run the E2E test on kind / setup-test-matrix (push) Failing after 14m4s
Main CI / Build (push) Failing after 14m0s
* Doc updated, added resourceModifier

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Updated yaml to remove Apiversion

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Updated name of configmap

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Added doc update to main page

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

---------

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2024-12-02 10:11:19 +05:30
Mayank Aggarwal
074f26539d Adding Support For VolumeAttributes in Resource Policy (#8383)
Some checks failed
Run the E2E test on kind / build (push) Failing after 10m15s
Run the E2E test on kind / setup-test-matrix (push) Successful in 1m15s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 4m47s
Close stale issues and PRs / stale (push) Failing after 11m58s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 11m40s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 14m53s
* Adding VolumeAttributes validations in resource policy

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* adding tests

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* adding tests

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* adding tests

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* added changelog

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* changelog

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* design spec

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* lint fixes

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* doc update

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* doc update

Signed-off-by: mayaggar <mayaggar@microsoft.com>

* Update internal/resourcepolicies/volume_resources_validator.go

Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Mayank Aggarwal <mayankagg9722@gmail.com>

* doc name update

Signed-off-by: mayaggar <mayaggar@microsoft.com>

---------

Signed-off-by: mayaggar <mayaggar@microsoft.com>
Signed-off-by: Mayank Aggarwal <mayankagg9722@gmail.com>
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
2024-11-28 10:17:07 +05:30
Lyndon-Li
3a7cf09957 design for multi-arch build and windows build
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-11-28 11:00:40 +08:00
Daniel Jiang
3c06fc8d87 Merge pull request #8438 from setoru/obs
Some checks failed
Run the E2E test on kind / build (push) Failing after 13m10s
Run the E2E test on kind / setup-test-matrix (push) Successful in 1m47s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 13m29s
Close stale issues and PRs / stale (push) Failing after 11m57s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 11m53s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 11m48s
add a storage supported provider : HuaweiCloud OBS
2024-11-27 14:26:25 +08:00
Lyndon-Li
18b3d96e64 build hybrid image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-11-26 17:10:14 +08:00
lyndon-li
40a95aab32 Merge pull request #8455 from kaovilai/accessible-singleplat-images
Some checks failed
Run the E2E test on kind / run-e2e-test (push) Blocked by required conditions
Run the E2E test on kind / build (push) Failing after 11m56s
Main CI / Build (push) Failing after 5m29s
Run the E2E test on kind / setup-test-matrix (push) Failing after 11m32s
Close stale issues and PRs / stale (push) Successful in 2m18s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 5m17s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m57s
Make single platform built image locally accessible.
2024-11-26 16:49:32 +08:00
Xun Jiang/Bruce Jiang
ad987edd11 Merge pull request #8451 from kaovilai/new-changelog-brackets
Makefile: new-changelog handles `()` in pr title.
2024-11-26 13:51:57 +08:00
Xun Jiang
8fcb6de323 Refactor the migration cases.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-26 11:04:54 +08:00
Tiger Kaovilai
af85b7d59f Merge pull request #8430 from blackpiglet/8323_fix
Some checks failed
Run the E2E test on kind / build (push) Failing after 8m50s
Run the E2E test on kind / setup-test-matrix (push) Successful in 57s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 7m1s
Refactor the schedule cases
2024-11-25 17:44:05 -05:00
Tiger Kaovilai
b66d7a7e0c internal/hook/wait_exec_hook_handler_test.go: Remove unused change struct
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-25 14:25:19 -05:00
Tiger Kaovilai
483f0978e8 Make single platform built image accessible.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-25 12:37:55 -05:00
Tiger Kaovilai
d00e7f8f2a Add make lint .cache/ to .gitignore (#8448)
Some checks failed
Run the E2E test on kind / run-e2e-test (push) Blocked by required conditions
Run the E2E test on kind / build (push) Failing after 14m3s
Run the E2E test on kind / setup-test-matrix (push) Failing after 13m59s
Main CI / Build (push) Failing after 13m51s
Close stale issues and PRs / stale (push) Successful in 1m9s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 3m19s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 1m39s
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-25 10:10:40 +05:30
Tiger Kaovilai
2bf98d3965 internal/volumes_information.go: reuse constants from pkg/apis/velero/v1 (#8446)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-25 10:10:10 +05:30
Tiger Kaovilai
3517487611 Makefile: new-changelog handles () in pr title.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-24 04:19:40 -05:00
setoru
871ba8de7c add huaweicloud as provider
Signed-off-by: setoru <setoru127@gmail.com>
2024-11-21 15:40:22 +08:00
Xun Jiang
226d50d9cb Modify the schedule cases.
* Modify the OrderResource case's verification code.
* Simplify the Periodical case.
* Simplify the InProgress case.
* Prettify the code.
* Replace math/rand with crypto/rand
* Replace PollUntil with PollUntilContextTimeout (see the sketch after this entry)

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-21 15:16:50 +08:00
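For the PollUntil replacement mentioned above, PollUntilContextTimeout takes a context plus an interval, a timeout, and an immediate flag, and passes the context through to the condition so the loop is cancellable. A small, self-contained sketch with made-up timings:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()

	// Poll every 100ms, give up after 2s; the condition receives a context,
	// unlike the legacy wait.Poll helpers.
	err := wait.PollUntilContextTimeout(context.Background(),
		100*time.Millisecond, 2*time.Second, true,
		func(ctx context.Context) (bool, error) {
			return time.Since(start) > 300*time.Millisecond, nil
		})
	fmt.Println("done:", err)
}
```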
Wenkai Yin(尹文开)
9f0026d7dc Merge pull request #8407 from blackpiglet/fix_storageclass
Some checks failed
Run the E2E test on kind / build (push) Failing after 8m48s
Main CI / Build (push) Failing after 3m55s
Run the E2E test on kind / setup-test-matrix (push) Failing after 10m18s
Run the E2E test on kind / run-e2e-test (push) Has been cancelled
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 11m59s
Close stale issues and PRs / stale (push) Failing after 11m58s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 11m50s
Fix E2E StorageClass and VolumeSnapshotClass's install and delete logic
2024-11-21 10:35:27 +08:00
Lyndon-Li
51490af667 Merge branch 'main' into build-hybrid-image 2024-11-20 13:44:04 +08:00
Shubham Pampattiwar
aed944cb0e Merge pull request #8257 from shubham-pampattiwar/add-warn-argocd
Some checks failed
Run the E2E test on kind / build (push) Failing after 15m53s
Run the E2E test on kind / setup-test-matrix (push) Successful in 1m27s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / Build (push) Failing after 13m40s
Close stale issues and PRs / stale (push) Failing after 11m56s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 11m56s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 11m54s
Add Backup warning for inclusion of NS managed by ArgoCD
2024-11-19 20:21:17 -08:00
Xun Jiang/Bruce Jiang
e19f45b9e9 Merge pull request #8414 from reasonerjt/rm-maintainers-from-website
Remove the Emeritus contributors from velero team section
2024-11-20 11:19:40 +08:00
Xun Jiang/Bruce Jiang
f50161d71f Merge pull request #8428 from vmware-tanzu/dependabot/github_actions/codecov/codecov-action-5
Bump codecov/codecov-action from 4 to 5
2024-11-20 10:53:26 +08:00
lyndon-li
55bbd5954f Merge pull request #8431 from Lyndon-Li/revert-push-image-tarball-to-gcs
Revert push image tarball to gcs
2024-11-20 10:42:00 +08:00
Shubham Pampattiwar
738bb79a99 Add Backup warning for inclusion of NS managed by ArgoCD
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

run make update

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

re-position import

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

update argo cd label comment

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add nil check for backupRequest.Spec.IncludedNamespaces

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

minor fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix edge cases

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add gh issue link in code comments

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-11-19 16:06:22 -08:00
Lyndon-Li
cc47be933d Revert "Upload Velero build package saved from build image to Google cloud storage"
This reverts commit 0b6df61eca.
2024-11-19 19:15:41 +08:00
Lyndon-Li
7cc0c99a08 Revert "Rename secret for Google cloud storage"
This reverts commit 4ab2712f6b.
2024-11-19 19:05:02 +08:00
Lyndon-Li
de7231cf86 Revert "Save vvelero image tarball only for velero namespace in docker registry (#5581)"
This reverts commit 1ea1d4df67.
2024-11-19 17:23:16 +08:00
Lyndon-Li
b92605f5fc build hybrid image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-11-19 16:45:34 +08:00
Xun Jiang
e5354e123b Modify the StorageClass install and delete code.
* Only install and uninstall SC and VSC once for default cluster.
* Install and uninstall SC and VSC for standby cluster on migration case.
* Refactor the StorageClass and VolumeSnapshotClass YAMLs.
* Prettify the e2e_suite_test.go

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-19 11:10:50 +08:00
dependabot[bot]
ea09946803 Bump codecov/codecov-action from 4 to 5
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-18 19:40:22 +00:00
Daniel Jiang
a9c9f19368 Merge pull request #8169 from mpryc/aws_creds_exposed
Fix #8168 - AWS secrets should not be exposed while running tests
2024-11-18 20:34:05 +08:00
Daniel Jiang
e7da6727cf Merge pull request #8343 from evhan/maintenance-job-env-from
Copy "envFrom" from Velero server when creating maintenance jobs
2024-11-18 20:28:44 +08:00
sangitaray2021
74790d9f60 Added tracking for deleted namespace status check in restore flow (#8233)
* Added tracking for deleted namespace status check in restore flow

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

fixed unittest

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

refactored tracker execution and caller

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

added change log

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

Author:    sangitaray2021 <sangitaray@microsft.com>

Author:    sangitaray2021 <sangitaray@microsoft.com>
Date:      Thu Sep 19 02:26:14 2024 +0530
Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

* fixed linter issue

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

* incorporated PR comments

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

* resolved comments

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>

---------

Signed-off-by: sangitaray2021 <sangitaray@microsoft.com>
2024-11-18 13:41:07 +05:30
Daniel Jiang
6933e66dab Remove the Emeritus contributors from velero team section
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-11-18 15:23:22 +08:00
Wenkai Yin(尹文开)
bef994e67a Merge pull request #8413 from reasonerjt/add-netlify-ref
Add reference to netlify in the website
2024-11-18 15:13:54 +08:00
Daniel Jiang
b2369cca28 Add reference to netlify in the website
In an effort to apply for OSS license of Netlify:
https://www.netlify.com/legal/open-source-policy

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-11-18 15:03:32 +08:00
Shubham Pampattiwar
c30d044664 Merge pull request #8411 from qiuming-best/maintainer
Remove Ming Qiu from maintainers
2024-11-17 11:42:57 -08:00
Ming
677d99a857 Remove Ming Qiu from maintainers
Signed-off-by: Ming <mqiu@vmware.com>
2024-11-16 17:11:32 +08:00
Daniel Jiang
dacd5eff93 Merge pull request #8380 from sseago/worker-count
Add --item-block-worker-count flag to velero install and server
2024-11-15 16:04:25 +08:00
Xun Jiang/Bruce Jiang
5a64df9579 Merge pull request #8371 from blackpiglet/migration_case_support_vks
Make the E2E supporting VKS data mover environment.
2024-11-15 15:12:27 +08:00
Shubham Pampattiwar
7a51e0dad6 Merge pull request #8252 from kaovilai/mkcontainer-multiplat
Allow multi-arch manifest-list from `make container`
2024-11-14 10:17:55 -08:00
Xun Jiang/Bruce Jiang
ec2013b79d Merge pull request #8375 from kaovilai/run-e2e-latestk8s
Add v1.31, v1.30 to GHA matrix and use latest Kind k8s patch for each minor versions for e2e
2024-11-14 17:04:21 +08:00
Xun Jiang
bebea4d278 Modify upgrade and migration cases.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-13 23:11:20 +08:00
lyndon-li
32a8c62920 Merge pull request #8395 from Lyndon-Li/issue-fix-8394
Some checks failed
Run the E2E test on kind / run-e2e-test (1.23.17, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.23.17, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.24.17, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.24.17, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.24.17, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.24.17, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.25.16, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.25.16, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.25.16, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.25.16, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.26.13, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.26.13, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.26.13, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.26.13, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.27.10, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.27.10, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.27.10, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.27.10, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.28.6, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.28.6, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.28.6, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.28.6, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.29.1, (NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.29.1, Basic && (ClusterResource || NodePort || StorageClass)) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.29.1, ResourceFiltering && !Restic) (push) Has been skipped
Run the E2E test on kind / run-e2e-test (1.29.1, ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources) (push) Has been skipped
Main CI / Build (push) Failing after 3m35s
Close stale issues and PRs / stale (push) Failing after 14m19s
Trivy Nightly Scan / Trivy nightly scan (velero, main) (push) Failing after 14m18s
Trivy Nightly Scan / Trivy nightly scan (velero-restore-helper, main) (push) Failing after 14m16s
Issue 8394: move closeDataPath outside callbacks
2024-11-13 10:39:13 +08:00
Wenkai Yin(尹文开)
cb03de4574 Merge pull request #8396 from Lyndon-Li/issue-fix-8391
Issue 8391: check ErrCancelled from suffix
2024-11-13 10:08:06 +08:00
Xun Jiang
bcb60ed783 Modify other cases to support VKS environment.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-12 23:25:50 +08:00
Xun Jiang
b02fc1da96 E2E supports VKS data mover environment.
* Add new flag HAS_VSPHERE_PLUGIN for E2E test.
* Modify the E2E README for the new parameter.
* Add the VolumeSnapshotClass for VKS.
* Modify the plugin install logic.
* Modify the cases to support data mover case in VKS.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-12 23:25:28 +08:00
Tiger Kaovilai
f200f8fe49 Remove 1.23, 1.24 from matrix
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-12 09:59:36 -05:00
Tiger Kaovilai
dfedc43cf3 Dynamic Kind Versions for e2e
Always test latest available patch version of each supported k8s version available in Kindest/node images.

i.e., this adds v1.31, v1.30 to the test matrix and upgrades patch versions for others.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-12 09:59:21 -05:00
Lyndon-Li
7feda11e54 issue 8391: check ErrCancelled from suffix
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-11-12 18:32:38 +08:00
Lyndon-Li
e5d6c48fea issue 8394: move closeDataPath outside callbacks
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-11-12 17:07:50 +08:00
Daniel Jiang
8e23752a6e Merge pull request #8388 from blackpiglet/8384_fix
Remove crd-verify-kind action because e2e-test-kind already covered
2024-11-12 16:56:10 +08:00
Xun Jiang
d5d5cc6589 Remove crd-verify-kind action because the e2e-test-kind already cover it.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-11-11 15:47:35 +08:00
Wenkai Yin(尹文开)
1fbd22f353 Merge pull request #8381 from kaovilai/ebs.csi.aws.com
Typo: ebs.csi.aws.com instead of aws.ebs.csi.driver
2024-11-11 14:17:05 +08:00
Wenkai Yin(尹文开)
511afbe1eb Merge pull request #8377 from kaovilai/maintainerinfo
Add kaovilai maintainer details
2024-11-11 14:15:51 +08:00
Xun Jiang/Bruce Jiang
a46fef8f2f Merge pull request #8378 from kaovilai/skipTestsFor.md
Skip e2e, crd, go linters on .md checks.
2024-11-08 14:44:33 +08:00
Tiger Kaovilai
a5ef9d6f7c Typo: ebs.csi.aws.com instead of aws.ebs.csi.driver
Per driver [code](966da33cff/pkg/driver/driver.go (L49C30-L49C45))

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-07 16:24:25 -05:00
Scott Seago
6588141090 Add --item-block-worker-count flag to velero install and server
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-11-07 10:58:36 -05:00
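A short, hedged illustration of the new flag named above; the worker count is an example value, and other required install options are omitted for brevity.
```
# Example only: run four concurrent ItemBlock workers
velero install --item-block-worker-count 4
velero server --item-block-worker-count 4
```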
Shubham Pampattiwar
10fce5e0cd Merge pull request #8370 from shubham-pampattiwar/fix-status-rs-docs
Fix Restore object's status docs
2024-11-06 15:49:39 -08:00
Tiger Kaovilai
a75506bb13 Skip e2e, crd, go linters on .md checks.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-06 15:34:12 -05:00
Tiger Kaovilai
4071435023 Add kaovilai maintainer details
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-11-06 14:55:20 -05:00
Daniel Jiang
d0cffa3d19 Merge pull request #8354 from alromeros/add-annotations-flag
Include --annotations flag in backup and restore create commands
2024-11-06 01:17:17 +08:00
Wenkai Yin(尹文开)
6bffac5d06 Merge pull request #8353 from ywk253100/241010_discovery
Use aggregated discovery API to discover API groups and resources
2024-11-05 18:24:14 +08:00
Shubham Pampattiwar
7c4bc77cdc Fix Restore objects status docs
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-11-04 23:38:44 -08:00
Evan Hanson
f981dd4ab2 Copy "envFrom" from Velero node-agent when creating data mover pods
Signed-off-by: Evan Hanson <evanhanson@catalyst.net.nz>
2024-10-31 16:32:54 +13:00
Daniel Jiang
db470a751b Merge pull request #8315 from blackpiglet/8298_fix
Modifications to support VKS environment
2024-10-30 20:04:20 +08:00
Xun Jiang
29d84feb10 Refactor the code to get the plugin images for migration cases.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-10-30 15:53:44 +08:00
Evan Hanson
70d88901b9 Copy "envFrom" from Velero server when creating maintenance jobs
Signed-off-by: Evan Hanson <evanhanson@catalyst.net.nz>
2024-10-30 15:01:59 +13:00
Alvaro Romero
e2839bbdec Include --annotations flag in backup and restore create commands
This commit implements a new --annotations flag in the backup and restore create commands.

This allows users to specify key-value pairs for annotations directly at the time of backup and restore creation, in the same way as the --labels flag.

Signed-off-by: Alvaro Romero <alromero@redhat.com>
2024-10-28 09:52:31 +01:00
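A brief sketch of the flag described above, assuming the same comma-separated key=value syntax as --labels; the key/value pairs are placeholders.
```
velero backup create app-backup --annotations owner=team-a,ticket=EX-123
velero restore create --from-backup app-backup --annotations restored-by=oncall
```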
Wenkai Yin(尹文开)
07847925fe Use aggregated discovery API to discover API groups and resources
Use aggregated discovery API to discover API groups and resources

Fixes #7526

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-10-28 13:59:16 +08:00
Wenkai Yin(尹文开)
8320df44fd Merge pull request #8275 from ywk253100/241008_discovery
Bump up version of client-go and controller-runtime
2024-10-28 13:51:17 +08:00
Xun Jiang/Bruce Jiang
8058a38058 Merge pull request #8271 from mcluseau/main
fix(pkg/repository/maintenance): handle when there's no container status
2024-10-28 13:50:25 +08:00
Xun Jiang
82ce1fa44f Fix the issue where the KIBISHII_DIRECTORY parameter does not work for make test-e2e.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-10-24 17:19:50 +08:00
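A hedged example of the parameter being fixed above; the directory value is hypothetical and only shows the intended override mechanism.
```
# Point the E2E suite at an alternate Kibishii manifest location (value is a placeholder)
KIBISHII_DIRECTORY=github.com/your-fork/kibishii/kubernetes/yaml/ make test-e2e
```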
Xun Jiang
e8267abdf9 Make change to support VKS environment.
FYI, the TKGm environment support is deprecated.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-10-24 17:19:50 +08:00
lyndon-li
ebbeb7aeb7 Merge pull request #8338 from Lyndon-Li/fix-make-container-warning
Fix a warning during make container
2024-10-23 16:02:47 +08:00
Lyndon-Li
fa7fca8d3d fix a warning during make container
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-23 15:39:45 +08:00
lyndon-li
a9b5dbc0fa Merge pull request #8337 from Lyndon-Li/fix-windows-cli-compile-problem
Fix Windows cli compile problem
2024-10-23 15:29:30 +08:00
Lyndon-Li
53ef988c15 fix windows cli compile problem
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-23 14:49:44 +08:00
Wenkai Yin(尹文开)
706dd13020 Merge pull request #8330 from Lyndon-Li/1.15-change-log
Add 1.15 changelog
2024-10-23 10:24:48 +08:00
lyndon-li
bdd231cd31 Merge pull request #8333 from Lyndon-Li/add-1.15-doc
Add doc for 1.15
2024-10-23 10:19:35 +08:00
Lyndon-Li
6ffe4610c3 add 1.15 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-22 18:30:26 +08:00
Lyndon-Li
9f17fb30ee add doc for 1.15
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-22 17:35:33 +08:00
lyndon-li
182478fbdf Merge pull request #8332 from Lyndon-Li/fix-doc-index-for-1.15
Fix doc index for 1.15
2024-10-22 17:09:51 +08:00
Lyndon-Li
23bb0330d1 fix doc index for 1.15
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-22 16:58:31 +08:00
lyndon-li
660ea1e1db Merge pull request #8331 from Lyndon-Li/doc-upgrade-to-1.15
Update upgrade to 1.15 doc
2024-10-22 16:55:58 +08:00
Lyndon-Li
331d057caf update upgrade to 1.15 doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-22 16:41:57 +08:00
Daniel Jiang
fa4899a4b1 Merge pull request #8329 from blackpiglet/bump_e2e_migration_upgrade_version_for_1.15
Bump the E2E migration and upgrade Velero and plugin versions for 1.15
2024-10-22 15:39:47 +08:00
Xun Jiang
1831c7b2dc Bump the E2E migration and upgrade Velero and plugin versions for 1.15
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-10-22 14:38:19 +08:00
Mikaël Cluseau
e770f0c308 fix(pkg/repository/maintenance): don't panic when there's no container statuses
Signed-off-by: Mikaël Cluseau <mikael.cluseau@gmail.com>
2024-10-22 07:07:45 +02:00
Daniel Jiang
c53ab20d56 Merge pull request #8322 from mmorel-35/golangci-lint/contains
fix: use Contains or ErrorContains with testify
2024-10-21 14:49:03 +08:00
Matthieu MOREL
d06601e977 fix: use Contains or ErrorContains with testify
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-10-18 20:36:45 +02:00
Shubham Pampattiwar
732b87b250 Merge pull request #8314 from mmorel-35/golangci-lint/thelper
golangci-lint: enable and fix thelper linter
2024-10-16 23:53:56 -07:00
Matthieu MOREL
226a4c1138 golangci-lint: enable and fix thelper linter
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-10-17 08:12:57 +02:00
Daniel Jiang
c6264ff392 Merge pull request #8313 from Lyndon-Li/1.15-bump-up-kopia
Bump up kopia for 1.15
2024-10-16 20:40:15 +08:00
Daniel Jiang
b24b9fef08 Merge pull request #8309 from ywk253100/241016_action
Fix the issue in pushing image Github action
2024-10-16 15:48:57 +08:00
Lyndon-Li
9d5bb455a6 bump up kopia for 1.15
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-16 15:45:05 +08:00
Wenkai Yin(尹文开)
5c4b04efaa Fix the issue in pushing image Github action
Fix the issue in pushing image Github action

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-10-16 14:09:26 +08:00
lyndon-li
fe14fb235c Merge pull request #8301 from msfrucht/revert_expose_sourcevolumemode
Revert "Expose VSC SourceVolumeMode" 1.15
2024-10-16 10:04:46 +08:00
lyndon-li
0945780359 Merge pull request #8305 from sseago/iba-typo
fixed error message typo for item block action
2024-10-16 10:04:01 +08:00
Wenkai Yin(尹文开)
2b3a0b45c6 Merge pull request #8293 from blackpiglet/fix_e2e_namespace_missing_issue
Fix the context choosing error after migration case.
2024-10-16 09:47:39 +08:00
Scott Seago
6fa81ec9b9 fixed error message typo for item block action
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-10-15 15:22:19 -04:00
lyndon-li
34d4f18cc8 Merge pull request #8288 from sseago/spc-norelabeling
add no-relabeling option to backupPVC configmap
2024-10-15 16:20:55 +08:00
Xun Jiang
6a1d8dfc6c Fix the context choosing error after migration case.
Change the FAIL_FAST default value to false.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-10-15 13:49:32 +08:00
MICHAEL S FRUCHTMAN
d9b278edb9 Revert "Expose VSC SourceVolumeMode"
This reverts commit 7580538f03.

Signed-off-by: MICHAEL S FRUCHTMAN <msfrucht@us.ibm.com>
2024-10-14 12:01:05 -07:00
Tiger Kaovilai
69b456af70 Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-10-14 10:08:12 -05:00
Scott Seago
b1035dd49d add no-relabeling option to backupPVC configmap
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-10-14 10:26:55 -04:00
lyndon-li
f02613d2f7 Merge pull request #8284 from sseago/selinux-readonly
only set spec.volumes readonly if PVC is readonly for datamover
2024-10-11 13:28:04 +08:00
Shubham Pampattiwar
b34e0116d7 Merge pull request #8286 from Lyndon-Li/1.15-readme
1.15 readme and implemented designs
2024-10-10 08:37:11 -07:00
Scott Seago
de7a414511 only set spec.volumes readonly if PVC is readonly for datamover
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-10-10 10:51:33 -04:00
Lyndon-Li
561073d053 1.15 readme and implemented designs
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-10-10 16:14:01 +08:00
Daniel Jiang
ba0dbb91f9 Merge pull request #8281 from ywk253100/241009_fix
Use '"' rather than '`' in the log to avoid unexpected new line
2024-10-09 18:54:21 +08:00
Wenkai Yin(尹文开)
23ca089d40 Use '"' rather than '`' in the log to avoid unexpected new line
Use '"' rather than '`' in the log to avoid unexpected new line

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-10-09 14:44:13 +08:00
Daniel Jiang
10260bd34c Merge pull request #8056 from kaovilai/makelocalnodocker
Allow `make local` to work without `docker` in path
2024-10-09 08:34:09 +08:00
Daniel Jiang
db2eb89a26 Merge pull request #8245 from shubham-pampattiwar/fix-err-str
Remove multiple single quotes from Velero backup.status.validationErrors field
2024-10-09 08:33:19 +08:00
Wenkai Yin(尹文开)
0a4e417aab Bump up version of client-go and controller-runtime
Bump up version of client-go to v0.30.5
Bump up version of controller-runtime to v0.18.5

Fixes #8274

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-10-08 18:53:12 +08:00
lyndon-li
14758a3435 Merge pull request #8261 from msfrucht/copy_sourcevolumemode
Expose VSC SourceVolumeMode
2024-10-08 13:16:47 +08:00
MICHAEL S FRUCHTMAN
7580538f03 Expose VSC SourceVolumeMode
Add changelog and unittest

Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
2024-10-03 15:05:58 -07:00
Tiger Kaovilai
3f4a1c295a Makefile: Add BUILDX_PUSH var
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-10-01 16:16:47 -04:00
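By analogy with the BUILDX_OUTPUT_TYPE example that appears later in this log, the new variable presumably toggles pushing the built image; treat the exact semantics as an assumption.
```
# Assumed usage: build a multi-arch image and push it in one step
BUILDX_PLATFORMS=linux/amd64,linux/arm64 BUILDX_PUSH=true make container
```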
Shubham Pampattiwar
f15cde5dfd Remove multiple single quotes from Velero backup.status.validationErrors field
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

update error message

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-09-30 14:20:44 -07:00
Tiger Kaovilai
42de654372 Revert "issue 8249: disable selinux relabel for backupPod (#8250)" (#8253)
This reverts commit 0ccdc7c6e1.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-27 12:31:38 -04:00
lyndon-li
0ccdc7c6e1 issue 8249: disable selinux relabel for backupPod (#8250)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-09-27 11:57:29 -04:00
Tiger Kaovilai
a4416874cf Allow multi-arch manifest-list from make container
by changing output type to image.

Then you can execute command like so to create a multi-arch image
```
BUILDX_PLATFORMS=linux/amd64,linux/arm64 BUILDX_OUTPUT_TYPE=image make container
```

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-27 10:29:08 -04:00
Daniel Jiang
aab2140a7c Merge pull request #8246 from shubham-pampattiwar/add-labels-job
Add labels to maintenance job pods
2024-09-25 17:22:45 +08:00
Shubham Pampattiwar
c0d51a5465 Add labels to maintenance job pods
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-09-24 17:09:13 -07:00
Daniel Jiang
11f771fc39 Merge pull request #8216 from blackpiglet/skip_uninstall_on_fail_fast
Skip uninstall and resource cleanup when fail-fast is enabled.
2024-09-24 13:04:08 +08:00
Shubham Pampattiwar
8e94e1f9a8 Merge pull request #8239 from kaovilai/vgdpmcsv-abb
docs(vgdp-micro-service.md): correct typo in VGDP acronym description to match Unified Repository design reference
2024-09-23 21:10:36 -07:00
Xun Jiang/Bruce Jiang
025d66d5fd Merge pull request #8237 from Lyndon-Li/issue-fix-8232
Issue 8232: ensure the ending event sinked before shutdown
2024-09-24 11:20:52 +08:00
Tiger Kaovilai
9855cd28fb docs(vgdp-micro-service.md): correct typo in VGDP acronym description to match Unified Repository design reference
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-23 15:57:33 -04:00
Xun Jiang
5dcb315b10 Bump v1.13 and 1.14 plugin versions for E2E test.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-09-23 20:26:39 +08:00
Xun Jiang
1ba78b83bf Skip uninstall and resource cleanup when fail-fast is enabled.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-09-23 20:24:14 +08:00
Lyndon-Li
9deaa819aa issue 8232: ensure the ending event sinked before shutdown
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-09-23 18:35:56 +08:00
Daniel Jiang
60e9277e98 Merge pull request #8228 from ywk253100/240919_restore_priority
Add the Carvel package related resources to the restore priority list
2024-09-23 15:37:54 +08:00
Wenkai Yin(尹文开)
390ac497bb Add the Carvel package related resources to the restore priority list
Add the Carvel package related resources to the restore priority list

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-09-19 16:47:00 +08:00
Shubham Pampattiwar
0b74a73761 Merge pull request #8218 from sseago/itmblock-docs
Update design doc and site docs to reflect ItemBlock implementation
2024-09-18 16:27:06 -07:00
Xun Jiang/Bruce Jiang
95f6729276 Merge pull request #8225 from emmanuel-ferdman/main
Update the wait-for-additional-items design doc link
2024-09-18 22:30:31 +08:00
Emmanuel Ferdman
5d0f09da25 Update the wait-for-additional-items design doc link
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-09-18 13:54:17 +03:00
Scott Seago
e6fb4ba3d5 Update design doc and site docs to reflect ItemBlock implementation
As with other plugin types, the information on how to implement
an IBA plugin will be in the velero-plugin-example repo.

Signed-off-by: Scott Seago <sseago@redhat.com>
2024-09-13 14:34:48 -04:00
Tiger Kaovilai
3f9c2dc789 Reduces ~140 indirect imports for plugin/framework importers (#8208)
* Avoid plugin framework importers from needing cloud provider imports

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-13 10:21:51 +08:00
Shubham Pampattiwar
da291467d7 Merge pull request #8199 from AlbeeSo/fix/use-new-gr
use newGR instead of groupResource after APIVersion conversion
2024-09-12 10:20:39 -07:00
Xun Jiang/Bruce Jiang
efcf836d16 Merge pull request #8201 from blackpiglet/update_velero_install_parameter
Add the ConfigMap-specified parameters into velero install CLI
2024-09-12 13:08:56 +08:00
Xun Jiang
68f3545424 Add the ConfigMap-specified parameters into velero install CLI.
Rename backup-repository-config to backup-repository-configmap.
Rename repo-maintenance-job-config to repo-maintenance-job-configmap.
Rename node-agent-config to node-agent-configmap.
Add those three parameters to `velero install` CLI.
Modify the design and the site documents.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-09-12 11:24:14 +08:00
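A sketch of the three renamed parameters listed above as they might be passed to velero install; the ConfigMap names are placeholders.
```
velero install \
  --backup-repository-configmap backup-repo-config \
  --repo-maintenance-job-configmap repo-maintenance-config \
  --node-agent-configmap node-agent-config
```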
Wenkai Yin(尹文开)
7e8a3c0bbc Merge pull request #8206 from kaovilai/pkgpodvolumebackupper_test.go59714thecancelfunctionreturnedbycontext.WithTimeout
pkg/podvolume/backupper_test.go:597:14: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
2024-09-12 10:43:02 +08:00
Wenkai Yin(尹文开)
670338e02f Merge pull request #8210 from kaovilai/pvc_pv_test.go-32-2
pvc_pv_test.go:32:2: other import of "k8s.io/api/core/v1"
2024-09-12 10:42:19 +08:00
Wenkai Yin(尹文开)
5b4c8cd5b1 Merge pull request #8198 from kaovilai/pes-controller
Add controller name to periodical_enqueue_source
2024-09-12 10:41:46 +08:00
Tiger Kaovilai
70168634cb pvc_pv_test.go:32:2: other import of "k8s.io/api/core/v1"
package "k8s.io/api/core/v1" is being imported more than once (ST1019)
	pvc_pv_test.go:32:2: other import of "k8s.io/api/core/v1"

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-11 17:21:04 -04:00
Tiger Kaovilai
c8aa37d852 Remove additional param, use pkg/constant
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-11 17:13:37 -04:00
Xun Jiang/Bruce Jiang
1110853cba Merge pull request #8104 from kaovilai/makefile-changelog
Add new-changelog to Makefile
2024-09-11 14:09:30 +08:00
Xun Jiang/Bruce Jiang
bf6215c894 Merge pull request #7793 from kaovilai/upgrade_robfig/cron/v3
Upgrade to robfig/cron/v3 to support time zone specification
2024-09-11 14:02:58 +08:00
Daniel Jiang
f1e68f8ced Merge pull request #8202 from blackpiglet/7883_fix
Enable --fail-fast by default for E2E and performance tests.
2024-09-11 12:59:35 +08:00
Wenkai Yin(尹文开)
b523a1b680 Merge pull request #8068 from kaovilai/retry-patching-inprogress-implementation
Retry completion status patch for backup and restore resources
2024-09-11 11:24:56 +08:00
Tiger Kaovilai
3c777cb09f pkg/podvolume/backupper_test.go:597:14: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-10 19:26:58 -04:00
Shubham Pampattiwar
7c9b7c1ba5 Merge pull request #8144 from Lyndon-Li/data-mover-ms-doc
Data mover micro service doc
2024-09-10 15:49:05 -07:00
Tiger Kaovilai
c643ee5fd4 Retry completion status patch for backup and restore resources
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

update to design #8063

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-10 17:01:14 -04:00
Xun Jiang
f1846be634 Enable --fail-fast by default for E2E and performance tests.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-09-10 22:25:08 +08:00
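With --fail-fast now on by default, a run that should continue past failures has to opt out; the variable spelling follows the FAIL_FAST name used elsewhere in this log, and the make-variable style of passing it is an assumption.
```
# Assumed opt-out: keep running the remaining E2E specs after a failure
FAIL_FAST=false make test-e2e
```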
AlbeeSo
c2192a75aa typo
Signed-off-by: AlbeeSo <suyashi1321@163.com>
2024-09-10 17:48:43 +08:00
AlbeeSo
02ac1069fe use newGR instead of groupResource if plugin has modified the group
Signed-off-by: AlbeeSo <suyashi1321@163.com>
2024-09-10 14:33:36 +08:00
Lyndon-Li
2641cc8fef data mover ms doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-09-10 10:31:14 +08:00
Lyndon-Li
ae5d97cd8c Merge branch 'main' into data-mover-ms-doc 2024-09-10 10:28:59 +08:00
Xun Jiang/Bruce Jiang
46801a0828 Merge pull request #8145 from blackpiglet/7758_implement
Implement the Repo maintenance Job configuration.
2024-09-10 08:09:43 +08:00
Tiger Kaovilai
5c4c66bee9 Add controller name to periodical_enqueue_source
The code changes are related to the `NewPeriodicalEnqueueSource` function in the `kube/periodical_enqueue_source.go` file. This function is used to create a new instance of the `PeriodicalEnqueueSource` struct, which is responsible for periodically enqueueing objects into a work queue.

The changes involve adding a new `controllerName string` parameter to this function and modifying the existing `logger` parameter to include additional fields.

Here's what changed:

1. A new `controllerName` parameter was added to the `NewPeriodicalEnqueueSource` function.

These changes add more context and metadata to the logging output, possibly for debugging or monitoring purposes.

The other files (`restore_operations_controller.go`, `schedule_controller.go`, and their respective test files) were modified to use this updated `NewPeriodicalEnqueueSource` function with the new `controllerName` parameter.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-09-09 12:07:07 -04:00
Xun Jiang
26cc41f26d Implement the Repo maintenance Job configuration design.
Remove the resource parameters from the velero server CLI.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-09-09 22:42:56 +08:00
Shubham Pampattiwar
b92143dad1 Merge pull request #8102 from sseago/itemblock-workflow
ItemBlock model and phase 1 (single-thread) workflow changes
2024-09-09 05:59:51 -07:00
Shubham Pampattiwar
a19cf56081 Merge pull request #8167 from Lyndon-Li/node-agent-memory-preserve-doc
Add doc for node-agent memory preserve
2024-09-09 05:37:39 -07:00
Lyndon-Li
43de32ada4 add doc for node-agent memory preserve
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-09-09 13:39:09 +08:00
Daniel Jiang
7439db57b3 Merge pull request #8166 from ywk253100/240705_plugin_args
Pass Velero server command args to the plugins
2024-09-06 14:29:42 +08:00
Xun Jiang/Bruce Jiang
12b2dbe0fa Merge pull request #8170 from shubham-pampattiwar/update-scc-docs
Update Openshift SCC docs link
2024-09-05 10:52:38 +08:00
Shubham Pampattiwar
74ca35ea6d Update Openshift SCC docs link
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

change link to latest

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-09-04 12:09:06 -07:00
Michal Pryc
6d0f726c2f Fix #8168 - AWS secrets should not be exposed while running tests
Changed the tests to use a mocked function that will not read actual
secrets from env variables or the AWS config file that may be
present on the system running the tests.

As a second guard against exposed secrets, the comparison of the values
does not show the actual values for the AWS data. This is to prevent a
situation where a programming error may still allow the test to read
the AWS config/env variables instead of using the mocked function.

Signed-off-by: Michal Pryc <mpryc@redhat.com>
2024-09-04 10:45:29 +02:00
Wenkai Yin(尹文开)
dc6eeafe98 Pass Velero server command args to the plugins
Pass Velero server command args to the plugins

Fixes #7806

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-09-04 13:43:27 +08:00
Xun Jiang/Bruce Jiang
c78fea3204 Merge pull request #8174 from anshulahuja98/pvnullfix
Add check for PV claimref nil
2024-09-04 10:39:41 +08:00
Scott Seago
9d6f4d2db5 ItemBlock model and phase 1 (single-thread) workflow changes
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-09-03 19:04:18 -04:00
Daniel Jiang
8ae667ef5e Merge pull request #8063 from kaovilai/retry-patching-inprogress-design
Add status patching retry configuration design.
2024-09-03 22:22:59 +08:00
Xun Jiang/Bruce Jiang
e8632b240d Merge pull request #7974 from blackpiglet/7823_fix
Only get VolumeSnapshotClass when DataUpload exists.
2024-09-03 22:13:46 +08:00
Anshul Ahuja
2d3521a56c fix
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-09-02 09:18:44 +00:00
Anshul Ahuja
434bd2f3ae linter
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-09-02 05:08:59 +00:00
Anshul Ahuja
79156bedad Merge branch 'main' into pvnullfix
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-09-02 04:53:53 +00:00
Anshul Ahuja
8be8fc6671 Add check for PV claimref nil
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-09-02 04:50:16 +00:00
Xun Jiang
1d9fbcfcf6 Only get VolumeSnapshotClass when DataUpload exists.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-08-31 17:53:03 +08:00
lyndon-li
3408ffefac Merge pull request #8141 from shubham-pampattiwar/fix-backup-pvc-config
Apply backupPVCConfig to backupPod volume spec
2024-08-30 11:09:46 +08:00
Shubham Pampattiwar
f6e2b0107f Apply backupPVCConfig to backupPod volume spec
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

make backupPod volume mount always readOnly

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use assert.True()

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

Add readOnly param for MakePodPVCAttachment func

lint fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-08-29 13:18:17 -07:00
Lyndon-Li
c79d7ebc91 data mover ms doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-29 16:27:00 +08:00
Lyndon-Li
866c2ab781 Merge branch 'main' into data-mover-ms-doc 2024-08-29 16:14:19 +08:00
Daniel Jiang
b5c9921ee8 Merge pull request #8158 from Lyndon-Li/bump-up-kopia
Bump up kopia
2024-08-29 13:51:02 +08:00
Lyndon-Li
a80c9359bf bump up kopia
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-29 13:10:08 +08:00
lyndon-li
cb7eebd9c9 Merge pull request #8143 from Lyndon-Li/data-mover-ms-pod-resource-limit
node-agent config for data mover micro service pod resources
2024-08-29 11:00:24 +08:00
Lyndon-Li
252e8a866f node-agent config for pod resources
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-29 10:13:32 +08:00
Tiger Kaovilai
eebc4af484 Make retry func name more generic
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-28 11:19:43 -04:00
Tiger Kaovilai
cacb5f0eae Apply suggestions from code review
Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-28 11:19:43 -04:00
Tiger Kaovilai
d112cc26da abstract backup/restore
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-28 11:19:43 -04:00
Tiger Kaovilai
8f1424f04e sseago feedback: finalizing
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-28 11:19:43 -04:00
Tiger Kaovilai
ad00ae7e6e Add retry patching configuration design.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-28 11:19:42 -04:00
lyndon-li
981f30cb25 Merge pull request #8151 from reasonerjt/update-stale-exempt
Issues with "backlog" label should never stale
2024-08-26 17:42:45 +08:00
Daniel Jiang
3f1853c961 Issues with "backlog" label should never stale
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-08-26 15:07:40 +08:00
Tiger Kaovilai
f5671c728c Scrub namespace terminating status and deletion timestamp on restore. Descriptive restore error on terminating namespace. (#7424)
revert utils_test.go



address c7b189dd60 (r1494194484)



Update pkg/util/kube/utils.go

Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-23 16:45:10 +05:30
lyndon-li
de96d4c84b Merge pull request #8139 from blackpiglet/7579_fix
Add resource modifier for velero restore describe CLI
2024-08-23 13:21:03 +08:00
Lyndon-Li
16a73acf7b data mover pod resource config
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-22 16:20:56 +08:00
Lyndon-Li
e29432beb8 Merge branch 'main' into data-mover-ms-pod-resource-limit 2024-08-22 16:18:33 +08:00
Lyndon-Li
babd76f2a3 Merge branch 'main' into data-mover-ms-doc 2024-08-22 15:19:48 +08:00
Lyndon-Li
627e2fede6 node-agent config for pod resources
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-22 15:06:35 +08:00
Xun Jiang
c2cd6b7176 Add resource modifier for velero restore describe CLI
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-08-22 00:01:56 +08:00
Shubham Pampattiwar
934b3ea6a9 Merge pull request #8131 from Lyndon-Li/backup-repo-config-doc
Add doc for backup repo config
2024-08-21 06:22:59 -07:00
Lyndon-Li
37e0ab12cc backup repo config doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-21 16:15:40 +08:00
Lyndon-Li
f684e16def data mover ms doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-21 16:13:20 +08:00
Lyndon-Li
4120d43b78 Merge branch 'main' into backup-repo-config-doc 2024-08-21 10:22:22 +08:00
lyndon-li
f63b714483 Merge pull request #8115 from Lyndon-Li/data-mover-ms-smoking-test
Data mover micro service smoke testing
2024-08-21 10:12:18 +08:00
Xun Jiang/Bruce Jiang
ec6090bd01 Merge pull request #8129 from vmware-tanzu/e2e_modification
Modify E2E and perf test report generated directory
2024-08-21 07:22:46 +08:00
Shubham Pampattiwar
6e65c73cc6 Merge pull request #8119 from shubham-pampattiwar/backup-pvc-config-docs
Add docs for backup pvc config support
2024-08-20 10:29:39 -07:00
Lyndon-Li
bdff60178a add doc for backup repo config
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-20 14:28:12 +08:00
Tiger Kaovilai
0b447771f1 Add new-changelog to Makefile
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-19 23:33:10 -04:00
Xun Jiang
af62dd4b3e Modify E2E and perf test result output directory.
Add LongTime label to more E2E cases.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-08-20 10:58:32 +08:00
Shubham Pampattiwar
86963bf229 Merge pull request #8097 from Lyndon-Li/issue-fix-8032
Issue 8032: make node agent configMap name configurable
2024-08-19 14:49:43 -07:00
Shubham Pampattiwar
d4e7d1472e add docs for backup pvc config support
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add section to csi dm doc and minor fixes

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

configMap name is configurable

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-08-19 14:39:22 -07:00
Matthieu MOREL
a6c543384b Use native cache from actions/setup-go (#7768)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-08-19 14:45:59 -04:00
Daniel Jiang
8e1dc8e997 Merge pull request #8108 from shubham-pampattiwar/fix-docker-file
Minor fixes to Dockerfile and docs
2024-08-19 14:01:31 +08:00
Lyndon-Li
8cf1749ae0 issue 8032: make node agent configMap name configurable
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-19 10:33:17 +08:00
Shubham Pampattiwar
a9463cebe4 Merge pull request #8117 from blackpiglet/remove_code_generator
Remove code-generator from hack/update-3generated-crd-code.sh
2024-08-15 09:02:24 -07:00
Xun Jiang
c0402075fb Remove code-generator from hack/update-3generated-crd-code.sh
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-08-15 17:27:30 +08:00
Lyndon-Li
0ed1a7fc86 data mover ms smoke testing
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-15 15:06:31 +08:00
Lyndon-Li
d25a908b78 Merge branch 'main' into data-mover-ms-smoking-test 2024-08-15 11:12:44 +08:00
lyndon-li
8fde4a017d Merge pull request #8054 from sseago/iba-plugins
Iba plugins
2024-08-15 10:21:25 +08:00
lyndon-li
4e781d4009 Merge pull request #8109 from shubham-pampattiwar/backup-pvc-config-support
Add support for backup PVC configuration
2024-08-15 10:20:12 +08:00
Lyndon-Li
ed0ef67c16 data mover ms smoke testing
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-14 17:29:04 +08:00
Shubham Pampattiwar
6c3988e462 Merge pull request #8114 from blackpiglet/6190_remove_client
Delete the pkg/generated directory.
2024-08-13 21:32:05 -07:00
Xun Jiang
4ffc6d17b2 Delete the pkg/generated directory.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2024-08-14 10:35:23 +08:00
Shubham Pampattiwar
8eac3606d9 Add support for backup PVC configuration
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

make update

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

pass backupPVCConfig to exposer as part of csi params

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-08-13 17:06:06 -07:00
Lyndon-Li
3c0948c9be Merge branch 'main' into data-mover-ms-smoking-test 2024-08-14 00:15:56 +08:00
lyndon-li
07c03a8919 Merge pull request #8085 from Lyndon-Li/data-mover-ms-node-agent-resume
Data mover micro service node agent resume
2024-08-14 00:14:47 +08:00
Shubham Pampattiwar
b62b38f566 Merge pull request #8093 from Lyndon-Li/backkup-repo-config
Backup repo config
2024-08-12 13:44:00 -07:00
Shubham Pampattiwar
3aabfc3414 Minor fixes to Dockerfile and docs
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-08-12 13:28:19 -07:00
Daniel Jiang
260a4995c2 Merge pull request #8096 from Lyndon-Li/issue-fix-8072
Issue 8072: restic deprecation - warning messages
2024-08-12 14:13:25 +08:00
Lyndon-Li
04db3ba767 data mover ms node agent resume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-12 10:45:59 +08:00
Scott Seago
1228b41851 Internal ItemBlockAction plugins
This PR implements the internal ItemBlockAction plugins needed for pod, PVC, and SA.

Signed-off-by: Scott Seago <sseago@redhat.com>
2024-08-09 12:24:55 -04:00
lyndon-li
cc32375b76 Merge pull request #8098 from reasonerjt/restore-status-doc
Add information about restore status to the doc
2024-08-09 18:24:54 +08:00
Daniel Jiang
8ca7cae662 Add information about restore status to the doc
fixes #6237

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-08-09 16:23:30 +08:00
Lyndon-Li
4dea3a48e8 data mover ms smoke test
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-09 15:15:21 +08:00
Lyndon-Li
fefb4b858c issue 8072: restic deprecation - warning messages
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-09 14:40:42 +08:00
Lyndon-Li
2c7047a304 Merge branch 'main' into data-mover-ms-node-agent-resume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-07 17:23:15 +08:00
lyndon-li
60f5ad5cf4 Merge branch 'main' into data-mover-ms-node-agent-resume
Signed-off-by: lyndon-li <98304688+Lyndon-Li@users.noreply.github.com>
2024-08-07 17:13:35 +08:00
Lyndon-Li
3b06d915ca Merge branch 'main' into data-mover-ms-node-agent-resume 2024-08-07 17:07:38 +08:00
lyndon-li
dd3d05bbac Merge pull request #8074 from Lyndon-Li/data-mover-ms-new-controller-1
Data mover micro service new controller
2024-08-07 17:00:27 +08:00
Lyndon-Li
82d9fe4d4d backup repo config
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-07 15:34:57 +08:00
lyndon-li
26459488ed Merge pull request #8089 from kaovilai/fix-csi-snapshot-datamovementv1.14doc
Fix v1.14 site header for csi-snapshot-data-movement
2024-08-07 10:33:26 +08:00
Wenkai Yin(尹文开)
b7f2e15c6e Merge pull request #8086 from reasonerjt/fix-7812
Patch dbr's status when error happens
2024-08-07 09:43:15 +08:00
Tiger Kaovilai
be4aabccd9 Fix v1.14 site header for csi-snapshot-data-movement
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-08-06 19:11:25 -04:00
Michael Steven Fruchtman
64d8014b87 Correcting OpenShift on IBM Documentation Error (#8077)
* Correcting Openshift on IBM Documentation Error

I have to admit to some significant error and embarrassment regarding the documentation update
about Openshift on IBM Cloud pull request https://github.com/vmware-tanzu/velero/pull/8069.

I will correct my error before it gets any further.

Just exposing /var/data/kubelet/pods is incorrect and host path /var/lib/kubelet/pods should remain unchanged.

The errors with the defaults during csi snapshot data movement were:

   data path backup failed: Failed to run kopia backup: unable to get local
    block device entry: resolveSymlink: lstat /var/data/: no such
    file or directory

I suspected this was the same as RancherOS and Nutanix. It is not.

The original tested changes changed both /var/lib/kubelet/{pods,plugins} to
/var/data/kubelet/{pods,plugins}.

The published changes only result in the error:

```
status:
  completionTimestamp: '2024-08-02T17:12:29Z'
  message: >-
    data path backup failed: Failed to run kopia backup: unable to get local
    block device entry: resolveSymlink: lstat /var/data/kubelet/plugins: no such
    file or directory
  node: 10.240.0.5
  phase: Failed
  progress: {}
  startTimestamp: '2024-08-02T17:12:11Z'
```

After making continued modifications to the daemonset the correct configuration was:

```
volumeMounts:
- name: host-pods
  mountPath: /host_pods
  mountPropagation: HostToContainer
- name: host-plugins
  mountPath: /var/data/kubelet/plugins
  mountPropagation: HostToContainer
```

```
volumes:
- name: host-pods
  hostPath:
	path: /var/lib/kubelet/pods
	type: ''
- name: host-plugins
  hostPath:
	path: /var/data/kubelet/plugins
	type: ''
```

Only the changes to the plugin path were required.
The plugin path changes were required to both the mount path and the host path.

Regardless of whether /var/lib/kubelet/pods or /var/data/kubelet/pods host path, backups and restore
succeeded provided the plugin path was modified.

```
volumeMounts:
- name: host-pods
  mountPath: /host_pods
  mountPropagation: HostToContainer
- name: host-plugins
  mountPath: /var/data/kubelet/plugins
  mountPropagation: HostToContainer
```

```
volumes:
- name: host-pods
  hostPath:
	path: /var/data/kubelet/pods
	type: ''
- name: host-plugins
  hostPath:
	path: /var/data/kubelet/plugins
	type: ''
```

After getting on-host access I was able to confirm: pods are at /var/lib/kubelet/pods.

```
ls /var/lib/kubelet/pods
07c0be63-335d-4cfb-b39f-816bc2fb32cd
51f31b3e-4710-4ef0-8626-5f1a78a624b2
a4802fd3-3b62-45a4-8f21-974880b6f92a
cccb35c9-b4f9-4ca9-a697-736ae64f09ad
0a5d4366-7fa1-4525-9e45-a43a362b8542
558b0643-0661-4d4a-b03e-aac60c6ad710
a4b106fb-5b7b-48e5-828a-ea7b41ba0e59
ce1290e1-4330-4df6-8166-14784bcce930
```

On host the volumes are in /var/data/kubelet/plugins.

```
ls /var/data/kubelet/plugins/kubernetes.io/csi/openshift-storage.cephfs.csi.ceph.com/231e04896c4f528efb95d23a3c153db9fc4a7206b7320f74443f30de7228dba5/globalmount/velero/backups/backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c/
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshotclasses.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-resource-list.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshotcontents.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-results.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshots.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-volumesnapshots.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-itemoperations.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c.tar.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-logs.gz
velero-backup.json
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-podvolumebackups.json.gz
```

With the volume config changed to expose /var/data/kubelet/plugins for the plugin hostPath, the DataUploads and DataDownloads
succeed for both Filesystem and Block mode PVCs.

```
status:
  completionTimestamp: '2024-08-02T17:23:33Z'
  node: 10.240.0.5
  path: >-
    /host_pods/7fcb9d56-7885-437c-acd3-67db6b1ee8ae/volumeDevices/kubernetes.io~csi/pvc-47b91f56-db8c-44bf-9ecc-737170561b4b
  phase: Completed
  progress:
    bytesDone: 5368709120
    totalBytes: 5368709120
  snapshotID: 8faae36b3592fee4efbfad024f26033e
  startTimestamp: '2024-08-02T17:21:22Z'
```

```
status:
  completionTimestamp: '2024-08-02T18:42:19Z'
  node: 10.240.0.5
  phase: Completed
  progress:
    bytesDone: 5368709120
    totalBytes: 5368709120
  startTimestamp: '2024-08-02T18:41:00Z'
```

My apologies for the error.

Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>

* Add context to plugins mountPath

Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>

---------

Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
2024-08-06 09:05:45 -04:00
Daniel Jiang
5c88c897a5 Patch dbr's status when error happens
This commit makes sure the dbr's (DeleteBackupRequest) status is "Processed" when an error
happens before the actual deletion is started.

fixes #7812

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-08-06 18:37:34 +08:00
Lyndon-Li
a523d10802 data mover ms node agent resume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-06 16:25:56 +08:00
Shubham Pampattiwar
f7f9ed3393 Merge pull request #8082 from gjanders/update-ibm-cos-docs
Updated to the IBM COS documentation
2024-08-05 11:39:35 -07:00
Gareth Anderson
75210c7f4a Re-adding this doc line as requested by @blackpiglet
Signed-off-by: Gareth Anderson <gareth.anderson03@gmail.com>
2024-08-05 09:08:00 +00:00
Gareth Anderson
dc38a2a879 Updated IBM COS documentation
Added option checksumAlgorithm; this stops 403 errors as per https://github.com/vmware-tanzu/velero/issues/7543
Added plugins line as velero install failed without this option in version 1.14.0
Removed the volumesnapshotlocation as it does not exist in 1.14.0

Signed-off-by: Gareth Anderson <gareth.anderson03@gmail.com>
2024-08-05 04:16:34 +00:00
Xun Jiang/Bruce Jiang
d4e743b138 Merge pull request #8071 from kaovilai/static-checks
static checks
2024-08-04 19:49:52 +08:00
Xun Jiang/Bruce Jiang
9bc32e0e5c Merge pull request #8070 from shubham-pampattiwar/update-policy-docs
Update docs for volume policy feature
2024-08-02 13:35:48 +08:00
Xun Jiang/Bruce Jiang
1a6750c025 Merge pull request #8069 from msfrucht/openshift_ibm_cloud_doc_update
Documentation Update for OpenShift IBM Cloud for CSI snapshot data movement
2024-08-02 11:37:38 +08:00
Michael Fruchtman
49a7fe74a9 s/kubet/kubelet
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
2024-08-01 09:32:21 -07:00
Lyndon-Li
903458b61b data mover ms new controller
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-01 15:11:13 +08:00
Lyndon-Li
514ba56ca1 data mover ms new controller
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-08-01 14:42:17 +08:00
Lyndon-Li
29aad63f32 Merge branch 'main' into data-mover-ms-new-controller-1 2024-08-01 13:05:13 +08:00
lyndon-li
54bd7ce32e Merge pull request #8061 from Lyndon-Li/data-mover-ms-restore-1
Data mover micro service restore
2024-08-01 13:03:47 +08:00
Tiger Kaovilai
6d0d1aaccc tautological condition: non-nil != nil
https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/nilness#cond:~:text=p%20%3A%3D%20%26v%0A...%0Aif%20p%20!%3D%20nil%20%7B%20//%20tautological%20condition%0A%7D
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-31 22:20:48 -04:00
Tiger Kaovilai
ad6104b90a unused write to field Spec
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-31 22:16:11 -04:00
Tiger Kaovilai
92b9e59fd5 Unused parameters
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-31 22:15:15 -04:00
Shubham Pampattiwar
2fa71e41b2 Update docs for volume policy feature
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-07-31 15:45:33 -07:00
Michael Fruchtman
c1e3d6f40e Add OpenShift on IBM Cloud to list
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
2024-07-31 14:31:22 -07:00
Michael Fruchtman
545a0e2112 Doc update Openshift IBM Cloud
Updates the documentation for CSI snapshot data movement for OpenShift
on IBM Cloud.

The default hostpath /var/lib/kubelet/pods cannot find
PersistentVolumeClaims with volumeMode: Block on host.

The correct hostpath for OpenShift on IBM Cloud is
/var/data/kubelet/pods.

Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
2024-07-31 14:22:04 -07:00
Shubham Pampattiwar
7811b9f78c Merge pull request #8026 from sseago/itemblockaction
Create new ItemBlockAction (IBA) plugin type
2024-07-31 08:46:52 -07:00
Anshul Ahuja
1a167f9ebf Fail Delete Backup if BSL is not available (#8029)
* Fail Delete Backup if BSL is not available

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

* linter

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

---------

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-07-31 10:53:39 -04:00
Lyndon-Li
5dcd9dc81f Merge branch 'main' into backkup-repo-config 2024-07-31 17:34:44 +08:00
Lyndon-Li
7b7727e808 issue 7620: backup repo config
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-31 16:41:27 +08:00
Lyndon-Li
d48e9762eb data mover ms new controller
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-31 13:24:16 +08:00
Lyndon-Li
86e54801c5 data mover micro service restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-31 11:17:12 +08:00
Tiger Kaovilai
7b26673b29 Move design/secrets.md to Implemented (#8060)
Per https://github.com/vmware-tanzu/velero/issues/2425
multi credentials were implemented in #3190

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-30 09:26:58 -04:00
lyndon-li
8e0f4d17f7 Merge pull request #8046 from Lyndon-Li/data-mover-ms-backup-1
Data mover micro service backup
2024-07-30 16:24:22 +08:00
Scott Seago
ba9c109868 Create new ItemBlockAction (IBA) plugin type
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-07-29 11:08:54 -04:00
Tiger Kaovilai
d6f89e2d07 Allow make local to work without docker in path
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-29 00:08:42 -04:00
Lyndon-Li
6997b8e393 data mover micro service backup
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-26 13:54:02 +08:00
Daniel Jiang
d9ca147479 Merge pull request #7963 from Lyndon-Li/issue-fix-7620-design
Add design for backup repository configurations
2024-07-26 13:13:43 +08:00
Lyndon-Li
e83ba06733 Merge branch 'main' into data-mover-ms-backup-1 2024-07-26 11:05:05 +08:00
lyndon-li
53b57f8bdf Merge pull request #7999 from Lyndon-Li/data-mover-ms-watcher-01
Data mover micro-service watcher
2024-07-26 10:10:07 +08:00
Xun Jiang/Bruce Jiang
c2bc67bdea Merge pull request #8038 from blackpiglet/7959_fix
Use labels instead of regex to filter E2E test cases.
2024-07-26 09:42:19 +08:00
Lyndon-Li
8742f1b1f3 data mover micro service backup
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-25 14:03:49 +08:00
Daniel Jiang
5b9b8e7828 Merge pull request #7942 from kaovilai/deployment.go-boolParam
Make pkg/install/Deployment podTemplateOptions bool functions accept bool param
2024-07-25 13:36:51 +08:00
Lyndon-Li
faa704d909 data mover ms watcher
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-25 10:47:52 +08:00
Xun Jiang
71c75d6dcb Set the Ginkgo timeout to 5 hours.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-24 22:39:45 +08:00
Xun Jiang
e862b976a4 Use labels instead of regex to filter E2E test cases.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-24 15:33:06 +08:00
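The label expressions shown in the test matrix near the top of this log suggest that specs are now selected with Ginkgo label filters; the variable name below is hypothetical, so treat this only as a sketch of the idea.
```
# Hypothetical: run only the basic storage-related specs via a label expression
make test-e2e GINKGO_LABELS="Basic && (NodePort || StorageClass)"
```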
Xun Jiang/Bruce Jiang
442cc76417 Merge pull request #8009 from blackpiglet/7758_fix
Add repository maintenance job configuration design.
2024-07-24 11:09:29 +08:00
lyndon-li
01aa657f0e Merge pull request #7988 from Lyndon-Li/data-mover-ms-new-exposer
New exposer for data mover ms
2024-07-24 10:11:54 +08:00
Xun Jiang
d72f857656 Add repository maintenance job configuration design.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-23 23:48:12 +08:00
Lyndon-Li
c01c679076 issue 7620: add design for backup repository configurations
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-23 18:27:24 +08:00
Shubham Pampattiwar
2bf3bc9cc7 Merge pull request #8033 from blackpiglet/7957_fix
Replace RunSpecsWithDefaultAndCustomReporters with RunSpecs.
2024-07-22 09:08:11 -07:00
Xun Jiang
afca7dd6fe Replace RunSpecsWithDefaultAndCustomReporters with RunSpecs.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-22 15:58:00 +08:00
Wenkai Yin(尹文开)
84feddb082 Merge pull request #8028 from mrnold/pod-volume-message-7857
Avoid wrapping failed PVB status with empty message.
2024-07-22 14:59:30 +08:00
lyndon-li
6e27ed3694 Merge pull request #8021 from shubham-pampattiwar/expose-pv-patch-max-timeout
Make PVPatchMaximumDuration timeout configurable
2024-07-22 13:00:38 +08:00
Lyndon-Li
a1d6d1d698 fix UT linter error
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-22 10:47:05 +08:00
Lyndon-Li
dc4b95e7de correct data mover ms design PR number
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-22 10:35:25 +08:00
Shubham Pampattiwar
fd6c74715a Expose PVPatchMaximumDuration timeout for custom configuration
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove debug log

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use resource timeout server arg

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove hardcoded PVPatchMaximumDuration const usage

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-07-19 12:44:26 -07:00
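Per the note above that the timeout now follows the server's resource timeout argument, an operator would tune it roughly as follows; the flag spelling is inferred from that note and the duration is illustrative.
```
velero server --resource-timeout 15m
```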
lyndon-li
0d2f3db696 add the design for backup PVC configurations (#7982)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-19 10:18:15 -04:00
Anshul Ahuja
2aecd45285 linter
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-07-19 09:12:06 +00:00
Anshul Ahuja
82faa554bd Fail Delete Backup if BSL is not available
Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2024-07-19 04:32:39 +00:00
Matthew Arnold
f8e697d1e8 Add changelog file.
Signed-off-by: Matthew Arnold <marnold@redhat.com>
2024-07-18 16:16:08 -04:00
Matthew Arnold
c8a0c345dc Avoid wrapping failed PVB status with empty message.
Also change "get" to "found" as requested in issue #7857.

Signed-off-by: Matthew Arnold <marnold@redhat.com>
2024-07-18 15:57:11 -04:00
Shubham Pampattiwar
3e9f6cc83d Merge pull request #7628 from sseago/backup-perf-design
Add design for velero backup performance improvements
2024-07-18 09:44:28 -07:00
Matthieu MOREL
c8baaa9b11 testifylint: enable more rules (#8024)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-07-18 10:43:16 -04:00
lyndon-li
55e027897c Merge pull request #8010 from blackpiglet/7961_fix
Bump Ginkgo to v2.
2024-07-17 17:19:08 +08:00
Xun Jiang
7a3b947961 Bump Ginkgo to v2.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-17 15:31:23 +08:00
Matthieu MOREL
c69f47d5d2 Migrate from github.com/golang/protobuf to google.golang.org/protobuf (#7593)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-07-16 16:28:07 -04:00
Matthieu MOREL
35c90f1672 testifylint: enable error-nil rule (#7670)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-07-16 12:23:16 -04:00
Matthieu MOREL
aa3fde5ea5 testifylint: enable bool-compare rule (#7623)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-07-16 09:28:23 -04:00
Matthieu MOREL
917b55e107 golangci-lint(unconvert): fix test files (#7608)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-07-16 09:27:15 -04:00
Lyndon-Li
7f88d631a9 Merge branch 'main' into data-mover-ms-new-exposer 2024-07-16 15:54:29 +08:00
lyndon-li
2b018272e6 fix linter check error (#8014)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-15 09:48:10 -04:00
Lyndon-Li
49097744ee new exposer for data mover ms
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-15 17:18:03 +08:00
Xun Jiang/Bruce Jiang
7f9fbabb7b Merge pull request #8012 from sseago/plugin-leak
Reuse existing plugin manager for get/put volume info
2024-07-15 10:15:53 +08:00
Scott Seago
dc286a38fc Reuse existing plugin manager for get/put volume info
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-07-12 10:15:16 -04:00
Tiger Kaovilai
6c8d051269 Upgrade to robfig/cron/v3 to support time zone specification
Breaking change (can be mitigated if needed in the future): the v1 branch accepted an optional seconds field at the beginning of the cron spec. This is non-standard and has led to a lot of confusion. The new default parser conforms to the standard as described by [the Cron wikipedia page](https://en.wikipedia.org/wiki/Cron). It is unlikely that this affects us per https://github.com/vmware-tanzu/velero/pull/31

Other notes:
> CRON_TZ is now the recommended way to specify the timezone of a single schedule, which is sanctioned by the specification. The legacy "TZ=" prefix will continue to be supported since it is unambiguous and easy to do so.

References: https://pkg.go.dev/github.com/robfig/cron/v3#readme-upgrading-to-v3-june-2019
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-12 00:08:42 -04:00
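Assuming the schedule string is handed straight to the v3 parser, a time-zone-qualified schedule could look like the sketch below; the CRON_TZ form comes from the robfig/cron documentation quoted above, and the schedule name and time are placeholders.
```
velero schedule create nightly --schedule="CRON_TZ=America/New_York 0 3 * * *"
```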
Tiger Kaovilai
bd2008c893 Make pkg/install/Deployment podTemplateOptions bool functions accept bool param
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-07-11 23:52:25 -04:00
Shubham Pampattiwar
3bd8a7da7d Skip PV patch step in Restore workflow for WaitForFirstConsumer VolumeBindingMode Pending state PVCs (#7953)
add changelog file



change log level and add more detailed comments



make update



add return for sc get call if error

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-07-11 18:02:21 -04:00
Xun Jiang/Bruce Jiang
255a51f695 Merge pull request #5532 from weshayutin/deprecation_policy
Propose a deprecation process for velero
2024-07-11 16:08:20 +08:00
Shubham Pampattiwar
6697b5ccb4 Update GOVERNANCE.md
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-07-10 23:44:55 -07:00
lyndon-li
6a3e226381 Merge pull request #7983 from anshulahuja98/snapshotsync
Reset VolumeSnapshotRef in Backup Sync Flow
2024-07-11 13:13:37 +08:00
Xun Jiang/Bruce Jiang
6fb109f620 Merge pull request #7965 from blackpiglet/7928_fix
Check whether the namespaces specified in namespace filter exist.
2024-07-10 18:30:01 +08:00
Wenkai Yin(尹文开)
21beda3c2a Merge pull request #7955 from Lyndon-Li/data-mover-ms-new-data-path
New data path for data mover ms
2024-07-08 18:34:22 +08:00
Anshul Ahuja
4a6a362e60 Reset VolumeSnapshotRef in Backup Sync Flow
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
2024-07-05 08:42:57 +00:00
Wenkai Yin(尹文开)
824bebbad7 Merge pull request #7973 from Lyndon-Li/issue-fix-7972
Issue 7972: sync the backupPVC deletion in expose clean up
2024-07-05 10:35:08 +08:00
Xun Jiang/Bruce Jiang
920396dfd8 Merge pull request #7969 from blackpiglet/7818_main_fix
[cherry-pick][main]Expose the VolumeHelper to third-party plugins.
2024-07-05 10:14:45 +08:00
Xun Jiang/Bruce Jiang
1ec52beca8 Merge pull request #7410 from seanblong/main
Ignore missing path error in conditional match
2024-07-04 10:10:53 +08:00
Xun Jiang
cf5dfdf42d Check whether the namespaces specified in namespace filter exist.
Check whether the namespaces specified in
backup.Spec.IncludeNamespaces exist during backup resource collection.
If not, log an error to mark the backup as PartiallyFailed.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-04 10:02:10 +08:00
Lyndon-Li
7408dbd436 issue 7972: sync the backupPVC deletion in expose clean up
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-03 18:31:38 +08:00
Wenkai Yin(尹文开)
71de94b87a Merge pull request #7967 from blackpiglet/7929_fix
Check whether the volume's source is PVC before fetching its PV.
2024-07-03 17:33:53 +08:00
Xun Jiang
c4ce6a3382 Expose the VolumeHelper to third-party plugins.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-03 11:16:56 +08:00
Xun Jiang
d89a9f7b0c Check whether the volume's source is PVC before fetching its PV.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-03 10:29:14 +08:00
lyndon-li
28d64c2c52 Merge pull request #7775 from blackpiglet/add_volume_backup_result
Add volume backup result
2024-07-02 13:42:32 +08:00
lyndon-li
ff634862b4 issue 7903: add a limitation clarification for waitForSingleConsumer PVC (#7948)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-01 17:58:20 -04:00
Shubham Pampattiwar
6a7f146aed Merge pull request #7924 from anshulahuja98/snapshotsync
In backup sync flow put snapshotHandle as source in CSI VSContent
2024-07-01 11:54:26 -07:00
Xun Jiang
df28134e25 Add result in backup VolumeInfo.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-07-02 00:20:36 +08:00
Lyndon-Li
20676c1ae7 new data path for data mover ms
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-01 19:07:00 +08:00
Lyndon-Li
3fa8f6c72d issue 7620: design for backup repo configurations
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-07-01 19:04:31 +08:00
lyndon-li
9c20b5ca15 Merge pull request #7576 from Lyndon-Li/data-mover-micro-service-design
Data mover micro service design
2024-07-01 13:48:48 +08:00
Xun Jiang
0789c6154c Add more PROW commands.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-06-29 11:33:56 +08:00
Scott Seago
0288ab7611 add Restore improvements to non-goals
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-06-27 12:49:32 -04:00
Lyndon-Li
544d7965c6 data mover micro service design
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-06-27 11:28:32 +08:00
lyndon-li
c827fd0c6b Merge pull request #7922 from Lyndon-Li/fix-issue-7898-design-change
Issue 7898: change the node-agent load affinity design
2024-06-26 13:14:03 +08:00
Scott Seago
3c2d77f4cf replaced BIAv3 with new ItemBlockAction plugin type
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-06-25 14:51:35 -04:00
Anshul Ahuja
d1d331faa8 In backup sync flow put snapshotHandle as source in CSI VSContent
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
2024-06-25 11:53:54 +00:00
Lyndon-Li
a365d32105 issue 7898: change the node-agent load affinity design
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-06-25 14:51:05 +08:00
lyndon-li
b0dc189311 Merge pull request #7899 from sseago/no-fast-fail-for-unschedulable
Don't consider unschedulable pods unrecoverable
2024-06-25 10:08:51 +08:00
Scott Seago
9614ead033 Don't consider unschedulable pods unrecoverable
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-06-17 10:05:52 -04:00
Daniel Jiang
89229d3899 Merge pull request #7881 from reasonerjt/update-pr-assignees
Remove Ming from auto assignee
2024-06-13 13:38:54 +08:00
Daniel Jiang
689b015480 Merge pull request #7877 from reasonerjt/update-release-note-1.14-cp-to-main
[CP-to-main]Update release note of 1.14
2024-06-13 13:38:38 +08:00
lyndon-li
385ecc2cd9 Merge pull request #7879 from blackpiglet/update_e2e
Skip parallel files upload and download test for Restic case.
2024-06-13 13:35:39 +08:00
Xun Jiang/Bruce Jiang
044530a9e3 Merge pull request #7878 from reasonerjt/fix-restore-crash
[CP-to-main] Add checks for csisnapshot for vol_info population
2024-06-13 13:32:10 +08:00
Daniel Jiang
d9ea253dde Remove Ming from auto assignee
Thanks @qiuming-best for your contribution!

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-06-13 13:09:06 +08:00
Xun Jiang
9b7dd663c3 Skip parallel files upload and download test for Restic case.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-06-13 11:28:12 +08:00
Daniel Jiang
5551ded4fd Add checks for csisnapshot for vol_info population
fixes #7874

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-06-13 11:24:41 +08:00
Daniel Jiang
2b57b4ca03 Update release note of 1.14
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-06-13 11:21:00 +08:00
Xun Jiang/Bruce Jiang
b9b21b5b6c Merge pull request #7871 from vmware-tanzu/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azidentity-1.6.0
Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.5.2 to 1.6.0
2024-06-12 10:50:15 +08:00
dependabot[bot]
04f52beee0 Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-11 20:21:24 +00:00
Daniel Jiang
ac164582dc Merge pull request #7856 from reasonerjt/pvc-csi-snapshot-map
[CP-to-main]Use PVC to track the CSI snapshot in restore
2024-06-11 13:37:08 +08:00
Daniel Jiang
1d1090083a Use PVC to track the CSI snapshot in restore
This commit fixes #7849.
It uses the PVC instead of the PV to track CSI snapshots when generating restore
volume info metadata, so that the metadata can be populated correctly even when
the PVC is not bound to a PV.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-06-11 00:30:23 +08:00
Xun Jiang/Bruce Jiang
8b049a5803 Add a notice in migration document. (#7867)
Correct the comments for metadata.labels for the schedule API.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-06-07 12:50:08 -04:00
Daniel Jiang
a8d77eae95 Merge pull request #7846 from Lyndon-Li/avoid-unnecessary-repo-connect
Avoid unnecessary repo connect for maintenance
2024-05-31 11:15:49 +08:00
Xun Jiang/Bruce Jiang
068766adb4 Merge pull request #7844 from draghuram/patch-1
Improve help message for the option "--resource-policies-configmap"
2024-05-31 10:05:06 +08:00
Raghuram Devarakonda
7d61917d00 Improve help message for the option "--resource-policies-configmap"
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2024-05-30 14:00:45 -04:00
Lyndon-Li
2d0bca5e29 avoid unnecessary repo connect for maintenance
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-05-30 19:13:52 +08:00
Wenkai Yin(尹文开)
33633d8a02 Merge pull request #7834 from reasonerjt/fix-git-status-issue
[CP-to-main]: Fix issue in "git status" in goreleaser.sh
2024-05-28 12:42:29 +08:00
Wenkai Yin(尹文开)
94c24bd29c Merge pull request #7835 from reasonerjt/bump-up-goreleaser
Bump up goreleaser to v1.26.2
2024-05-28 12:42:12 +08:00
Daniel Jiang
284e23bef3 Bump up goreleaser to v1.26.2
Also update the configuration file according to:
https://goreleaser.com/deprecations/#archivesrlcp

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-28 11:25:17 +08:00
Daniel Jiang
d00721c874 Fix issue in "git status" in goreleaser.sh
When dry-running tag-release.sh, there's an error
"fatal: detected dubious ownership in repository at
'/github.com/vmware-tanzu/velero'"

This commit works around this issue to make sure "tag-release.sh"
can finish successfully

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-28 10:27:01 +08:00
Xun Jiang/Bruce Jiang
d9c9f77860 Merge pull request #7671 from mmorel-35/testifylint/compare
testifylint: enable compares rule
2024-05-27 12:45:16 +08:00
Xun Jiang/Bruce Jiang
c8e252cfac Merge pull request #7592 from mmorel-35/gosimple
golangci-lint(gosimple): fix test files
2024-05-27 11:14:42 +08:00
Wenkai Yin(尹文开)
18921fce5f Merge pull request #7825 from reasonerjt/fix-codespell
[CP-to-main] Fix the problems found by codespell
2024-05-24 16:43:50 +08:00
Daniel Jiang
bed10c7fe6 Fix the problems found by codespell
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-24 13:32:54 +08:00
Daniel Jiang
aae7bb00e4 Merge pull request #7820 from reasonerjt/changelog-v114
Update changelog for v1.14
2024-05-23 18:14:53 +08:00
Daniel Jiang
bd68bb4936 Update changelog for v1.14
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-23 17:20:57 +08:00
Xun Jiang/Bruce Jiang
9ac9e0d7b3 Merge pull request #7819 from reasonerjt/fix-doc-v114
Fix minor issue in doc for v1.14
2024-05-23 17:19:28 +08:00
Daniel Jiang
8c9410cff1 Fix minor issue in doc for v1.14
The upgrade link and latest config are not updated by make

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-23 16:50:58 +08:00
Daniel Jiang
05a6354bc8 Merge pull request #7816 from reasonerjt/doc-for-v114-new
User doc for v1.14
2024-05-23 16:31:21 +08:00
Daniel Jiang
62c7fef827 Merge pull request #7814 from reasonerjt/update-reamde-v114
Update README and move the implemented Designs for v1.14
2024-05-23 15:50:10 +08:00
Daniel Jiang
2276f3e7df User doc for v1.14
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-23 15:34:36 +08:00
Daniel Jiang
349c8f26c6 Update README and move the implemented Designs for v1.14
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-23 14:08:47 +08:00
Xun Jiang/Bruce Jiang
0e7fb402cd Merge pull request #7794 from blackpiglet/modify_volume_helper
Modify the volume helper logic.
2024-05-23 11:18:38 +08:00
Xun Jiang
a91d2cb036 Modify the volume helper logic.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-23 09:57:21 +08:00
Xun Jiang/Bruce Jiang
49eab81807 Merge pull request #7805 from piny940/fix-backuplog-error
Fix backup log to show error string, not index
2024-05-21 14:07:42 +08:00
lyndon-li
5943d385c1 Merge pull request #7779 from shubham-pampattiwar/vol-policy-extension-docs
Add documentation for extension of volume policy feature
2024-05-21 13:38:20 +08:00
Shubham Pampattiwar
2706667750 add documentation for extension of volume policy feature
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add more examples

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove snapshotVolumes flag req

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix indentation

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add more notes re:snapshot action

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-05-20 11:54:35 -07:00
Daniel Jiang
b4b0b9d9c8 Merge pull request #7807 from reasonerjt/upgrade-doc-114
Update the doc upgrade-to-1.14
2024-05-20 18:37:00 +08:00
Daniel Jiang
1ffb6a9d66 Update the doc upgrade-to-1.14
Tweak the command and remove the sections which include upgrading from
older versions, given v1.13.x is a prerequisite.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-05-20 17:01:20 +08:00
piny940
059effce97 Add change log 7805
Signed-off-by: piny940 <83708535+piny940@users.noreply.github.com>
2024-05-18 11:08:18 +09:00
piny940
8b6c89cd4e Fix backup log to show error string, not index
Signed-off-by: piny940 <83708535+piny940@users.noreply.github.com>
2024-05-18 11:00:47 +09:00
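A minimal Go sketch of the class of fix described above, assuming the bug was that the loop index was logged instead of the error value; the function name is illustrative, not Velero's actual code:

```go
package example

import "log"

// logErrors ranges over collected errors and logs the error text rather than
// the loop index, which is the behavior the fix above points at.
func logErrors(errs []error) {
	for i, err := range errs {
		// Buggy variant would log only the index: log.Printf("backup error: %v", i)
		log.Printf("backup error %d: %s", i, err.Error())
	}
}
```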
Matthieu MOREL
75fe761061 golangci-lint: fix gosimple linter
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-17 07:31:03 +00:00
Xun Jiang/Bruce Jiang
f654188243 Merge pull request #7802 from blackpiglet/bump_e2e_migration_and_update_test_version
Bump the E2E upgrade and migration test version.
2024-05-17 15:09:50 +08:00
Xun Jiang
291d55f154 Bump the E2E upgrade and migration test version.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-17 14:19:06 +08:00
lyndon-li
65a831ed67 Merge pull request #7762 from kaovilai/waitBackupRepoErrsVerbose
Surface errors when waiting for backupRepository
2024-05-17 10:03:19 +08:00
Matthieu MOREL
1010b04821 testifylint: enable compares rule
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-16 20:18:43 +00:00
Xun Jiang/Bruce Jiang
a0b7382e5a Merge pull request #7595 from mmorel-35/golangci-lint-config
organize golangci workflow
2024-05-16 11:31:13 +08:00
Xun Jiang/Bruce Jiang
cdd5a4fdba Merge pull request #7755 from vmware-tanzu/dependabot/github_actions/actions/cache-4
Bump actions/cache from 2 to 4
2024-05-16 11:09:52 +08:00
Guang Jiong Lou
6c2b66b480 Modify the wrong ConfigMap name in v1.13 node-agent-concurrency document. (#7715)
Fix condition matching in resource modifier when there are multiple rules

Signed-off-by: lou <alex1988@outlook.com>
Co-authored-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-14 17:01:50 -04:00
Matthieu MOREL
bc1e88cb27 rename golangci-lint config file and use golangci-lint-action to lint
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-14 19:45:03 +00:00
Xun Jiang/Bruce Jiang
27392d3411 Support more PROW commands. (#7784)
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-14 14:19:25 -04:00
dependabot[bot]
93216e4a3a Bump golangci/golangci-lint-action from 5 to 6 (#7791)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 5 to 6.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:18:25 -04:00
Daniel Jiang
7e19cdbcc6 Merge pull request #7757 from kaovilai/addExistingResourcePolicyRestoreCRValidation
Add existingResourcePolicy restore CR validation to controller.
2024-05-14 11:35:58 +08:00
danfeng
0b5f10efbe Merge pull request #7598 from mmorel-35/azure-storage-blob-go
Migrate from github.com/Azure/azure-storage-blob-go to github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
2024-05-14 10:34:03 +08:00
Wenkai Yin(尹文开)
23135d0d21 Merge pull request #7790 from blackpiglet/modify_nodeagent_cocurrency
Modify the wrong ConfigMap name in v1.13 node-agent-concurrency docum…
2024-05-13 16:50:42 +08:00
Xun Jiang
ef8f3b5cb8 Modify the wrong ConfigMap name in v1.13 node-agent-concurrency document.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-13 16:27:27 +08:00
danfeng
3c37c843f8 Merge pull request #7591 from mmorel-35/noctx
golangci-lint(noctx): fix test files
2024-05-13 13:29:07 +08:00
danfeng
1ca1178f76 Merge pull request #7788 from danfengliu/fix-makefile-param-issue
Fix makefile param issue
2024-05-13 10:24:32 +08:00
danfengl
85495eef48 Fix makefile param issue
Using VERSION instead of VELERO_VERSION, since VERSION is passed from root Makefile.
Signed-off-by: danfengl <danfengl@vmware.com>
2024-05-11 05:40:03 +00:00
Tiger Kaovilai
3c937d42dd ignore .git dir when formatting
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-05-10 14:12:29 -04:00
Matthieu MOREL
173f704796 golangci-lint(noctx): fix test files
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-10 10:05:28 +00:00
Matthieu MOREL
14e98b89ad Migrate from github.com/Azure/azure-storage-blob-go to github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-10 09:24:35 +00:00
Xun Jiang/Bruce Jiang
f7c0244183 Merge pull request #7776 from mmorel-35/kind/changelog-not-required
split labels configurations add more prow commands
2024-05-10 15:19:31 +08:00
Tiger Kaovilai
2c6853b6e8 Surface errors when waiting for backupRepository
Surface errors such as those found in https://github.com/vmware-tanzu/velero/issues/6928#issuecomment-1759369183

This makes the errors easier to understand than "timed out waiting for the condition"

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-05-09 17:20:33 -04:00
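A minimal Go sketch of the general pattern, assuming the fix records the last observed error while polling and wraps it into the timeout message; names are illustrative and not Velero's actual code:

```go
package example

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitWithLastError polls check until it succeeds or the timeout elapses. On
// timeout it wraps the last observed error so callers see the real cause
// instead of only "timed out waiting for the condition".
func waitWithLastError(ctx context.Context, interval, timeout time.Duration, check func() error) error {
	var lastErr error
	deadline := time.Now().Add(timeout)
	for {
		if lastErr = check(); lastErr == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for backup repository: %w", lastErr)
		}
		select {
		case <-ctx.Done():
			return errors.Join(ctx.Err(), lastErr)
		case <-time.After(interval):
		}
	}
}
```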
qiuming
7563a453fb Merge pull request #7701 from qiuming-best/merge-makefile
Merge makefile for e2e perf test
2024-05-09 15:28:13 +08:00
qiuming
b8a48c0ef8 Merge pull request #7691 from qiuming-best/e2e-parallel-upload-download
Add E2E test for parallel files upload and download
2024-05-09 15:27:06 +08:00
Matthieu MOREL
3650337fff split labels configurations add more prow commands
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-authored-by: Xun Jiang <blackpigletbruce@gmail.com>
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-09 06:56:04 +00:00
Ming Qiu
a628cb525f Add E2E test for parallel files upload and download
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-05-09 03:04:04 +00:00
Wenkai Yin(尹文开)
0d85a647b5 Merge pull request #7772 from Lyndon-Li/issue-fix-7535
Issue fix 7535: don't skip must have resources for label selector
2024-05-08 15:44:04 +08:00
danfeng
43d1568be6 Merge pull request #7621 from danfengliu/add-checkpoint-in-fs-backup-deletion
Add checkpoint in fs backup deletion
2024-05-08 15:42:56 +08:00
danfengl
61c4d7b148 Add checkpoint for FS backup deletion test
As per PR #7281, when the repository count is more than 1, snapshot deletion takes a faster path, so the test should ensure there is more than 1 FS backup repository per backup.

Signed-off-by: danfengl <danfengl@vmware.com>
2024-05-07 07:52:46 +00:00
qiuming
3cbf2eb4e2 Merge pull request #7752 from qiuming-best/maintenance-job-start-fix
Maintenance job should not be launched if the repo already has a runn…
2024-05-07 15:14:56 +08:00
Ming Qiu
e91d9b906c Maintenance job should not be launched if the repo already has a running one
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-05-07 06:12:05 +00:00
Xun Jiang/Bruce Jiang
dda6c1f37b Merge pull request #7769 from mmorel-35/kind/changelog-not-required
fix `/kind changelog-not-required`
2024-05-07 11:10:22 +08:00
Lyndon-Li
55f47c801a Merge branch 'main' into issue-fix-7535 2024-05-06 18:40:59 +08:00
Lyndon-Li
0a5c6db2b9 issue 7535: don't skip must have resources for label selector
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-05-06 18:39:32 +08:00
Matthieu MOREL
a90c0a420b fix /kind changelog-not-required
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-05-06 06:50:20 +00:00
Tiger Kaovilai
e1bef5b6c2 Add existingResourcePolicy restore CR validation to controller.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-05-01 11:39:03 -04:00
dependabot[bot]
4d48273a24 Bump golangci/golangci-lint-action from 4 to 5 (#7756)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 4 to 5.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 09:43:16 -04:00
dependabot[bot]
516d06c7d1 Bump actions/cache from 2 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 2 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-29 19:58:24 +00:00
qiuming
8f78aaa5f6 Merge pull request #7745 from qiuming-best/maintenance-job-fix
[v1.14 test] Fix maintenance job launched immediately after prune error
2024-04-29 11:23:18 +08:00
danfeng
a798182d61 Merge pull request #7590 from danfengliu/add-vsc-checkpoint-for-migration-test
Add checkpoint of VSC for data movement migration test
2024-04-29 10:01:31 +08:00
danfengl
82fc557bd1 Add checkpoint of VSC for data movement migration test
1. In the data movement scenario, the volumesnapshotcontent created by the Velero backup will be deleted instead of retained as in the CSI scenario, so add
a checkpoint for the data movement scenario to verify that no volumesnapshotcontent is left after the Velero backup;

2. Fix the global context variable issue: the context variable is not effective because it is initialized right at the very beginning of
all tests instead of at the beginning of each test, so if someone scripts a new E2E test and does not overwrite it in the test body, the
test will fail if it is triggered one hour later;

3. Because the CSI plugin is deprecated, it broke the migration tests, since v1.13 still needs to install the CSI plugin for the test.

Signed-off-by: danfengl <danfengl@vmware.com>
2024-04-28 09:47:08 +00:00
Ming Qiu
5eae542762 Fix maintenance job launched immediately after prune error
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-26 09:50:59 +00:00
Wenkai Yin(尹文开)
6f7807cb52 Merge pull request #7740 from kaovilai/schedule-docs-pause
Add Schedule.spec.pause to docs
2024-04-26 16:59:39 +08:00
Tiger Kaovilai
bffe4f9f56 Add Schedule.spec.pause to docs
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-04-25 12:30:58 -04:00
Scott Seago
7873ced0f1 updated design to remove biav3 requirement for everything, added alternatives
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-04-24 17:32:08 -04:00
qiuming
159a49f0b2 Merge pull request #7733 from yanji09/update-hackmd-link-for-community-meeting-notes
Update hackmd link for community meeting notes
2024-04-24 14:16:49 +08:00
Jiaolin Yang
c894b4bff1 Update hackmd link for community meeting notes
Update hackmd link for community meeting notes.

Signed-off-by: Jiaolin Yang <Jiaolin.Yang@broadcom.com>
2024-04-24 13:50:13 +08:00
lyndon-li
01a2d952ac Merge pull request #7664 from shubham-pampattiwar/vol-policy-extension-impl
Extend Volume Policies feature to support more actions
2024-04-24 13:39:36 +08:00
Shubham Pampattiwar
8d2bef2486 Extend Volume Policies feature to support more actions
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix volume policy action execution

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove unused files

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix CI linter errors

fix linter errors

address pr review comments

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix via make update cmd

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

address PR feedback and add tests

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix codespell

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix ci linter checks

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove volsToExclude processing from volume policy logic and add tests

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix ci linter issue

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-04-23 12:54:14 -07:00
danfeng
e718a1325d Merge pull request #7727 from danfengliu/fix-1.14-nightly-issues
Fix 1.14 nightly issues
2024-04-23 17:14:15 +08:00
Daniel Jiang
d16003695a Merge pull request #7702 from reasonerjt/update-kind-k8s
Bump up the version of KinD and k8s in github actions
2024-04-23 17:08:49 +08:00
qiuming
7fd365c29d Merge pull request #7724 from vmware-tanzu/dependabot/github_actions/cirrus-actions/rebase-1.8
Bump cirrus-actions/rebase from 1.3.1 to 1.8
2024-04-23 14:55:36 +08:00
qiuming
e7a9d2e457 Merge pull request #7723 from vmware-tanzu/dependabot/github_actions/actions/checkout-4
Bump actions/checkout from 2 to 4
2024-04-23 14:55:13 +08:00
qiuming
7a72fe3be0 Merge pull request #7721 from vmware-tanzu/dependabot/github_actions/actions/stale-9.0.0
Bump actions/stale from 6.0.1 to 9.0.0
2024-04-23 14:54:07 +08:00
qiuming
9e84926bf1 Merge pull request #7722 from vmware-tanzu/dependabot/github_actions/github/codeql-action-3
Bump github/codeql-action from 2 to 3
2024-04-23 14:53:26 +08:00
qiuming
cc3f32410c Merge pull request #7720 from vmware-tanzu/dependabot/github_actions/jpmcb/prow-github-actions-1.1.3
Bump jpmcb/prow-github-actions from 1.1.2 to 1.1.3
2024-04-23 14:52:28 +08:00
danfengl
8a3f2f41e4 Fix 1.14 nightly issues
1. Add sleep for native snapshot tests when using the test.go interface;
2. Add --confirm to the velero plugin add CLI, since the new feature introduced this flag.

Signed-off-by: danfengl <danfengl@vmware.com>
2024-04-23 05:45:40 +00:00
dependabot[bot]
6550fc94bc Bump cirrus-actions/rebase from 1.3.1 to 1.8
Bumps [cirrus-actions/rebase](https://github.com/cirrus-actions/rebase) from 1.3.1 to 1.8.
- [Release notes](https://github.com/cirrus-actions/rebase/releases)
- [Commits](https://github.com/cirrus-actions/rebase/compare/1.3.1...1.8)

---
updated-dependencies:
- dependency-name: cirrus-actions/rebase
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 19:38:24 +00:00
dependabot[bot]
eed655dddd Bump actions/checkout from 2 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 19:38:20 +00:00
dependabot[bot]
8c7f759002 Bump github/codeql-action from 2 to 3
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2 to 3.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 19:38:14 +00:00
dependabot[bot]
2c9ff8b6d1 Bump actions/stale from 6.0.1 to 9.0.0
Bumps [actions/stale](https://github.com/actions/stale) from 6.0.1 to 9.0.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v6.0.1...v9.0.0)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 19:38:07 +00:00
dependabot[bot]
01f25db1b4 Bump jpmcb/prow-github-actions from 1.1.2 to 1.1.3
Bumps [jpmcb/prow-github-actions](https://github.com/jpmcb/prow-github-actions) from 1.1.2 to 1.1.3.
- [Release notes](https://github.com/jpmcb/prow-github-actions/releases)
- [Commits](https://github.com/jpmcb/prow-github-actions/compare/v1.1.2...v1.1.3)

---
updated-dependencies:
- dependency-name: jpmcb/prow-github-actions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 19:38:03 +00:00
Daniel Jiang
da2267fa3d Bump up the version of KinD and k8s in github actions
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-04-22 18:17:15 +08:00
Xun Jiang/Bruce Jiang
5f9c53af6e Merge pull request #7697 from blackpiglet/backup_volumeinfo_cli_update
Modify namespace filter logic for backup with label selector
2024-04-22 15:57:50 +08:00
qiuming
9d66438c1f Merge pull request #7710 from danfengliu/rm-csi-plugin
Remove CSI plugin in E2E test
2024-04-22 15:50:26 +08:00
danfengl
a3bd26acd9 Remove CSI plugin in E2E test
Signed-off-by: danfengl <danfengl@vmware.com>
2024-04-22 07:13:40 +00:00
lyndon-li
d6a7319ff9 Merge pull request #7713 from Lyndon-Li/issue-fix-7712
Issue 7712: don't append nil error for BatchForget of Restic path
2024-04-22 13:05:13 +08:00
lyndon-li
9be2cdb6fe Merge pull request #7711 from blackpiglet/resolve_security_alert_202404
Fix CVEs reported in GitHub security.
2024-04-22 11:13:47 +08:00
Lyndon-Li
776efc4460 issue 7712: don't append nil error for BatchForget of restic path
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-22 11:10:45 +08:00
Xun Jiang
a01ef11c92 Fix CVEs reported in GitHub security.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-22 10:05:10 +08:00
Daniel Jiang
22b94654a4 Merge pull request #7686 from kaovilai/release-plan-plugin
Document plugin release plans as part of roadmap/milestone #6629
2024-04-19 19:48:21 +08:00
Xun Jiang
884bcbec98 Fix the typecheck error reported by the lint GitHub action.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-19 18:41:16 +08:00
Ming Qiu
a0fb7398cf Merge makefile for e2e perf test
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-19 02:15:30 +00:00
Scott Seago
9219e588d9 Add design for velero backup performance improvements
Signed-off-by: Scott Seago <sseago@redhat.com>
2024-04-18 16:02:39 -04:00
Xun Jiang
2eeaf4d55e Modify namespace filter logic for backup with label selector.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-18 10:30:59 +08:00
Xun Jiang
f1f0c8e5a7 Add size for DataMovement.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-18 10:30:59 +08:00
Daniel Jiang
f04fbbcc41 Merge pull request #7687 from reasonerjt/restore-desc-vol-info
Display CSI snapshot restores in restore describe
2024-04-17 15:39:55 +08:00
Xun Jiang/Bruce Jiang
b46fc6b4c7 Merge pull request #7692 from Lyndon-Li/pin-kopia-0.17.0
Pin kopia to 0.17.0
2024-04-17 14:28:25 +08:00
qiuming
224fc61987 Merge pull request #7680 from ywk253100/240415_azure
Use specific credential rather than the credential chain for Azure
2024-04-17 13:21:50 +08:00
Lyndon-Li
45b1b87055 pin kopia to 0.17.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-17 13:15:38 +08:00
Qi Xu
498a239e5b Modify hook docs for clarity on displaying hook execution results (#7679)
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-04-17 09:37:04 +05:30
Daniel Jiang
2197cab3db Display CSI snapshot restores in restore describe
This commit makes a change to the CLI so that `velero restore describe` will
download the restore volume info and render the CSI snapshot restores based
on its content.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-04-16 17:08:05 +08:00
Xun Jiang/Bruce Jiang
bc29471ed6 Merge pull request #7619 from allenxu404/post_restore_hook_enhancement
Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase
2024-04-16 15:54:46 +08:00
qiuming
557ed915f3 Merge pull request #7684 from vmware-tanzu/dependabot/github_actions/docker/setup-qemu-action-3
Bump docker/setup-qemu-action from 1 to 3
2024-04-16 10:23:20 +08:00
qiuming
d8e3419754 Merge pull request #7681 from vmware-tanzu/dependabot/github_actions/actions/upload-artifact-4
Bump actions/upload-artifact from 2 to 4
2024-04-16 10:22:43 +08:00
qiuming
1de2bfe310 Merge pull request #7682 from vmware-tanzu/dependabot/github_actions/kentaro-m/auto-assign-action-2.0.0
Bump kentaro-m/auto-assign-action from 1.1.1 to 2.0.0
2024-04-16 10:22:06 +08:00
qiuming
d9651ab882 Merge pull request #7683 from vmware-tanzu/dependabot/github_actions/codecov/codecov-action-4
Bump codecov/codecov-action from 2 to 4
2024-04-16 10:21:18 +08:00
Xun Jiang/Bruce Jiang
6e5b438591 Merge pull request #7685 from vmware-tanzu/dependabot/github_actions/necojackarc/auto-request-review-0.13.0
Bump necojackarc/auto-request-review from 0.7.0 to 0.13.0
2024-04-16 10:08:00 +08:00
Tiger Kaovilai
aa494e8c6f Document plugin release plans as part of roadmap/milestone #6629
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-04-15 16:32:12 -04:00
dependabot[bot]
61ca69891f Bump necojackarc/auto-request-review from 0.7.0 to 0.13.0
Bumps [necojackarc/auto-request-review](https://github.com/necojackarc/auto-request-review) from 0.7.0 to 0.13.0.
- [Release notes](https://github.com/necojackarc/auto-request-review/releases)
- [Commits](https://github.com/necojackarc/auto-request-review/compare/v0.7.0...v0.13.0)

---
updated-dependencies:
- dependency-name: necojackarc/auto-request-review
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 19:55:30 +00:00
dependabot[bot]
5023f5ae26 Bump docker/setup-qemu-action from 1 to 3
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 19:55:27 +00:00
dependabot[bot]
cff5b65ce7 Bump codecov/codecov-action from 2 to 4
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 19:55:24 +00:00
dependabot[bot]
08d0c01f75 Bump kentaro-m/auto-assign-action from 1.1.1 to 2.0.0
Bumps [kentaro-m/auto-assign-action](https://github.com/kentaro-m/auto-assign-action) from 1.1.1 to 2.0.0.
- [Release notes](https://github.com/kentaro-m/auto-assign-action/releases)
- [Commits](https://github.com/kentaro-m/auto-assign-action/compare/v1.1.1...v2.0.0)

---
updated-dependencies:
- dependency-name: kentaro-m/auto-assign-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 19:55:21 +00:00
dependabot[bot]
61706ee2ea Bump actions/upload-artifact from 2 to 4
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 19:55:19 +00:00
Matthieu MOREL
facfb9552f migrating to sdk/resourcemanager/**/arm** from services/**/mgmt/** (#7596)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-04-15 09:55:52 -04:00
Wenkai Yin(尹文开)
40b0683dfc Use specific credential rather than the credential chain for Azure
Use specific credential rather than the credential chain for Azure

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-15 19:27:30 +08:00
allenxu404
28552258ae Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-04-15 17:49:36 +08:00
qiuming
0e5536370f Merge pull request #7655 from qiuming-best/v1.14-doc
Add maintenance job doc
2024-04-15 16:28:23 +08:00
qiuming
8754e27608 Merge pull request #7678 from eveneast/main
Fix some comments
2024-04-15 16:24:04 +08:00
Ming Qiu
f5e2552c5a Add maintenance job doc
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-15 07:37:57 +00:00
Wenkai Yin(尹文开)
f3295ccf08 Merge pull request #7666 from reasonerjt/bumpup-to-go1.22-new
Bump up to go1.22
2024-04-15 15:37:32 +08:00
eveneast
d0350960b6 Fix some comments
Signed-off-by: eveneast <qcqs@foxmail.com>
2024-04-15 14:48:20 +08:00
Wenkai Yin(尹文开)
74e355f3c8 Merge pull request #7661 from blackpiglet/merge_csi_ut
Add more UT for the CSI plugins.
2024-04-15 11:59:31 +08:00
Daniel Jiang
1b3fe95980 Bump up golang to v1.22
This commit bumps up the Golang version used for building and testing Velero to v1.22.

It also updates controller-gen to v0.14.0 to fix an issue under the new
version of Go.
For more details, see https://github.com/golang/go/issues/65637

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-04-14 20:15:51 +08:00
Xun Jiang
d3cc42d577 Change the timeout handling code due to third-party package change
The wait error changed from `timed out waiting for the condition`
to `context deadline exceeded`.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-13 23:27:02 +08:00
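A small Go sketch of why checking the error value is more robust than matching the message text after this kind of library change; this is a generic illustration, not the actual Velero code:

```go
package example

import (
	"context"
	"errors"
	"strings"
)

// isWaitTimeout treats both the new context-based error and the legacy
// message as a timeout, so the handling survives the third-party change
// described above.
func isWaitTimeout(err error) bool {
	if errors.Is(err, context.DeadlineExceeded) {
		return true
	}
	return err != nil && strings.Contains(err.Error(), "timed out waiting for the condition")
}
```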
Xun Jiang
30995bcbd2 Add more UT for the CSI plugins.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-13 23:22:43 +08:00
Xun Jiang/Bruce Jiang
c888f51817 Merge pull request #7662 from Lyndon-Li/issue-fix-7648
Issue fix 7648: avoid snapshot leak on expose failure
2024-04-12 17:28:27 +08:00
qiuming
43aa89256b Merge pull request #7656 from blackpiglet/csi_doc_change
CSI doc change and remove the CSI feature verifier
2024-04-12 17:12:35 +08:00
Xun Jiang
59eeec268b Update CSI document. Remove the CSI plugin verifier.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-12 13:51:20 +08:00
Wenkai Yin(尹文开)
3c377bc3ec Merge pull request #7630 from reasonerjt/restore-vol-info
Track and persist restore volume info
2024-04-12 11:24:05 +08:00
Wenkai Yin(尹文开)
ab57112347 Merge pull request #7654 from blackpiglet/fix_push_action
Modify the GCP auth method because of the action version update.
2024-04-12 10:50:02 +08:00
Lyndon-Li
dcf760d5f1 issue 7648:avoid snapshot leak on expose failure
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-12 10:41:39 +08:00
Lyndon-Li
61061d5c83 issue 7648: merge main
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-12 10:18:02 +08:00
Lyndon-Li
bf03938dd2 issue 7648: don't leak snapshot on failure
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-11 19:23:57 +08:00
Xun Jiang/Bruce Jiang
f25c154709 Merge pull request #7569 from ywk253100/240326_namespace
Check the existence of the namespaces provided in the "--include-namespaces" option
2024-04-11 19:00:33 +08:00
Daniel Jiang
0a280e5786 Track and persist restore volume info
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-04-11 17:32:18 +08:00
Xun Jiang
9551b8e4c8 Modify the GCP auth method because of the action version update.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-11 14:22:57 +08:00
lyndon-li
500e5aeeca Merge pull request #7653 from qiuming-best/data-mover-restore-parallel
Add data download parallel files download configuration
2024-04-11 13:02:10 +08:00
Ming Qiu
89967c1cb6 Add data download parallel files download configuration
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-11 02:54:22 +00:00
Xun Jiang/Bruce Jiang
6ef38365ea Merge pull request #7609 from blackpiglet/merge_csi
Merge CSI plugin code.
2024-04-11 10:38:58 +08:00
qiuming
bbb5d7da03 Merge pull request #7640 from Lyndon-Li/data-mover-node-selection-doc
Data mover node selection doc
2024-04-11 10:33:15 +08:00
lyndon-li
218aa8655f Merge pull request #7523 from 27149chen/fix-for-resource-conversion
do not skip unknown gvr at the beginning and get new gr when kind is changed
2024-04-11 10:31:14 +08:00
Xun Jiang/Bruce Jiang
85e8b73d8d Merge pull request #7637 from vmware-tanzu/dependabot/github_actions/google-github-actions/setup-gcloud-2
Bump google-github-actions/setup-gcloud from 0 to 2
2024-04-11 10:00:35 +08:00
Xun Jiang/Bruce Jiang
8df4e6aded Merge branch 'main' into merge_csi
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2024-04-10 18:54:16 +08:00
Xun Jiang/Bruce Jiang
b91b907f06 Merge pull request #7639 from vmware-tanzu/dependabot/github_actions/docker/setup-buildx-action-3
Bump docker/setup-buildx-action from 1 to 3
2024-04-10 18:45:38 +08:00
Xun Jiang/Bruce Jiang
7935236db0 Merge pull request #7584 from mmorel-35/json-patch/v5
build(deps): bump json-patch to v5.8.0
2024-04-10 18:40:16 +08:00
clonefetch
474dc824e7 chore: fix function names in comment (#7633)
Signed-off-by: clonefetch <c0217@outlook.com>
2024-04-10 15:15:29 +05:30
Xun Jiang
31e140919a Merge CSI plugin code.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-10 14:53:29 +08:00
qiuming
63fe9f1f1f Merge pull request #7646 from qiuming-best/label-action-fix
Modify labels config for label action
2024-04-10 10:41:33 +08:00
Ming Qiu
69def18ccf Modify labels config for label action
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-09 09:31:50 +00:00
Wenkai Yin(尹文开)
7b3e6a4612 Merge pull request #7634 from ywk253100/240408_periodical_queue
Empty the list before next round of listing
2024-04-09 16:57:26 +08:00
qiuming
c3a3992be5 Merge pull request #7641 from qiuming-best/test-action
Fix labeler action failure
2024-04-09 16:51:26 +08:00
Ming Qiu
836300f583 Fix labeler action failure
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-09 08:20:21 +00:00
Wenkai Yin(尹文开)
d631517298 Merge pull request #7610 from reasonerjt/restore-vol-info-design
Add design to introduce restore volume info
2024-04-09 15:10:11 +08:00
qiuming
6bd9d4aee4 Merge pull request #7638 from vmware-tanzu/dependabot/github_actions/actions/setup-go-5
Bump actions/setup-go from 4 to 5
2024-04-09 11:23:10 +08:00
qiuming
a4fc81df42 Merge pull request #7636 from vmware-tanzu/dependabot/github_actions/actions/labeler-5
Bump actions/labeler from 3 to 5
2024-04-09 11:21:41 +08:00
qiuming
4697ed9a50 Merge pull request #7635 from vmware-tanzu/dependabot/github_actions/docker/login-action-3
Bump docker/login-action from 2 to 3
2024-04-09 11:20:54 +08:00
Lyndon-Li
080a61b43d data mover node selection doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-09 10:23:02 +08:00
dependabot[bot]
9274f4b664 Bump docker/setup-buildx-action from 1 to 3
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 02:03:14 +00:00
dependabot[bot]
58838fc5c6 Bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 02:03:10 +00:00
dependabot[bot]
c36dc9263e Bump google-github-actions/setup-gcloud from 0 to 2
Bumps [google-github-actions/setup-gcloud](https://github.com/google-github-actions/setup-gcloud) from 0 to 2.
- [Release notes](https://github.com/google-github-actions/setup-gcloud/releases)
- [Changelog](https://github.com/google-github-actions/setup-gcloud/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google-github-actions/setup-gcloud/compare/v0...v2)

---
updated-dependencies:
- dependency-name: google-github-actions/setup-gcloud
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 02:03:09 +00:00
dependabot[bot]
268978d2ab Bump actions/labeler from 3 to 5
Bumps [actions/labeler](https://github.com/actions/labeler) from 3 to 5.
- [Release notes](https://github.com/actions/labeler/releases)
- [Commits](https://github.com/actions/labeler/compare/v3...v5)

---
updated-dependencies:
- dependency-name: actions/labeler
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 02:03:04 +00:00
dependabot[bot]
e0a690c402 Bump docker/login-action from 2 to 3
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 02:03:02 +00:00
qiuming
b755433f26 Merge pull request #7594 from mmorel-35/dependabot/github-actions
dependabot: support github-actions updates
2024-04-09 10:02:37 +08:00
Wenkai Yin(尹文开)
91774af54d Empty the list before next round of listing
Empty the list before next round of listing

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-08 18:57:29 +08:00
Wenkai Yin(尹文开)
54462c4f7b Merge pull request #7631 from ywk253100/240408_codecoverage
Upgrade codecov action to v4
2024-04-08 17:09:04 +08:00
Wenkai Yin(尹文开)
6c215d7915 Upgrade codecov action to v4
Upgrade codecov action to v4

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-08 16:31:24 +08:00
Shubham Pampattiwar
f85f87759c add design for Extending VolumePolicies to support more actions (#6956)
add changelog



fix codespell



update codeblocks for language syntax rendering



redo design



update volume policies design



add notes and modify design based on community feedback



add future scope

add bia csi snapshot action details



add volumehelper package in implementation section



fix codespell



introduce volumehelper interface and volumepolicyhelper struct

address feedback regarding volumehelper interface and its funcs



fix codespell

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-04-03 11:38:42 -04:00
Xun Jiang/Bruce Jiang
ff45680430 Merge pull request #7622 from Lyndon-Li/issue-fix-7246
Issue 7246: document the behavior for repo snapshot deletion
2024-04-03 15:05:58 +08:00
Lyndon-Li
d66d00a82c issue 7246: document the behavior for repo snapshot deletion
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-03 13:53:24 +08:00
Xun Jiang/Bruce Jiang
8de622e37c Merge pull request #7618 from Lyndon-Li/pin-kopia-to-latest
Pin kopia to the latest commit
2024-04-03 13:21:00 +08:00
Lyndon-Li
0392e31c3d pin kopia to the latest commit
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-03 11:01:52 +08:00
Xun Jiang/Bruce Jiang
c2d267d894 Merge pull request #7611 from qiuming-best/datamover-cancel
Fix cancel bug && adjust StartTimestamp for data mover
2024-04-03 10:49:06 +08:00
Ming Qiu
a2c1a5a113 Fix cancel bug && adjust StartTimestamp for data mover
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-03 02:21:26 +00:00
qiuming
dcd62b908f Merge pull request #7617 from Lyndon-Li/issue-fix-7583
Issue 7583: set backupName optional for Restore CRD
2024-04-03 10:19:24 +08:00
qiuming
c7c59db2a2 Merge pull request #7604 from Lyndon-Li/resource-consumption-in-doc
Add resource consumption in fs-backup and data mover doc
2024-04-03 10:06:04 +08:00
Lyndon-Li
711609e00e issue 7583: set backupName optional for Restore CRD
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-03 10:00:55 +08:00
Daniel Jiang
ab5ee7b6ff Add design to introduce restore volume info
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-04-02 14:58:07 +08:00
Wenkai Yin(尹文开)
d974cd3f29 Merge pull request #7607 from blackpiglet/modify_uninstall_document
Modify the uninstall.md document.
2024-04-02 09:19:23 +08:00
Xun Jiang
07ff562209 Modify the uninstall.md document.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-01 19:18:56 +08:00
Lyndon-Li
49cd34535e add resource consumption in fs-backup and data mover doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-01 18:29:41 +08:00
qiuming
3465e8cddf Merge pull request #7558 from qiuming-best/uploader-fast-fail
Fix snapshot leak for backup
2024-04-01 15:24:34 +08:00
lyndon-li
c9b41ba9d2 Merge pull request #7572 from webwurst/patch-1
Link to merged design document
2024-04-01 15:04:18 +08:00
lyndon-li
de2cb525aa Merge pull request #7602 from alingse/fix-append-all-when-range-it
Fix: append all slice data when range for it
2024-04-01 14:07:13 +08:00
lyndon-li
f24eabd8b8 Merge pull request #7603 from Lyndon-Li/fix-ut-fail
Fix ut fail
2024-04-01 13:55:16 +08:00
qiuming
9b705033b2 Merge pull request #7567 from danfengliu/debug-ns-deletion-hung-issue
Delete ns using kubectl
2024-04-01 13:31:55 +08:00
Lyndon-Li
90e9efc544 fix ut fail
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-04-01 13:28:37 +08:00
Xun Jiang/Bruce Jiang
58effeb879 Merge pull request #7566 from kaovilai/biaOperationErrorsPluginName
Add confirm flag to velero plugin add
2024-04-01 12:47:53 +08:00
Xun Jiang/Bruce Jiang
75962653c5 Merge pull request #7554 from blackpiglet/7357_fix
Support update the backup VolumeInfos by the Async ops result.
2024-04-01 11:05:33 +08:00
Ming Qiu
3d5282e12b Fix snapshot leak for backup
Signed-off-by: Ming Qiu <ming.qiu@broadcom.com>
2024-04-01 03:02:24 +00:00
danfengl
b605f9dbd5 Delete ns using kubectl
Signed-off-by: danfengl <danfengl@vmware.com>
2024-04-01 02:40:26 +00:00
alingse
2ef22c082f Fix: append all slice data when range for it
Signed-off-by: alingse <alingse@foxmail.com>
2024-04-01 00:11:50 +08:00
Matthieu MOREL
b52b0a9650 dependabot: support github-actions updates
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-03-30 14:05:05 +01:00
Matthieu MOREL
a9085033b2 build(deps): bump json-patch to v5.8.0
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-03-29 14:08:33 +00:00
Matthieu MOREL
3d6dab0708 lint(ginkgolinter): expect (not)to HaveOccurred (#7565)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-03-29 10:05:48 -04:00
lyndon-li
67bd694d1b Merge pull request #7437 from Lyndon-Li/issue-fix-7036
Issue 7036: node selection for data mover backup
2024-03-29 17:04:40 +08:00
lyndon-li
01ef3a3e62 Merge pull request #7589 from Lyndon-Li/kopia-index-compaction-during-maintenance
Kopia: index compaction during maintenance
2024-03-29 15:53:57 +08:00
Xun Jiang/Bruce Jiang
d982058d3b Merge pull request #7588 from ywk253100/240329_ut
Improve the UT code coverage for pkg/podvolume
2024-03-29 15:36:54 +08:00
Lyndon-Li
18976c0a62 kopia: index compaction during maintenance
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 15:24:47 +08:00
qiuming
beb221b15c Merge pull request #7587 from blackpiglet/move_actions
Add actions directory for backup and restore.
2024-03-29 15:10:38 +08:00
Wenkai Yin(尹文开)
039fc20b65 Improve the UT code coverage for pkg/podvolume
Improve the UT code coverage for pkg/podvolume

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-29 15:02:30 +08:00
Xun Jiang
5462035469 Delete the unneeded pvRestorer action in
handleSkippedPVHasRetainPolicy

According to the comment, calling executePVAction aims to reset the PV's
claimRef, but the reset logic has been handled by resetVolumeBindingInfo
since release-1.4.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-29 14:12:12 +08:00
Xun Jiang
08c93b4145 Add actions directory for backup and restore.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-29 13:21:50 +08:00
lyndon-li
81da8e67c7 Merge pull request #7585 from Lyndon-Li/issue-fix-7535
Issue fix 7535
2024-03-29 10:48:48 +08:00
Lyndon-Li
9b74643b3a Merge branch 'main' into issue-fix-7535 2024-03-29 10:28:53 +08:00
Wenkai Yin(尹文开)
a3eeb7dad9 Merge pull request #7571 from ywk253100/240322_concurrency
Improve the concurrency for PVBs in different pods
2024-03-29 10:26:26 +08:00
Lyndon-Li
070e99da3d issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 10:14:54 +08:00
Wenkai Yin(尹文开)
8d10b68eda Improve the concurrency for PVBs in different pods
Improve the concurrency for PVBs in different pods

Fixes #6676

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-29 09:58:50 +08:00
Xun Jiang
b06d7a467f Support update the backup VolumeInfos by the Async ops result.
1. Add PutBackupVolumeInfos method.
2. Add CompletionTimestamp in VolumeInfo.
3. Add Size in SnapshotDataMovementInfo.
4. Update CompletionTimestamp, SnapshotHandle, RetainedSnapshot,
   and Size in VolumeInfo when the DataUpload operation completes.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-28 19:52:44 +08:00
Lyndon-Li
65e8ddb89b Merge branch 'main' into issue-fix-7535 2024-03-28 18:37:39 +08:00
Xun Jiang/Bruce Jiang
d640cc16ab Merge pull request #7573 from mmorel-35/golangci-lint-exclude-rules
golangci-lint: use exclude-rules instead of skip-files and skip-dirs
2024-03-28 16:39:40 +08:00
qiuming
e80bdcf2e2 Merge pull request #7451 from qiuming-best/maintenance-job
Add repository maintenance job
2024-03-28 14:47:15 +08:00
Ming Qiu
8d63c76c92 Add maintenance job
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-03-28 03:22:06 +00:00
Matthieu MOREL
ef04ef6361 golangci-lint: use exclude-rules instead of skip-files and skip-dirs
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-03-27 20:17:34 +00:00
Matthieu MOREL
3c704ba1b1 linter(testifylint): use Len or Empty for arrays testing (#7555)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2024-03-27 14:16:58 -04:00
Xun Jiang/Bruce Jiang
7a9d7a83ed Update the Velero CSI version in csi.md (#7570)
Describe how to support Velero with Kopia
when ReadOnlyRootFilesystem is enabled.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-27 14:14:34 -04:00
Tobias Bradtke
53215ec2cd Link to merged design document
Signed-off-by: Tobias Bradtke <webwurst@gmail.com>
2024-03-27 16:03:45 +01:00
Wenkai Yin(尹文开)
35d2534e19 Check the existence of the namespaces provided in the "--include-namespaces" option
Check the existence of the namespaces provided in the "--include-namespaces" opt
ion and reports validation error if not found

Fixes #7431

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-27 18:37:03 +08:00
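A hedged Go sketch of the kind of validation described above, using standard client-go calls; the function and parameter names are illustrative, not the actual Velero implementation:

```go
package example

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// validateIncludedNamespaces returns a validation error for any namespace in
// the --include-namespaces list that does not exist in the cluster.
func validateIncludedNamespaces(ctx context.Context, client kubernetes.Interface, namespaces []string) error {
	for _, ns := range namespaces {
		if _, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err != nil {
			if apierrors.IsNotFound(err) {
				return fmt.Errorf("namespace %q does not exist", ns)
			}
			return fmt.Errorf("checking namespace %q: %w", ns, err)
		}
	}
	return nil
}
```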
Wenkai Yin(尹文开)
cd0632c5db Merge pull request #7549 from ywk253100/240318_cert
Support certificate-based authentication for Azure
2024-03-27 18:15:32 +08:00
lyndon-li
a2c87fc8b2 Merge pull request #7438 from Lyndon-Li/batch-delete-snapshot
Issue 7281: batch delete snapshot
2024-03-27 13:31:07 +08:00
Lyndon-Li
d538fc87ad batch delete snapshot
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-27 11:21:51 +08:00
lyndon-li
25188248d6 Merge pull request #7559 from Lyndon-Li/open-kopia-with-no-index-change
Bump up Kopia to v0.16.0 and open kopia repo with no index change
2024-03-26 13:08:27 +08:00
Tiger Kaovilai
3c243653c4 Add confirm flag to velero plugin add
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-03-26 05:56:11 +07:00
Shubham Pampattiwar
f1f7f04233 Merge pull request #7557 from kaovilai/detectPodmanEmulation
Add notes for podman / colima usage on macOS
2024-03-25 06:15:09 -07:00
Lyndon-Li
5d48e36b55 open kopia with no index change
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-25 18:14:43 +08:00
Lyndon-Li
929731cf8b issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-25 16:23:34 +08:00
lou
19e5f38cbc update after review
Signed-off-by: lou <alex1988@outlook.com>
2024-03-25 12:42:28 +08:00
Tiger Kaovilai
8f8bc9fd9e Add notes for podman / colima usage on macOS
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-03-25 11:29:16 +07:00
qiuming
24941b4f15 Merge pull request #7375 from qiuming-best/repo-maintenance
Add design for repository maintenance job
2024-03-25 10:50:07 +08:00
qiuming
365423d220 Merge pull request #7512 from qiuming-best/support-parallel-restore
Make parallel restore configurable
2024-03-25 10:49:40 +08:00
Wenkai Yin(尹文开)
4c95edd8ba Support certificate-based authentication for Azure
Support certificate-based authentication for Azure

Fixes #6735

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-21 15:59:37 +08:00
Wenkai Yin(尹文开)
13f4efdbc9 Merge pull request #7544 from blackpiglet/refactor_native_snapshot
Refactor the native snapshot definition code.
2024-03-21 09:22:37 +08:00
Xun Jiang
efb94ae610 Refactor the native snapshot definition code.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-20 15:38:07 +08:00
Xun Jiang/Bruce Jiang
67c06e613a Merge pull request #7504 from allenxu404/pv-patch-in-finalizing-phase
Patch newly dynamically provisioned PV with volume info to restore PV's custom setting
2024-03-20 09:56:03 +08:00
qiuming
922653e97d Merge pull request #7530 from zhouhaoA1/fix-kindfor
Fix gvr `Group` and `Version` field missing in `KindFor` method
2024-03-19 17:03:20 +08:00
Ming Qiu
64a3f2aa3a Make parallel restore configurable
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-03-19 15:15:47 +08:00
allenxu404
67b5e82d49 Patch newly dynamically provisioned PV with volume info to restore custom setting of PV
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-03-18 17:32:35 +08:00
Lyndon-Li
dccde10368 issue-7036: make affinity as list and take 1st one
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-18 11:25:27 +08:00
lyndon-li
6ec1701b27 Merge pull request #7383 from Lyndon-Li/data-mover-node-selection
Design for data mover node selection
2024-03-18 11:00:44 +08:00
zhouhao
7ce35ece4a Issue 7529: fix gvr Group and Version field missing in KindFor method
Signed-off-by: zhouhao <zhouhao@cmss.chinamobile.com>
2024-03-15 10:07:03 +08:00
dependabot[bot]
b3a53ee8df Bump google.golang.org/protobuf from 1.31.0 to 1.33.0 (#7518)
Bumps google.golang.org/protobuf from 1.31.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-14 11:10:39 -04:00
lyndon-li
6c0cb4bf89 Merge pull request #7521 from qiuming-best/data-mover-empty-dir
Fix DataDownload fails during restore for empty PVC workload
2024-03-14 16:07:43 +08:00
lou
f25004cd9c fix changelog
Signed-off-by: lou <alex1988@outlook.com>
2024-03-14 15:29:57 +08:00
lou
25c006f536 add changelog
Signed-off-by: lou <alex1988@outlook.com>
2024-03-14 15:27:44 +08:00
Ming Qiu
74ffa50bb4 Fix DataDownload fails during restore for empty PVC workload
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-03-14 07:22:59 +00:00
lou
00b9869369 do not ignore unknown gvr at the beginning and get new gr when kind is changed
Signed-off-by: lou <alex1988@outlook.com>
2024-03-14 15:21:00 +08:00
qiuming
5d08d62144 Merge pull request #7515 from blackpiglet/7494_fix
Check whether the VolumeSnapshot's source PVC is nil before using it
2024-03-14 11:28:29 +08:00
Lyndon-Li
2f9d8ae4bd design for data mover node selection
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-14 09:55:32 +08:00
Xun Jiang
f8deea1617 Skip populate VolumeInfo for data-moved PV when CSI is not enabled.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-13 15:47:43 +08:00
Xun Jiang
4d01c7ffa3 Check whether the VolumeSnapshot's source PVC is nil before using it.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-03-13 11:21:48 +08:00
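A minimal Go sketch of the nil-guard described above, using a simplified stand-in type rather than the real CSI snapshot API so the example stays self-contained:

```go
package example

import "fmt"

// snapshotSource is a simplified stand-in for the CSI VolumeSnapshot source;
// the PVC name is a pointer and can be nil when the snapshot was created from
// a pre-existing VolumeSnapshotContent rather than from a PVC.
type snapshotSource struct {
	PersistentVolumeClaimName *string
}

// pvcNameFromSource checks for nil before dereferencing, which is the class
// of fix described in the commit above.
func pvcNameFromSource(src snapshotSource) (string, error) {
	if src.PersistentVolumeClaimName == nil {
		return "", fmt.Errorf("volume snapshot has no source PVC")
	}
	return *src.PersistentVolumeClaimName, nil
}
```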
Wenkai Yin(尹文开)
79e9e31d8d Merge pull request #7489 from ywk253100/240229_lib
Bump up the versions of several Kubernetes-related libs
2024-03-12 16:12:56 +08:00
Wenkai Yin(尹文开)
84c1eca66c Merge pull request #7497 from qiuming-best/resource-polices-log-adjust
Adjust resource policies logic in BackupPodVolumes
2024-03-08 14:21:53 +08:00
Ming Qiu
6e76e03a8b Adjust resource policies log
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-03-07 05:43:28 +00:00
David Hulick
4d548612d4 docs: clarify upgrade instructions doc (#7486)
Signed-off-by: David Hulick <dave.hulick@gmail.com>
2024-03-05 17:51:50 -05:00
Wenkai Yin(尹文开)
8752c3a820 Bump up the versions of several Kubernetes-related libs
Bump up the versions of several Kubernetes-related libs

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-05 13:09:38 +08:00
lyndon-li
f274fe7bfc Merge pull request #7488 from Lyndon-Li/issue-fix-7391
Issue 7391:remove the default constraint for node-agent pods
2024-03-04 13:18:17 +08:00
Lyndon-Li
d558e49288 issue 7391:remove the default constraint for node-agent pods
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-04 11:01:02 +08:00
lyndon-li
97d276caa7 Merge pull request #7452 from Lyndon-Li/issue-fix-7211
Issue 7211: support concatenate objects
2024-03-04 10:34:37 +08:00
Xun Jiang/Bruce Jiang
79c55fba24 Merge pull request #7484 from dbbaskette/main
Updated Zoom Link to Broadcom Zoom
2024-03-04 09:26:22 +08:00
Dan Baskette
599faae25b Updated Zoom Link to Broadcom Zoom
Signed-off-by: Dan Baskette <dbbaskette@gmail.com>
2024-03-01 10:36:10 -05:00
Ming Qiu
ebd90bbe36 Add design for repository maintenance job
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-03-01 14:57:04 +08:00
Xun Jiang/Bruce Jiang
157984279b Merge pull request #7472 from blackpiglet/7045_fix
Skip pvb creation when pvc excluded
2024-03-01 10:09:37 +08:00
Daniel Jiang
edd0d3b073 Merge pull request #7377 from allenxu404/restore-finalization-implementation
Add the finalization phase to the restore workflow
2024-02-29 17:21:25 +08:00
allenxu404
2b8bb871d3 Add the finalization phase to the restore workflow
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-29 13:51:45 +08:00
qiuming
e727d29bcd Merge pull request #7467 from blackpiglet/7464_fix
Modify the label used by the restore CLI to filter the PVR.
2024-02-29 10:07:36 +08:00
danfeng
512fe0dabd Merge pull request #7442 from danfengliu/using-zfs-for-vanilla-cluster
using zfs for vanilla cluster test
2024-02-27 18:02:18 +08:00
Shahaf Bahar
36d58943cd Skip pvb creation when pvc excluded
Signed-off-by: Shahaf Bahar <sbahar@redhat.com>
2024-02-27 16:30:29 +08:00
danfengl
7c50c3cb8c using zfs for vanilla cluster test
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-27 06:39:33 +00:00
Xun Jiang
bb4a62f3a7 Modify the label used by the restore CLI to filter the PVR.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-26 10:41:51 +08:00
danfeng
82f84814f5 Merge pull request #7396 from blackpiglet/backup_volumeinfo_e2e
Add E2E test cases for backup VolumeInfo feature.
2024-02-23 15:26:49 +08:00
Xun Jiang
ef5c2ed805 Modify according to comments.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-22 19:03:20 +08:00
Xun Jiang
effbcba521 Add E2E test cases for backup VolumeInfo feature.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-22 16:18:55 +08:00
lyndon-li
174c10fa8a Merge pull request #7458 from Lyndon-Li/issue-fix-7308
Issue 7308: change the data path requeue time to 5 seconds
2024-02-22 15:58:14 +08:00
Lyndon-Li
e1bcdf0f63 issue 7308: change the data path requeue time to 5 seconds
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-22 10:10:33 +08:00
Lyndon-Li
24c4eb075f issue 7211: support concatenating objects
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-21 17:34:04 +08:00
Wenkai Yin(尹文开)
74bf4c3d80 Merge pull request #7443 from ywk253100/240219_credential
Don't return error when no credential file found
2024-02-21 16:03:55 +08:00
Xun Jiang/Bruce Jiang
2a1ae0ec0a Merge pull request #7445 from allenxu404/backup-last-status-fixed
Adjust the logic for the backup_last_status metric to stop incorrectly incrementing over time
2024-02-21 13:50:46 +08:00
allenxu404
84fb88c19c Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-21 13:21:03 +08:00
Wenkai Yin(尹文开)
d45b313f07 Merge pull request #7439 from draghuram/cloudcasa
Update CloudCasa description in adopters list.
2024-02-20 09:46:07 +08:00
Wenkai Yin(尹文开)
f6383916a2 Don't return error when no credential file found
Don't return error when no credential file found

Fixes #7395

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-19 17:02:30 +08:00
danfeng
b2f1588d2e Merge pull request #7392 from danfengliu/fix-wrong-usage-of-global-velerocfg-var-enhance
Fix wrong usage of the global VeleroCfg var (enhancement)
2024-02-19 12:14:20 +08:00
qiuming
56af62ba78 Merge pull request #7406 from blackpiglet/fix_velero_repo_get_bug
Fix the `velero repo get` nil pointer issue.
2024-02-19 10:54:19 +08:00
Raghuram Devarakonda
5c7c61c4d3 Update CloudCasa description in adopters list.
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2024-02-18 12:18:40 -05:00
Lyndon-Li
57879357fc Merge branch 'main' into batch-delete-snapshot 2024-02-18 14:54:01 +08:00
Lyndon-Li
32d92ca964 batch delete snapshot
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-18 14:49:21 +08:00
Lyndon-Li
7bf7fb9fc1 issue 7036: fail early by peek expose
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-18 14:34:35 +08:00
Sean Blong
017b9a43e8 Adding unit test to ensure patches are skipped for missing paths.
Signed-off-by: Sean Blong <seanblong@gmail.com>
2024-02-09 13:29:59 -08:00
Sean Blong
cf460d51c3 Ignore missing path error in conditional match
Signed-off-by: Sean Blong <seanblong@gmail.com>
2024-02-08 18:06:36 -08:00
Xun Jiang
27348ca039 Fix the velero repo get nil pointer issue.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-08 14:25:29 +08:00
danfengl
c9ba808bf1 Fix wrong usage of the global VeleroCfg var, follow-up PR
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-07 02:32:20 +00:00
danfengl
b0956322b9 Fix wrong usage of global variable VeleroCfg
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-07 02:32:20 +00:00
Xun Jiang/Bruce Jiang
3b8370e13c Merge pull request #7353 from kaovilai/e2e-updates
E2E usability updates
2024-02-07 10:05:02 +08:00
Xun Jiang/Bruce Jiang
d24298e063 Merge pull request #6085 from kaovilai/build-image-matches-platform
Make build-image Dockerfile multi-platform
2024-02-07 09:47:35 +08:00
Tiger Kaovilai
a5c72a4866 BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
* Add BackupRepositories invalidation on BSL Create
Simplify comments

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* Simplify

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-06 10:13:30 -05:00
Wenkai Yin(尹文开)
a9e80d585a Merge pull request #7342 from ywk253100/240122_azure
Put credential related config into getStorageCredentials function
2024-02-06 13:34:51 +08:00
Tiger Kaovilai
2375f78d0f ppc64le fix for protocolbuffers
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-05 23:36:18 -05:00
Tiger Kaovilai
5adb7d0def Make arch more flexible
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-05 22:24:57 -05:00
Tiger Kaovilai
b9f3f410e7 Make build-image arm64 compatible
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-05 21:12:32 -05:00
Wenkai Yin(尹文开)
9649619a6f Put credential related config into getStorageCredentials function
Put credential related config into getStorageCredentials function

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-05 16:09:15 +08:00
Lyndon-Li
9a907a21f2 issue-7036: data mover load affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-05 14:13:10 +08:00
Tiger Kaovilai
b1d95cf2aa Set GOBIN so the Makefile doesn't modify $PATH on go install; fix realPath resolving when the cloud credentials path is prefixed by ~ for the home dir; use ~/.docker/config.json if REGISTRY_CREDENTIAL_FILE is not defined and skip the step if it does not exist, since it is optional
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Set `GOBIN` so the Makefile doesn't modify $PATH on `go install`
Fix realPath resolving when the cloud credentials path is prefixed by `~` for the home dir
Use `~/.docker/config.json` if REGISTRY_CREDENTIAL_FILE is not defined and skip the step if it does not exist, since it is optional

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add kind testdata storageclass

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add kind testdata storageclass

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

log `Start to install Azure VolumeSnapshotClass ...` only on Azure when CSI is enabled

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add BSL_CONFIG example and notes

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Makefile: Set `GOBIN` for `_output/...`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

README spacing

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

StandbyClusterObjectStoreProvider typo

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Specify velero namespace during get/delete command

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Use object stores rather than cloudProvider for bucket queries

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Remove debug print

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

simplify NS get changes, add velero NS to `DeleteBackupResource`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Skip file system backups on kind which uses hostPath volumes

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add StorageClass change test to PR kind e2e

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add more tests to pr

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add NS mapping to PR e2e

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add `SKIP_KIND` to some jobs containing volumes

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Remove kind from kibishii tests

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Label volume resource policies as restic, skip restic/snapshot tests, add more tests

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

TTLTest is a snapshot test

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Remove non working tests

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Resolve https://github.com/vmware-tanzu/velero/pull/7353#issuecomment-1925660077

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

address https://github.com/vmware-tanzu/velero/pull/7353/files#r1477218762

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Address https://github.com/vmware-tanzu/velero/pull/7353#issuecomment-1923414840

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-04 22:17:37 -05:00
Daniel Jiang
2f25c25908 Merge pull request #7317 from allenxu404/restore-finalizing-design
Design for adding the finalization phase to the restore workflow
2024-02-02 13:35:32 +08:00
danfeng
99e0c483c7 Merge pull request #7297 from danfengliu/add-irsa-for-eks-pipeline
Support IRSA as a credential in one of the nightly EKS pipelines
2024-02-02 12:37:22 +08:00
Daniel Jiang
cc9c954d8f Merge pull request #7374 from reasonerjt/bypass-irsa-kopia
Force using the credentials file when IRSA is configured
2024-02-02 11:26:37 +08:00
danfengl
72438b7319 Support IRSA for data mover pipeline
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-02 02:04:26 +00:00
Wenkai Yin(尹文开)
b509df5172 Upgrade the version of go plugin related libs/tools (#7373)
Upgrade the version of go plugin related libs/tools

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-01 13:02:42 -05:00
Daniel Jiang
30728c248c Respect the config in BSL when IRSA is configured
This commit makes sure that when kopia connects to the repository, the
credentials file specified in BSL.spec.config has higher priority over
Pod Environment credentials when IRSA is configured.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-02-01 16:33:10 +08:00
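A rough sketch of the precedence described above, assuming a hypothetical resolveCredentialsFile helper and a plain map for the BSL config rather than Velero's actual types:

```go
package main

import (
	"fmt"
	"os"
)

// resolveCredentialsFile illustrates the precedence the commit describes: a
// credentials file named in the BSL's spec.config wins over pod-environment
// credentials (e.g. the IRSA-injected web identity). Names are invented.
func resolveCredentialsFile(bslConfig map[string]string) (string, bool) {
	if f, ok := bslConfig["credentialsFile"]; ok && f != "" {
		return f, true // BSL-level setting has the higher priority
	}
	if f := os.Getenv("AWS_SHARED_CREDENTIALS_FILE"); f != "" {
		return f, true // fall back to the pod environment
	}
	return "", false // otherwise let the SDK's default chain (IRSA, instance role, ...) take over
}

func main() {
	cfg := map[string]string{"credentialsFile": "/credentials/cloud"}
	if f, ok := resolveCredentialsFile(cfg); ok {
		fmt.Println("using credentials file:", f)
	}
}
```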
allenxu404
8f84f50711 Include the design for adding the finalization phase to the restore workflow
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-01 14:44:25 +08:00
Xun Jiang/Bruce Jiang
08a020ebcd Merge pull request #7370 from blackpiglet/add_uploader_config_for_schedule
Add `ParallelFilesUpload` for schedule creation.
2024-01-31 11:22:17 +08:00
Xun Jiang
7aaf62442a Add ParallelFilesUpload for schedule creation.
Modify restore-helper print information.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-31 07:52:11 +08:00
qiuming
b30a679e5b Merge pull request #7368 from qiuming-best/bsl-bug-fix
Fix server start failure when no default BSL
2024-01-30 16:26:45 +08:00
Ming Qiu
5fc8b3f426 Fix server start failure when no default BSL
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-30 06:09:15 +00:00
danfengl
df585053e7 Add a new EKS pipeline with IRSA as credential
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-26 02:17:21 +00:00
Daniel Jiang
1034d6aee0 Merge pull request #7349 from ywk253100/240124_informer_main
[cherry-pick]Check whether the API resource exists before creating the informer cache
2024-01-24 14:50:51 +08:00
Wenkai Yin(尹文开)
c8ad69ab04 Check whether the API resource exists before creating the informer cache
Check whether the API resource exists before creating the informer cache

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-24 12:41:00 +08:00
Shubham Pampattiwar
760930282a Merge pull request #6939 from anarnold97/Typo-in-csi-snapshot-data-movement
A small typo duplicated in csi-snapshot-data-movement.md in main and v1.12
2024-01-23 09:46:47 -08:00
Wenkai Yin(尹文开)
7969c694d7 Merge pull request #7340 from ywk253100/240122_log_error_main
[cherry-pick]Log the error got from the discovery helper
2024-01-23 10:00:05 +08:00
Wenkai Yin(尹文开)
673bfefd45 Log the error got from the discovery helper
Log the error got from the discovery helper

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-22 15:24:53 +08:00
Tiger Kaovilai
270b1de6a1 Do not attempt restore resource with no available GVK in cluster (#7322)
Check for GVK before attempting restore.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-01-22 09:50:13 +08:00
Xun Jiang/Bruce Jiang
a81e049d36 Merge pull request #7334 from ywk253100/240119_debug
Specify the Kind explicitly in the API resource
2024-01-19 13:51:13 +08:00
Wenkai Yin(尹文开)
427a254136 Specify the Kind explicitly in the API resource
Specify the Kind explicitly in the API resource to avoid wrong Kind conversion

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-19 12:51:56 +08:00
Shubham Pampattiwar
33bc0f1acf Merge pull request #7331 from ywk253100/240118_release_note_main
[cherry-pick]Add release note for the informer cache memory consumption
2024-01-18 10:01:27 -08:00
Shubham Pampattiwar
c9b1f1c23e Merge pull request #7332 from blackpiglet/update_document
Modify S3ForcePathStyle description.
2024-01-18 10:00:27 -08:00
Xun Jiang
91abc93087 Modify S3ForcePathStyle description.
Also add a cross-version link for the CSI snapshot data movement page.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-18 20:51:30 +08:00
Wenkai Yin(尹文开)
bc526b99b1 Add release note for the informer cache memory consumption
Add release note for the informer cache memory consumption

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-18 15:29:21 +08:00
Xun Jiang/Bruce Jiang
3ac48a43c2 Merge pull request #7329 from kart2/uk/spell-check
fix typo in maintenance log message
2024-01-18 14:25:50 +08:00
Daniel Jiang
a176e6a73a Merge pull request #7326 from ywk253100/240118_informer_main
[cherry-pick]Create informer per resource to avoid huge memory consumption
2024-01-18 13:17:00 +08:00
Karthick Udayakumar
198fbf6873 fix typo in maintenance log message
Signed-off-by: Karthick Udayakumar <kudayakumar@vmware.com>
2024-01-17 22:30:41 -05:00
Wenkai Yin(尹文开)
956248e8a6 Create informer per resource to avoid huge memory consumption
Create informer per resource to avoid huge memory consumption

Fixes #7323

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-18 10:47:00 +08:00
Shubham Pampattiwar
6a4c661735 Merge pull request #7307 from guikcd/aws_sdk_upgrade 2024-01-17 16:25:38 -08:00
Xun Jiang/Bruce Jiang
cbe5a36a3c Merge pull request #7254 from learner0810/reduce-backup-deepCopy
Return directly when backup status is BackupPhaseFailedValidation; no need for DeepCopy
2024-01-17 21:41:24 +08:00
Xun Jiang/Bruce Jiang
f66aa9f3b3 Merge pull request #7284 from learner0810/fix-item-operation-timeout-explain
Update itemOperationTimeout Default Timeout Note
2024-01-17 14:58:15 +08:00
Guillaume Delacour
373b24e2c1 Upgrade AWS SDK
Signed-off-by: Guillaume Delacour <delacoug@amazon.com>
2024-01-16 23:35:33 +01:00
Xun Jiang/Bruce Jiang
2caba3efb9 Merge pull request #7311 from ywk253100/240112_qps
Increase the k8s client QPS/burst
2024-01-15 11:21:35 +08:00
Wenkai Yin(尹文开)
d676bfde22 Increase the k8s client QPS/burst
Increase the k8s client QPS/burst to avoid throttling request errors

Fixes #7127
Fixes #3191

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-12 15:02:05 +08:00
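For illustration, a minimal client-go sketch of raising the client-side QPS/burst; QPS and Burst are real rest.Config fields, but the values shown are placeholders, not necessarily what Velero ships with:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the usual way; error handling kept minimal for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Raise the client-side rate limits; low defaults cause
	// "client-side throttling" request errors on busy clusters.
	// The numbers here are illustrative only.
	config.QPS = 100
	config.Burst = 100

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", client)
}
```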
Wenkai Yin(尹文开)
e498ea99a3 Merge pull request #7295 from josemarevalo/main
Add CRD name to error message when it is not ready to use
2024-01-12 14:59:48 +08:00
lyndon-li
d412854259 Merge pull request #7279 from blackpiglet/fix_7268
Add detail for parameter s3ForcePathStyle in MinIO page.
2024-01-11 12:25:29 +08:00
Wenkai Yin(尹文开)
09af92c54f Merge pull request #7300 from ywk253100/240110_changelog
Add changelog for v1.13.0
2024-01-10 16:03:47 +08:00
Wenkai Yin(尹文开)
ac4c9ed919 Add changelog for v1.13.0
Add changelog for v1.13.0

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 13:36:51 +08:00
danfeng
b39d91aea3 Merge pull request #7296 from danfengliu/fix-nightly-informer-cache-param-issue
Fix nightly issue of missing param WithoutDisableInformerCacheParam during Velero installation
2024-01-10 13:04:09 +08:00
danfengl
a9c820c9d6 Fix nightly issue of missing param WithoutDisableInformerCacheParam during Velero installation
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-10 02:57:44 +00:00
Daniel Jiang
3b82395ee1 Merge pull request #7294 from ywk253100/240109_informer_cache
Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message
2024-01-10 10:57:21 +08:00
Jose Arevalo
0b307ca035 Add CRD name to error message when it is not ready to use
When debugging this error it is currently hard to identify what
CRD is causing the issue. This is particularly difficult when
dealing with over a hundred CRDs.

Signed-off-by: Jose Arevalo <jose.matias.arevalo@gmail.com>
2024-01-10 12:11:47 +10:00
Wenkai Yin(尹文开)
9a1be6f53f Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message
Make "disable-informer-cache" option false(enabled) by default to keep it consi
stent with the help message

Fixes #7264

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 09:49:54 +08:00
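A small sketch of the idea, using the standard flag package and an invented help string rather than Velero's actual server flag wiring:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Illustration only: defaulting the flag to false means the informer
	// cache stays enabled unless the operator explicitly opts out, which
	// matches what the help text already promised.
	disableInformerCache := flag.Bool("disable-informer-cache", false,
		"Disable the informer cache for Get calls on restore (default false: cache enabled)")
	flag.Parse()
	fmt.Println("informer cache disabled:", *disableInformerCache)
}
```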
Xun Jiang/Bruce Jiang
e65ef28948 Merge pull request #7272 from danfengliu/bumpup-plugins-matrix-for-1.13
Bump up E2E test plugins matrix for v1.13
2024-01-09 10:38:36 +08:00
danfengl
1b22a49d22 Bump up E2E test plugins matrix for v1.13
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-09 02:26:53 +00:00
zhongjun.li
306a8fda3e fix-item-operation-timeout-explain
Signed-off-by: zhongjun.li <zhongjun.li@daocloud.io>
2024-01-08 15:00:06 +08:00
lyndon-li
72f2da92b7 Merge pull request #7282 from Lyndon-Li/issue-fix-6928
Issue 6928: remove snapshot deletion timeout for PVB
2024-01-08 12:58:43 +08:00
Xun Jiang
3ea4f345c6 Add detail for parameter s3ForcePathStyle in MinIO page.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-08 11:29:21 +08:00
Lyndon-Li
200fd80448 issue 6928: remove snapshot deletion timeout for PVB
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-01-08 11:28:23 +08:00
danfeng
c2177c24e8 Merge pull request #7277 from danfengliu/add-disable-informer-cache-param
Add test for disable informer cache param of velero installation
2024-01-05 15:34:26 +08:00
danfengl
fdca488209 Add param disable informer cache for velero installation
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-05 07:22:50 +00:00
Daniel Jiang
3401db47f9 Merge pull request #7274 from reasonerjt/fix-7263
Do not set "targetNamespace" to namepsace items
2024-01-05 14:41:27 +08:00
Daniel Jiang
a5d08ac5f0 Do not set "targetNamespace" to namespace items
fixes #7263
This commit makes the data structures more consistent: namespaces,
as a cluster-scoped resource, will not have "targetNamespace" set in the
"restoreableItem" instance.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-01-05 14:01:16 +08:00
qiuming
e84a51deec Merge pull request #7262 from qiuming-best/intermediate-pv-delete
Fix intermediate PV delete for data mover
2024-01-04 15:45:32 +08:00
lyndon-li
c3c4c97914 Merge pull request #7265 from Lyndon-Li/change-node-agent-config-name
Change node-agent-config name
2024-01-04 15:43:43 +08:00
Ming Qiu
92fdf407c7 Fix intermediate pv delete for data mover
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-04 03:26:47 +00:00
Lyndon-Li
58ead55fd1 change node-agent-config name
Signed-off-by: Lyndon-Li <yonghui.li@broadcom.com>
2024-01-03 22:02:04 +08:00
Xun Jiang/Bruce Jiang
6b632affe8 Merge pull request #7255 from ywk253100/240102_doc
Generate docs for v1.13
2024-01-03 14:13:55 +08:00
Daniel Jiang
6e641f44b9 Merge pull request #7260 from blackpiglet/rename_volumeinfo_metadata_file
Rename volumeinfo metadata file.
2024-01-03 13:33:01 +08:00
Xun Jiang
08dedd8b66 Rename volumeinfo metadata file.
Change from <backup-name>-volumeinfos.json.gz to
<backup-name>-volumeinfo.json.gz.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2024-01-03 11:22:49 +08:00
Shubham Pampattiwar
68016033ec Update GOVERNANCE.md
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2024-01-02 12:13:10 -08:00
qiuming
f6dfa8e7b2 Merge pull request #7176 from danfengliu/fix-issue-of-hiiting-snapshot-limit
Add sleep to avoid snapshot limitation issue
2024-01-02 17:09:43 +08:00
Wenkai Yin(尹文开)
d8dba993d3 Generate docs for v1.13
Generate docs for v1.13

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-02 13:54:28 +08:00
danfengl
b25578d6e1 Add sleep to avoid snapshot limitation issue and skip retain PV on vSphere pipeline
1. Add sleep to avoid the snapshot limitation issue https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html#:~:text=SnapshotCreationPerVolumeRateExceeded;
2. Move the InstallVelero variable out of the VeleroConfig struct as a global one, since it's not for controlling any individual case;
3. Unskip the migration test case on the AWS pipeline: we added a new EKS pipeline and deleted the TKG AWS pipeline in the internal E2E tests, so this restriction for the TKG AWS pipeline no longer exists;
4. Skip the retainPV test on the vSphere pipeline due to a long PV binding time issue;
5. Fix the failing "get snapshot by CSI from EC2" issue; snapshots taken by CSI have no backup name label.

Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-02 05:53:03 +00:00
qiuming
f109f38a72 Merge pull request #7253 from learner0810/fix-pvc-assignment
Fix pvc assignment
2024-01-02 13:25:37 +08:00
zhongjun.li
8e4cefbb0d Reduce backup DeepCopy
Signed-off-by: zhongjun.li <zhongjun.li@daocloud.io>
2024-01-02 11:34:40 +08:00
zhongjun.li
8c84836644 Fix pvc assignment
Signed-off-by: zhongjun.li <zhongjun.li@daocloud.io>
2023-12-29 15:09:41 +08:00
Shubham Pampattiwar
78bd67aa1d Merge pull request #7248 from rajats22/main
Adopter update for Azure Backup for AKS
2023-12-22 10:46:03 -08:00
rajats22
29997a3bfb <commit message>
Signed-off-by: rajats22 <111422846+rajats22@users.noreply.github.com>
2023-12-22 15:16:11 +05:30
lyndon-li
f5e36c12ad Merge pull request #7245 from Lyndon-Li/issue-fix-7244
Issue 7244: delete incomplete snapshot automatically for kopia uploader
2023-12-22 16:56:53 +08:00
Lyndon-Li
60d2c62c1a issue 7244: delete incomplete snapshot automatically for kopia uploader
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-22 16:44:00 +08:00
Qi Xu
ee345cf281 Adjust the newline output of resource list in restore describer (#7238)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-22 10:53:29 +05:30
Xun Jiang/Bruce Jiang
7d2c749abf Merge pull request #7231 from blackpiglet/update_volumeinfo_json_tag
Don't generate empty structure.
2023-12-21 16:32:58 +08:00
Xun Jiang
9be8eb0c6d Don't generate empty structure.
VolumeInfo contains several sub-structures. They are filled for
different scenarios. Do not generate empty structures for the
sub-structures that are not filled.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-21 14:53:03 +08:00
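A minimal sketch of the pattern, with invented field names: pointer sub-structures plus `omitempty` keep unfilled scenarios out of the serialized metadata:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Invented, simplified stand-ins for the VolumeInfo sub-structures.
type SnapshotInfo struct {
	SnapshotHandle string `json:"snapshotHandle,omitempty"`
}

type PodVolumeInfo struct {
	UploaderType string `json:"uploaderType,omitempty"`
}

type VolumeInfo struct {
	PVCName string `json:"pvcName"`
	// Pointer + omitempty: an unfilled sub-structure is omitted entirely
	// instead of being emitted as an empty object ({}).
	SnapshotInfo  *SnapshotInfo  `json:"snapshotInfo,omitempty"`
	PodVolumeInfo *PodVolumeInfo `json:"podVolumeInfo,omitempty"`
}

func main() {
	// Only the snapshot scenario is filled; the pod-volume one stays nil.
	v := VolumeInfo{PVCName: "data-pvc", SnapshotInfo: &SnapshotInfo{SnapshotHandle: "snap-123"}}
	out, _ := json.Marshal(v)
	fmt.Println(string(out)) // {"pvcName":"data-pvc","snapshotInfo":{"snapshotHandle":"snap-123"}}
}
```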
lyndon-li
b4f2469145 Merge pull request #7240 from Lyndon-Li/issue-fix-7237
Issue 7237: add pvc namespace to backup describe
2023-12-21 13:25:33 +08:00
Lyndon-Li
210838267f issue 7237: add pvc namespace to backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-21 10:02:27 +08:00
lyndon-li
e6b248ccc0 Merge pull request #7236 from Lyndon-Li/remove-csi-feature-check-from-backup-describe
Remove csi feature check from backup describe
2023-12-20 15:46:58 +08:00
Lyndon-Li
0da01842ad remove csi feature check from backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-20 14:51:21 +08:00
qiuming
79f0541574 Merge pull request #7234 from blackpiglet/bump_restic_golang_library_version
Bump Golang library versions for v1.13 Restic to fix CVEs.
2023-12-20 13:06:30 +08:00
Xun Jiang
3dc202d30a Bump Golang library versions for v1.13 Restic to fix CVEs.
Bump golang.org/x/crypto version to v0.17.0.
Bump google.golang.org/grpc version to v1.56.3.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-20 10:31:48 +08:00
qiuming
a44cd4be33 Merge pull request #7222 from qiuming-best/adjust-bsl-setting-logic
Adjust velero server side default backup location setting logic
2023-12-20 10:29:59 +08:00
Wenkai Yin(尹文开)
970af1ddfd Merge pull request #7225 from vmware-tanzu/dependabot/go_modules/golang.org/x/crypto-0.17.0
Bump golang.org/x/crypto from 0.14.0 to 0.17.0
2023-12-19 17:43:53 +08:00
Daniel Jiang
4fd40f19c7 Merge pull request #7229 from allenxu404/remove-newline
Remove the redundant newline in backup describe output
2023-12-19 16:22:58 +08:00
qiuming
93e29f13aa Merge pull request #7228 from qiuming-best/upload-config-doc
Update uploader configuration design doc
2023-12-19 15:42:30 +08:00
Ming Qiu
236c271cd4 Update uploader configuration design doc
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-19 07:34:48 +00:00
allenxu404
8f6d46be87 Remove the redundant newline in backup describe output
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-19 15:25:37 +08:00
lyndon-li
89cbdac0a3 Merge pull request #7226 from ywk253100/231219_upgrade_doc
Add upgrade doc for v1.13
2023-12-19 13:55:09 +08:00
Ming Qiu
7d2be128ae Move velero server side default backup location setting logic to server startup
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-19 05:43:29 +00:00
Wenkai Yin(尹文开)
5b403c57b9 Add upgrade doc for v1.13
Add upgrade doc for v1.13

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-12-19 13:09:00 +08:00
Wenkai Yin(尹文开)
d99ad5cb7a Merge pull request #7220 from ywk253100/231218_doc
Update k8s matrix and move implemented designs
2023-12-19 10:57:25 +08:00
dependabot[bot]
ddb4889301 Bump golang.org/x/crypto from 0.14.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 23:36:41 +00:00
Xun Jiang/Bruce Jiang
ee879fdcc3 Merge pull request #7221 from blackpiglet/schedule_cli_fix
Fix schedule get and describe CLI nil pointer issue
2023-12-18 20:44:03 +08:00
Xun Jiang
6222891d5b Fix schedule get and describe CLI issue.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-18 16:41:10 +08:00
lyndon-li
71b947ab5b Merge pull request #7218 from Lyndon-Li/issue-fix-7214
Issue 7214: data mover backup describe for legacy backups
2023-12-18 14:18:51 +08:00
Wenkai Yin(尹文开)
b57cdb8f96 Update k8s matrix and move implemented designs
Update k8s matrix and move implemented designs

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-12-18 14:09:20 +08:00
Lyndon-Li
0313c2add0 issue 7214: data mover backup describe for legacy backups
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-18 11:07:01 +08:00
Shubham Pampattiwar
ea6c8ca127 fix finalizer typo in logs (#7204)
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-12-13 11:46:21 -05:00
lyndon-li
5f14628d69 Merge pull request #7201 from Lyndon-Li/issue-fix-7189
Issue 7189: generic restore - don't assume the first volume as the restore volume
2023-12-12 12:47:25 +08:00
Lyndon-Li
cf7d27c4bc issue 7189: generic restore - don't assume the first volume as the restore volume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-12 10:04:31 +08:00
Shubham Pampattiwar
2bd9bf2903 Merge pull request #7076 from shubham-pampattiwar/update-backup-log
Update backup log to reflect appropriate backup phase
2023-12-11 12:49:06 -08:00
Daniel Jiang
804b9a8d91 Merge pull request #7171 from kaovilai/tests-explicit-enableCSI
Add explicit enableCSI to TestProcessBackupCompletions
2023-12-11 14:11:37 +08:00
Wenkai Yin(尹文开)
c0613f1cf6 Merge pull request #7195 from reasonerjt/fix-7190
Use a new variable for resource path
2023-12-11 10:47:19 +08:00
Daniel Jiang
0f49935720 Use a new variable for resource path
This commit avoids mistakes when checking the type of the resource
Fixes #7190

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-12-10 23:19:52 +08:00
qiuming
52d3fca652 Merge pull request #7191 from qiuming-best/uploader-configmapkey
Modify uploader config map key
2023-12-08 13:49:34 +08:00
Ming Qiu
df82691097 Modify uploader config map key
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-08 03:07:13 +00:00
Wenkai Yin(尹文开)
fa73bcdd22 Merge pull request #7169 from kaovilai/schedule-skip-immediately
Add `--skip-immediately` to schedule CLI/API, and related flags to the server and install commands
2023-12-08 11:06:29 +08:00
Tiger Kaovilai
eaba99b92e Add test that skipImmediately is switched to false after reconcile
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
9e016c568a Address requeue feedback
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
e4bd59727f Schedule SkipImmediately
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
544c8481cc Schedule Skip Immediately Config Design
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

switch from "unpause triggers" to "skip immediately" for clarity

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Apply suggestions from code review

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Uncomment velero server option

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Backup will also be triggered at the next cron schedule.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Clarify: unpauseTriggers trigger based on the lastBackup timestamp; CRD default blocks server flags

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

`velero schedule unpause schedule-1` will check `.spec.UnpauseTriggers`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add `LastUnpaused` to ScheduleStatus

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add `velero install`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:10:25 +07:00
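A simplified, hypothetical sketch of the skip-immediately behaviour outlined in the design notes above; the Schedule struct and field names here are stand-ins, not the real API types:

```go
package main

import "fmt"

// Schedule is a trimmed, hypothetical stand-in for the Schedule API object.
type Schedule struct {
	Paused          bool
	SkipImmediately *bool
}

// shouldBackupNow sketches the idea: when skipImmediately is set, the first
// due backup after unpause is skipped and the flag is reset, so subsequent
// cron firings run normally.
func shouldBackupNow(s *Schedule) bool {
	if s.Paused {
		return false
	}
	if s.SkipImmediately != nil && *s.SkipImmediately {
		*s.SkipImmediately = false // reconcile flips it back to false
		return false               // skip the immediately-due backup
	}
	return true
}

func main() {
	skip := true
	s := &Schedule{SkipImmediately: &skip}
	fmt.Println(shouldBackupNow(s)) // false: first run skipped
	fmt.Println(shouldBackupNow(s)) // true: next cron firing proceeds
}
```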
lyndon-li
4070934f85 Merge pull request #7125 from Lyndon-Li/issue-fix-6695
Issue fix 6695: add describe for data mover backups
2023-12-07 16:23:30 +08:00
Xun Jiang/Bruce Jiang
759e8a9c63 Merge pull request #7184 from blackpiglet/7163_fix
Update CSIVolumeSnapshotsCompleted in backup's status and the metric
2023-12-07 11:14:28 +08:00
Xun Jiang
edb0860dd2 Fix issue #7163.
Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-07 09:43:10 +08:00
lyndon-li
099acd2527 Merge pull request #7141 from qiuming-best/support-restore-sparse
Allow sparse option for Kopia & Restic restore
2023-12-06 18:25:34 +08:00
Daniel Jiang
10bd5b14e4 Merge pull request #7136 from davidhulick/fix-kubectl-port-forwarding-docs-link
docs: fix link to kubectl port forwarding docs
2023-12-06 18:15:38 +08:00
Ming Qiu
1a237d3e4c Update API
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-06 08:59:12 +00:00
danfengliu
49e3e545be Merge pull request #7048 from danfengliu/add-readme-for-e2e-test
Update E2E README file to latest
2023-12-06 16:53:13 +08:00
Lyndon-Li
72fcd84a51 csi data mover backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-06 10:53:09 +08:00
lyndon-li
8d8d68d649 Merge pull request #7175 from blackpiglet/download_request
Refactor DownloadRequest Stream function
2023-12-06 10:28:44 +08:00
qiuming
ea04a86eb2 Merge pull request #6771 from qiuming-best/bsl-fix
Fix default BSL setting not working
2023-12-05 19:09:50 +08:00
Xun Jiang/Bruce Jiang
6093e651cb Merge pull request #7161 from Lyndon-Li/node-agent-config-doc
Add node-agent concurrency doc
2023-12-05 16:52:29 +08:00
Lyndon-Li
ac5d030ab4 Merge branch 'main' into issue-fix-6695 2023-12-05 16:46:31 +08:00
qiuming
2fa785a3dd Merge pull request #7052 from qiuming-best/data-mover-fail-early
Make data mover fail early
2023-12-05 16:33:46 +08:00
Lyndon-Li
434e073c67 csi data mover backup describe, support legacy backups
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-05 15:49:35 +08:00
Xun Jiang/Bruce Jiang
45ae68575d Merge pull request #7153 from allenxu404/hooktracker-update
Enhance hooks tracker by adding a returned error to record function
2023-12-05 13:43:38 +08:00
Xun Jiang
c8e76f4602 Fix the DownloadRequest context error.
Clean the DownloadRequest Stream function.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-05 13:29:23 +08:00
allenxu404
6051b3cbe0 Enhance hooks tracker by adding a returned error to record function
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-05 12:56:42 +08:00
Daniel Jiang
f2ba625229 Merge pull request #7138 from blackpiglet/6595_volumeinfo_restore
Use VolumeInfo to help restore the PV.
2023-12-05 10:19:16 +08:00
Xun Jiang
28df14d9d5 Modify restore logic.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-05 10:01:16 +08:00
Xun Jiang/Bruce Jiang
3b42abd139 Merge pull request #7174 from reasonerjt/snapshot-flag-skip-csi
Make sure the PVs skipped by CSI plugin due to settings in backup spec are tracked
2023-12-05 09:31:21 +08:00
Daniel Jiang
905de8cab1 Merge pull request #7167 from yanggangtony/fix-design-for-unified-repo
Discard --pod-volume-backup-uploader in unified-repo design doc.
2023-12-05 08:59:36 +08:00
Xun Jiang
c77bec73bb Move VolumesInformation to an independent package.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-04 08:33:37 +08:00
Xun Jiang
ca97248f2a Use VolumeInfo to help restore the PV.
Add VolumeInfo for left PVs during backup.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-04 08:33:37 +08:00
Tiger Kaovilai
2132506e8c Add explicit enableCSI to TestProcessBackupCompletions
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-01 14:22:40 -05:00
Daniel Jiang
266ea5d55a Make sure the PVs skipped by CSI plugin due to settings in backup spec
are tracked

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-12-01 14:19:54 +08:00
Shashank Singh
a318e1da99 Fix floatation of error/message in the backup result. (#7159)
* Fix floatation of error/message in the backup/restore result

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

* fix for checkgates

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

* refactoring

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

---------

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>
2023-12-01 09:50:01 +05:30
Ming Qiu
c6cba300fb Fix default BSL setting not working
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-01 02:06:35 +00:00
Ming Qiu
0afaa70e9b Merge branch 'main' of https://github.com/qiuming-best/velero into support-restore-sparse 2023-11-30 10:55:55 +00:00
yanggang
fcf59376c1 Discard --pod-volume-backup-uploader in unified-repo design doc.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-30 08:50:59 +00:00
Daniel Jiang
5cbfd9fffd Merge pull request #7150 from Lyndon-Li/issue-fix-7135
Issue 7135: check pod status before checking node-agent pod status
2023-11-29 15:47:23 +08:00
Lyndon-Li
81183f683e Merge branch 'main' into issue-fix-6695 2023-11-29 15:12:21 +08:00
Xun Jiang/Bruce Jiang
f5bbe82e78 Merge pull request #7152 from reasonerjt/track-skipped-SnapshotVolumes-false
Track the skipped PV when SnapshotVolumes set as false
2023-11-29 14:46:23 +08:00
Lyndon-Li
33b570d5cd Merge branch 'main' into node-agent-config-doc 2023-11-29 14:45:20 +08:00
Lyndon-Li
8968ae5ec4 add node-agent concurrency doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-29 14:33:51 +08:00
Lyndon-Li
e416b20148 issue 7135: check pod status before checking node-agent pod status
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-29 13:46:50 +08:00
lyndon-li
4d21e29d9d Merge pull request #7151 from blackpiglet/linter_part2
Linter part2
2023-11-29 13:17:59 +08:00
Xun Jiang
f5c159ce56 Resolve linter issues.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:15:43 +08:00
Xun Jiang
d70535b6d2 Add nolintlint linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
ec03d1ebce Add noctx linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
dbd1a12d9f Add nilerr and ginkgolinter linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
cddc11e000 Enable linter errchkjson.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
3805a470a9 Enable dupword linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Ming
03dff100a3 Make data mover fail early
Signed-off-by: Ming <mqiu@vmware.com>
2023-11-29 03:03:53 +00:00
Daniel Jiang
b8604b6a89 Treat namespace as a regular restorable item (#7143)
Fixes #1970

Namespaces will be handled as a cluster-scoped resource, but for
consistency they will still be created via the "Ensure namespace" flow.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-28 11:20:36 -05:00
Daniel Jiang
b759877f5b Track the skipped PV when SnapshotVolumes set as false
This commit makes sure that if a PV is not snapshotted because the flag
SnapshotVolumes is set to false in a backup CR, the PV is also
tracked as skipped in the tracker.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-28 22:52:17 +08:00
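A rough sketch of the check described above, using a hypothetical tracker type in place of Velero's internal skipped-PV tracker:

```go
package main

import "fmt"

// skippedPVTracker is a hypothetical stand-in for Velero's internal tracker.
type skippedPVTracker struct{ reasons map[string]string }

func (t *skippedPVTracker) Track(pvName, approach, reason string) {
	t.reasons[pvName] = approach + ": " + reason
}

// trackSkippedIfSnapshotsDisabled mirrors the idea in the commit: when the
// backup's SnapshotVolumes flag is explicitly false, the PV is recorded as
// skipped rather than silently ignored.
func trackSkippedIfSnapshotsDisabled(snapshotVolumes *bool, pvName string, tracker *skippedPVTracker) {
	if snapshotVolumes != nil && !*snapshotVolumes {
		tracker.Track(pvName, "volumeSnapshot", "snapshotVolumes is set to false in the backup spec")
	}
}

func main() {
	disabled := false
	tracker := &skippedPVTracker{reasons: map[string]string{}}
	trackSkippedIfSnapshotsDisabled(&disabled, "pv-0001", tracker)
	fmt.Println(tracker.reasons)
}
```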
Ming Qiu
b57dde1572 Allow sparse option for Kopia & Restic restore
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-11-28 13:48:09 +00:00
Daniel Jiang
85482aefaf Merge pull request #7117 from allenxu404/issue6567
Add hook status to backup/restore CR
2023-11-28 16:54:11 +08:00
allenxu404
5d1a632be4 Add hook status to backup/restore CR
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-11-28 14:47:31 +08:00
Wenkai Yin(尹文开)
6ac7ff1230 Merge pull request #7130 from qiuming-best/data-mover-recoverbility
Node agent restart enhancement
2023-11-28 14:25:47 +08:00
Ming Qiu
98a56eb5c7 Node agent restart enhancement
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-11-28 05:50:46 +00:00
qiuming
f6ed4558bf Merge pull request #7149 from yanggangtony/fix-test-VeleroInstall
Fix wrong test code for VeleroInstall
2023-11-28 09:59:53 +08:00
Yang Gang
402a61481d [docs] Fix all typos in plugins docs. (#7129)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-27 13:03:01 -05:00
yanggang
9ccb5a14bb Fix wrong test code for VeleroInstall
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-27 11:13:52 +00:00
qiuming
3fdb3ec7c5 Merge pull request #7069 from 27149chen/imporve-discovery-refresh
improve discoveryHelper.Refresh() in restore
2023-11-27 18:02:36 +08:00
lou
179faf3e33 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-11-27 17:39:37 +08:00
Xun Jiang/Bruce Jiang
d336e2812e Merge pull request #6958 from blackpiglet/5156_list_option_fix
Change controller-runtime List option from MatchingFields to ListOpti…
2023-11-27 17:38:12 +08:00
Lyndon-Li
8ab0c017a9 issue 6695: add backup description for data mover
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-27 16:19:34 +08:00
qiuming
ccd3f220ad Merge pull request #7090 from qiuming-best/perf-test-0
Enhance perf test
2023-11-27 16:10:26 +08:00
Ming
507157f812 Add perf test namespace mapping when restoring
Signed-off-by: Ming <mqiu@vmware.com>
2023-11-27 02:11:13 +00:00
Lyndon-Li
1815c1691f Merge branch 'main' into issue-fix-6695 2023-11-27 09:46:22 +08:00
danfengl
4590579105 Update E2E README file to latest
Signed-off-by: danfengl <danfengl@vmware.com>
2023-11-25 12:37:21 +00:00
danfengliu
7320bb7674 Merge pull request #7122 from danfengliu/add-csi-retain-policy-e2e-test
Add E2E test for taking CSI snapshot of PV with retain reclaim policy
2023-11-22 17:35:35 +08:00
qiuming
b276564b95 Merge pull request #7000 from qiuming-best/kopia-parallelism
Make Kopia file parallelism configurable
2023-11-22 12:13:14 +08:00
Ming Qiu
c2d4495efe Merge branch 'main' of https://github.com/qiuming-best/velero into kopia-parallelism 2023-11-22 03:52:20 +00:00
Wenkai Yin(尹文开)
5c958d820d Merge pull request #7100 from blackpiglet/6595_volumeinfo_generate
6595 volumeinfo generate
2023-11-22 11:14:36 +08:00
Ming Qiu
fea22bbbc9 Merge branch 'main' of https://github.com/qiuming-best/velero into kopia-parallelism 2023-11-22 01:42:39 +00:00
Xun Jiang
7f52321772 Generate VolumeInfo.
Remove CSI VolumeSnapshot lister and the informer.
Add downloading of the VolumeInfos metadata for backup.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-22 09:40:38 +08:00
David Hulick
5e3b5317cd docs: fix link to kubectl port forwarding docs
Signed-off-by: David Hulick <dave.hulick@gmail.com>
2023-11-21 16:38:37 -05:00
danfengl
55a465a941 Add E2E test for taking CSI snapshot of PV with retain reclaim policy
Signed-off-by: danfengl <danfengl@vmware.com>
2023-11-21 07:11:22 +00:00
Tiger Kaovilai
a68ddd458c Close stale issue with not-planned status (#7128)
Instead of closing as completed, which would signify that work has been done.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-21 09:24:43 +05:30
Anshul Ahuja
0e53cd0916 RM support for Escaped bool, float, null (#7118)
* RM support for Escaped bool, float, null

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

* fix ci

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

---------

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2023-11-21 09:18:34 +05:30
Shubham Pampattiwar
e58a7808e0 Merge pull request #7116 from adux6991/fix-docs-typo
Fix typo in documentation
2023-11-20 06:01:42 -08:00
qiuming
b8a5859fe7 Merge pull request #7091 from anshulahuja98/recoverplugin
Don't fail backup/restore on velero server restart in PhaseWaitingFor…
2023-11-20 14:49:15 +08:00
Daniel Jiang
e0edc8ee93 Merge pull request #7107 from yanggangtony/update-configmaps
Fix docs: Use camel case for API objects: configmaps and secrets
2023-11-20 14:48:47 +08:00
Wenkai Yin(尹文开)
e3fb94833d Merge pull request #7115 from reasonerjt/wrap-bia-err
Include plugin name in the error message by operations
2023-11-20 14:48:18 +08:00
Daniel Jiang
ca57756ff6 Include plugin name in the error message by operations
fixes #6512

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-20 12:12:02 +08:00
Lyndon-Li
4e4f0aa1da issue 6695: add backup describe for CSI snapshot data movement 02
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-20 12:11:21 +08:00
Lyndon-Li
582be97a63 Merge branch 'main' into issue-fix-6695 2023-11-18 00:12:25 +08:00
Lyndon-Li
b99ac448ae issue 6695: add backup describe for CSI snapshot data movement
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-18 00:11:29 +08:00
Wenkai Yin(尹文开)
939dd7149a Merge pull request #7070 from blackpiglet/6595_interface
Add VolumeInfo metadata structures.
2023-11-17 19:31:29 +08:00
Xun Jiang
b440a4f53f Add VolumeInfo metadata structures and object get method.
Modify design according to comments.
Add PVInfo structure.
Add put and get methods for the backup VolumeInfo metadata in object storage.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-17 17:23:47 +08:00
xuda
9c0c7a2a77 Fix typo in documentation 2023-11-17 15:37:24 +08:00
Xun Jiang/Bruce Jiang
c283edf4a5 Merge pull request #7032 from deefdragon/main
Add check for owner references in backup sync, removing if missing
2023-11-17 09:32:50 +08:00
yanggang
c78e8980d8 Use camel case for API objects: configmaps and secrets.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-16 22:17:35 +00:00
Jeffrey Koehler
292aa34a48 move filtering code to separate method, add tests
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-11-16 03:57:36 -06:00
Jeffrey Koehler
8eec6865d1 Check only schedules, and verify UIDs are the same
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-11-16 02:29:56 -06:00
Wenkai Yin(尹文开)
d42505ddd0 Merge pull request #7102 from Lyndon-Li/issue-fix-7068-2
Issue 7068: add a finalizer to protect retained VSC
2023-11-15 17:13:44 +08:00
Lyndon-Li
067984b13c Issue 7068: add a finalizer to protect retained VSC
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-15 16:04:07 +08:00
Wenkai Yin(尹文开)
d345bda3a1 Merge pull request #7081 from ywk253100/231110_sync
Skip syncing the backup which doesn't contain backup metadata
2023-11-15 16:00:06 +08:00
Wenkai Yin(尹文开)
2a533d01bf Merge pull request #7046 from kaovilai/backup-patch-status-unittest
Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize
2023-11-15 15:32:51 +08:00
Wenkai Yin(尹文开)
9b5678f32a Merge pull request #7096 from Lyndon-Li/issue-fix-7094
Issue 7094: fallback to full backup if previous snapshot is not found
2023-11-14 11:45:32 +08:00
Lyndon-Li
50f8acda79 issue 7094: fallback to full backup if previous snapshot is not found
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-14 11:28:09 +08:00
Wenkai Yin(尹文开)
dde06472e5 Merge pull request #7095 from Lyndon-Li/issue-fix-7068
Issue 7068: add a finalizer to protect retained VSC
2023-11-14 10:44:47 +08:00
Lyndon-Li
cb651d0436 issue 7068: add a finalizer to protect retained VSC
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-14 10:18:07 +08:00
Daniel Jiang
e826b70327 Merge pull request #7086 from yanggangtony/fix-design-wrong-reference-link
Fix wrong reference link in design docs.
2023-11-13 14:34:44 +08:00
Anshul Ahuja
dd6ab8c32a Don't fail backup/restore on velero server restart in PhaseWaitingForPluginOperation
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-11-13 11:13:32 +05:30
lyndon-li
a0b8a503c8 Merge pull request #7077 from Lyndon-Li/issue-fix-6693
Issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
2023-11-13 10:30:24 +08:00
yanggang
7fd692eb68 Fix wrong reference link in design docs.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-10 22:57:13 +00:00
Lyndon-Li
efc5319c1c Issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-10 12:40:41 +08:00
Wenkai Yin(尹文开)
84c96047b9 Skip syncing the backup which doesn't contain backup metadata
Skip syncing the backup which doesn't contain backup metadata

Fixes #6849

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-11-10 10:22:27 +08:00
Lyndon-Li
2841be7681 Merge branch 'main' into issue-fix-6693 2023-11-10 10:04:27 +08:00
Xun Jiang/Bruce Jiang
cb5ffe2753 Merge pull request #7061 from blackpiglet/6595_backward_compatability
Add DataUpload Result and CSI VolumeSnapshot check for restore PV.
2023-11-10 09:37:19 +08:00
Rémi Verchère
3fa7d29573 doc: add resourcePolicy for schedule (#7079)
Signed-off-by: Rémi Verchère <remi@verchere.fr>
2023-11-09 11:45:58 -05:00
Shubham Pampattiwar
ea7f249e90 Update backup log to reflect appropriate backup phase
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use infof instead of sprintf

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-11-09 04:55:24 -08:00
Lyndon-Li
873197ff50 issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-09 17:37:23 +08:00
qiuming
76e89f7dc5 Merge pull request #7059 from Lyndon-Li/issue-fix-6663
Issue 6663: changes for configurable data path concurrency
2023-11-09 14:37:28 +08:00
Lyndon-Li
db43200cc8 configurable data path concurrency: all in one json
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-08 12:02:02 +08:00
Lyndon-Li
c638ca557e Merge branch 'main' into issue-fix-6663 2023-11-08 10:45:40 +08:00
qiuming
5f7e16b98b Merge pull request #7072 from ywk253100/231108_truncate
[cherry-pick]Truncate the credential file to avoid the change of secret content messing it up
2023-11-08 10:43:17 +08:00
Wenkai Yin(尹文开)
5a10f9090a Truncate the credential file to avoid the change of secret content messing it up
Truncate the credential file to avoid the change of secret content messing it up

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-11-08 09:33:56 +08:00
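For illustration, writing the materialized credential file with os.WriteFile, which truncates the target before writing, so a shorter secret cannot leave stale trailing bytes behind; the path and content here are illustrative only:

```go
package main

import (
	"log"
	"os"
)

func main() {
	secret := []byte("aws_access_key_id=AKIA...\naws_secret_access_key=...\n")

	// os.WriteFile truncates the existing file before writing, so when the
	// secret content changes (and shrinks), no leftover bytes from the
	// previous, longer version can corrupt the credential file.
	if err := os.WriteFile("/tmp/velero-credentials", secret, 0o600); err != nil {
		log.Fatal(err)
	}
}
```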
Wenkai Yin(尹文开)
866fbb5cdb Merge pull request #6950 from Lyndon-Li/issue-fix-6663-design
Design for node-agent concurrency
2023-11-08 09:04:05 +08:00
lou
ebb21303ab add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-11-07 19:50:35 +08:00
lou
70483ded90 improve discoveryHelper.Refresh() in restore
Signed-off-by: lou <alex1988@outlook.com>
2023-11-07 19:12:30 +08:00
Xun Jiang
1fb0529d98 Add DataUpload Result and CSI VolumeSnapshot check for restore PV.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-06 22:40:03 +08:00
Lyndon-Li
68579448d6 configurable data path concurrency: UT
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-06 20:29:33 +08:00
Lyndon-Li
262f10ff49 Merge branch 'main' into issue-fix-6663 2023-11-06 16:52:41 +08:00
Lyndon-Li
04a9851ee9 configurable data path concurrency: all in cm
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-06 16:46:13 +08:00
Anshul Ahuja
6b7ce6655d Merge pull request #7022 from allenxu404/i6721
Fix inconsistent behavior of Backup and Restore hook execution
2023-11-06 14:01:30 +05:30
lyndon
11938f9a5e Merge pull request #7051 from blackpiglet/6190_part_3
Remove dependency of generated client part 3
2023-11-06 15:22:02 +08:00
Xun Jiang
56b5e982d9 Remove dependency of generated client part 3
Replace generated discovery client with client-go client.
Remove generated client from PVR action.
Remove generated client from pkg/cmd directory.
Delete the Velero generated client from the client factory.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-06 11:34:39 +08:00
lyndon
d6146ecff4 Merge pull request #7041 from blackpiglet/6190_part_2
Remove dependency of generated client part 2
2023-11-03 17:43:10 +08:00
Xun Jiang
a221a88945 Remove dependency of generated client part 2
Remove dependency of generated client from pkg/cmd/cli/snapshotLocation.
Remove the Velero generated informer from PVB and PVR.
Remove dependency of generated client from pkg/podvolume directory.
Replace generated codec with runtime codec.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-03 17:11:36 +08:00
Tiger Kaovilai
8c727429c4 revert test changes
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 17:06:19 -04:00
Tiger Kaovilai
cd0ad74d31 make update
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:46:15 -04:00
Tiger Kaovilai
6896a1ffe4 update changelog to reflect removed waits
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:22:30 -04:00
Tiger Kaovilai
1c138b8f55 CSIFeatureFlag enable check
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:20:46 -04:00
Tiger Kaovilai
18acf005d6 remove waiting during finalize
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:16:27 -04:00
Tiger Kaovilai
f9e716a8c9 skip this if SnapshotMoveData
https://github.com/vmware-tanzu/velero/pull/7046/files#r1380708644
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:14:55 -04:00
Tiger Kaovilai
10245b05de restore: Use warning when Create IsAlreadyExist and Get error (#7004)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 15:53:47 -04:00
Tiger Kaovilai
9311a4269b refactor backup snapshot status updates into UpdateBackupSnapshotsStatus() and run in backup_finalizer_controller
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 15:30:35 -04:00
allenxu404
3a3527553a Fix inconsistent behavior of Backup and Restore hook execution
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-11-02 12:31:53 +08:00
lyndon
166a58bddc Merge pull request #6962 from blackpiglet/6595_design
Add the PV backup information design document.
2023-11-02 10:50:56 +08:00
Wenkai Yin(尹文开)
73c948d6bd Merge pull request #6917 from 27149chen/rm-improvement
support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
2023-11-02 10:36:40 +08:00
Xun Jiang
23b9484370 Add the PV backup information design document.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-02 10:14:16 +08:00
Tiger Kaovilai
886e074b55 Add PatchResource unit test for backup status
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-01 15:28:56 -04:00
Shubham Pampattiwar
705a3bc355 fix typo in documentation (#7043)
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-11-01 11:26:14 -04:00
lou
e30937550e update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-11-01 21:53:30 +08:00
Lyndon-Li
a0edad94db design for node-agent concurrency
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-01 11:35:06 +08:00
qiuming
38e1ae0405 Merge pull request #7034 from ywk253100/231030_cred
Read information from the credential specified by BSL
2023-11-01 09:41:25 +08:00
qiuming
e17751fd09 Merge pull request #7038 from Lyndon-Li/issue-fix-7027
Issue 7027: backup exposer -- don't assume first volume as the backup volume
2023-11-01 09:39:09 +08:00
Lyndon-Li
8e442407c3 issue 7027: backup exposer -- don't assume first volume as the backup volume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-31 12:11:34 +08:00
Shubham Pampattiwar
03e582cb6c Merge pull request #6995 from kaovilai/kopias3profilecred
kopia/repository/config/aws.go: Set session.Options profile from config
2023-10-30 09:11:15 -07:00
Wenkai Yin(尹文开)
49a85e1636 Read information from the credential specified by BSL
Read information from the credential specified by BSL

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-10-30 17:28:10 +08:00
qiuming
1fcdc20d75 Merge pull request #7003 from mateusoliveira43/fix/make-verify-command
fix: make verify permission error
2023-10-30 16:28:07 +08:00
qiuming
6e703b81ff Merge pull request #7029 from yanggangtony/fix-docs-for-tencent-config
Fix the wrong URL for Tencent COS.
2023-10-30 14:30:03 +08:00
Jeffrey Koehler
929af4f734 Add check for owner reference in backup sync, removing if missing
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-10-29 22:06:14 -05:00
Shubham Pampattiwar
23921e5d29 add description markers for dataupload and datadownload CRDs (#7028)
add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-10-27 11:05:10 -04:00
yanggang
5691371899 Fix the wrong URL for Tencent COS.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-27 12:55:40 +01:00
Lyndon-Li
0f765ceef2 Merge branch 'main' into issue-fix-6663 2023-10-27 17:44:17 +08:00
Lyndon-Li
c44a9b8956 issue 6663: changes for configurable data path concurrency
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-27 17:37:29 +08:00
Xun Jiang/Bruce Jiang
9ff4b1e079 Merge pull request #7026 from blackpiglet/6376_fix
Add HealthCheckNodePort deletion logic in Service restore
2023-10-27 16:40:04 +08:00
Xun Jiang
a94918026c Add HealthCheckNodePort deletion logic in Service restore.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-27 14:13:52 +08:00
Shubham Pampattiwar
1e0fc77e4d Fix issue 6913 (#6914)
add changelog file



keep canceling phase const



fix data download as well



address PR feedback



minor fixes

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-10-26 09:39:38 -04:00
Anshul Ahuja
20a1118acf Make configmapref check case insensitive (#6804)
* Make configmapref check case insensitive

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

* update resourcemodfier test case to validate case

Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>

---------

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-10-26 15:30:21 +05:30
lyndon
638647cb7a Merge pull request #7018 from vmware-tanzu/dependabot/go_modules/google.golang.org/grpc-1.58.3
Bump google.golang.org/grpc from 1.58.2 to 1.58.3
2023-10-26 11:25:30 +08:00
Ming
481cb60493 Make Kopia file parallelism configurable
Signed-off-by: Ming <mqiu@vmware.com>
2023-10-26 02:28:36 +00:00
qiuming
3b22ff3358 Merge pull request #7005 from qiuming-best/kopia-parallelism-design
Design for Velero uploader configuration integration and extensibility
2023-10-26 10:01:55 +08:00
dependabot[bot]
8be1f4beff Bump google.golang.org/grpc from 1.58.2 to 1.58.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.58.2 to 1.58.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.58.2...v1.58.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-25 21:43:35 +00:00
Xun Jiang/Bruce Jiang
45ed3bf613 Record platform limitation of the Kopia block mode uploader in docs. (#7013)
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-25 19:43:46 +05:30
Mateus Oliveira
3bc23aeb84 fixup! fix: make verify permission error
Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-25 08:12:41 -03:00
Mateus Oliveira
cbf849ab4c fix: make verify permission error
Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-25 08:12:41 -03:00
lou
f66016d416 update docs
Signed-off-by: lou <alex1988@outlook.com>
2023-10-25 17:54:20 +08:00
lyndon
30bf6bd28c Merge pull request #7011 from Lyndon-Li/issue-fix-6964-2
Issue 6964: use preparingTimeout for snapshot readiness wait
2023-10-25 11:11:27 +08:00
Lyndon-Li
0eade6c615 issue 6964: use preparingTimeout for snapshot readiness wait
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-25 10:51:08 +08:00
Tiger Kaovilai
d5f238c83c kopia/repository/config/aws.go: Set session.Options profile from config
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-10-24 14:05:47 -04:00
Daniel Jiang
941dd0039f Merge pull request #6968 from blackpiglet/6585_fix
Check whether the action is a CSI action and whether CSI feature is
2023-10-25 00:39:58 +08:00
Daniel Jiang
317db25d20 Merge pull request #6923 from reasonerjt/aws-sdk-v2
Bump up aws sdk to aws-sdk-go-v2
2023-10-24 23:53:16 +08:00
lou
4ead4d6976 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-24 21:44:14 +08:00
Daniel Jiang
b71d2b3898 Bump up aws sdk to aws-sdk-go-v2
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-10-24 17:01:26 +08:00
Wenkai Yin(尹文开)
61d333a31a Merge pull request #6989 from blackpiglet/support_windows_build_main
[cherry-pick][main] Make Windows build skip BlockMode code.
2023-10-24 16:58:03 +08:00
Xun Jiang
908e2c63ba Check whether the action is a CSI action and whether the CSI feature is
enabled, before executing the action.

The DeleteItemAction is not checked, because the DIA doesn't have a
method to get the action's plugin name.
This should be OK, because the CSI plugin checks whether the VS and VSC
have a backup name annotation. If the VS and VSC are not handled by
the CSI plugin, then they don't have the annotation (a hedged sketch of
this check follows this entry).

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-24 16:54:38 +08:00
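
The feature-flag gate described in the commit above can be sketched roughly as follows. This is a minimal, hypothetical Go illustration, not Velero's actual code: the itemAction type, the plugin-name substring, and the featureEnabled helper are assumptions standing in for Velero's real plugin resolver and feature registry.

package main

import (
	"fmt"
	"strings"
)

// itemAction is a hypothetical descriptor; Velero's real BackupItemAction
// resolver exposes the plugin name through its own interfaces.
type itemAction struct {
	PluginName string
}

// featureEnabled stands in for Velero's feature-flag registry.
func featureEnabled(flags map[string]bool, name string) bool { return flags[name] }

// shouldRunAction skips CSI-provided actions when the EnableCSI feature flag
// is off; all other actions run unconditionally.
func shouldRunAction(a itemAction, flags map[string]bool) bool {
	isCSI := strings.Contains(a.PluginName, "velero.io/csi")
	return !isCSI || featureEnabled(flags, "EnableCSI")
}

func main() {
	flags := map[string]bool{"EnableCSI": false}
	fmt.Println(shouldRunAction(itemAction{PluginName: "velero.io/csi-pvc-backupper"}, flags)) // false: skipped
	fmt.Println(shouldRunAction(itemAction{PluginName: "velero.io/pod"}, flags))               // true: runs
}
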
lyndon
e2ec855c4a Merge pull request #6983 from danfengliu/fix-resource-groupname-issue
Fix failure to get backup repo due to missing API group name issue
2023-10-24 15:26:55 +08:00
Ming
a86b3943fe Velero Uploader Configuration Integration and Extensibility
Signed-off-by: Ming <mqiu@vmware.com>
2023-10-24 06:10:03 +00:00
lyndon
27f301cb89 Merge pull request #7001 from Lyndon-Li/bump-to-kopia-0.15.0
Bump kopia to 0.15.0
2023-10-24 08:40:46 +08:00
Orlix
107c55813f Revert PR #6907 as site is not deploying (#6981)
Signed-off-by: OrlinVasilev <ovasilev@vmware.com>
2023-10-23 12:14:26 -04:00
Lyndon-Li
d3a1a83c6d bump to kopia 0.15.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-23 12:03:21 +08:00
Shubham Pampattiwar
b85dc271ef Merge pull request #6978 from yanggangtony/fix-tiny-errors
Fix wrong logs, add missing license file.
2023-10-22 20:52:18 -07:00
Daniel Jiang
5fe53daf21 Merge pull request #6990 from Lyndon-Li/udmrepo-use-region-from-bsl
Issue 6988: udmrepo use region specified in BSL when s3URL is empty
2023-10-20 20:15:36 +08:00
Lyndon-Li
3d841dd8f1 udmrepo use region specified in BSL when s3URL is empty
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-20 19:58:54 +08:00
Xun Jiang
ecc6e1621e Make Windows build skip BlockMode code.
PVC block mode backup and restore introduced some OS-specific
system calls. Those calls are not available on Windows, so
add both non-Windows and Windows versions of the code, and
return an error for block mode on the Windows platform (a
build-tag sketch follows this entry).

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-20 19:39:44 +08:00
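
A minimal sketch of the build-tag split described above, assuming a hypothetical blockmode package and function name; the real Velero files and signatures differ. The Windows file is a stub that only returns an error, while a sibling file guarded by //go:build !windows would hold the syscall-based implementation.

//go:build windows

// block_mode_windows.go: compiled only on Windows. A sibling file guarded by
// //go:build !windows holds the implementation that uses Unix syscalls.
package blockmode

import "errors"

// BackupBlockDevice is the Windows stub: block-mode volume backup relies on
// OS-specific syscalls that are unavailable here, so it just returns an error.
func BackupBlockDevice(devicePath string) error {
	return errors.New("block mode volumes are not supported on Windows")
}
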
danfengl
d2fc9fa1a9 Fix failure to get backup repo due to missing API group name issue
Signed-off-by: danfengl <danfengl@vmware.com>
2023-10-20 01:50:24 +00:00
yanggang
1efd533d0d Fix wrong logs in markDataDownloadsCancel() and add missing license file.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-19 14:04:41 +01:00
Xun Jiang
79c75718ca Change controller-runtime List option from MatchingFields to ListOptions.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-19 17:09:12 +08:00
qiuming
fd8350f919 Merge pull request #6976 from Lyndon-Li/issue-fix-6964
Issue 6964: get volume size from source PVC if it is invalid in VS
2023-10-19 13:53:57 +08:00
Lyndon-Li
329c128279 issue 6964: get volume size from source PVC if it is invalid in VS
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-19 11:50:28 +08:00
lou
d1f5219cbb update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-18 17:05:00 +08:00
Wenkai Yin(尹文开)
19f38f9623 Merge pull request #6947 from 0x113/SGLAB-CLOUDCASA-oidc-auth
Issue #6933: Import auth provider plugins
2023-10-18 16:01:50 +08:00
Sebastian Glab
265d285b1d Import auth provider plugins
Signed-off-by: Sebastian Glab <sglab@catalogicsoftware.com>
2023-10-18 08:53:35 +02:00
qiuming
5ff5073cc3 Add volume types filter in resource policies (#6863)
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-10-16 17:36:54 -04:00
Yang Gang
7ca33f8f12 Add MSI Support for Azure plugin. (#6938)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-16 09:47:53 +05:30
Xun Jiang/Bruce Jiang
b4fb2d9644 Merge pull request #6918 from Ripolin/main
Add WaitForReady flag to check container readiness state before executing a hook
2023-10-15 13:27:34 +08:00
Wenkai Yin(尹文开)
ed441de43c Merge pull request #6953 from blackpiglet/bump_golang
Bump golang version.
2023-10-13 18:23:11 +08:00
Xun Jiang
a726329e82 Bump golang version.
Bump golang version to v1.21.
Bump golang.org/x/net version to v0.17.0 in Velero and Restic.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-13 16:30:23 +08:00
Xun Jiang/Bruce Jiang
9606df624f Merge pull request #6784 from yanggangtony/node-agent-metrics-addr
Fix node-agent missing the metrics-addr param that defines where the metrics server starts. #6784
2023-10-13 14:28:45 +08:00
yanggang
069c280f03 Fix node-agent missing the metrics-addr param that defines where the metrics server starts.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-13 03:33:18 +01:00
Ripolin
e5af7f5cea Add WaitForReady flag to check container readiness state before executing a hook
Signed-off-by: Ripolin <florent.david@gmail.com>
2023-10-12 20:31:36 +02:00
Shubham Pampattiwar
ad114f8f65 Merge pull request #6723 from sseago/restore-get-perf 2023-10-12 07:57:40 -07:00
Wenkai Yin(尹文开)
84734f1040 Merge pull request #6937 from blackpiglet/release_choco
Update the Velero chocolatey package release procedure.
2023-10-12 15:47:26 +08:00
lyndon
741b696180 Merge pull request #6946 from Lyndon-Li/issue-fix-6668
Issue fix 6668: add a limitation for fs restore parallelism with other types of restore
2023-10-12 14:53:29 +08:00
Lyndon-Li
b14bd2cd75 issue 6668: add a limitation for fs restore parallelism with other types of restores
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-12 11:58:26 +08:00
Shubham Pampattiwar
74ed994e5e Merge pull request #6830 from sseago/retry-generateName
issue #6807: Retry failed create when using generateName
2023-10-11 08:50:14 -07:00
Scott Seago
7750e12151 Perf improvements for existing resource restore
Use the informer cache with a dynamic client for Get calls on restore.
When enabled, also make the Get call before create.

Add server and install parameters to allow disabling this feature,
but enable it by default (a hedged sketch follows this entry).

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-11 10:51:39 -04:00
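
A hedged sketch of the informer-cache lookup mentioned above, using client-go's dynamicinformer package. The function is illustrative only: a real implementation would build the factory once and reuse the synced caches for the whole restore rather than per lookup, and this is not Velero's actual code.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/clientcmd"
)

// lookupFromCache resolves an object from a shared informer cache instead of
// issuing a live GET against the API server for every restored item.
func lookupFromCache(dynClient dynamic.Interface, gvr schema.GroupVersionResource, ns, name string) (runtime.Object, error) {
	stop := make(chan struct{})
	defer close(stop)

	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynClient, 30*time.Second, ns, nil)
	lister := factory.ForResource(gvr).Lister()
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	return lister.ByNamespace(ns).Get(name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dynClient, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
	obj, err := lookupFromCache(dynClient, gvr, "default", "my-config")
	fmt.Println(obj, err)
}
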
Andy Arnold
4c3207a56d A small typo duplicated csi-snapshot-data-movement.md in main and v.1.12
Signed-off-by: Andy Arnold <anarnold@redhat.com>
2023-10-10 21:17:26 +01:00
lou
6d89780fb2 add more tests
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 22:33:35 +08:00
Xun Jiang
79e176086c Add some configurations to avoid ArgoCD pruning backups generated from schedule.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 21:06:48 +08:00
Xun Jiang
dbc3ad7453 Update the Velero chocolatey package release procedure.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 20:29:29 +08:00
lou
a607810b13 update design
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 19:11:43 +08:00
lou
19d5bee572 Merge branch 'main' into rm-improvement 2023-10-10 19:02:16 +08:00
lou
65082f33a4 add deserialization tests
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 18:59:45 +08:00
lyndon
b31610157d Merge pull request #6927 from blackpiglet/restricted_rbac
Add a working example for rbac.md.
2023-10-10 16:52:30 +08:00
lou
5932e263c9 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 16:00:46 +08:00
Wenkai Yin(尹文开)
5f71a662a4 Merge pull request #6907 from kaovilai/vmain
Resolve netlify site publish issues due to missing directory `site/site/public`
2023-10-10 15:24:19 +08:00
Xun Jiang
98a383d94a Add a working example for rbac.md.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 13:56:50 +08:00
Wenkai Yin(尹文开)
5961253768 Merge pull request #6926 from Lyndon-Li/backup-pod-spread-evenly
Issue 6734: spread backup pod evenly
2023-10-10 10:05:41 +08:00
Lyndon-Li
0a6c89abc6 Merge branch 'main' into backup-pod-spread-evenly 2023-10-10 09:45:52 +08:00
Scott Seago
09be1f7995 issue #6807: Retry failed create when using generateName
When creating resources with generateName, apimachinery
does not guarantee uniqueness when it appends the random
suffix to the generateName stub, so if the create fails with
an "already exists" error, we need to retry (see the sketch
after this entry).

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-09 17:38:37 -04:00
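
A sketch of the retry-on-AlreadyExists pattern the commit describes, assuming a generic create callback; the helper name and retry count are illustrative and not Velero's API. It uses apimachinery's errors helpers to detect the conflict.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// createWithRetry retries a create when the server reports AlreadyExists,
// which can happen when a generateName suffix collides with an existing name.
func createWithRetry(ctx context.Context, maxRetries int, create func(context.Context) error) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		if err = create(ctx); err == nil || !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return err
}

func main() {
	attempts := 0
	create := func(ctx context.Context) error {
		attempts++
		if attempts == 1 {
			// Simulate a generateName collision on the first attempt.
			return apierrors.NewAlreadyExists(schema.GroupResource{Resource: "pods"}, "demo-abcde")
		}
		return nil
	}
	err := createWithRetry(context.Background(), 3, create)
	fmt.Println("attempts:", attempts, "err:", err)
}
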
Shubham Pampattiwar
541425ba97 Merge pull request #6844 from sseago/pr-standards 2023-10-09 14:33:06 -07:00
Mateus Oliveira
1c1054dedc doc: Alert that plugins run as separate processes, when turning on debug logs (#6882)
* doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

---------

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-09 11:12:36 -04:00
Yang Gang
e5e99c75a0 Fix dependency package descriptions and CI word spelling. (#6924)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-09 12:12:14 +05:30
Lyndon-Li
d8d66381e7 issue 6734: spread backup pod evenly
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-08 20:01:12 +08:00
lou
e880c0d01b update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-07 16:33:33 +08:00
Raghuram Devarakonda
b7cc62d077 Document about item action plugin ordering. (#6719)
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2023-10-06 16:11:24 -04:00
Shubham Pampattiwar
0d4e61eb24 Merge pull request #6649 from sseago/orphaned-partially-failed 2023-10-06 10:35:57 -07:00
Scott Seago
cd7e2d6fcc Expanded PR section of code standards doc
Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-04 18:07:02 -04:00
lou
58d8425952 fix lint
Signed-off-by: lou <alex1988@outlook.com>
2023-10-05 01:19:05 +08:00
lou
06ed9dcc71 add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-10-04 16:02:23 +08:00
Guang Jiong Lou
7f73acab16 Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797)
* Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers

Signed-off-by: lou <alex1988@outlook.com>

* add changelog

Signed-off-by: lou <alex1988@outlook.com>

* add conditional patches

Signed-off-by: lou <alex1988@outlook.com>

* update design

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-10-04 09:29:09 +05:30
Shubham Pampattiwar
5ab66728e2 Merge pull request #6843 from yanggangtony/clean-and-addlicenses
Add missing file licences and do some clean works.
2023-10-02 12:11:28 -07:00
Tiger Kaovilai
09f7744e33 remove site/ prefix from publish
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-10-02 15:07:40 -04:00
Shubham Pampattiwar
cf1aebea04 Merge pull request #6901 from kaovilai/dcosignoff
Fix code-standards url rendering for `https://developercertificate.org/)`
2023-10-02 10:12:38 -07:00
Raghuram Devarakonda
13019b943a Document pod volume host path setting for Nutanix. (#6902)
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2023-10-02 09:57:11 -04:00
Tiger Kaovilai
c51b599845 Fix code-standards url rendering for https://developercertificate.org/)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-29 13:55:32 -04:00
Yang Gang
fd67ecb688 Code clean for backup cmd client. (#6750)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-29 12:23:12 -04:00
Wenkai Yin(尹文开)
0d79afe049 Replace the base image with paketobuildpacks image (#6883)
Replace the base image with paketobuildpacks image

Fixes #6851

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-29 12:19:51 -04:00
yanggang
11745809c4 Add missing file licences and do some clean works.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-29 04:25:01 +01:00
David Zaninovic
8e01d1b9be Add support for block volumes (#6680)
Signed-off-by: David Zaninovic <dzaninovic@catalogicsoftware.com>
2023-09-28 09:44:46 -04:00
danfengliu
a22f28e876 Merge pull request #6895 from blackpiglet/fix_main_push_action_failure
Add go clean in Dockerfile and action.
2023-09-28 21:16:33 +08:00
Xun Jiang
64595cc0f7 Add go clean in Dockerfile and action.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-28 20:30:05 +08:00
qiuming
c6191797b4 Merge pull request #6884 from ywk253100/230928_repo_init
Create the backup repository only when it doesn't exist
2023-09-28 17:36:44 +08:00
qiuming
dffe4f85ce Merge pull request #6893 from Lyndon-Li/fix-main-ci-out-of-space-problem
Fix CI out of disk space problem
2023-09-28 17:36:14 +08:00
Lyndon-Li
24e37c5115 fix CI out of disk space problem
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-28 17:13:27 +08:00
Wenkai Yin(尹文开)
61a6c1ba2a Create the backup repository only when it doesn't exist
When preparing a backup repository, Velero tries to connect to it and creates it if the connection fails. The repository status always records the error reported by the create operation, but the real cause may lie in the connect operation. This is confusing and hard to debug (a sketch of the connect-then-create flow follows this entry).

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-28 14:53:59 +08:00
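
A rough sketch of the connect-then-create flow described above, with hypothetical connect/create callbacks and a stand-in "repository not found" error; Velero's repository layer detects that condition differently. The point is that create only runs when the repository is known to be missing, so unrelated connect failures surface as connect errors.

package main

import (
	"errors"
	"fmt"
)

// errRepoNotFound stands in for the provider-specific "repository does not
// exist" error reported by the connect attempt.
var errRepoNotFound = errors.New("repository not found")

// ensureRepo connects first and creates the repository only when the connect
// error says it does not exist, so other connect failures are not masked by
// a misleading create error.
func ensureRepo(connect, create func() error) error {
	err := connect()
	if err == nil {
		return nil
	}
	if !errors.Is(err, errRepoNotFound) {
		return fmt.Errorf("connect to backup repository: %w", err)
	}
	if err := create(); err != nil {
		return fmt.Errorf("create backup repository: %w", err)
	}
	return connect()
}

func main() {
	created := false
	connect := func() error {
		if !created {
			return errRepoNotFound
		}
		return nil
	}
	create := func() error { created = true; return nil }
	fmt.Println(ensureRepo(connect, create)) // <nil>
}
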
lyndon
af43d96ac9 Merge pull request #6885 from Lyndon-Li/issue-fix-6880
Issue 6880: set ParallelUploadAboveSize as MaxInt64
2023-09-28 14:24:08 +08:00
Lyndon-Li
3e3ffec7cd issue 6880: set ParallelUploadAboveSize as MaxInt64
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-28 12:34:30 +08:00
lyndon
73ea00b477 issue 6861: fill repoIdentifier only for restic repo (#6872)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-27 16:49:35 -04:00
Wenkai Yin(尹文开)
563f1ccee1 Merge pull request #6475 from nilesh-akhade/main
Add `--or-selector` for backup and restore command
2023-09-27 20:09:07 +08:00
lyndon
b6b320c85b Merge pull request #6875 from Lyndon-Li/issue-fix-6859
Issue 6859: move plugin-depending podvolume functions to util pkg
2023-09-27 11:21:24 +08:00
Xun Jiang/Bruce Jiang
66f8e4fc68 Merge pull request #6874 from OrlinVasilev/dave-emeratus
Move Dave Smith-Uchida to Emeritus Maintainer
2023-09-27 03:02:26 +08:00
Lyndon-Li
2e71cffe0e issue: move plugin-depending podvolume functions to util pkg
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-26 16:39:33 +08:00
OrlinVasilev
df0c6724c6 Move Dave Smith-Uchida to Emeritus Maintainer
Signed-off-by: OrlinVasilev <ovasilev@vmware.com>
2023-09-26 10:58:31 +03:00
Shubham Pampattiwar
c3ec7b71c5 Merge pull request #6715 from nilesh-akhade/metric
Remove schedule-related metrics on schedule delete
2023-09-25 10:24:04 -07:00
lou
d8b9328310 support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
Signed-off-by: lou <alex1988@outlook.com>
2023-09-25 18:00:18 +08:00
Xun Jiang/Bruce Jiang
4bf87c01ea Add some description of the existing-resource update policy to state that it works in a best-effort way. (#6856)
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-22 14:18:42 -04:00
Wenkai Yin(尹文开)
d3e5bb7451 Merge pull request #6838 from yanggangtony/fix-metrics-backup_last_status
Change the default value of the velero_backup_last_status metric.
2023-09-20 10:18:52 +08:00
lyndon
b42fb23991 Merge pull request #6839 from Lyndon-Li/multiple-snapshot-class-doc
Doc for multiple snapshot class
2023-09-19 16:24:52 +08:00
Lyndon-Li
f73d9dcaed doc for multiple snapshot class
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-19 16:09:35 +08:00
yanggang
cda722cf9d Fix the backup_last_status metric not reporting the right value when the schedule goes down unexpectedly.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-19 15:25:21 +08:00
Wenkai Yin(尹文开)
63c6a48f92 Merge pull request #6686 from ywk253100/230612_kopia
Make Kopia support Azure AD
2023-09-19 14:31:14 +08:00
Wenkai Yin(尹文开)
b598150cd1 Support setting CA cert for BSL
Support setting CA cert for BSL

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-19 11:28:05 +08:00
Wenkai Yin(尹文开)
3a291e368a Make Kopia support Azure AD
This commit introduces our own Azure storage provider by wrapping Kopia's implementation rather than contributing to upstream, based on the following considerations:
1. Velero needs the capability to interact with the repository concurrently while Kopia doesn't; contributing this upstream would increase Kopia's complexity
2. The configuration items provided by Velero and Kopia conflict, e.g. Velero supports customizing the storage account URI, which is a full path, while Kopia supports customizing the storage account domain, which is part of the URI. We would need to handle backward compatibility and the upgrade case if we contributed upstream, which needs extra effort
3. Contributing upstream is a longer cycle when we need to introduce new changes. With this commit, we no longer depend on upstream for the Azure storage provider part, and it is easier for us to maintain

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-19 11:28:04 +08:00
lyndon
5af664d361 bump kopia to v0.14 (#6833)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-18 21:05:21 +08:00
Daniel Jiang
cf3cb9c4ed Merge pull request #6712 from kaovilai/jobs-label-k8s1.27
On restore, delete Kubernetes 1.27 job controller uid label
2023-09-18 16:49:50 +08:00
lyndon
8481b4c035 Merge pull request #6816 from yanggangtony/fix-docs
Fix some typos in the docs.
2023-09-18 15:07:43 +08:00
lyndon
b3df028e83 Merge pull request #6815 from AgustinRamiroDiaz/main
Typo: remove double space
2023-09-18 12:06:27 +08:00
lyndon
c85638ddb6 Merge pull request #6827 from Lyndon-Li/issue-fix-6786
Issue 6786:always delete VSC regardless of the deletion policy
2023-09-15 14:18:38 +08:00
Lyndon-Li
53489b10ad issue 6786:always delete VSC regardless of the deletion policy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-15 12:10:20 +08:00
Wenkai Yin(尹文开)
185a95585a Set data mover related properties for schedule (#6824)
Set data mover related properties for schedule

Fixes #6820

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-14 18:14:06 +08:00
lyndon
3d4d184a8d Merge pull request #6822 from reasonerjt/update-kopia-repo
Switch the kopia repo to new org
2023-09-14 11:53:05 +08:00
Daniel Jiang
b7bc9a31cb Switch the kopia repo to new org
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-09-14 11:18:11 +08:00
yanggang
4d1c23adfa Fix some typos in the docs.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-13 23:04:08 +08:00
Agustín Díaz
ff45be6fdd Typo: remove double space
Signed-off-by: Agustín Díaz <agustin.ramiro.diaz@gmail.com>
2023-09-13 10:46:28 -03:00
Qi Xu
558a0eef03 Add doc changes after rc1 to v1.12 docs (#6812)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-09-13 18:01:01 +08:00
Clever Hu
9b1cffc007 check pod status before hook (#5211)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
Co-authored-by: cleverhu <shouping.hu@daocloud.io>
2023-09-13 14:49:46 +08:00
qiuming
402703f226 [Cherry-Pick] Optimize removal of the finalizer regardless of whether the dataupload/datadownload CR has been deleted (#6808)
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-09-12 11:33:33 -04:00
qiuming
8a366c6924 Merge pull request #6798 from yanggangtony/clean-some-code
Fix issue #6781, and do some code cleanup.
2023-09-12 14:56:27 +08:00
qiuming
c9fde84586 Merge pull request #6779 from yanggangtony/fix-log-ns-name
Keep the ns/name info in logs consistent with other modules.
2023-09-12 14:55:56 +08:00
Yang Gang (成都)
ec11a5a4cc code clean for repository (#6768)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-12 14:43:28 +08:00
yanggang
c97b31363d Fix some wrong logs and clean up code.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-11 13:38:32 +08:00
Guang Jiong Lou
246831de7b use old namespace in resource modifier (#6724)
* use old namespace in resource modifier

Signed-off-by: lou <alex1988@outlook.com>

* add changelog

Signed-off-by: lou <alex1988@outlook.com>

* update docs

Signed-off-by: lou <alex1988@outlook.com>

* updated after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-09-08 15:29:46 +05:30
lyndon
a4b5b0a79e add csi snapshot data mover doc (#6637)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 17:17:42 +08:00
lyndon
2348099a73 Merge pull request #6788 from Lyndon-Li/issue-fix-6748-3
Fix issue 6748 [2]
2023-09-08 14:57:14 +08:00
lyndon
682422772a Merge pull request #6790 from Lyndon-Li/issue-fix-6785
Fix issue 6785
2023-09-08 14:48:41 +08:00
Lyndon-Li
13d61c27a6 fix issue 6785
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 12:34:12 +08:00
Lyndon-Li
9895428765 fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 09:14:30 +08:00
lyndon
cddc89ea92 Merge pull request #6783 from kaovilai/patch-1
Show yaml example of repository password: file-system-backup.md
2023-09-07 17:44:57 +08:00
Tiger Kaovilai
d714c3c237 Show yaml example of repository password: file-system-backup.md
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-06 16:32:36 -04:00
yanggang
76b6077683 Keep the ns/name info in logs consistent with other modules.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-06 18:40:10 +08:00
Xun Jiang/Bruce Jiang
f72afc8a5a Merge pull request #6760 from blackpiglet/6752_fix
Fix #6752: add namespace exclude check.
2023-09-06 15:44:20 +08:00
Xun Jiang
79b810ed25 Fix #6752: add namespace exclude check.
Add PSA audit and warn labels.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-06 14:44:30 +08:00
Daniel Jiang
a6d61ec5f6 Merge pull request #6770 from ywk253100/230906_restore
[cherry-pick] Update restore controller logic for restore deletion
2023-09-06 12:06:04 +08:00
qiuming
49bb998e59 Merge pull request #6765 from Lyndon-Li/issue-fix-6748
Fix issue 6748
2023-09-06 11:12:44 +08:00
Wenkai Yin(尹文开)
da6ac026d1 Update restore controller logic for restore deletion
1. Skip deleting the restore files from storage if the backup/BSL is not found
2. Allow deleting the restore files from storage even if the BSL is read-only (see the sketch after this entry)

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-06 09:19:42 +08:00
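
A minimal sketch of the two deletion rules above, assuming a hypothetical deletionContext and helper name; Velero's restore controller derives these conditions from the Backup and BackupStorageLocation objects it fetches.

package main

import "fmt"

// deletionContext is a hypothetical summary of what the controller knows
// when a restore is being deleted.
type deletionContext struct {
	backupFound bool
	bslFound    bool
	bslReadOnly bool
}

// shouldDeleteStoredRestoreFiles mirrors the two rules from the commit:
// skip the storage cleanup when the backup/BSL is gone, and do not let a
// read-only BSL block deleting the restore's own files.
func shouldDeleteStoredRestoreFiles(ctx deletionContext) bool {
	if !ctx.backupFound || !ctx.bslFound {
		return false // nothing to clean up against; skip silently
	}
	// Read-only only restricts writing backup data; restore files may still
	// be removed, so ctx.bslReadOnly is intentionally not checked here.
	return true
}

func main() {
	fmt.Println(shouldDeleteStoredRestoreFiles(deletionContext{backupFound: true, bslFound: true, bslReadOnly: true}))   // true
	fmt.Println(shouldDeleteStoredRestoreFiles(deletionContext{backupFound: false, bslFound: true, bslReadOnly: false})) // false
}
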
lyndon
8cb04d4f69 Merge pull request #6751 from Lyndon-Li/issue-fix-6647
Fix issue 6647
2023-09-06 09:03:00 +08:00
Lyndon-Li
d13a23364f fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-05 19:29:28 +08:00
lyndon
c9e1ade1f7 fix issue 6753 (#6757)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-05 10:58:28 +08:00
Lyndon-Li
778feba3ae fix issue 6647
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-04 16:55:36 +08:00
Daniel Jiang
8d3a67544d Merge pull request #6726 from yanggangtony/add-license-velero-helper
Add license notes for velero-helper.
2023-09-04 14:55:51 +08:00
Anshul Ahuja
24abbdcc02 Add anshulahuja98 maintainer details (#6737)
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-09-04 14:54:06 +08:00
Yang Gang (成都)
25898305ef delete unused schema package and params. (#6716)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-04 14:50:10 +08:00
lyndon
b9b2c88c5b Merge pull request #6738 from Lyndon-Li/issue-fix-6733
Fix issue 6733
2023-09-01 17:10:29 +08:00
lyndon
1615cfd7f3 fix issue 6709 (#6741)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 16:52:24 +08:00
qiuming
f26ec9043a Fix kopia snapshot policy not work (#6739)
Signed-off-by: Ming <mqiu@vmware.com>
2023-09-01 16:21:43 +08:00
Lyndon-Li
c4443d506c fix issue 6733
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 15:49:13 +08:00
qiuming
0e5022254f [Cherry-pick Main] Fix velero uninstall bug (#6729)
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-31 16:15:24 +08:00
yanggang
f408b9f6c4 Add license notes for velero-helper.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-08-31 14:07:34 +08:00
Guang Jiong Lou
5dd7c5cd46 add label selector in Resource Modifiers (#6704)
* add label selector in resource modifier

Signed-off-by: lou <alex1988@outlook.com>

* add ut

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-08-31 10:36:59 +05:30
Xun Jiang/Bruce Jiang
db6784aa81 Merge pull request #6674 from danfengliu/monitor-velero-info
monitor velero logs and fix E2E issues
2023-08-29 10:30:51 +08:00
qiuming
499ee7c5d1 Merge pull request #6717 from qiuming-best/main
[Cherry-Pick main] make velero uninstall backward compatible
2023-08-29 10:19:59 +08:00
Ming
85d5785d68 [Cherry-Pick main] make velero uninstall backward compatible
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-29 01:07:41 +00:00
Nilesh Akhade
c7c441364c Remove schedule-related metrics on schedule delete
Signed-off-by: Nilesh Akhade <nakhade@catalogicsoftware.com>
2023-08-28 20:52:32 +05:30
Tiger Kaovilai
c5aad9e488 Remove legacy label version check, to be added back when version is known
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 11:08:44 -04:00
Tiger Kaovilai
f6e8c208ad changelog
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 10:45:55 -04:00
Tiger Kaovilai
7d3d818f93 Handle 1.27 k8s job label changes
per 0e86fa5115/CHANGELOG/CHANGELOG-1.27.md (L1768); see the sketch after this entry.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 10:42:09 -04:00
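
A sketch of stripping the job controller-uid labels on restore, assuming the legacy key and the prefixed key added around Kubernetes 1.27; this is an illustration only, not Velero's restore item action, which also has to handle the Job's selector.

package main

import "fmt"

// controllerUIDKeys lists the label keys to drop from a restored Job so the
// job controller in the target cluster can adopt it: the legacy key and the
// batch.kubernetes.io-prefixed key introduced around Kubernetes 1.27.
var controllerUIDKeys = []string{
	"controller-uid",
	"batch.kubernetes.io/controller-uid",
}

// stripControllerUID removes the controller-uid keys from a label map.
func stripControllerUID(labels map[string]string) {
	for _, k := range controllerUIDKeys {
		delete(labels, k)
	}
}

func main() {
	jobLabels := map[string]string{
		"job-name":                           "demo",
		"controller-uid":                     "1234",
		"batch.kubernetes.io/controller-uid": "1234",
	}
	stripControllerUID(jobLabels)
	fmt.Println(jobLabels) // map[job-name:demo]
}
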
danfengl
15be42f47b monitor velero logs and fix E2E issues
1. Capture Velero pod logs and K8S cluster events;
2. Fix the wrong storageclass yaml file path issue caused by the perf test;
3. Fix the change-storageclass test issue where there is no sc named 'default' in an EKS cluster;
4. Support AWS credentials in config format;
5. Support more E2E script input parameters, like the standby cluster plugins and provider.

Signed-off-by: danfengl <danfengl@vmware.com>
2023-08-28 05:53:32 +00:00
lyndon
831be07dd3 fix issue 6391 (#6702)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-08-25 16:36:41 +08:00
qiuming
164431b2b3 Merge pull request #6689 from qiuming-best/uninstall-fix
Fix dataupload/datadownload deletion failure during Velero uninstall
2023-08-25 11:09:47 +08:00
Xun Jiang/Bruce Jiang
497543774c Merge pull request #6618 from shubham-pampattiwar/restic-pass-doc
Add note for backup repository password configuration
2023-08-24 14:56:32 +08:00
Shubham Pampattiwar
c7422a207a add note for backup repository password configuration
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

address PR feedback

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

reword the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

change FS backups to normal backups in the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-08-23 20:40:08 -07:00
Ming
7f3b7fe853 Fix dataupload/datadownload deletion failure during Velero uninstall
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-24 03:30:28 +00:00
Daniel Jiang
3e613862e6 Merge pull request #6635 from 27149chen/skip-subresource
skip subresource in resource discovery
2023-08-22 13:39:12 +08:00
Xun Jiang/Bruce Jiang
8d0a8bac34 Update changelogs/unreleased/6649-sseago
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2023-08-22 10:37:59 +08:00
Xun Jiang/Bruce Jiang
a62f2fa1a3 Merge pull request #6653 from yanggangtony/fix-backup-controller-err-check
fix backup_controller when credentials to volume snapshot location sh…
2023-08-21 17:12:17 +08:00
yanggang
46ef54e80a Fix backup_controller when credentials for the volume snapshot location show an error.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-08-15 19:36:07 +08:00
Scott Seago
441a32a861 Deal with PartiallyFailed orphaned backups as well as Completed ones
Fixes https://github.com/vmware-tanzu/velero/issues/6648

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-08-14 13:40:32 -04:00
lou
0f9e582fd9 add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-08-11 10:05:23 +08:00
lou
dc83981871 skip subresource in resource discovery
Signed-off-by: lou <alex1988@outlook.com>
2023-08-10 19:13:25 +08:00
Nilesh Akhade
d9a7e2b6ca Add 'orLabelSelector' for backup, restore command
Signed-off-by: Nilesh Akhade <nakhade@catalogicsoftware.com>
2023-07-19 16:16:35 +05:30
Wesley Hayutin
8e8c340dd1 Propose a deprecation process for velero
As discussed in the velero community call [1]
This is a proposed deprecation policy for the
velero project based on the goharbor project.

[1] https://hackmd.io/Jq6F5zqZR7S80CeDWUklkA

Update GOVERNANCE.md

definitive deprecation times, well done

Co-authored-by: Orlix <OrlinVasilev@users.noreply.github.com>
Signed-off-by: Wesley Hayutin <weshayutin@gmail.com>

Update GOVERNANCE.md

Co-authored-by: Ivan Sim <1330522+ihcsim@users.noreply.github.com>
Signed-off-by: Orlix <OrlinVasilev@users.noreply.github.com>

Update GOVERNANCE.md

Co-authored-by: Ivan Sim <1330522+ihcsim@users.noreply.github.com>
Signed-off-by: Orlix <OrlinVasilev@users.noreply.github.com>

add note regarding deprecation window

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-06-08 14:24:46 -07:00
1392 changed files with 113245 additions and 26019 deletions

View File

@@ -13,9 +13,10 @@ reviewers:
- reasonerjt
- ywk253100
- blackpiglet
- qiuming-best
- shubham-pampattiwar
- Lyndon-Li
- anshulahuja98
- kaovilai
tech-writer:
- sseago

View File

@@ -1,5 +1,14 @@
version: 2
updates:
# Dependencies listed in .github/workflows
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
labels:
- "Dependencies"
- "github_actions"
- "kind/changelog-not-required"
# Dependencies listed in go.mod
- package-ecosystem: "gomod"
directory: "/" # Location of package manifests

33
.github/labeler.yml vendored Normal file
View File

@@ -0,0 +1,33 @@
# This file is used by Auto Label PRs action.
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- changed-files:
- any-glob-to-any-file: design/*
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- changed-files:
- any-glob-to-any-file: pkg/plugins/**/*
Dependencies:
- changed-files:
- any-glob-to-any-file: go.mod
Documentation:
- changed-files:
- any-glob-to-any-file: site/content/docs/**/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- all:
- changed-files:
- any-glob-to-any-file: site/**/*
- all-globs-to-all-files: '!site/content/docs/**/*'
has-changelog:
- changed-files:
- any-glob-to-any-file: changelogs/**
has-e2e-2tests:
- changed-files:
- any-glob-to-any-file: test/e2e/**/*
has-unit-tests:
- changed-files:
- any-glob-to-any-file: pkg/**/*_test.go

43
.github/labels.yaml vendored Normal file
View File

@@ -0,0 +1,43 @@
# This file is used by [prow github action](https://github.com/jpmcb/prow-github-actions/) in .github/workflows/prow-action.yml.
# This file only has values for kind and area commands.
area:
- CLI
- CSI
- Cloud/AWS
- Cloud/Azure
- Cloud/DigitalOcean
- Cloud/GCP
- Cloud/vSphere
- Design
- Documentation
- Filters
- Plugins
- Process
- Storage/Minio
- Storage/Cinder
- WindowsSupport
- datamover
- fs-backup
- fs-backup/deletion
- fs-backup/file-selectable
- fs-uploader
- kopia-integration
- migration
- multi-tenancy
- progress-monitoring
- resilience
- schedule
- storage/IBM-ObjectStorage
- upgrade
- volume-snapshot-dm
kind:
- changelog-not-required
- question
- refactor
- requirement
- release-note
- release-blocker
- spike
- tech-debt
- usage-error
- voting

41
.github/labels.yml vendored
View File

@@ -1,41 +0,0 @@
area:
- "Cloud/AWS"
- "Cloud/GCP"
- "Cloud/Azure"
- "Design"
- "Plugins"
# Labels that can be applied to PRs with the /kind command
kind:
- "changelog-not-required"
- "tech-debt"
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- design/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- any: ["site/**/*", "!site/content/docs/**/*"]
Documentation:
- site/content/docs/**/*
Dependencies:
- go.mod
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- "pkg/plugins/**/*"
has-unit-tests:
- "pkg/**/*_test.go"
has-e2e-2tests:
- "test/e2e/**/*"
has-changelog:
- "changelogs/**"

View File

@@ -9,5 +9,5 @@ Fixes #(issue)
# Please indicate you've done the following:
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required` as a comment on this pull request.
- [ ] [Created a changelog file (`make new-changelog`)](https://velero.io/docs/main/code-standards/#adding-a-changelog) or comment `/kind changelog-not-required` on this PR.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set the author of a PR as the assignee
uses: kentaro-m/auto-assign-action@v1.1.1
uses: kentaro-m/auto-assign-action@v2.0.0
with:
configuration-path: ".github/auto-assignees.yml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -13,7 +13,7 @@ jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v3
- uses: actions/labeler@v5
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/labels.yml
configuration-path: .github/labeler.yml

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Request a PR review based on files types/paths, and/or groups the author belongs to
uses: necojackarc/auto-request-review@v0.7.0
uses: necojackarc/auto-request-review@v0.13.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
config: .github/auto-assignees.yml

View File

@@ -1,93 +0,0 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20.7'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all Kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
- 1.23.6
- 1.24.2
- 1.25.3
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
version: "v0.17.0"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

View File

@@ -6,42 +6,35 @@ on:
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
jobs:
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: '1.20.7'
id: go
go-version-file: 'go.mod'
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number and the commit SHA
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built image
id: image-cache
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ./velero.tar
# The cache key a combination of the current PR number and the commit SHA
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cli-cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cli-cache.outputs.cache-hit != 'true' || steps.image-cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cli-cache.outputs.cache-hit != 'true'
@@ -51,61 +44,104 @@ jobs:
- name: Build Velero Image
if: steps.image-cache.outputs.cache-hit != 'true'
run: |
IMAGE=velero VERSION=pr-test make container
docker save velero:pr-test -o ./velero.tar
IMAGE=velero VERSION=pr-test BUILD_OUTPUT_TYPE=docker make container
docker save velero:pr-test-linux-amd64 -o ./velero.tar
# Check and build MinIO image once for all e2e tests
- name: Check Bitnami MinIO Dockerfile version
id: minio-version
run: |
DOCKERFILE_SHA=$(curl -s https://api.github.com/repos/bitnami/containers/commits?path=bitnami/minio/2025/debian-12/Dockerfile\&per_page=1 | jq -r '.[0].sha')
echo "dockerfile_sha=${DOCKERFILE_SHA}" >> $GITHUB_OUTPUT
- name: Cache MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ steps.minio-version.outputs.dockerfile_sha }}
- name: Build MinIO Image from Bitnami Dockerfile
if: steps.minio-cache.outputs.cache-hit != 'true'
run: |
echo "Building MinIO image from Bitnami Dockerfile..."
git clone --depth 1 https://github.com/bitnami/containers.git /tmp/bitnami-containers
cd /tmp/bitnami-containers/bitnami/minio/2025/debian-12
docker build -t bitnami/minio:local .
docker save bitnami/minio:local > ${{ github.workspace }}/minio-image.tar
# Create json of k8s versions to test
# from guide: https://stackoverflow.com/a/65094398/4590470
setup-test-matrix:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set k8s versions
id: set-matrix
# everything excluding older tags. limits needs to be high enough to cover all latest versions
# and test labels
# grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" filters for v1.25 to v9.99
# and removes older patches of the same minor version
# awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}'
run: |
echo "matrix={\
\"k8s\":$(wget -q -O - "https://hub.docker.com/v2/namespaces/kindest/repositories/node/tags?page_size=50" | grep -o '"name": *"[^"]*' | grep -o '[^"]*$' | grep -v -E "alpha|beta" | grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" | awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}' | sort -r | sed s/v//g | jq -R -c -s 'split("\n")[:-1]'),\
\"labels\":[\
\"Basic && (ClusterResource || NodePort || StorageClass)\", \
\"ResourceFiltering && !Restic\", \
\"ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources\", \
\"(NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)\"\
]}" >> $GITHUB_OUTPUT
# Run E2E test against all Kubernetes versions on kind
run-e2e-test:
needs: build
needs:
- build
- setup-test-matrix
runs-on: ubuntu-latest
strategy:
matrix:
k8s:
- 1.19.16
- 1.20.15
- 1.21.12
- 1.22.9
- 1.23.6
- 1.24.0
- 1.25.3
matrix: ${{fromJson(needs.setup-test-matrix.outputs.matrix)}}
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20.7'
id: go
- name: Check out the code
uses: actions/checkout@v2
- name: Install MinIO
run:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
- uses: engineerd/setup-kind@v0.5.0
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
version: "v0.17.0"
go-version-file: 'go.mod'
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ needs.build.outputs.minio-dockerfile-sha }}
- name: Load MinIO Image
run: |
echo "Loading MinIO image..."
docker load < ./minio-image.tar
- name: Install MinIO
run: |
docker run -d --rm -p 9000:9000 -e "MINIO_ROOT_USER=minio" -e "MINIO_ROOT_PASSWORD=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:local
- uses: engineerd/setup-kind@v0.6.2
with:
skipClusterLogsExport: true
version: "v0.27.0"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ./_output/bin/linux/amd64/velero
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built Image
id: image-cache
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ./velero.tar
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Load Velero Image
run:
kind load image-archive velero.tar
# always try to fetch the cached go modules as the e2e test needs it either
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Run E2E test
run: |
cat << EOF > /tmp/credential
@@ -118,17 +154,27 @@ jobs:
curl -LO https://dl.k8s.io/release/v${{ matrix.k8s }}/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
GOPATH=~/go CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential ADDITIONAL_BSL_BUCKET=additional-bucket \
GINKGO_FOCUS='Basic\]\[ClusterResource' VELERO_IMAGE=velero:pr-test \
make -C test/e2e run
git clone https://github.com/vmware-tanzu-experiments/distributed-data-generator.git -b main /tmp/kibishii
GOPATH=~/go \
CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws \
BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential \
BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws \
ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential \
ADDITIONAL_BSL_BUCKET=additional-bucket \
VELERO_IMAGE=velero:pr-test-linux-amd64 \
PLUGINS=velero/velero-plugin-for-aws:latest \
GINKGO_LABELS="${{ matrix.labels }}" \
KIBISHII_DIRECTORY=/tmp/kibishii/kubernetes/yaml/ \
make -C test/ run-e2e
timeout-minutes: 30
- name: Upload debug bundle
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: DebugBundle
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*

View File

@@ -19,7 +19,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
@@ -31,6 +31,6 @@ jobs:
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v2
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: 'trivy-results.sarif'

View File

@@ -12,7 +12,7 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'kind/changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}

View File

@@ -7,24 +7,16 @@ jobs:
strategy:
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20.7'
id: go
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version-file: 'go.mod'
- name: Make ci
run: make ci
- name: Upload test coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out

View File

@@ -8,14 +8,14 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Codespell
uses: codespell-project/actions-codespell@master
with:
# ignore the config/.../crd.go file as it's generated binary data that is edited elswhere.
# ignore the config/.../crd.go file as it's generated binary data that is edited elsewhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go,./config/crd/v2alpha1/crds/crds.go,./go.sum,./LICENSE
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast,notin,sme,optin
check_filenames: true
check_hidden: true

View File

@@ -13,18 +13,18 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v3
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v3
with:
version: latest

View File

@@ -14,7 +14,7 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout
- name: Verify .goreleaser.yml and try a dryrun release.

View File

@@ -1,14 +1,24 @@
name: Pull Request Linter Check
on: [pull_request]
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
jobs:
build:
name: Run Linter Check
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Linter check
run: make lint
- name: Check out the code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
- name: Linter check
uses: golangci/golangci-lint-action@v6
with:
version: v1.64.5
args: --verbose

View File

@@ -9,12 +9,21 @@ jobs:
execute:
runs-on: ubuntu-latest
steps:
- uses: jpmcb/prow-github-actions@v1.1.2
- uses: jpmcb/prow-github-actions@v1.1.3
with:
# Only support /kind command for now.
# TODO: before allowing the /lgtm command, see if we can block merging if changelog labels are missing.
prow-commands: "/area
/kind
prow-commands: |
/approve
/area
/assign
/cc
/uncc"
/close
/hold
/kind
/milestone
/retitle
/remove
/reopen
/uncc
/unassign
github-token: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
with:
# The default value is "1" which fetches only a single commit. If we merge PR without squash or rebase,
# there are at least two commits: the first one is the merge commit and the second one is the real commit

View File

@@ -14,87 +14,43 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20.7'
id: go
- uses: actions/checkout@v3
# Fix issue of setup-gcloud
- run: |
sudo apt-get install python2.7
export CLOUDSDK_PYTHON="/usr/bin/python2"
- uses: google-github-actions/setup-gcloud@v0
with:
version: '285.0.0'
service_account_key: ${{ secrets.GCS_SA_KEY }}
export_default_credentials: true
- run: gcloud info
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Build
run: make local
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Use the JSON key in secret to login gcr.io
- uses: 'docker/login-action@v2'
with:
registry: 'gcr.io' # or REGION.docker.pkg.dev
username: '_json_key'
password: '${{ secrets.GCR_SA_KEY }}'
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker image prune -a --force
- name: Check out the code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
- name: Build
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker system prune -a --force
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
VERSION=$(./hack/docker-push.sh | grep 'VERSION:' | awk -F: '{print $2}' | xargs)
# Upload Velero image package to GCS
source hack/ci/build_util.sh
BIN=velero
RESTORE_HELPER_BIN=velero-restore-helper
GCS_BUCKET=velero-builds
VELERO_IMAGE=${BIN}-${VERSION}
VELERO_RESTORE_HELPER_IMAGE=${RESTORE_HELPER_BIN}-${VERSION}
VELERO_IMAGE_FILE=${VELERO_IMAGE}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_FILE=${VELERO_RESTORE_HELPER_IMAGE}.tar.gz
VELERO_IMAGE_BACKUP_FILE=${VELERO_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE=${VELERO_RESTORE_HELPER_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
cp ${VELERO_IMAGE_FILE} ${VELERO_IMAGE_BACKUP_FILE}
cp ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE}
uploader ${VELERO_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_IMAGE_BACKUP_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE} ${GCS_BUCKET}
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
./hack/docker-push.sh

View File

@@ -9,10 +9,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.3.1
uses: cirrus-actions/rebase@1.8
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3
- uses: actions/stale@v9.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
@@ -20,4 +20,4 @@ jobs:
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security,backlog"

6
.gitignore vendored
View File

@@ -53,4 +53,8 @@ tilt-resources/cloud
# test generated files
test/e2e/report.xml
coverage.out
__debug_bin*
__debug_bin*
debug.test*
# make lint cache
.cache/

View File

@@ -12,31 +12,6 @@ run:
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# which dirs to skip: issues from them won't be reported;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but default dirs are skipped independently
# from this option's value (see skip-dirs-use-default).
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
skip-dirs:
- test/*
- pkg/plugin/generated/*
# - autogenerated_by_my_lib
# default is true. Enables skipping of directories:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs-use-default: true
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
skip-files:
- ".*_test.go$"
# - lib/bad.go
# by default isn't set. If set we pass it to "go list -mod={option}". From "go help modules":
# If invoked with -mod=readonly, the go command is disallowed from the implicit
# automatic updating of go.mod described above. Instead, it fails when any changes
@@ -52,11 +27,12 @@ run:
# If false (default) - golangci-lint acquires file lock on start.
allow-parallel-runners: false
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
formats:
- format: colored-line-number
path: stdout
# print lines of code with issue, default is true
print-issued-lines: true
@@ -64,18 +40,25 @@ output:
# print linter name in the end of issue text, default is true
print-linter-name: true
# make issues output unique by line, default is true
uniq-by-line: true
# all available settings of specific linters
linters-settings:
depguard:
rules:
main:
deny:
# specify an error message to output when a denylisted package is used
- pkg: github.com/sirupsen/logrus
desc: "logging is allowed only by logutils.Log"
dogsled:
# checks assignments with too many blank identifiers; default is 2
max-blank-identifiers: 2
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
@@ -93,25 +76,31 @@ linters-settings:
# path to a file containing a list of functions to exclude from checking
# see https://github.com/kisielk/errcheck#excluding-functions for details
# exclude: /path/to/file.txt
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: false
funlen:
lines: 60
statements: 40
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
nestif:
# minimal complexity of if statements to report, 5 by default
min-complexity: 4
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 5
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
@@ -136,12 +125,15 @@ linters-settings:
paramsOnly: true
# rangeValCopy:
# sizeThreshold: 32
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
godot:
# check all top-level comments, not only declarations
check-all: false
godox:
# report any comments starting with keywords, this is useful for TODO or FIXME comments that
# might be left in the code accidentally and should be resolved before merging
@@ -149,37 +141,20 @@ linters-settings:
- NOTE
- OPTIMIZE # marks code that should be optimized before merging
- HACK # marks hack-arounds that should be removed before merging
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
goimports:
# put imports beginning with prefix after 3rd-party packages;
# it's a comma-separated list of prefixes
local-prefixes: github.com/org/project
golint:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gomnd:
settings:
mnd:
# the list of enabled checks, see https://github.com/tommy-muehle/go-mnd/#checks for description.
checks: argument,case,condition,operation,return,assign
gomodguard:
allowed:
modules: # List of allowed modules
# - gopkg.in/yaml.v2
domains: # List of allowed module domains
# - golang.org
blocked:
modules: # List of blocked modules
# - github.com/uudashr/go-module: # Blocked module
# recommendations: # Recommended modules that should be used instead (Optional)
# - golang.org/x/mod
# reason: "`mod` is the official go.mod parser library." # Reason why the recommended module should be used (Optional)
versions: # List of blocked module version constraints
# - github.com/mitchellh/go-homedir: # Blocked module with version constraint
# version: "< 1.1.0" # Version constraint, see https://github.com/Masterminds/semver#basic-comparisons
# reason: "testing if blocked version constraint works." # Reason why the version constraint exists. (Optional)
gosec:
excludes:
- G115
govet:
# report about shadowed variables
# check-shadowing: true
@@ -200,23 +175,14 @@ linters-settings:
disable:
- shadow
disable-all: false
depguard:
list-type: blacklist # Velero.io word list : ignore
include-go-root: false
packages:
- github.com/sirupsen/logrus
packages-with-error-message:
# specify an error message to output when a denylisted package is used
- github.com/sirupsen/logrus: "logging is allowed only by logutils.Log"
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
@@ -224,9 +190,11 @@ linters-settings:
locale: US
ignore-words:
- someword
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
@@ -236,25 +204,82 @@ linters-settings:
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
nolintlint:
# Enable to ensure that nolint directives are all used. Default is true.
allow-unused: false
# Disable to ensure that nolint directives don't have a leading space. Default is true.
allow-leading-space: true
# Exclude following linters from requiring an explanation. Default is [].
allow-no-explanation: []
# Enable to require an explanation of nonzero length after each nolint directive. Default is false.
require-explanation: true
# Enable to require nolint directives to mention the specific linter being suppressed. Default is false.
require-specific: true
perfsprint:
strconcat: false
sprintf1: false
errorf: false
int-conversion: true
revive:
rules:
- name: blank-imports
disabled: true
- name: context-as-argument
disabled: true
- name: context-keys-type
- name: dot-imports
disabled: true
- name: early-return
disabled: true
arguments:
- "preserveScope"
- name: empty-block
disabled: true
- name: error-naming
disabled: true
- name: error-return
disabled: true
- name: error-strings
disabled: true
- name: errorf
disabled: true
- name: increment-decrement
- name: indent-error-flow
disabled: true
- name: range
- name: receiver-naming
disabled: true
- name: redefines-builtin-id
disabled: true
- name: superfluous-else
disabled: true
arguments:
- "preserveScope"
- name: time-naming
- name: unexported-return
disabled: true
- name: unnecessary-stmt
- name: unreachable-code
- name: unused-parameter
disabled: true
- name: use-any
- name: var-declaration
- name: var-naming
disabled: true
rowserrcheck:
packages:
- github.com/jmoiron/sqlx
testifylint:
# TODO: enable them all
disable:
- go-require
- float-compare
- require-error
enable-all: true
testpackage:
# regexp pattern to skip files
skip-regexp: (export|internal)_test\.go
@@ -264,15 +289,11 @@ linters-settings:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
whitespace:
multi-if: false # Enforces newlines (or comments) after every multi-line if statement
multi-func: false # Enforces newlines (or comments) after every multi-line function signature
wsl:
# If true append is only allowed to be cuddled if appending value is
# matching variables, fields or types on line above. Default is true.
@@ -290,7 +311,7 @@ linters-settings:
force-case-trailing-whitespace: 0
# Force cuddling of err checks with err var assignment
force-err-cuddling: false
# Allow leading comments to be separated with empty liens
# Allow leading comments to be separated with empty lines
allow-separated-leading-comment: false
linters:
@@ -300,10 +321,12 @@ linters:
- asciicheck
- bidichk
- bodyclose
- copyloopvar
- dogsled
- durationcheck
- dupword
- errcheck
- exportloopref
- errchkjson
- goconst
- gofmt
- goheader
@@ -312,14 +335,21 @@ linters:
- gosec
- gosimple
- govet
- ginkgolinter
- importas
- ineffassign
- misspell
- nakedret
- nosprintfhostport
- nilerr
- noctx
- nolintlint
- perfsprint
- revive
- staticcheck
- stylecheck
- revive
- testifylint
- thelper
- typecheck
- unconvert
- unparam
@@ -328,18 +358,48 @@ linters:
- whitespace
fast: false
issues:
# which dirs to skip: issues from them won't be reported;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but default dirs are skipped independently
# from this option's value (see skip-dirs-use-default).
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
exclude-dirs:
- pkg/plugin/generated/*
exclude-rules:
- linters:
- staticcheck
text: "github.com/golang/protobuf/proto" # grpc-go still uses github.com/golang/protobuf/proto.
- linters:
- staticcheck
text: "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2019-06-01/storage" # Kopia still depends on this.
- linters:
- staticcheck
text: "DefaultVolumesToRestic" # No need to report deprecate for DefaultVolumesToRestic.
- path: ".*_test.go$"
linters:
- errcheck
- goconst
- gosec
- govet
- staticcheck
- stylecheck
- unparam
- unused
- path: test/
linters:
- errcheck
- goconst
- gosec
- nilerr
- staticcheck
- stylecheck
- unparam
- unused
- path: ".*data_upload_controller_test.go$"
linters:
- dupword
text: "type"
- path: ".*config_test.go$"
linters:
- dupword
text: "bucket"
# The list of ids of default excludes to include or disable. By default it's empty.
include:
@@ -351,8 +411,8 @@ issues:
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
# Show only new issues created after git revision `REV`
# new-from-rev: origin/main
# make issues output unique by line, default is true
uniq-by-line: true
severity:
# Default value is empty string.
@@ -377,4 +437,4 @@ severity:
rules:
- linters:
- dupl
severity: info
severity: info

View File

@@ -46,9 +46,6 @@ archives:
files:
- LICENSE
- examples/**/*
# Add the setting to resolve the DEPRECATED warning. Actually, Velero's case is not affected by the rlcp behavior change.
# https://github.com/orgs/goreleaser/discussions/3659#discussioncomment-4587257
rlcp: true
checksum:
name_template: 'CHECKSUM'
release:

View File

@@ -16,6 +16,7 @@ If you're using Velero and want to add your organization to this list,
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/static/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
## Success Stories
Below is a list of adopters of Velero in **production environments** that have
@@ -62,7 +63,10 @@ Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to pe
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br>
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a simple, scalable, cloud-native solution providing data protection and disaster recovery as a service. This solution is built using Kubernetes for protecting Kubernetes clusters.<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a full-featured, scalable, cloud-native solution providing Kubernetes data protection, disaster recovery, and migration as a service. An option to manage existing Velero instances and an enterprise self-hosted option are also available.<br>
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure-native, Kubernetes-aware, enterprise-ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
## Adding your organization to the list of Velero Adopters
@@ -118,3 +122,6 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[103]: https://cloudcasa.io/
[104]: https://www.catalogicsoftware.com/
[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview

View File

@@ -1,7 +1,11 @@
## Current release:
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.15.md][25]
## Older releases:
* [CHANGELOG-1.14.md][24]
* [CHANGELOG-1.13.md][23]
* [CHANGELOG-1.12.md][22]
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.10.md][20]
* [CHANGELOG-1.9.md][19]
* [CHANGELOG-1.8.md][18]
@@ -24,6 +28,10 @@
* [CHANGELOG-0.3.md][1]
[25]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.15.md
[24]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.14.md
[23]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.13.md
[22]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.12.md
[21]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.11.md
[20]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.10.md
[19]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.9.md

View File

@@ -5,7 +5,7 @@
We as members, contributors, and leaders pledge to make participation in the Velero project and our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
identity and expression, level of experience, education, socioeconomic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.20.7-bullseye as velero-builder
FROM --platform=$BUILDPLATFORM golang:1.23.11-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -42,12 +42,16 @@ RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-restore-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-restore-helper && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.20.7-bullseye as restic-builder
FROM --platform=$BUILDPLATFORM golang:1.23.11-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
ARG TARGETOS
ARG TARGETARCH
@@ -65,10 +69,11 @@ COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
# Velero image packing section
FROM gcr.io/distroless/base-nossl-debian11@sha256:f10e1fbf558c630a4b74a987e6c754d45bf59f9ddcefce090f6b111925996767
FROM paketobuildpacks/run-jammy-tiny:0.2.73
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
@@ -76,5 +81,4 @@ COPY --from=velero-builder /output /
COPY --from=restic-builder /output /
USER nonroot:nonroot
USER cnb:cnb

55
Dockerfile-Windows Normal file
View File

@@ -0,0 +1,55 @@
# Copyright the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.23.10-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
ARG PKG
ARG VERSION
ARG REGISTRY
ARG GIT_SHA
ARG GIT_TREE_STATE
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE} -X ${PKG}/pkg/buildinfo.ImageRegistry=${REGISTRY}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN}.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-helper.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Velero image packing section
FROM mcr.microsoft.com/windows/nanoserver:${OS_VERSION}
COPY --from=velero-builder /output /
USER ContainerUser

View File

@@ -107,6 +107,29 @@ Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Deprecation Policy
### Deprecation Process
Any contributor may introduce a request to deprecate a feature or an option of a feature by opening a feature request issue in the vmware-tanzu/velero GitHub project. The issue should describe why the feature is no longer needed or has become detrimental to Velero, as well as whether and how it has been superseded. The submitter should give as much detail as possible.
Once the issue is filed, a one-month discussion period begins. Discussions take place within the issue itself as well as in the community meetings. The person who opens the issue, or a maintainer, should add the date and time marking the end of the discussion period in a comment on the issue as soon as possible after it is opened. A decision on the issue needs to be made within this one-month period.
The feature will be deprecated by a supermajority vote of 50% plus one of the project maintainers at the time of the vote tallying, which takes place 72 hours after the community meeting that closes the comment period. (Maintainers are permitted to vote in advance of the deadline, but should hold their votes until as close to the deadline as possible in order to hear all of the discussion.) Votes will be tallied in comments on the issue.
Non-maintainers may add non-binding votes in comments to the issue as well; these are opinions to be taken into consideration by maintainers, but they do not count as votes.
If the vote passes, the deprecation window takes effect in the subsequent release, and the removal follows the schedule.
### Schedule
If a deprecation proposal passes by supermajority vote, the feature is deprecated in the next minor release and can be removed completely after two minor versions (or the equivalent major version); e.g., if a feature is deprecated in the Nth minor version, it can be removed after the N+2 minor version or its equivalent if the major version number changes.
### Deprecation Window
The deprecation window is the period from the release in which the deprecation takes effect through the release in which the feature is removed. During this period, only critical security vulnerabilities and catastrophic bugs should be fixed.
**Note:** If a backup relies on a deprecated feature, then backups made with the last Velero release before this feature is removed must still be restorable in version `n+2`. For instance, for something like restic feature support, that might mean restic is removed from the list of supported uploader types in version `n`, but the underlying implementation required to restore from a restic backup won't be removed until release `n+2`.
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.

View File

@@ -4,16 +4,16 @@
## Maintainers
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|-------------------------------------------|
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [Kasten](https://github.com/kastenhq/) |
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -25,12 +25,13 @@
* Carlisia Thompson ([carlisia](https://github.com/carlisia))
* Bridget McErlean ([zubron](https://github.com/zubron))
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Architect | Dave Smith-Uchida [dsu-igeek](https://github.com/dsu-igeek) |
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |

173
Makefile
View File

@@ -22,15 +22,26 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
GCR_REGISTRY ?= gcr.io/velero-gcp
# In order to push images to an insecure registry, follow the two steps:
# 1. Set "INSECURE_REGISTRY=true"
# 2. Provide your own buildx builder instance by setting "BUILDX_INSTANCE=your-own-builder-instance"
# The builder can be created with the following command:
# cat << EOF > buildkitd.toml
# [registry."insecure-registry-ip:port"]
# http = true
# insecure = true
# EOF
# docker buildx create --name=velero-builder --driver=docker-container --bootstrap --use --config ./buildkitd.toml
# Refer to https://github.com/docker/buildx/issues/1370#issuecomment-1288516840 for more details
INSECURE_REGISTRY ?= false
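# Example invocation (values are illustrative, not defaults): once the builder described
# above exists, an image can be built and pushed to the insecure registry with:
#   make container BUILD_OUTPUT_TYPE=registry REGISTRY=insecure-registry-ip:port \
#     INSECURE_REGISTRY=true BUILDX_INSTANCE=velero-builder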
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
GCR_IMAGE ?= $(GCR_REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
VELERO_DOCKERFILE_WINDOWS ?= Dockerfile-Windows
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
@@ -68,13 +79,21 @@ TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION) $(GCR_IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION)
endif
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
# check buildx is enabled only if docker is in path
# macOS/Windows docker cli without Docker Desktop license: https://github.com/abiosoft/colima
# To add buildx to docker cli: https://github.com/abiosoft/colima/discussions/273#discussioncomment-2684502
ifeq ($(shell which docker 2>/dev/null 1>&2 && docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
# if emulated docker cli from podman, assume enabled
# emulated docker cli from podman: https://podman-desktop.io/docs/migrating-from-docker/emulating-docker-cli-with-podman
# podman known issues:
# - on remote podman, such as on macOS,
# --output issue: https://github.com/containers/podman/issues/15922
else ifeq ($(shell which docker 2>/dev/null 1>&2 && cat $(shell which docker) | grep -c "exec podman"), 1)
BUILDX_ENABLED ?= true
else
BUILDX_ENABLED ?= false
@@ -84,13 +103,32 @@ define BUILDX_ERROR
buildx not enabled, refusing to run this recipe
see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
endef
# comma cannot be escaped and can only be used in Make function arguments by putting into variable
comma=,
# The version of restic binary to be downloaded
RESTIC_VERSION ?= 0.15.0
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
BUILD_OUTPUT_TYPE ?= docker
BUILD_OS ?= linux
BUILD_ARCH ?= amd64
BUILD_WINDOWS_VERSION ?= ltsc2022
ifeq ($(BUILD_OUTPUT_TYPE), docker)
ALL_OS = linux
ALL_ARCH.linux = $(word 2, $(subst -, ,$(shell go env GOOS)-$(shell go env GOARCH)))
else
ALL_OS = $(subst $(comma), ,$(BUILD_OS))
ALL_ARCH.linux = $(subst $(comma), ,$(BUILD_ARCH))
endif
ALL_ARCH.windows = $(if $(filter windows,$(ALL_OS)),amd64,)
ALL_OSVERSIONS.windows = $(if $(filter windows,$(ALL_OS)),$(BUILD_WINDOWS_VERSION),)
ALL_OS_ARCH.linux = $(foreach os, $(filter linux,$(ALL_OS)), $(foreach arch, ${ALL_ARCH.linux}, ${os}-$(arch)))
ALL_OS_ARCH.windows = $(foreach os, $(filter windows,$(ALL_OS)), $(foreach arch, $(ALL_ARCH.windows), $(foreach osversion, ${ALL_OSVERSIONS.windows}, ${os}-${osversion}-${arch})))
ALL_OS_ARCH = $(ALL_OS_ARCH.linux)$(ALL_OS_ARCH.windows)
ALL_IMAGE_TAGS = $(IMAGE_TAGS)
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)
@@ -108,27 +146,26 @@ platform_temp = $(subst -, ,$(ARCH))
GOOS = $(word 1, $(platform_temp))
GOARCH = $(word 2, $(platform_temp))
GOPROXY ?= https://proxy.golang.org
GOBIN=$$(pwd)/.go/bin
# If you want to build all binaries, see the 'all-build' rule.
# If you want to build all containers, see the 'all-containers' rule.
all:
@$(MAKE) build
@$(MAKE) build BIN=velero-restore-helper
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers:
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restore-helper
local: build-dirs
# Add DEBUG=1 to enable debug locally
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -145,6 +182,7 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
$(MAKE) shell CMD="-c '\
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -183,11 +221,38 @@ container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
ifeq ($(BUILDX_INSTANCE),)
@echo creating a buildx instance
-docker buildx rm velero-builder || true
@docker buildx create --use --name=velero-builder
else
@echo using a specified buildx instance $(BUILDX_INSTANCE)
@docker buildx use $(BUILDX_INSTANCE)
endif
@mkdir -p _output
@for osarch in $(ALL_OS_ARCH); do \
$(MAKE) container-$${osarch}; \
done
ifeq ($(BUILD_OUTPUT_TYPE), registry)
@for tag in $(ALL_IMAGE_TAGS); do \
IMAGE_TAG=$${tag} $(MAKE) push-manifest; \
done
endif
container-linux-%:
@BUILDX_ARCH=$* $(MAKE) container-linux
container-linux:
@echo "building container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
$(addprefix -t , $(GCR_IMAGE_TAGS)) \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-linux-$(BUILDX_ARCH).tar,)" \
--platform="linux/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-linux-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
@@ -196,14 +261,54 @@ endif
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE) .
@echo "container: $(IMAGE):$(VERSION)"
ifeq ($(BUILDX_OUTPUT_TYPE)_$(REGISTRY), registry_velero)
docker pull $(IMAGE):$(VERSION)
rm -f $(BIN)-$(VERSION).tar
docker save $(IMAGE):$(VERSION) -o $(BIN)-$(VERSION).tar
gzip -f $(BIN)-$(VERSION).tar
endif
@echo "built container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
container-windows-%:
@BUILDX_OSVERSION=$(firstword $(subst -, ,$*)) BUILDX_ARCH=$(lastword $(subst -, ,$*)) $(MAKE) container-windows
container-windows:
@echo "building container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH).tar,)" \
--platform="windows/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
--build-arg=OS_VERSION=$(BUILDX_OSVERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE_WINDOWS) .
@echo "built container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
push-manifest:
@echo "building manifest: $(IMAGE_TAG) for $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})"
@docker manifest create --amend --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG) $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})
@set -x; \
for arch in $(ALL_ARCH.windows); do \
for osversion in $(ALL_OSVERSIONS.windows); do \
BASEIMAGE=mcr.microsoft.com/windows/nanoserver:$${osversion}; \
full_version=`docker manifest inspect --insecure=$(INSECURE_REGISTRY) $${BASEIMAGE} | jq -r '.manifests[0].platform["os.version"]'`; \
docker manifest annotate --os windows --arch $${arch} --os-version $${full_version} $(IMAGE_TAG) $(IMAGE_TAG)-windows-$${osversion}-$${arch}; \
done; \
done
@echo "pushing manifest $(IMAGE_TAG)"
@docker manifest push --purge --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
@echo "pushed manifest $(IMAGE_TAG):"
@docker manifest inspect --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
SKIP_TESTS ?=
test: build-dirs
@@ -357,11 +462,29 @@ gen-docs:
.PHONY: test-e2e
test-e2e: local
$(MAKE) -e VERSION=$(VERSION) -C test/e2e run
$(MAKE) -e VERSION=$(VERSION) -C test/ run-e2e
.PHONY: test-perf
test-perf: local
$(MAKE) -e VERSION=$(VERSION) -C test/perf run
$(MAKE) -e VERSION=$(VERSION) -C test/ run-perf
go-generate:
go generate ./pkg/...
go generate ./pkg/...
# requires an authenticated gh cli
# gh: https://cli.github.com/
# First create a PR
# gh pr create --title 'Title name' --body 'PR body'
# by default uses PR title as changelog body but can be overwritten like so
# make new-changelog CHANGELOG_BODY="Changes you have made"
new-changelog: GH_LOGIN ?= $(shell gh pr view --json author --jq .author.login 2> /dev/null)
new-changelog: GH_PR_NUMBER ?= $(shell gh pr view --json number --jq .number 2> /dev/null)
new-changelog: CHANGELOG_BODY ?= '$(shell gh pr view --json title --jq .title)'
new-changelog:
@if [ "$(GH_LOGIN)" = "" ]; then \
echo "branch does not have PR or cli not logged in, try 'gh auth login' or 'gh pr create'"; \
exit 1; \
fi
@mkdir -p ./changelogs/unreleased/ && \
echo $(CHANGELOG_BODY) > ./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN) && \
echo \"$(CHANGELOG_BODY)\" added to "./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN)"

24
OWNERS Normal file
View File

@@ -0,0 +1,24 @@
# This file is used by the [PROW action](https://github.com/jpmcb/prow-github-actions) to approve and merge PRs.
# The file's format follows the [OWNERS SPEC](https://www.kubernetes.dev/docs/guide/owners/#owners-spec).
# List of usernames who may use /lgtm
reviewers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100
# List of usernames who may use /approve
approvers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100

View File

@@ -40,17 +40,18 @@ See [the list of releases][6] to find out about feature changes.
The following is a list of the supported Kubernetes versions for each Velero version.
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|----------------------------------------|
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
| 1.10 | 1.18-latest | 1.22.5, 1.23.8, 1.24.6 and 1.25.1 |
| 1.9 | 1.18-latest | 1.20.5, 1.21.2, 1.22.5, 1.23, and 1.24 |
| 1.8 | 1.18-latest | |
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version. If you have a question about test coverage before v1.9, please reach out in the [#velero-users](https://kubernetes.slack.com/archives/C6VCGP4MT) Slack channel.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version.
If you are interested in using a different version of Kubernetes with a given Velero version, we'd recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.20.7 as tilt-helper
FROM golang:1.23.11 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -100,7 +100,7 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Enable staticcheck linter. (#5788, @blackpiglet)
* Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best)
* Bump up Restic version to 0.15.0 (#5784, @qiuming-best)
* Add File system backup related matrics to Grafana dashboard
* Add File system backup related metrics to Grafana dashboard
- Add metrics backup_warning_total for record of total warnings
- Add metrics backup_last_status for record of last status of the backup (#5779, @allenxu404)
* Design for Handling backup of volumes by resources filters (#5773, @qiuming-best)

View File

@@ -51,7 +51,6 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Prior to v1.12, the parameter `uploader-type` for Velero installation had a default value of "restic". However, starting from this version, the default value has been changed to "kopia". This means that Velero will now use Kopia as the default path for file system backup.
* The ways of setting the CSI snapshot time have changed in v1.12. First, the sync waiting time for creating a snapshot handle in the CSI plugin is changed from a fixed 10 minutes to backup.Spec.CSISnapshotTimeout. Second, the async waiting time for the VolumeSnapshot and VolumeSnapshotContent status to turn into `ReadyToUse` during an operation uses the operation's timeout, which defaults to 4 hours.
* As of [Velero helm chart v4.0.0](https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-4.0.0), multiple BSLs and VSLs are supported, and the BSL and VSL configuration has changed from a map into a slice; [this breaking change](https://github.com/vmware-tanzu/helm-charts/pull/413) is not backward compatible, so it is best to change the BSL and VSL configuration into slices before the upgrade.
* Prior to v1.12, deleting the Velero namespace would easily remove all the resources within it. However, with the introduction of finalizers attached to the Velero CRs including `restore`, `dataupload`, and `datadownload` in this version, directly deleting the Velero namespace may get stuck indefinitely because the pods responsible for handling the finalizers might be deleted before the resources attached to the finalizers. To avoid this issue, please use the command `velero uninstall` to delete all the Velero resources, or ensure that you handle the finalizers appropriately before deleting the Velero namespace.
### Limitations/Known issues
@@ -133,10 +132,3 @@ prior PVC restores with CSI (#6111, @eemcmullan)
* Make GetPluginConfig accessible from other packages. (#6151, @tkaovila)
* Ignore not found error during patching managedFields (#6136, @ywk253100)
* Fix the goreleaser issues and add a new goreleaser action (#6109, @blackpiglet)
* Add CSI snapshot data movement doc (#6793, @Lyndon-Li)
* Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen)
* Fix #6752: add namespace exclude check. (#6762, @blackpiglet)
* Update restore controller logic for restore deletion (#6761, @ywk253100)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6758, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6688, @27149chen)
* This pr made some improvements in Resource Modifiers:1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen)

View File

@@ -0,0 +1,166 @@
## v1.13
### 2024-01-10
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.0
### Container Image
`velero/velero:v1.13.0`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### Highlights
#### Resource Modifier Enhancement
Velero introduced Resource Modifiers in v1.12.0. This feature allows users to specify a ConfigMap with a set of rules to modify resources during restoration. However, only JSON Patch was supported when creating the rules, and JSON Patch has limitations that cannot cover all use cases. In v1.13.0, Velero adds new support for JSON Merge Patch and Strategic Merge Patch, which provide more power and flexibility and allow users to use the same ConfigMap to apply patches on the resources. More design details can be found in the [Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/merge-patch-and-strategic-in-resource-modifier.md) design. For instructions on how to use the feature, please refer to the [Resource Modifiers](https://velero.io/docs/v1.13/restore-resource-modifiers/) doc.
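As a rough sketch, a JSON Patch style rule can be wired into a restore as shown below. The ConfigMap keys and the namespace, names, and patch values are illustrative; the linked Resource Modifiers doc is the authoritative reference for the schema, including the new Merge Patch and Strategic Merge Patch rule shapes.

```bash
cat > resource-modifiers.yaml <<'EOF'
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims
    resourceNameRegex: ".*"
    namespaces:
    - my-namespace            # illustrative namespace
  patches:
  - operation: replace
    path: "/spec/storageClassName"
    value: "premium"           # illustrative target storage class
EOF

# Register the rules and reference them from a restore
kubectl create configmap my-resource-modifiers -n velero --from-file=resource-modifiers.yaml
velero restore create --from-backup my-backup --resource-modifier-configmap my-resource-modifiers
```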
#### Node-Agent Concurrency
Velero data movement activities from fs-backups and CSI snapshot data movements run in the Velero node-agent, so they may be hosted by every node in the cluster and consume resources (i.e., CPU, memory, network bandwidth) there. With v1.13, users can configure how many data movement activities (a.k.a. loads) run on each node, globally or per node, so that they can better balance the performance of Velero data movement activities against resource consumption in the cluster. For more information, check the [Node-Agent Concurrency](https://velero.io/docs/v1.13/node-agent-concurrency/) document.
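As a minimal sketch of this knob, the concurrency could be supplied through a ConfigMap along the lines below. The ConfigMap name, key names, and values follow my reading of the Node-Agent Concurrency doc linked above and should be treated as illustrative, not authoritative.

```bash
cat > node-agent-config.json <<'EOF'
{
  "loadConcurrency": {
    "globalConfig": 2,
    "perNodeConfig": [
      {
        "nodeSelector": { "matchLabels": { "kubernetes.io/hostname": "node-1" } },
        "number": 3
      }
    ]
  }
}
EOF

# The node-agent is expected to read this ConfigMap from the Velero namespace at startup
kubectl create configmap node-agent-config -n velero --from-file=node-agent-config.json
```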
#### Parallel Files Upload Options
Velero now supports configurable options for parallel file uploads when using the Kopia uploader to do fs-backups or CSI snapshot data movements, which makes it possible to speed up backups.
For more information, please check [Here](https://velero.io/docs/v1.13/backup-reference/#parallel-files-upload).
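For example, the parallelism can be passed per backup; the backup name, the fs-backup opt-in, and the value 10 below are illustrative:

```bash
velero backup create my-backup \
  --default-volumes-to-fs-backup \
  --parallel-files-upload 10
```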
#### Write Sparse Files Options
If using fs-restore or CSI snapshot data movements, it's supported to write sparse files during restore. For more information, please check [Here](https://velero.io/docs/v1.13/restore-reference/#write-sparse-files).
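For example, sparse-file writing can be requested on a restore; the restore and backup names below are illustrative:

```bash
velero restore create my-restore \
  --from-backup my-backup \
  --write-sparse-files
```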
#### Backup Describe
In v1.13, a Backup Volumes section is added to the velero backup describe command output. It describes information for all the volumes included in the backup across the various backup types, i.e. native snapshot, fs-backup, CSI snapshot, and CSI snapshot data movement. In particular, velero backup describe now shows information for CSI snapshot data movements, which is not supported in v1.12.
Additionally, the backup describe command no longer checks the EnableCSI feature gate on the client side, so if a backup has volumes with CSI snapshots or CSI snapshot data movements, the command always shows the corresponding information in its output.
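A typical way to view the new section (the backup name is illustrative):

```bash
velero backup describe my-backup --details
```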
#### Backup's new VolumeInfo metadata
A new metadata file is created in the backup repository's backup-name sub-directory to store information about the PVCs and PVs included in the backup. The information includes the backup method used for the PVC and PV data, snapshot information, and status. The VolumeInfo metadata file determines how the PV resource should be restored. Velero downstream software can also use this metadata file to get a summary of the backup's volume data information.
#### Enhancement for CSI Snapshot Data Movements when Velero Pod Restart
When performing backup and restore operations, enhancements have been implemented so that if the Velero server pod or node-agent restarts due to certain exceptional circumstances, the current backup or restore process does not get stuck or interrupted.
#### New status fields added to show hook execution details
Hook execution status is now included in the backup/restore CR status and displayed in the backup/restore describe command output. Specifically, it will show the number of hooks which attempted to execute under the HooksAttempted field and the number of hooks which failed to execute under the HooksFailed field.
#### AWS SDK Bump Up
Bump up AWS SDK for Go to version 2, which offers significant performance improvements in CPU and memory utilization over version 1.
#### Azure AD/Workload Identity Support
Azure AD/Workload Identity is the recommended approach for authenticating with Azure services/AKS. Velero introduced support for Azure AD/Workload Identity on the Velero Azure plugin side in previous releases, and in v1.13.0 Velero adds new support for Kopia operations (file system backup/data mover/etc.) with Azure AD/Workload Identity.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.21.6.
* Bump several dependent libraries to new versions.
* Bump Kopia to v0.15.0.
### Breaking changes
* Backup describe command: due to the backup describe output enhancement, some existing information (i.e. the output for native snapshot, CSI snapshot, and fs-backup) has been moved to the Backup Volumes section with some format changes.
* API type changes: changes the field [DataMoverConfig](https://github.com/vmware-tanzu/velero/blob/v1.13.0/pkg/apis/velero/v2alpha1/data_upload_types.go#L54) in DataUploadSpec from `*map[string]string` to `map[string]string`
* Velero install command: due to issue [#7264](https://github.com/vmware-tanzu/velero/issues/7264), v1.13.0 introduces a breaking change that makes the informer cache enabled by default, to keep the actual behavior consistent with the help message (the informer cache was disabled by default before the change).
### Limitations/Known issues
* The backup's VolumeInfo metadata is not updated with the information from the async operations. This function could be supported in the v1.14 release.
### Note
* Velero introduces the informer cache, which is enabled by default. The informer cache improves restore performance but may cause higher memory consumption. Increase the memory limit of the Velero pod, or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero, if you get an OOM error.
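For example, if OOM errors are observed, the cache can be turned off at install time; the usual provider, bucket, and credential flags are omitted here for brevity:

```bash
# Add your usual provider/bucket/credential flags alongside this option.
velero install --disable-informer-cache
```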
### Deprecation announcement
* The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They live in the Velero repository's pkg/generated directory. According to the n+2 support policy, the deprecated code is kept for two more releases; the pkg/generated directory should be deleted in the v1.15 release.
* After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support backups generated by older versions of Velero, the old logic is also kept. The support for backups without the VolumeInfo metadata file will be kept for two releases; the support logic will be deleted in the v1.15 release.
### All Changes
* Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
* Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)
* Fix issue #7244. By the end of the upload, check the outstanding incomplete snapshots and delete them by calling ApplyRetentionPolicy (#7245, @Lyndon-Li)
* Adjust the newline output of resource list in restore describer (#7238, @allenxu404)
* Remove the redundant newline in backup describe output (#7229, @allenxu404)
* Fix issue #7189, data mover generic restore - don't assume the first volume as the restore volume (#7201, @Lyndon-Li)
* Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content. (#7184, @blackpiglet)
* Refactor DownloadRequest Stream function (#7175, @blackpiglet)
* Add `--skip-immediately` flag to schedule commands; `--schedule-skip-immediately` server and install (#7169, @kaovilai)
* Add node-agent concurrency doc and change the config name from dataPathConcurrency to loadConcurrency (#7161, @Lyndon-Li)
* Enhance hooks tracker by adding a returned error to record function (#7153, @allenxu404)
* Track the skipped PV when SnapshotVolumes set as false (#7152, @reasonerjt)
* Add more linters part 2. (#7151, @blackpiglet)
* Fix issue #7135, check pod status before checking node-agent pod status (#7150, @Lyndon-Li)
* Treat namespace as a regular restorable item (#7143, @reasonerjt)
* Allow sparse option for Kopia & Restic restore (#7141, @qiuming-best)
* Use VolumeInfo to help restore the PV. (#7138, @blackpiglet)
* Node agent restart enhancement (#7130, @qiuming-best)
* Fix issue #6695, add describe for data mover backups (#7125, @Lyndon-Li)
* Add hooks status to backup/restore CR (#7117, @allenxu404)
* Include plugin name in the error message by operations (#7115, @reasonerjt)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7102, @Lyndon-Li)
* Generate VolumeInfo for backup. (#7100, @blackpiglet)
* Fix issue #7094, fallback to full backup if previous snapshot is not found (#7096, @Lyndon-Li)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7095, @Lyndon-Li)
* Skip syncing the backup which doesn't contain backup metadata (#7081, @ywk253100)
* Fix issue #6693, partially fail restore if CSI snapshot is involved but CSI feature is not ready, i.e., CSI feature gate is not enabled or CSI plugin is not installed. (#7077, @Lyndon-Li)
* Truncate the credential file to avoid the change of secret content messing it up (#7072, @ywk253100)
* Add VolumeInfo metadata structures. (#7070, @blackpiglet)
* improve discoveryHelper.Refresh() in restore (#7069, @27149chen)
* Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7061, @blackpiglet)
* Add the implementation for design #6950, configurable data path concurrency (#7059, @Lyndon-Li)
* Make data mover fail early (#7052, @qiuming-best)
* Remove dependency of generated client part 3. (#7051, @blackpiglet)
* Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7046, @kaovilai)
* Remove the Velero generated client. (#7041, @blackpiglet)
* Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7038, @Lyndon-Li)
* Read information from the credential specified by BSL (#7034, @ywk253100)
* Fix #6857. Added check for matching Owner References when synchronizing backups, removing references that are not found/have mismatched uid. (#7032, @deefdragon)
* Add description markers for dataupload and datadownload CRDs (#7028, @shubham-pampattiwar)
* Add HealthCheckNodePort deletion logic for Service restore. (#7026, @blackpiglet)
* Fix inconsistent behavior of Backup and Restore hook execution (#7022, @allenxu404)
* Fix #6964. Don't use csiSnapshotTimeout (10 min) for waiting snapshot to readyToUse for data mover, so as to make the behavior complied with CSI snapshot backup (#7011, @Lyndon-Li)
* restore: Use warning when Create IsAlreadyExist and Get error (#7004, @kaovilai)
* Bump kopia to 0.15.0 (#7001, @Lyndon-Li)
* Make Kopia file parallelism configurable (#7000, @qiuming-best)
* Fix unified repository (kopia) s3 credentials profile selection (#6995, @kaovilai)
* Fix #6988, always get region from BSL if it is not empty (#6990, @Lyndon-Li)
* Limit PVC block mode logic to non-Windows platform. (#6989, @blackpiglet)
* It is a valid case that the Status.RestoreSize field in VolumeSnapshot is not set, if so, get the volume size from the source PVC to create the backup PVC (#6976, @Lyndon-Li)
* Check whether the action is a CSI action and whether CSI feature is enabled, before executing the action. (#6968, @blackpiglet)
* Add the PV backup information design document. (#6962, @blackpiglet)
* Change controller-runtime List option from MatchingFields to ListOptions (#6958, @blackpiglet)
* Add the design for node-agent concurrency (#6950, @Lyndon-Li)
* Import auth provider plugins (#6947, @0x113)
* Fix #6668, add a limitation for file system restore parallelism with other types of restores (CSI snapshot restore, CSI snapshot movement restore) (#6946, @Lyndon-Li)
* Add MSI Support for Azure plugin. (#6938, @yanggangtony)
* Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6926, @Lyndon-Li)
* Bump up aws sdk to aws-sdk-go-v2 (#6923, @reasonerjt)
* Optional check if targeted container is ready before executing a hook (#6918, @Ripolin)
* Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6917, @27149chen)
* Fix issue 6913: Velero Built-in Datamover: Backup gets stuck in phase WaitingForPluginOperations when Node Agent pod gets restarted (#6914, @shubham-pampattiwar)
* Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6885, @Lyndon-Li)
* Replace the base image with paketobuildpacks image (#6883, @ywk253100)
* Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies to unnecessary repository packages like kopia, azure, etc. (#6875, @Lyndon-Li)
* Fix #6861. Only Restic path requires repoIdentifier, so for non-restic path, set the repoIdentifier fields as empty in PVB and PVR and also remove the RepoIdentifier column in the get output of PVBs and PVRs (#6872, @Lyndon-Li)
* Add volume types filter in resource policies (#6863, @qiuming-best)
* change the metrics backup_attempt_total default value to 1. (#6838, @yanggangtony)
* Bump kopia to v0.14 (#6833, @Lyndon-Li)
* Retry failed create when using generateName (#6830, @sseago)
* Fix issue #6786, always delete VSC regardless of the deletion policy (#6827, @Lyndon-Li)
* Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797, @27149chen)
* Fix the node-agent missing metrics-address defines. (#6784, @yanggangtony)
* Fix default BSL setting not work (#6771, @qiuming-best)
* Update restore controller logic for restore deletion (#6770, @ywk253100)
* Fix #6752: add namespace exclude check. (#6760, @blackpiglet)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6757, @Lyndon-Li)
* Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6751, @Lyndon-Li)
* Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen)
* Perf improvements for existing resource restore (#6723, @sseago)
* Remove schedule-related metrics on schedule delete (#6715, @nilesh-akhade)
* Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid are deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6712, @kaovilai)
* This pr made some improvements in Resource Modifiers: 1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen)
* Make Kopia support Azure AD (#6686, @ywk253100)
* Add support for block volumes with Kopia (#6680, @dzaninovic)
* Delete PartiallyFailed orphaned backups as well as Completed ones (#6649, @sseago)
* Add CSI snapshot data movement doc (#6637, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6635, @27149chen)
* Add `orLabelSelectors` for backup, restore commands (#6475, @nilesh-akhade)
* fix run preHook and postHook on completed pods (#5211, @cleverhu)

View File

@@ -0,0 +1,105 @@
## v1.14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.14.0
### Container Image
`velero/velero:v1.14.0`
### Documentation
https://velero.io/docs/v1.14/
### Upgrading
https://velero.io/docs/v1.14/upgrade-to-1.14/
### Highlights
#### The maintenance work for kopia/restic backup repositories is run in jobs
Since velero started using kopia for filesystem-level backup/restore, we've noticed that when velero connects to the kopia backup repositories and performs maintenance, it sometimes consumes excessive memory that can cause the velero pod to get OOM killed. To mitigate this issue, the maintenance work is moved out of the velero pod into a separate Kubernetes job, and the user is able to specify the resource requests in "velero install".
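A minimal sketch of setting the maintenance job resources at install time, assuming the maintenance-job flags listed under the v1.15 notes further below are also exposed by "velero install" (provider/bucket values are placeholders):
```
velero install \
  --provider aws \
  --bucket velero-backups \
  --maintenance-job-cpu-request 100m \
  --maintenance-job-mem-request 256Mi \
  --maintenance-job-cpu-limit 500m \
  --maintenance-job-mem-limit 512Mi
```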
#### Volume Policies are extended to support more actions to handle volumes
In an earlier release, a flexible volume policy was introduced to skip certain volumes from a backup. In v1.14 we've enhanced this policy to allow the user to set how the volumes should be backed up. The user can set "fs-backup" or "snapshot" as the value of "action" in the policy and velero will back up the volumes accordingly. This enhancement gives the user fine-grained control like "opt-in/out" without having to update the target workload. For more details please refer to https://velero.io/docs/v1.14/resource-filtering/#supported-volumepolicy-actions
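A minimal sketch of such a policy, assuming the resource-policies configMap format from the linked doc (storage class names are placeholders):
```
version: v1
volumePolicies:
- conditions:
    storageClass:
    - standard
  action:
    type: fs-backup
- conditions:
    storageClass:
    - premium
  action:
    type: snapshot
```
The configMap holding this YAML is then referenced when creating the backup, e.g. via `velero backup create ... --resource-policies-configmap <name>`.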
#### Node Selection for Data Movement Backup
In velero the data movement flow relies on datamover pods, and these pods may take substantial resources and keep running for a long time. In v1.14, the user will be able to create a configmap to define the eligible nodes on which the datamover pods are launched. For more details refer to https://velero.io/docs/v1.14/data-movement-backup-node-selection/
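A hedged sketch of the node selection data, assuming the `loadAffinity`/`nodeSelector` keys described in the linked doc; the JSON lives in a configMap in the Velero namespace, and the label is a placeholder:
```
{
  "loadAffinity": [
    {
      "nodeSelector": {
        "matchLabels": {
          "backup-node": "true"
        }
      }
    }
  ]
}
```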
#### VolumeInfo metadata for restored volumes
In v1.13, we introduced volumeinfo metadata for backup to help velero CLI and downstream adopter understand how velero handles each volume during backup. In v1.14, similar metadata will be persisted for each restore. velero CLI is also updated to bring more info in the output of "velero restore describe".
#### "Finalizing" phase is introduced to restores
The "Finalizing" phase is added to the state transition flow to restore, which helps us fix several issues: The labels added to PVs will be restored after the data in the PV is restored via volumesnapshotter. The post restore hook will be executed after datamovement is finished.
#### Certificate-based authentication support for Azure
Besides service principal with secret (password)-based authentication, Velero introduces support for service principal with certificate-based authentication in v1.14.0. This approach enables you to adopt phishing-resistant authentication by using conditional access policies, which better protects Azure resources and is the way recommended by Azure.
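A sketch of what the Azure credentials file might look like with certificate-based authentication; the variable names assume azidentity-style environment variables and the certificate path is a placeholder, so check the Azure plugin docs for the exact keys:
```
AZURE_SUBSCRIPTION_ID=<subscription-id>
AZURE_TENANT_ID=<tenant-id>
AZURE_CLIENT_ID=<client-id>
AZURE_CLIENT_CERTIFICATE_PATH=/credentials/client-cert.pem
AZURE_RESOURCE_GROUP=<resource-group>
AZURE_CLOUD_NAME=AzurePublicCloud
```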
### Runtime and dependencies
* Golang runtime: v1.22.2
* kopia: v0.17.0
### Limitations/Known issues
* For external BackupItemAction plugins that take snapshots for PVs, such as the vSphere plugin: if the plugin checks the value of the "snapshotVolumes" field in the backup spec as a criterion for snapshotting, the settings in the volume policy will not take effect. For example, if "snapshotVolumes" is set to False in the backup spec but a volume meets the condition in the volume policy for the "snapshot" action, the plugin will not take a snapshot of the volume because it does not check the settings in the volume policy. For more details please refer to #7818
### Breaking changes
* The CSI plugin has been merged into the velero repo in the v1.14 release. It is installed by default as an internal plugin, and should not be installed via the "--plugins" parameter of the "velero install" command.
* The default resource requests and limits for the node agent are removed in v1.14, so that the node agent pods have the QoS class of "BestEffort". For more details please refer to #7391
* There's a change in namespace filtering behavior during backup: In v1.14, when the includedNamespaces/excludedNamespaces fields are not set and the labelSelector/OrLabelSelectors are set in the backup spec, the backup will only include the namespaces which contain the resources that match the label selectors, while in previous releases all namespaces would be included in the backup with such settings. For more details refer to #7105
* Patching the PV in the "Finalizing" state may cause the restore to be in "PartiallyFailed" state when the PV is blocked in "Pending" state, while in the previous release the restore may end up being in "Complete" state. For more details refer to #7866
### All Changes
* Fix backup log to show error string, not index (#7805, @piny940)
* Modify the volume helper logic. (#7794, @blackpiglet)
* Add documentation for extension of volume policy feature (#7779, @shubham-pampattiwar)
* Surface errors when waiting for backupRepository and timeout occurs (#7762, @kaovilai)
* Add existingResourcePolicy restore CR validation to controller (#7757, @kaovilai)
* Fix condition matching in resource modifier when there are multiple rules (#7715, @27149chen)
* Bump up the version of KinD and k8s in github actions (#7702, @reasonerjt)
* Implementation for Extending VolumePolicies to support more actions (#7664, @shubham-pampattiwar)
* Migrate from `github.com/Azure/azure-storage-blob-go` to `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` (#7598, @mmorel-35)
* When Included/ExcludedNamespaces are omitted, and LabelSelector or OrLabelSelector is used, namespaces without selected items are excluded from backup. (#7697, @blackpiglet)
* Display CSI snapshot restores in restore describe (#7687, @reasonerjt)
* Use specific credential rather than the credential chain for Azure (#7680, @ywk253100)
* Modify hook docs for clarity on displaying hook execution results (#7679, @allenxu404)
* Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase (#7619, @allenxu404)
* migrating to `sdk/resourcemanager/**/arm**` from `services/**/mgmt/**` (#7596, @mmorel-35)
* Bump up to go1.22 (#7666, @reasonerjt)
* Fix issue #7648. Adjust the exposing logic to avoid exposing failure and snapshot leak when expose fails (#7662, @Lyndon-Li)
* Track and persist restore volume info (#7630, @reasonerjt)
* Check the existence of the namespaces provided in the "--include-namespaces" option (#7569, @ywk253100)
* Add the finalization phase to the restore workflow (#7377, @allenxu404)
* Upgrade the version of go plugin related libs/tools (#7373, @ywk253100)
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck. (#7322, @kaovilai)
* Merge CSI plugin code into Velero. (#7609, @blackpiglet)
* Fix issue #7391, remove the default constraint for node-agent pods (#7488, @Lyndon-Li)
* Fix DataDownload fails during restore for empty PVC workload (#7521, @qiuming-best)
* Add repository maintenance job (#7451, @qiuming-best)
* Check whether the VolumeSnapshot's source PVC is nil before using it.
Skip populate VolumeInfo for data-moved PV when CSI is not enabled. (#7515, @blackpiglet)
* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7458, @Lyndon-Li)
* Patch newly dynamically provisioned PV with volume info to restore custom setting of PV (#7504, @allenxu404)
* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
* dependabot: support github-actions updates (#7594, @mmorel-35)
* Include the design for adding the finalization phase to the restore workflow (#7317, @allenxu404)
* Fix issue #7211. Enable advanced feature capability and add support to concatenate objects for unified repo. (#7452, @Lyndon-Li)
* Add design to introduce restore volume info (#7610, @reasonerjt)
* Increase the k8s client QPS/burst to avoid throttling request errors (#7311, @ywk253100)
* Support update the backup VolumeInfos by the Async ops result. (#7554, @blackpiglet)
* FS backup created PodVolumeBackups even when the backup excluded the PVC, so logic was added to skip the PVC volume type when the PVC is not included in the backup resources to be backed up. (#7472, @sbahar619)
* Respect and use `credentialsFile` specified in BSL.spec.config when IRSA is configured over Velero Pod Environment credentials (#7374, @reasonerjt)
* Move the native snapshot definition code into internal directory (#7544, @blackpiglet)
* Fix issue #7036. Add the implementation of node selection for data mover backups (#7437, @Lyndon-Li)
* Fix issue #7535, add the MustHave resource check during item collection and item filter for restore (#7585, @Lyndon-Li)
* build(deps): bump json-patch to v5.8.0 (#7584, @mmorel-35)
* Add confirm flag to velero plugin add (#7566, @kaovilai)
* do not skip unknown gvr at the beginning and get new gr when kind is changed (#7523, @27149chen)
* Fix snapshot leak for backup (#7558, @qiuming-best)
* For issue #7036, add the document for data mover node selection (#7640, @Lyndon-Li)
* Add design for Extending VolumePolicies to support more actions (#6956, @shubham-pampattiwar)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380, @kaovilai)
* Improve the concurrency for PVBs in different pods (#7571, @ywk253100)
* Bump up Kopia to v0.16.0 and open kopia repo with no index change (#7559, @Lyndon-Li)
* Bump up the versions of several Kubernetes-related libs (#7489, @ywk253100)
* Make parallel restore configurable (#7512, @qiuming-best)
* Support certificate-based authentication for Azure (#7549, @ywk253100)
* Fix issue #7281, batch delete snapshots in the same repo (#7438, @Lyndon-Li)
* Add CRD name to error message when it is not ready to use (#7295, @josemarevalo)
* Add the design for node selection for data mover backup (#7383, @Lyndon-Li)
* Bump up aws-sdk to latest version to leverage Pod Identity credentials. (#7307, @guikcd)
* Fix issue #7246. Document the behavior for repo snapshot deletion (#7622, @Lyndon-Li)
* Fix issue #7583, set backupName optional for Restore CRD (#7617, @Lyndon-Li)

View File

@@ -0,0 +1,145 @@
## v1.15
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0
### Container Image
`velero/velero:v1.15.0`
### Documentation
https://velero.io/docs/v1.15/
### Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
### Highlights
#### Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits such as:
- It avoids accessing volume data through the host path; host path access is privileged, may involve security escalations, and is a concern for users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
- For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
#### Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized in the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and Velero also supports customized IBAs for any resources.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though item block concepts and IBA plugins are fully supported. The multi-threading support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
#### Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes on which repository maintenance jobs run, so that you can run them on idle nodes or keep them away from nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
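A minimal sketch of node selection in that configMap, assuming the `global` and `loadAffinity` keys from the linked doc (the label is a placeholder):
```
{
  "global": {
    "loadAffinity": [
      {
        "nodeSelector": {
          "matchLabels": {
            "velero-maintenance": "allowed"
          }
        }
      }
    ]
  }
}
```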
#### Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. In this way, the data mover expose process can be significantly accelerated for some storage providers (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
#### Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. In this way, the provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs, e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
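A hedged sketch covering both the read-only and the storage class settings, assuming the per-source-storage-class `backupPVC` keys from the linked doc (names are placeholders; a `spcNoRelabeling` option was later added for the SELinux case noted under Limitations):
```
{
  "backupPVC": {
    "source-storage-class": {
      "storageClass": "single-replica-storage-class",
      "readOnly": true
    }
  }
}
```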
#### Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, e.g., read, write, maintenance, etc. The cache consumes root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. In this way, even if your pod doesn't have much space in its root file system, it won't be evicted for running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
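A minimal sketch of the cache limit in that configMap; the repository-type key and the `cacheLimitMB` field name are assumptions based on the linked doc:
```
{
  "kopia": {
    "cacheLimitMB": 2048
  }
}
```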
#### Performance improvements
In 1.15, several performance-related fixes/enhancements are included, which make significant performance improvements in specific scenarios:
- A memory leak of the Velero server after plugin calls has been fixed, see issue https://github.com/vmware-tanzu/velero/issues/7925
- The `client-burst/client-qps` parameters are automatically inherited by plugins, so that you can use the same Velero server parameters to accelerate plugin executions when a large number of API server calls happen, see issue https://github.com/vmware-tanzu/velero/issues/7806
- Maintenance of the Kopia repository could take a huge amount of memory in scenarios where a huge number of files have been backed up; Velero 1.15 includes the Kopia upstream enhancement that fixes the problem, see issue https://github.com/vmware-tanzu/velero/issues/7510
### Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
### Limitations/Known issues
#### Read-only backup PVC may not work in SELinux environments
Due to a Kubernetes upstream issue, if a volume is mounted as read-only in SELinux environments, the read privilege is not granted to any user; as a result, the data mover backup will fail. On the other hand, the backupPVC must be mounted as read-only in order to accelerate the data mover expose process.
Therefore, a user option is added in the same backup PVC configuration configMap; once the option is enabled, the backupPod container will run as a super-privileged container with SELinux access control disabled. If you have concerns about this super-privileged container, or you have configured [pod security admissions](https://kubernetes.io/docs/concepts/security/pod-security-admission/) that don't allow super-privileged containers, you will not be able to use this read-only backupPVC feature and will lose the benefit of accelerating the data mover expose process.
### Breaking changes
#### Deprecation of Restic
The Restic path for fs-backup enters the deprecation process starting from 1.15. According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy), for 1.15, if the Restic path is used, fs-backup backups/restores are still created and succeed, but you will see warnings in the scenarios below:
- When `--uploader-type=restic` is used in Velero installation
- When Restic path is used to create backup/restore of fs-backup
#### node-agent configuration name is configurable
Previously, the node-agent configuration configMap was looked up by a fixed name. In 1.15, Velero allows you to customize the name of the configMap; the name must be specified via the node-agent server parameter `node-agent-configmap`.
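A sketch of how the parameter would appear in the node-agent DaemonSet container args (the configMap name is a placeholder):
```
args:
- node-agent
- server
- --node-agent-configmap=my-node-agent-config
```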
#### Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward-compatibility reasons, the same Velero server parameters are preserved as is, but the configMap is recommended, and values in the configMap take precedence if they exist in both places:
```
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
```
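A sketch of the equivalent configMap content; the `global`, `keepLatestMaintenanceJobs` and `podResources` keys are assumptions based on the repository-maintenance doc linked above, and values are illustrative:
```
{
  "global": {
    "keepLatestMaintenanceJobs": 3,
    "podResources": {
      "cpuRequest": "100m",
      "memoryRequest": "256Mi",
      "cpuLimit": "500m",
      "memoryLimit": "512Mi"
    }
  }
}
```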
#### Changing PVC selected-node feature is deprecated
In 1.15, the [Changing PVC selected-node feature](https://velero.io/docs/v1.15/restore-reference/#changing-pvc-selected-node) enters deprecation process and will be removed in future releases according to [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy). Usage of this feature for any purpose is not recommended.
### All Changes
* add no-relabeling option to backupPVC configmap (#8288, @sseago)
* only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
* Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
* Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
* Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
* Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
* Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
* Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
* Pass Velero server command args to the plugins (#8166, @ywk253100)
* Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
* Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
* Add document for data mover micro service (#8144, @Lyndon-Li)
* Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
* Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
* Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
* Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
* Modify E2E and perf test report generated directory (#8129, @blackpiglet)
* Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
* Delete generated k8s client and informer. (#8114, @blackpiglet)
* Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
* ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
* Fix issue #8032, make node-agent configMap name configurable (#8097, @Lyndon-Li)
* Fix issue #8072, add the warning messages for restic deprecation (#8096, @Lyndon-Li)
* Fix issue #7620, add backup repository configuration implementation and support cacheLimit configuration for Kopia repo (#8093, @Lyndon-Li)
* Patch dbr's status when error happens (#8086, @reasonerjt)
* According to design #7576, after node-agent restarts, if a DU/DD is in InProgress status, re-capture the data mover ms pod and continue the execution (#8085, @Lyndon-Li)
* Updates to IBM COS documentation to match current version (#8082, @gjanders)
* Data mover micro service DUCR/DDCR controller refactor according to design #7576 (#8074, @Lyndon-Li)
* add retries with timeout to existing patch calls that moves a backup/restore from InProgress/Finalizing to a final status phase. (#8068, @kaovilai)
* Data mover micro service restore according to design #7576 (#8061, @Lyndon-Li)
* Internal ItemBlockAction plugins (#8054, @sseago)
* Data mover micro service backup according to design #7576 (#8046, @Lyndon-Li)
* Avoid wrapping failed PVB status with empty message. (#8028, @mrnold)
* Created new ItemBlockAction (IBA) plugin type (#8026, @sseago)
* Make PVPatchMaximumDuration timeout configurable (#8021, @shubham-pampattiwar)
* Reuse existing plugin manager for get/put volume info (#8012, @sseago)
* Data mover ms watcher according to design #7576 (#7999, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7988, @Lyndon-Li)
* For issue #7700 and #7747, add the design for backup PVC configurations (#7982, @Lyndon-Li)
* Only get VolumeSnapshotClass when DataUpload exists. (#7974, @blackpiglet)
* Fix issue #7972, sync the backupPVC deletion in expose clean up (#7973, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7969, @blackpiglet)
* Check whether the volume's source is PVC before fetching its PV. (#7967, @blackpiglet)
* Check whether the namespaces specified in namespace filter exist. (#7965, @blackpiglet)
* Add design for backup repository configurations for issue #7620, #7301 (#7963, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7955, @Lyndon-Li)
* Skip PV patch step in Restore workflow for WaitForFirstConsumer VolumeBindingMode Pending state PVCs (#7953, @shubham-pampattiwar)
* Fix issue #7904, add the deprecation and limitation clarification for change PVC selected-node feature (#7948, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7944, @blackpiglet)
* Don't consider unschedulable pods unrecoverable (#7899, @sseago)
* Upgrade to robfig/cron/v3 to support time zone specification. (#7793, @kaovilai)
* Add the result in the backup's VolumeInfo. (#7775, @blackpiglet)
* Migrate from github.com/golang/protobuf to google.golang.org/protobuf (#7593, @mmorel-35)
* Add the design for data mover micro service (#7576, @Lyndon-Li)
* Descriptive restore error when restoring into a terminating namespace. (#7424, @kaovilai)
* Ignore missing path error in conditional match (#7410, @seanblong)
* Propose a deprecation process for velero (#5532, @shubham-pampattiwar)

View File

@@ -0,0 +1,201 @@
## v1.16.2
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.2
### Container Image
`velero/velero:v1.16.2`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### All Changes
* Update "Default Volumes to Fs Backup" to "File System Backup (Default)" (#9105, @shubham-pampattiwar)
* Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9103, @shubham-pampattiwar)
* Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9102, @blackpiglet)
* Fix issue #9077, don't block backup deletion on list VS error (#9101, @Lyndon-Li)
* Mounted cloud credentials should not be world-readable (#9094, @sseago)
* Allow for proper tracking of multiple hooks per container (#9060, @sseago)
* Add BSL status check for backup/restore operations. (#9010, @blackpiglet)
## v1.16.1
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.1
### Container Image
`velero/velero:v1.16.1`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### All Changes
* Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic (#8940, @ywk253100)
* Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8926, @blackpiglet)
* Warn for not found error in patching managed fields (#8916, @sseago)
* Fix issue 8878, relief node os deduction error checks (#8911, @Lyndon-Li)
## v1.16
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0
### Container Image
`velero/velero:v1.16.0`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### Highlights
#### Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, either stateful or stateless:
* Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image for hybrid CPU architecture and hybrid platform. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
* Deployment in Windows clusters: Velero node-agent, data mover pods and maintenance jobs now support running on both Linux and Windows nodes
* Data mover backup/restore for Windows workloads: the Velero built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue https://github.com/vmware-tanzu/velero/issues/8289 for more information.
#### Parallel Item Block backup
v1.16 now supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves the backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they will also run in parallel along with the item blocks.
Users are allowed to configure the parallelism through the `--item-block-worker-count` Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8334.
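For example, the worker count can be set at install time or on the server directly (the value is illustrative):
```
velero install ... --item-block-worker-count 4
# or add the flag to the Velero server args of an existing deployment:
velero server --item-block-worker-count 4
```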
#### Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore was only allowed to happen on the node to which the volume is attached. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (https://github.com/vmware-tanzu/velero/issues/8044).
In v1.16, users are allowed to configure data mover restores running and spreading evenly across all nodes in the cluster. The configuration is done through a new flag `ignoreDelayBinding` in node-agent configuration (https://github.com/vmware-tanzu/velero/issues/8242).
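A minimal sketch of that flag in the node-agent configuration, assuming it sits under the `restorePVC` section as described in the linked issue:
```
{
  "restorePVC": {
    "ignoreDelayBinding": true
  }
}
```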
#### Data mover enhancements in observability
In 1.16, some observability enhancements are added:
* Output various statuses of intermediate objects for failures of data mover backup/restore (https://github.com/vmware-tanzu/velero/issues/8267)
* Output the errors when Velero fails to delete intermediate objects during clean up (https://github.com/vmware-tanzu/velero/issues/8125)
The outputs are in the same node-agent log and enabled automatically.
#### CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location. During restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8725.
#### Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements of backup repository maintenance are added to improve the observability and resiliency:
* A new backup repository maintenance history section, called `RecentMaintenance`, is added to the BackupRepository CR. Specifically, for each BackupRepository, the recent maintenance runs are recorded, including start/completion time, completion status and error message. (https://github.com/vmware-tanzu/velero/issues/7810)
* Running maintenance jobs are now recaptured after Velero server restarts. (https://github.com/vmware-tanzu/velero/issues/7753)
* The maintenance job will not be launched for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8238)
* The backup repository will not try to initialize a new repository for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8091)
* Users are now allowed to configure the interval of an effective (full) maintenance as `normalGC`, `fastGC` or `eagerGC`, through the `fullMaintenanceInterval` parameter in the backupRepository configuration, as sketched below. (https://github.com/vmware-tanzu/velero/issues/8364)
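A minimal sketch of the `fullMaintenanceInterval` setting in the backup repository configuration configMap; the repository-type key is an assumption based on the backup-repository-configuration notes above:
```
{
  "kopia": {
    "fullMaintenanceInterval": "fastGC"
  }
}
```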
#### Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (https://github.com/vmware-tanzu/velero/issues/8256).
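A minimal sketch of a volume policy using PVC labels as a criterion, assuming the `pvcLabels` condition key from the linked issue (label values are placeholders):
```
version: v1
volumePolicies:
- conditions:
    pvcLabels:
      environment: production
  action:
    type: snapshot
```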
#### Resource Status restore per object
In v1.16, users are allowed to define whether to restore resource status per object through an annotation `velero.io/restore-status` set on the object. (https://github.com/vmware-tanzu/velero/issues/8204).
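For example, an object can opt into status restore with an annotation like the following (assuming the annotation accepts a true/false value, per the linked issue):
```
kubectl annotate deployment my-app velero.io/restore-status=true
```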
#### Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper and velero-restore-helper, are all included in a single Velero image. (https://github.com/vmware-tanzu/velero/issues/8484).
### Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
### Limitations/Known issues
#### Limitations of Windows support
* fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
* Backup/restore of NTFS extended attributes/advanced features is not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
### All Changes
* Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
* Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
* Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
* ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
* host_pods should not be mandatory to node-agent (#8774, @mpryc)
* Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
* Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
* Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
* Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
* Add doc for maintenance history (#8747, @Lyndon-Li)
* Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
* Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
* Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
* Return directly if no pod volume backups are tracked (#8728, @ywk253100)
* Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
* Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
* Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
* Support pushing images to an insecure registry (#8703, @ywk253100)
* Modify golangci configuration to make it work. (#8695, @blackpiglet)
* Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
* Add docs for object level status restore (#8693, @shubham-pampattiwar)
* Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
* Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
* Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
* Fixes issue #8214, validate `--from-schedule` flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
* Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
* Clean up leaked CSI snapshot for incomplete backup (#8637, @raesonerjt)
* Handle update conflict when restoring the status (#8630, @ywk253100)
* Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
* Always create DataUpload configmap in restore namespace (#8621, @sseago)
* Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
* Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
* Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
* Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
* Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
* Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
* Data mover restore for Windows (#8594, @Lyndon-Li)
* Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
* Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
* Configurable Kopia maintenance interval. The backup-repository-configmap adds an option for a configurable `fullMaintenanceInterval`, where fastGC (12 hours) and eagerGC (6 hours) allow for faster removal of deleted velero backups from the kopia repo. (#8581, @kaovilai)
* Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
* Clear validation errors when schedule is valid (#8575, @ywk253100)
* Merge restore helper image into Velero server image (#8574, @ywk253100)
* Don't include excluded items in ItemBlocks (#8572, @sseago)
* fs uploader and block uploader support Windows nodes (#8569, @Lyndon-Li)
* Fix issue #8418, support data mover backup for Windows nodes (#8555, @Lyndon-Li)
* Fix issue #8044, allow users to ignore delay binding the restorePVC of data mover when it is in WaitForFirstConsumer mode (#8550, @Lyndon-Li)
* Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8538, @Lyndon-Li)
* Fix issue #7810, add maintenance history for backupRepository CRs (#8532, @Lyndon-Li)
* Make fs-backup work on linux nodes with the new Velero deployment and disable fs-backup if the source/target pod is running in non-linux node (#8424) (#8518, @Lyndon-Li)
* Fix issue: backup schedule pause/unpause doesn't work (#8512, @ywk253100)
* Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8509, @ywk253100)
* Fix issue #8267, enhance the error message when expose fails (#8508, @Lyndon-Li)
* Fix issue #8416, #8417, deploy Velero server and node-agent in linux/Windows hybrid env (#8504, @Lyndon-Li)
* Design to add label selector as a criteria for volume policy (#8503, @shubham-pampattiwar)
* Related to issue #8485, move the acceptedByNode and acceptedTimestamp to Status of DU/DD CRD (#8498, @Lyndon-Li)
* Add SecurityContext to restore-helper (#8491, @reasonerjt)
* Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8487, @Lyndon-Li)
* Fix issue #8485, add an accepted time so as to count the prepare timeout (#8486, @Lyndon-Li)
* Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8482, @Lyndon-Li)
* Fix issue #8415, implement multi-arch build and Windows build (#8476, @Lyndon-Li)
* Pin kopia to 0.18.2 (#8472, @Lyndon-Li)
* Add nil check for updating DataUpload VolumeInfo in finalizing phase (#8471, @blackpiglet)
* Allowing Object-Level Resource Status Restore (#8464, @shubham-pampattiwar)
* For issue #8429. Add the design for multi-arch build and windows build (#8459, @Lyndon-Li)
* Upgrade go.mod k8s.io/ go.mod to v0.31.3 and implemented proper logger configuration for both client-go and controller-runtime libraries. This change ensures that logging format and level settings are properly applied throughout the codebase. The update improves logging consistency and control across the Velero system. (#8450, @kaovilai)
* Add Design for Allowing Object-Level Resource Status Restore (#8403, @shubham-pampattiwar)
* Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8396, @Lyndon-Li)
* Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8395, @Lyndon-Li)
* Adding support in velero Resource Policies for filtering PVs based on additional VolumeAttributes properties under CSI PVs (#8383, @mayankagg9722)
* Add --item-block-worker-count flag to velero install and server (#8380, @sseago)
* Make BackedUpItems thread safe (#8366, @sseago)
* Include --annotations flag in backup and restore create commands (#8354, @alromeros)
* Use aggregated discovery API to discovery API groups and resources (#8353, @ywk253100)
* Copy "envFrom" from Velero server when creating maintenance jobs (#8343, @evhan)
* Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8297, @kaovilai)
* Bump up version of client-go and controller-runtime (#8275, @ywk253100)
* fix(pkg/repository/maintenance): don't panic when there's no container statuses (#8271, @mcluseau)
* Add Backup warning for inclusion of NS managed by ArgoCD (#8257, @shubham-pampattiwar)
* Added tracking for deleted namespace status check in restore flow. (#8233, @sangitaray2021)

View File

@@ -61,7 +61,7 @@ in progress for 1.9.
* Add rbac and annotation test cases (#4455, @mqiu)
* remove --crds-version in velero install command. (#4446, @jxun)
* Upgrade e2e test vsphere plugin (#4440, @mqiu)
* Fix e2e test failures for the inappropriate optimaze of velero install (#4438, @mqiu)
* Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu)
* Limit backup namespaces on test resource filtering cases (#4437, @mqiu)
* Bump up Go to 1.17 (#4431, @reasonerjt)
* Added `<backup name>`-itemsnapshots.json.gz to the backup format. This file exists

View File

@@ -0,0 +1 @@
Update AzureAD Microsoft Authentication Library to v1.5.0

View File

@@ -0,0 +1 @@
Backport to 1.16 (PR#9244 Update AzureAD Microsoft Authentication Library to v1.5.0)

View File

@@ -1,3 +1,19 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (

View File

@@ -66,14 +66,14 @@ func done() bool {
doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
if _, err := os.Stat(doneFile); os.IsNotExist(err) {
fmt.Printf("Not found: %s\n", doneFile)
fmt.Printf("The filesystem restore done file %s is not found yet. Retry later.\n", doneFile)
return false
} else if err != nil {
fmt.Fprintf(os.Stderr, "ERROR looking for %s: %s\n", doneFile, err)
fmt.Fprintf(os.Stderr, "ERROR looking filesystem restore done file %s: %s\n", doneFile, err)
return false
}
fmt.Printf("Found %s", doneFile)
fmt.Printf("Found the done file %s\n", doneFile)
}
return true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: backuprepositories.velero.io
spec:
group: velero.io
@@ -26,14 +26,19 @@ spec:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -41,13 +46,21 @@ spec:
description: BackupRepositorySpec is the specification for a BackupRepository.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the BackupStorageLocation
description: |-
BackupStorageLocation is the name of the BackupStorageLocation
that should contain this repository.
type: string
maintenanceFrequency:
description: MaintenanceFrequency is how often maintenance should
be run.
type: string
repositoryConfig:
additionalProperties:
type: string
description: RepositoryConfig is for repository-specific configuration
fields.
nullable: true
type: object
repositoryType:
description: RepositoryType indicates the type of the backend repository
enum:
@@ -56,12 +69,14 @@ spec:
- ""
type: string
resticIdentifier:
description: ResticIdentifier is the full restic-compatible string
for identifying this repository.
description: |-
ResticIdentifier is the full restic-compatible string for identifying
this repository.
type: string
volumeNamespace:
description: VolumeNamespace is the namespace this backup repository
contains pod volume backups for.
description: |-
VolumeNamespace is the namespace this backup repository contains
pod volume backups for.
type: string
required:
- backupStorageLocation
@@ -73,8 +88,8 @@ spec:
description: BackupRepositoryStatus is the current status of a BackupRepository.
properties:
lastMaintenanceTime:
description: LastMaintenanceTime is the last time maintenance was
run.
description: LastMaintenanceTime is the last time repo maintenance
succeeded.
format: date-time
nullable: true
type: string
@@ -89,6 +104,33 @@ spec:
- Ready
- NotReady
type: string
recentMaintenance:
description: RecentMaintenance is status of the recent repo maintenance.
items:
properties:
completeTimestamp:
description: CompleteTimestamp is the completion time of the
repo maintenance.
format: date-time
nullable: true
type: string
message:
description: Message is a message about the current status of
the repo maintenance.
type: string
result:
description: Result is the result of the repo maintenance.
enum:
- Succeeded
- Failed
type: string
startTimestamp:
description: StartTimestamp is the start time of the repo maintenance.
format: date-time
nullable: true
type: string
type: object
type: array
type: object
type: object
served: true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: backups.velero.io
spec:
group: velero.io
@@ -17,18 +17,24 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: Backup is a Velero resource that represents the capture of Kubernetes
description: |-
Backup is a Velero resource that represents the capture of Kubernetes
cluster state at a point in time (API objects and associated volume state).
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -36,55 +42,62 @@ spec:
description: BackupSpec defines the specification for a Velero backup.
properties:
csiSnapshotTimeout:
description: CSISnapshotTimeout specifies the time used to wait for
CSI VolumeSnapshot status turns to ReadyToUse during creation, before
returning error as timeout. The default value is 10 minute.
description: |-
CSISnapshotTimeout specifies the time used to wait for CSI VolumeSnapshot status turns to
ReadyToUse during creation, before returning error as timeout.
The default value is 10 minute.
type: string
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
type: string
defaultVolumesToFsBackup:
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
description: |-
DefaultVolumesToFsBackup specifies whether pod volume file system backup should be used
for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: "DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default. \n Deprecated:
this field is no longer used and will be removed entirely in future.
Use DefaultVolumesToFsBackup instead."
description: |-
DefaultVolumesToRestic specifies whether restic should be used to take a
backup of all pod volumes by default.
Deprecated: this field is no longer used and will be removed entirely in future. Use DefaultVolumesToFsBackup instead.
nullable: true
type: boolean
excludedClusterScopedResources:
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*", all
cluster-scoped resource types are excluded. The default value is
empty.
description: |-
ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup.
If set to "*", all cluster-scoped resource types are excluded.
The default value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*", all
namespace-scoped resource types are excluded. The default value
is empty.
description: |-
ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup.
If set to "*", all namespace-scoped resource types are excluded.
The default value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces that
are not included in the backup.
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources is a slice of resource names that are
not included in the backup.
description: |-
ExcludedResources is a slice of resource names that are not
included in the backup.
items:
type: string
nullable: true
@@ -97,9 +110,9 @@ spec:
description: Resources are hooks that should be executed when
backing up individual instances of a resource.
items:
description: BackupResourceHookSpec defines one or more BackupResourceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
description: |-
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -116,17 +129,17 @@ spec:
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to
all resources.
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
items:
type: string
nullable: true
@@ -140,8 +153,8 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
@@ -149,33 +162,33 @@ spec:
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists
or DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -183,10 +196,9 @@ spec:
description: Name is the name of this hook.
type: string
post:
description: PostHooks is a list of BackupResourceHooks
to execute after storing the item in the backup. These
are executed after all "additional items" from item actions
are processed.
description: |-
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup.
These are executed after all "additional items" from item actions are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
@@ -201,10 +213,9 @@ spec:
minItems: 1
type: array
container:
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero should
@@ -215,9 +226,9 @@ spec:
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type: string
required:
- command
@@ -227,10 +238,9 @@ spec:
type: object
type: array
pre:
description: PreHooks is a list of BackupResourceHooks to
execute prior to storing the item in the backup. These
are executed before any "additional items" from item actions
are processed.
description: |-
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup.
These are executed before any "additional items" from item actions are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
@@ -245,10 +255,9 @@ spec:
minItems: 1
type: array
container:
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero should
@@ -259,9 +268,9 @@ spec:
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type: string
required:
- command
@@ -277,91 +286,99 @@ spec:
type: array
type: object
includeClusterResources:
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*", all
cluster-scoped resource types are included. The default value is
empty, which means only related cluster-scoped resources are included.
description: |-
IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup.
If set to "*", all cluster-scoped resource types are included.
The default value is empty, which means only related
cluster-scoped resources are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
description: |-
IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup.
The default value is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources is a slice of resource names to include
description: |-
IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations The default value is
1 hour.
description: |-
ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations.
The default value is 4 hours.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil, all
objects are included. Optional.
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty
or nil, all objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -373,56 +390,58 @@ spec:
type: object
type: object
orLabelSelectors:
description: OrLabelSelectors is list of metav1.LabelSelector to filter
with when adding individual objects to the backup. If multiple provided
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when adding individual objects to the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in backup request, only one of
them can be used.
OrLabelSelectors cannot co-exist in backup request, only one of them
can be used.
items:
description: A label selector is a label query over a set of resources.
The result of matchLabels and matchExpressions are ANDed. An empty
label selector matches all objects. A null label selector matches
no objects.
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -431,11 +450,10 @@ spec:
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value is
a list of object names separated by commas. Each resource name has
format "namespace/objectname". For cluster resources, simply use
"objectname".
description: |-
OrderedResources specifies the backup order of resources of specific Kind.
The map key is the resource name and value is a list of object names separated by commas.
Each resource name has format "namespace/objectname". For cluster resources, simply use "objectname".
nullable: true
type: object
resourcePolicy:
@@ -443,10 +461,10 @@ spec:
that backup should follow
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in
the core API group. For any other third-party types, APIGroup
is required.
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -465,8 +483,10 @@ spec:
nullable: true
type: boolean
snapshotVolumes:
description: SnapshotVolumes specifies whether to take snapshots of
any PV's referenced in the set of objects included in the Backup.
description: |-
SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included
in the Backup.
nullable: true
type: boolean
storageLocation:
@@ -474,9 +494,19 @@ spec:
BackupStorageLocation where the backup should be stored.
type: string
ttl:
description: TTL is a time.Duration-parseable string describing how
long the Backup should be retained for.
description: |-
TTL is a time.Duration-parseable string describing how long
the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the uploader.
nullable: true
properties:
parallelFilesUpload:
description: ParallelFilesUpload is the number of files parallel
uploads to perform when using the uploader.
type: integer
type: object
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names of
VolumeSnapshotLocations associated with this backup.
@@ -488,39 +518,44 @@ spec:
description: BackupStatus captures the current status of a Velero backup.
properties:
backupItemOperationsAttempted:
description: BackupItemOperationsAttempted is the total number of
attempted async BackupItemAction operations for this backup.
description: |-
BackupItemOperationsAttempted is the total number of attempted
async BackupItemAction operations for this backup.
type: integer
backupItemOperationsCompleted:
description: BackupItemOperationsCompleted is the total number of
successfully completed async BackupItemAction operations for this
backup.
description: |-
BackupItemOperationsCompleted is the total number of successfully completed
async BackupItemAction operations for this backup.
type: integer
backupItemOperationsFailed:
description: BackupItemOperationsFailed is the total number of async
BackupItemAction operations for this backup which ended with an
error.
description: |-
BackupItemOperationsFailed is the total number of async
BackupItemAction operations for this backup which ended with an error.
type: integer
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
csiVolumeSnapshotsAttempted:
description: CSIVolumeSnapshotsAttempted is the total number of attempted
description: |-
CSIVolumeSnapshotsAttempted is the total number of attempted
CSI VolumeSnapshots for this backup.
type: integer
csiVolumeSnapshotsCompleted:
description: CSIVolumeSnapshotsCompleted is the total number of successfully
description: |-
CSIVolumeSnapshotsCompleted is the total number of successfully
completed CSI VolumeSnapshots for this backup.
type: integer
errors:
description: Errors is a count of all error messages that were generated
during execution of the backup. The actual errors are in the backup's
log file in object storage.
description: |-
Errors is a count of all error messages that were generated during
execution of the backup. The actual errors are in the backup's log
file in object storage.
type: integer
expiration:
description: Expiration is when this Backup is eligible for garbage-collection.
@@ -535,6 +570,22 @@ spec:
description: FormatVersion is the backup format version, including
major, minor, and patch version.
type: string
hookStatus:
description: HookStatus contains information about the status of the
hooks.
nullable: true
properties:
hooksAttempted:
description: |-
HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks that failed to execute
and the number of hooks that executed successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
with an error
type: integer
type: object
phase:
description: Phase is the current state of the Backup.
enum:
@@ -551,53 +602,62 @@ spec:
- Deleting
type: string
progress:
description: Progress contains information about the backup's execution
progress. Note that this information is best-effort only -- if Velero
fails to update it during a backup for any reason, it may be inaccurate/stale.
description: |-
Progress contains information about the backup's execution progress. Note
that this information is best-effort only -- if Velero fails to update it
during a backup for any reason, it may be inaccurate/stale.
nullable: true
properties:
itemsBackedUp:
description: ItemsBackedUp is the number of items that have actually
been written to the backup tarball so far.
description: |-
ItemsBackedUp is the number of items that have actually been written to the
backup tarball so far.
type: integer
totalItems:
description: TotalItems is the total number of items to be backed
up. This number may change throughout the execution of the backup
due to plugins that return additional related items to back
up, the velero.io/exclude-from-backup label, and various other
description: |-
TotalItems is the total number of items to be backed up. This number may change
throughout the execution of the backup due to plugins that return additional related
items to back up, the velero.io/exclude-from-backup label, and various other
filters that happen as items are processed.
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
validationErrors:
description: ValidationErrors is a slice of all validation errors
(if applicable).
description: |-
ValidationErrors is a slice of all validation errors (if
applicable).
items:
type: string
nullable: true
type: array
version:
description: 'Version is the backup format major version. Deprecated:
Please see FormatVersion'
description: |-
Version is the backup format major version.
Deprecated: Please see FormatVersion
type: integer
volumeSnapshotsAttempted:
description: VolumeSnapshotsAttempted is the total number of attempted
description: |-
VolumeSnapshotsAttempted is the total number of attempted
volume snapshots for this backup.
type: integer
volumeSnapshotsCompleted:
description: VolumeSnapshotsCompleted is the total number of successfully
description: |-
VolumeSnapshotsCompleted is the total number of successfully
completed volume snapshots for this backup.
type: integer
warnings:
description: Warnings is a count of all warning messages that were
generated during execution of the backup. The actual warnings are
in the backup's log file in object storage.
description: |-
Warnings is a count of all warning messages that were generated during
execution of the backup. The actual warnings are in the backup's log
file in object storage.
type: integer
type: object
type: object
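For orientation, a minimal Backup manifest that exercises several of the fields described in this schema might look like the sketch below. All names and values are illustrative assumptions, not defaults shipped with Velero.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-app-backup        # hypothetical backup name
  namespace: velero
spec:
  includedNamespaces:
    - my-app                      # assumption: the application namespace to back up
  snapshotVolumes: true           # take snapshots of referenced PVs
  ttl: 720h0m0s                   # retain the backup for 30 days
  itemOperationTimeout: 4h0m0s    # matches the documented default for async item operations
  uploaderConfig:
    parallelFilesUpload: 10       # illustrative concurrency for the uploader

The uploaderConfig block corresponds to the UploaderConfig field added to the spec in the hunk above.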


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: backupstoragelocations.velero.io
spec:
group: velero.io
@@ -40,14 +40,19 @@ spec:
objects
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -81,8 +86,13 @@ spec:
valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the Secret or its key must be defined
@@ -131,29 +141,34 @@ spec:
BackupStorageLocation
properties:
accessMode:
description: "AccessMode is an unused field. \n Deprecated: there
is now an AccessMode field on the Spec and this field will be removed
entirely as of v2.0."
description: |-
AccessMode is an unused field.
Deprecated: there is now an AccessMode field on the Spec and this field
will be removed entirely as of v2.0.
enum:
- ReadOnly
- ReadWrite
type: string
lastSyncedRevision:
description: "LastSyncedRevision is the value of the `metadata/revision`
file in the backup storage location the last time the BSL's contents
were synced into the cluster. \n Deprecated: this field is no longer
updated or used for detecting changes to the location's contents
and will be removed entirely in v2.0."
description: |-
LastSyncedRevision is the value of the `metadata/revision` file in the backup
storage location the last time the BSL's contents were synced into the cluster.
Deprecated: this field is no longer updated or used for detecting changes to
the location's contents and will be removed entirely in v2.0.
type: string
lastSyncedTime:
description: LastSyncedTime is the last time the contents of the location
were synced into the cluster.
description: |-
LastSyncedTime is the last time the contents of the location were synced into
the cluster.
format: date-time
nullable: true
type: string
lastValidationTime:
description: LastValidationTime is the last time the backup store
location was validated the cluster.
description: |-
LastValidationTime is the last time the backup store location was validated by
the cluster.
format: date-time
nullable: true
type: string
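As a reference point, a BackupStorageLocation that sets the credential secret key selector described above could look like the following sketch. The provider, bucket, and secret names are assumptions for illustration only.

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws                    # assumption: any object-store plugin name works here
  objectStorage:
    bucket: my-velero-bucket       # hypothetical bucket name
  credential:
    name: cloud-credentials        # Secret name; the schema now defaults this to "" but it should be set
    key: cloud                     # key within the Secret holding the credentials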


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: deletebackuprequests.velero.io
spec:
group: velero.io
@@ -29,14 +29,19 @@ spec:
description: DeleteBackupRequest is a request to delete one or more backups.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
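A DeleteBackupRequest is small; a hedged example of the shape implied by this CRD is shown below, with a hypothetical backup name.

apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: delete-nightly-app-backup  # hypothetical request name
  namespace: velero
spec:
  backupName: nightly-app-backup   # the Backup to delete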


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: downloadrequests.velero.io
spec:
group: velero.io
@@ -17,18 +17,24 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: DownloadRequest is a request to download an artifact from backup
object storage, such as a backup log file.
description: |-
DownloadRequest is a request to download an artifact from backup object storage, such as a backup
log file.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -53,6 +59,8 @@ spec:
- RestoreItemOperations
- CSIBackupVolumeSnapshots
- CSIBackupVolumeSnapshotContents
- BackupVolumeInfos
- RestoreVolumeInfo
type: string
name:
description: Name is the name of the Kubernetes resource with
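The enum change above adds BackupVolumeInfos and RestoreVolumeInfo as downloadable target kinds. A sketch of a DownloadRequest using one of the new kinds (names are hypothetical):

apiVersion: velero.io/v1
kind: DownloadRequest
metadata:
  name: my-backup-volumeinfos      # hypothetical request name
  namespace: velero
spec:
  target:
    kind: BackupVolumeInfos        # one of the newly added target kinds
    name: my-backup                # the Backup the requested artifact belongs to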


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: podvolumebackups.velero.io
spec:
group: velero.io
@@ -35,10 +35,6 @@ spec:
jsonPath: .spec.volume
name: Volume
type: string
- description: Backup repository identifier for this backup
jsonPath: .spec.repoIdentifier
name: Repository ID
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
@@ -56,14 +52,19 @@ spec:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -71,8 +72,9 @@ spec:
description: PodVolumeBackupSpec is the specification for a PodVolumeBackup.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
node:
description: Node is the name of the node that the Pod is running
@@ -86,33 +88,39 @@ spec:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
description: |-
If referring to a piece of an object instead of an entire object, this string
should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like:
"spec.containers{name}" (where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]" (container with
index 2 in this pod). This syntax is chosen only to have some well-defined way of
referencing a part of an object.
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
description: |-
Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
description: |-
Namespace of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
description: |-
Specific resourceVersion to which this reference is made, if any.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
description: |-
UID of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
type: string
type: object
x-kubernetes-map-type: atomic
@@ -122,8 +130,17 @@ spec:
tags:
additionalProperties:
type: string
description: Tags are a map of key-value pairs that should be applied
to the volume backup as tags.
description: |-
Tags are a map of key-value pairs that should be applied to the
volume backup as tags.
type: object
uploaderSettings:
additionalProperties:
type: string
description: |-
UploaderSettings are a map of key-value pairs that should be applied to the
uploader configuration.
nullable: true
type: object
uploaderType:
description: UploaderType is the type of the uploader to handle the
@@ -134,8 +151,9 @@ spec:
- ""
type: string
volume:
description: Volume is the name of the volume within the Pod to be
backed up.
description: |-
Volume is the name of the volume within the Pod to be backed
up.
type: string
required:
- backupStorageLocation
@@ -148,10 +166,11 @@ spec:
description: PodVolumeBackupStatus is the current status of a PodVolumeBackup.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -171,9 +190,10 @@ spec:
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
description: |-
Progress holds the total number of bytes of the volume and the current
number of backed up bytes. This can be used to display progress information
about the backup operation.
properties:
bytesDone:
format: int64
@@ -187,8 +207,10 @@ spec:
pod volume.
type: string
startTimestamp:
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true
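PodVolumeBackup objects are normally created by Velero's node agent rather than by hand, but a sketch helps show where the new uploaderSettings map fits. All names are hypothetical and the settings key is illustrative; in practice the map is populated by Velero from the backup's UploaderConfig.

apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  name: nightly-app-backup-abcde   # normally generated by Velero
  namespace: velero
spec:
  backupStorageLocation: default
  node: worker-1                   # node the target pod runs on
  pod:
    kind: Pod
    name: my-app-0                 # hypothetical pod
    namespace: my-app
  volume: data                     # volume name within the pod
  uploaderType: kopia
  uploaderSettings:
    ParallelFilesUpload: "10"      # illustrative key/value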


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: podvolumerestores.velero.io
spec:
group: velero.io
@@ -53,14 +53,19 @@ spec:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -68,8 +73,9 @@ spec:
description: PodVolumeRestoreSpec is the specification for a PodVolumeRestore.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
pod:
description: Pod is a reference to the pod containing the volume to
@@ -79,33 +85,39 @@ spec:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
description: |-
If referring to a piece of an object instead of an entire object, this string
should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like:
"spec.containers{name}" (where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]" (container with
index 2 in this pod). This syntax is chosen only to have some well-defined way of
referencing a part of an object.
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
description: |-
Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
description: |-
Namespace of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
description: |-
Specific resourceVersion to which this reference is made, if any.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
description: |-
UID of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
type: string
type: object
x-kubernetes-map-type: atomic
@@ -119,6 +131,14 @@ spec:
description: SourceNamespace is the original namespace for namespace
mapping.
type: string
uploaderSettings:
additionalProperties:
type: string
description: |-
UploaderSettings are a map of key-value pairs that should be applied to the
uploader configuration.
nullable: true
type: object
uploaderType:
description: UploaderType is the type of the uploader to handle the
data transfer.
@@ -143,9 +163,10 @@ spec:
description: PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
description: |-
CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores.
The server's time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -161,9 +182,10 @@ spec:
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
description: |-
Progress holds the total number of bytes of the snapshot and the current
number of restored bytes. This can be used to display progress information
about the restore operation.
properties:
bytesDone:
format: int64
@@ -173,7 +195,8 @@ spec:
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a restore was started.
description: |-
StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true
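Like PodVolumeBackup, a PodVolumeRestore is created by Velero itself; the sketch below only illustrates the newly added uploaderSettings field in context, with hypothetical names and an illustrative settings key.

apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  name: my-app-restore-abcde       # normally generated by Velero
  namespace: velero
spec:
  backupStorageLocation: default
  pod:
    kind: Pod
    name: my-app-0                 # hypothetical pod being restored into
    namespace: my-app-restored
  volume: data
  sourceNamespace: my-app          # original namespace for namespace mapping
  uploaderType: kopia
  uploaderSettings:
    WriteSparseFiles: "true"       # illustrative key/value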


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: restores.velero.io
spec:
group: velero.io
@@ -17,18 +17,24 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: Restore is a Velero resource that represents the application
of resources from a Velero backup to a target Kubernetes cluster.
description: |-
Restore is a Velero resource that represents the application of
resources from a Velero backup to a target Kubernetes cluster.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -36,19 +42,22 @@ spec:
description: RestoreSpec defines the specification for a Velero restore.
properties:
backupName:
description: BackupName is the unique name of the Velero backup to
restore from.
description: |-
BackupName is the unique name of the Velero backup to restore
from.
type: string
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces that
are not included in the restore.
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the restore.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources is a slice of resource names that are
not included in the restore.
description: |-
ExcludedResources is a slice of resource names that are not
included in the restore.
items:
type: string
nullable: true
@@ -64,9 +73,9 @@ spec:
properties:
resources:
items:
description: RestoreResourceHookSpec defines one or more RestoreResrouceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
description: |-
RestoreResourceHookSpec defines one or more RestoreResourceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -83,17 +92,17 @@ spec:
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to
all resources.
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
items:
type: string
nullable: true
@@ -107,8 +116,8 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
@@ -116,33 +125,33 @@ spec:
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists
or DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -168,15 +177,14 @@ spec:
minItems: 1
type: array
container:
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type: string
execTimeout:
description: ExecTimeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
description: |-
ExecTimeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type: string
onError:
description: OnError specifies how Velero should
@@ -186,10 +194,16 @@ spec:
- Continue
- Fail
type: string
waitForReady:
description: WaitForReady ensures command will
be launched when container is Ready instead
of Running.
nullable: true
type: boolean
waitTimeout:
description: WaitTimeout defines the maximum amount
of time Velero should wait for the container
to be Ready before attempting to run the command.
description: |-
WaitTimeout defines the maximum amount of time Velero should wait for the container to be Ready
before attempting to run the command.
type: string
required:
- command
@@ -219,136 +233,145 @@ spec:
type: array
type: object
includeClusterResources:
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the restore. If
null, defaults to true.
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the restore. If null, defaults
to true.
nullable: true
type: boolean
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources is a slice of resource names to include
description: |-
IncludedResources is a slice of resource names to include
in the restore. If empty, all resources in the backup are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for RestoreItemAction operations The default value is 1 hour.
description: |-
ItemOperationTimeout specifies the time used to wait for RestoreItemAction operations.
The default value is 4 hours.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty or nil,
all objects are included. Optional.
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty
or nil, all objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceMapping:
additionalProperties:
type: string
description: NamespaceMapping is a map of source namespace names to
target namespace names to restore into. Any source namespaces not
included in the map will be restored into namespaces of the same
name.
description: |-
NamespaceMapping is a map of source namespace names
to target namespace names to restore into. Any source
namespaces not included in the map will be restored into
namespaces of the same name.
type: object
orLabelSelectors:
description: OrLabelSelectors is list of metav1.LabelSelector to filter
with when restoring individual objects from the backup. If multiple
provided they will be joined by the OR operator. LabelSelector as
well as OrLabelSelectors cannot co-exist in restore request, only
one of them can be used
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when restoring individual objects from the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in restore request, only one of them
can be used
items:
description: A label selector is a label query over a set of resources.
The result of matchLabels and matchExpressions are ANDed. An empty
label selector matches all objects. A null label selector matches
no objects.
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -365,10 +388,10 @@ spec:
nullable: true
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in
the core API group. For any other third-party types, APIGroup
is required.
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -382,13 +405,15 @@ spec:
type: object
x-kubernetes-map-type: atomic
restorePVs:
description: RestorePVs specifies whether to restore all included
description: |-
RestorePVs specifies whether to restore all included
PVs from snapshot
nullable: true
type: boolean
restoreStatus:
description: RestoreStatus specifies which resources we should restore
the status field. If nil, no objects are included. Optional.
description: |-
RestoreStatus specifies which resources we should restore the status
field. If nil, no objects are included. Optional.
nullable: true
properties:
excludedResources:
@@ -399,41 +424,71 @@ spec:
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources to which
will restore the status. If empty, it applies to all resources.
description: |-
IncludedResources specifies the resources to which will restore the status.
If empty, it applies to all resources.
items:
type: string
nullable: true
type: array
type: object
scheduleName:
description: ScheduleName is the unique name of the Velero schedule
to restore from. If specified, and BackupName is empty, Velero will
restore from the most recent successful backup created from this
schedule.
description: |-
ScheduleName is the unique name of the Velero schedule to restore
from. If specified, and BackupName is empty, Velero will restore
from the most recent successful backup created from this schedule.
type: string
required:
- backupName
uploaderConfig:
description: UploaderConfig specifies the configuration for the restore.
nullable: true
properties:
parallelFilesDownload:
description: ParallelFilesDownload is the concurrency number setting
for restore.
type: integer
writeSparseFiles:
description: WriteSparseFiles is a flag to indicate whether write
files sparsely or not.
nullable: true
type: boolean
type: object
type: object
status:
description: RestoreStatus captures the current status of a Velero restore
properties:
completionTimestamp:
description: CompletionTimestamp records the time the restore operation
was completed. Completion time is recorded even on failed restore.
description: |-
CompletionTimestamp records the time the restore operation was completed.
Completion time is recorded even on failed restore.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
errors:
description: Errors is a count of all error messages that were generated
during execution of the restore. The actual errors are stored in
object storage.
description: |-
Errors is a count of all error messages that were generated during
execution of the restore. The actual errors are stored in object storage.
type: integer
failureReason:
description: FailureReason is an error that caused the entire restore
to fail.
type: string
hookStatus:
description: HookStatus contains information about the status of the
hooks.
nullable: true
properties:
hooksAttempted:
description: |-
HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks that failed to execute
and the number of hooks that executed successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
with an error
type: integer
type: object
phase:
description: Phase is the current state of the Restore
enum:
@@ -445,11 +500,14 @@ spec:
- Completed
- PartiallyFailed
- Failed
- Finalizing
- FinalizingPartiallyFailed
type: string
progress:
description: Progress contains information about the restore's execution
progress. Note that this information is best-effort only -- if Velero
fails to update it during a restore for any reason, it may be inaccurate/stale.
description: |-
Progress contains information about the restore's execution progress. Note
that this information is best-effort only -- if Velero fails to update it
during a restore for any reason, it may be inaccurate/stale.
nullable: true
properties:
itemsRestored:
@@ -457,42 +515,46 @@ spec:
been restored so far
type: integer
totalItems:
description: TotalItems is the total number of items to be restored.
This number may change throughout the execution of the restore
due to plugins that return additional related items to restore
description: |-
TotalItems is the total number of items to be restored. This number may change
throughout the execution of the restore due to plugins that return additional related
items to restore
type: integer
type: object
restoreItemOperationsAttempted:
description: RestoreItemOperationsAttempted is the total number of
attempted async RestoreItemAction operations for this restore.
description: |-
RestoreItemOperationsAttempted is the total number of attempted
async RestoreItemAction operations for this restore.
type: integer
restoreItemOperationsCompleted:
description: RestoreItemOperationsCompleted is the total number of
successfully completed async RestoreItemAction operations for this
restore.
description: |-
RestoreItemOperationsCompleted is the total number of successfully completed
async RestoreItemAction operations for this restore.
type: integer
restoreItemOperationsFailed:
description: RestoreItemOperationsFailed is the total number of async
RestoreItemAction operations for this restore which ended with an
error.
description: |-
RestoreItemOperationsFailed is the total number of async
RestoreItemAction operations for this restore which ended with an error.
type: integer
startTimestamp:
description: StartTimestamp records the time the restore operation
was started. The server's time is used for StartTimestamps
description: |-
StartTimestamp records the time the restore operation was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
validationErrors:
description: ValidationErrors is a slice of all validation errors
(if applicable)
description: |-
ValidationErrors is a slice of all validation errors (if
applicable)
items:
type: string
nullable: true
type: array
warnings:
description: Warnings is a count of all warning messages that were
generated during execution of the restore. The actual warnings are
stored in object storage.
description: |-
Warnings is a count of all warning messages that were generated during
execution of the restore. The actual warnings are stored in object storage.
type: integer
type: object
type: object

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: schedules.velero.io
spec:
group: velero.io
@@ -36,18 +36,24 @@ spec:
name: v1
schema:
openAPIV3Schema:
description: Schedule is a Velero resource that represents a pre-scheduled
or periodic Backup that should be run.
description: |-
Schedule is a Velero resource that represents a pre-scheduled or
periodic Backup that should be run.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -58,63 +64,79 @@ spec:
description: Paused specifies whether the schedule is paused or not
type: boolean
schedule:
description: Schedule is a Cron expression defining when to run the
Backup.
description: |-
Schedule is a Cron expression defining when to run
the Backup.
type: string
skipImmediately:
description: |-
SkipImmediately specifies whether to skip backup if schedule is due immediately from `schedule.status.lastBackup` timestamp when schedule is unpaused or if schedule is new.
If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time.
If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time.
If empty, will follow server configuration (default: false).
type: boolean
template:
description: Template is the definition of the Backup to be run on
the provided schedule
description: |-
Template is the definition of the Backup to be run
on the provided schedule
properties:
csiSnapshotTimeout:
description: CSISnapshotTimeout specifies the time used to wait
for CSI VolumeSnapshot status turns to ReadyToUse during creation,
before returning error as timeout. The default value is 10 minute.
description: |-
CSISnapshotTimeout specifies the time used to wait for CSI VolumeSnapshot status turns to
ReadyToUse during creation, before returning error as timeout.
The default value is 10 minute.
type: string
datamover:
description: DataMover specifies the data mover to be used by
the backup. If DataMover is "" or "velero", the built-in data
mover will be used.
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
type: string
defaultVolumesToFsBackup:
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
description: |-
DefaultVolumesToFsBackup specifies whether pod volume file system backup should be used
for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: "DefaultVolumesToRestic specifies whether restic
should be used to take a backup of all pod volumes by default.
\n Deprecated: this field is no longer used and will be removed
entirely in future. Use DefaultVolumesToFsBackup instead."
description: |-
DefaultVolumesToRestic specifies whether restic should be used to take a
backup of all pod volumes by default.
Deprecated: this field is no longer used and will be removed entirely in future. Use DefaultVolumesToFsBackup instead.
nullable: true
type: boolean
excludedClusterScopedResources:
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*",
all cluster-scoped resource types are excluded. The default
value is empty.
description: |-
ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup.
If set to "*", all cluster-scoped resource types are excluded.
The default value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*",
all namespace-scoped resource types are excluded. The default
value is empty.
description: |-
ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup.
If set to "*", all namespace-scoped resource types are excluded.
The default value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces
that are not included in the backup.
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources is a slice of resource names that
are not included in the backup.
description: |-
ExcludedResources is a slice of resource names that are not
included in the backup.
items:
type: string
nullable: true
@@ -127,9 +149,9 @@ spec:
description: Resources are hooks that should be executed when
backing up individual instances of a resource.
items:
description: BackupResourceHookSpec defines one or more
BackupResourceHooks that should be executed based on the
rules defined for namespaces, resources, and label selector.
description: |-
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -146,16 +168,16 @@ spec:
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources
to which this hook spec applies. If empty, it applies
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
items:
type: string
@@ -170,43 +192,42 @@ spec:
description: matchExpressions is a list of label
selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a
selector that contains values, a key, and an
operator that relates the key and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the
selector applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are
In, NotIn, Exists and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string
values. If the operator is In or NotIn,
the values array must be non-empty. If the
operator is Exists or DoesNotExist, the
values array must be empty. This array is
replaced during a strategic merge patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value}
pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions,
whose key field is "key", the operator is "In",
and the values array contains only "value". The
requirements are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -214,10 +235,9 @@ spec:
description: Name is the name of this hook.
type: string
post:
description: PostHooks is a list of BackupResourceHooks
to execute after storing the item in the backup. These
are executed after all "additional items" from item
actions are processed.
description: |-
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup.
These are executed after all "additional items" from item actions are processed.
items:
description: BackupResourceHook defines a hook for
a resource.
@@ -233,10 +253,9 @@ spec:
minItems: 1
type: array
container:
description: Container is the container in
the pod where the command should be executed.
If not specified, the pod's first container
is used.
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero
@@ -247,10 +266,9 @@ spec:
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook
to complete before considering the execution
a failure.
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type: string
required:
- command
@@ -260,10 +278,9 @@ spec:
type: object
type: array
pre:
description: PreHooks is a list of BackupResourceHooks
to execute prior to storing the item in the backup.
These are executed before any "additional items" from
item actions are processed.
description: |-
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup.
These are executed before any "additional items" from item actions are processed.
items:
description: BackupResourceHook defines a hook for
a resource.
@@ -279,10 +296,9 @@ spec:
minItems: 1
type: array
container:
description: Container is the container in
the pod where the command should be executed.
If not specified, the pod's first container
is used.
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero
@@ -293,10 +309,9 @@ spec:
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook
to complete before considering the execution
a failure.
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type: string
required:
- command
@@ -312,50 +327,56 @@ spec:
type: array
type: object
includeClusterResources:
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*",
all cluster-scoped resource types are included. The default
value is empty, which means only related cluster-scoped resources
are included.
description: |-
IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup.
If set to "*", all cluster-scoped resource types are included.
The default value is empty, which means only related
cluster-scoped resources are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
description: |-
IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup.
The default value is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names
to include objects from. If empty, all namespaces are included.
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources is a slice of resource names to
include in the backup. If empty, all resources are included.
description: |-
IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations The default value
is 1 hour.
description: |-
ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations
The default value is 4 hour.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter
with when adding individual objects to the backup. If empty
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty
or nil, all objects are included. Optional.
nullable: true
properties:
@@ -363,41 +384,42 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that relates
the key and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If
the operator is In or NotIn, the values array must
be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced
during a strategic merge patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A
single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is "key",
the operator is "In", and the values array contains only
"value". The requirements are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -409,56 +431,58 @@ spec:
type: object
type: object
orLabelSelectors:
description: OrLabelSelectors is list of metav1.LabelSelector
to filter with when adding individual objects to the backup.
If multiple provided they will be joined by the OR operator.
LabelSelector as well as OrLabelSelectors cannot co-exist in
backup request, only one of them can be used.
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when adding individual objects to the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in backup request, only one of them
can be used.
items:
description: A label selector is a label query over a set of
resources. The result of matchLabels and matchExpressions
are ANDed. An empty label selector matches all objects. A
null label selector matches no objects.
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that relates
the key and values.
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists or
DoesNotExist, the values array must be empty. This
array is replaced during a strategic merge patch.
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is
"key", the operator is "In", and the values array contains
only "value". The requirements are ANDed.
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -467,11 +491,10 @@ spec:
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value
is a list of object names separated by commas. Each resource
name has format "namespace/objectname". For cluster resources,
simply use "objectname".
description: |-
OrderedResources specifies the backup order of resources of specific Kind.
The map key is the resource name and value is a list of object names separated by commas.
Each resource name has format "namespace/objectname". For cluster resources, simply use "objectname".
nullable: true
type: object
resourcePolicy:
@@ -479,10 +502,10 @@ spec:
policies that backup should follow
properties:
apiGroup:
description: APIGroup is the group for the resource being
referenced. If APIGroup is not specified, the specified
Kind must be in the core API group. For any other third-party
types, APIGroup is required.
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -501,9 +524,10 @@ spec:
nullable: true
type: boolean
snapshotVolumes:
description: SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included in the
Backup.
description: |-
SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included
in the Backup.
nullable: true
type: boolean
storageLocation:
@@ -511,9 +535,20 @@ spec:
a BackupStorageLocation where the backup should be stored.
type: string
ttl:
description: TTL is a time.Duration-parseable string describing
how long the Backup should be retained for.
description: |-
TTL is a time.Duration-parseable string describing how long
the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the
uploader.
nullable: true
properties:
parallelFilesUpload:
description: ParallelFilesUpload is the number of files parallel
uploads to perform when using the uploader.
type: integer
type: object
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.
@@ -522,8 +557,9 @@ spec:
type: array
type: object
useOwnerReferencesInBackup:
description: UseOwnerReferencesBackup specifies whether to use OwnerReferences
on backups created by this Schedule.
description: |-
UseOwnerReferencesBackup specifies whether to use
OwnerReferences on backups created by this Schedule.
nullable: true
type: boolean
required:
@@ -534,11 +570,17 @@ spec:
description: ScheduleStatus captures the current state of a Velero schedule
properties:
lastBackup:
description: LastBackup is the last time a Backup was run for this
description: |-
LastBackup is the last time a Backup was run for this
Schedule schedule
format: date-time
nullable: true
type: string
lastSkipped:
description: LastSkipped is the last time a Schedule was skipped
format: date-time
nullable: true
type: string
phase:
description: Phase is the current phase of the Schedule
enum:
@@ -547,8 +589,9 @@ spec:
- FailedValidation
type: string
validationErrors:
description: ValidationErrors is a slice of all validation errors
(if applicable)
description: |-
ValidationErrors is a slice of all validation errors (if
applicable)
items:
type: string
type: array

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: serverstatusrequests.velero.io
spec:
group: velero.io
@@ -19,18 +19,24 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: ServerStatusRequest is a request to access current status information
about the Velero server.
description: |-
ServerStatusRequest is a request to access current status information about
the Velero server.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -63,8 +69,9 @@ spec:
nullable: true
type: array
processedTimestamp:
description: ProcessedTimestamp is when the ServerStatusRequest was
processed by the ServerStatusRequestController.
description: |-
ProcessedTimestamp is when the ServerStatusRequest was processed
by the ServerStatusRequestController.
format: date-time
nullable: true
type: string

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: volumesnapshotlocations.velero.io
spec:
group: velero.io
@@ -23,14 +23,19 @@ spec:
snapshots.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -52,8 +57,13 @@ spec:
valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the Secret or its key must be defined

File diff suppressed because one or more lines are too long

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: datadownloads.velero.io
spec:
group: velero.io
@@ -48,16 +48,23 @@ spec:
name: v2alpha1
schema:
openAPIV3Schema:
description: DataDownload acts as the protocol between data mover plugins
and data mover controller for the datamover restore operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -65,12 +72,14 @@ spec:
description: DataDownloadSpec is the specification for a DataDownload.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
cancel:
description: Cancel indicates request to cancel the ongoing DataDownload.
It can be set when the DataDownload is in InProgress phase
description: |-
Cancel indicates request to cancel the ongoing DataDownload. It can be set
when the DataDownload is in InProgress phase
type: boolean
dataMoverConfig:
additionalProperties:
@@ -79,22 +88,30 @@ spec:
fields.
type: object
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
type: string
nodeOS:
description: NodeOS is OS of the node where the DataDownload is processed.
enum:
- auto
- linux
- windows
type: string
operationTimeout:
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
description: |-
OperationTimeout specifies the time used to wait internal operations,
before returning error as timeout.
type: string
snapshotID:
description: SnapshotID is the ID of the Velero backup snapshot to
be restored from.
type: string
sourceNamespace:
description: SourceNamespace is the original namespace where the volume
is backed up from. It may be different from SourcePVC's namespace
if namespace is remapped during restore.
description: |-
SourceNamespace is the original namespace where the volume is backed up from.
It may be different from SourcePVC's namespace if namespace is remapped during restore.
type: string
targetVolume:
description: TargetVolume is the information of the target PVC and
@@ -126,10 +143,21 @@ spec:
status:
description: DataDownloadStatus is the current status of a DataDownload.
properties:
acceptedByNode:
description: Node is name of the node where the DataUpload is prepared.
type: string
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the DataUpload is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
description: |-
CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores.
The server's time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -152,9 +180,10 @@ spec:
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
description: |-
Progress holds the total number of bytes of the snapshot and the current
number of restored bytes. This can be used to display progress information
about the restore operation.
properties:
bytesDone:
format: int64
@@ -164,7 +193,8 @@ spec:
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a restore was started.
description: |-
StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.16.5
name: datauploads.velero.io
spec:
group: velero.io
@@ -49,16 +49,23 @@ spec:
name: v2alpha1
schema:
openAPIV3Schema:
description: DataUpload acts as the protocol between data mover plugins and
data mover controller for the datamover backup operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -66,12 +73,14 @@ spec:
description: DataUploadSpec is the specification for a DataUpload.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
cancel:
description: Cancel indicates request to cancel the ongoing DataUpload.
It can be set when the DataUpload is in InProgress phase
description: |-
Cancel indicates request to cancel the ongoing DataUpload. It can be set
when the DataUpload is in InProgress phase
type: boolean
csiSnapshot:
description: If SnapshotType is CSI, CSISnapshot provides the information
@@ -102,22 +111,23 @@ spec:
nullable: true
type: object
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
type: string
operationTimeout:
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
description: |-
OperationTimeout specifies the time used to wait internal operations,
before returning error as timeout.
type: string
snapshotType:
description: SnapshotType is the type of the snapshot to be backed
up.
type: string
sourceNamespace:
description: SourceNamespace is the original namespace where the volume
is backed up from. It is the same namespace for SourcePVC and CSI
namespaced objects.
description: |-
SourceNamespace is the original namespace where the volume is backed up from.
It is the same namespace for SourcePVC and CSI namespaced objects.
type: string
sourcePVC:
description: SourcePVC is the name of the PVC which the snapshot is
@@ -133,11 +143,23 @@ spec:
status:
description: DataUploadStatus is the current status of a DataUpload.
properties:
acceptedByNode:
description: AcceptedByNode is name of the node where the DataUpload
is prepared.
type: string
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the DataUpload is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -154,6 +176,13 @@ spec:
node:
description: Node is name of the node where the DataUpload is processed.
type: string
nodeOS:
description: NodeOS is OS of the node where the DataUpload is processed.
enum:
- auto
- linux
- windows
type: string
path:
description: Path is the full path of the snapshot volume being backed
up.
@@ -171,9 +200,10 @@ spec:
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
description: |-
Progress holds the total number of bytes of the volume and the current
number of backed up bytes. This can be used to display progress information
about the backup operation.
properties:
bytesDone:
format: int64
@@ -187,8 +217,10 @@ spec:
backup repository.
type: string
startTimestamp:
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true

File diff suppressed because one or more lines are too long

View File

@@ -8,17 +8,7 @@ rules:
- ""
resources:
- persistentvolumerclaims
verbs:
- get
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- get
@@ -26,6 +16,18 @@ rules:
- velero.io
resources:
- backuprepositories
- backups
- backupstoragelocations
- datadownloads
- datauploads
- deletebackuprequests
- downloadrequests
- podvolumebackups
- podvolumerestores
- restores
- schedules
- serverstatusrequests
- volumesnapshotlocations
verbs:
- create
- delete
@@ -38,239 +40,18 @@ rules:
- velero.io
resources:
- backuprepositories/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- backups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- backupstoragelocations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backupstoragelocations/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datadownloads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datadownloads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datauploads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datauploads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- deletebackuprequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- deletebackuprequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- downloadrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- downloadrequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumebackups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumebackups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumerestores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumerestores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- restores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- restores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- schedules
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- schedules/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- serverstatusrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- serverstatusrequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- volumesnapshotlocations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

View File

@@ -49,6 +49,9 @@ spec:
- mountPath: /host_pods
mountPropagation: HostToContainer
name: host-pods
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
- mountPath: /scratch
name: scratch
- mountPath: /credentials
@@ -60,6 +63,9 @@ spec:
- hostPath:
path: /var/lib/kubelet/pods
name: host-pods
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
- emptyDir: {}
name: scratch
- name: cloud-credentials

View File

@@ -0,0 +1,344 @@
# Extend VolumePolicies to support more actions
## Abstract
Currently, the [VolumePolicies feature](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md), which can be used to filter and handle volumes during backup, only supports the `skip` action on matching conditions. Users need more actions to be supported.
## Background
The `VolumePolicies` feature was introduced in Velero 1.11 as a flexible way to handle volumes. Its main goal is to improve the overall user experience when performing backup operations for volume resources: the feature enables users to group volumes according to the specified `conditions` (criteria) and to declare the `action` that Velero should take for these grouped volumes during the backup operation. The current limitation is that `VolumePolicies` only supports `skip` as an action. We want to extend the `action` functionality to support more useful options like `fs-backup` (file system backup) and `snapshot` (VolumeSnapshots).
## Goals
- Extend VolumePolicies to support more actions like `fs-backup` (file system backup) and `snapshot` (VolumeSnapshots).
- Improve the user experience when backing up volumes via Velero.
## Non-Goals
- No changes to the existing opt-in/opt-out annotation approaches for volumes
- No changes to existing `VolumePolicies` functionality
- No additions or implementations to support more granular actions like `snapshot-csi` and `snapshot-datamover`; these can be implemented as a future enhancement
## Use-cases/Scenarios
**Use-case 1:**
- A user wants to use the `snapshot` (VolumeSnapshots) backup option for all CSI-supported volumes and `fs-backup` for the rest of the volumes.
- Velero supports this use-case today, but the user experience is not great.
- For `fs-backup`, the user has to individually annotate each pod mounting a volume with the "backup.velero.io/backup-volumes" annotation.
- This becomes cumbersome at scale.
- Using `VolumePolicies`, the user can instead specify two simple policies: a `snapshot` action for CSI-supported volumes, and an `fs-backup` action for the rest:
```yaml
version: v1
volumePolicies:
- conditions:
    storageClass:
    - gp2
  action:
    type: snapshot
- conditions: {}
  action:
    type: fs-backup
```
**Use-case 2:**
- A user wants to use `fs-backup` for NFS volumes pertaining to a particular server.
- In such a scenario the user can just specify a `VolumePolicy` like:
```yaml
version: v1
volumePolicies:
- conditions:
    nfs:
      server: 192.168.200.90
  action:
    type: fs-backup
```
## High-Level Design
- When the VolumePolicy action is set to `fs-backup`, the backup workflow modifications are:
  - We call [backupItem() -> backupItemInternal()](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L95) on all the items that are to be backed up.
  - When we encounter a [Pod as an item](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L195), we have to modify the backup workflow to account for the `fs-backup` VolumePolicy action.
- When the VolumePolicy action is set to `snapshot`, the backup workflow modifications are:
  - Once again, we call [backupItem() -> backupItemInternal()](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L95) on all the items that are to be backed up.
  - When we encounter a [Persistent Volume as an item](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L253), we call the [takePVSnapshot func](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L508).
  - We need to modify the takePVSnapshot function to account for the `snapshot` VolumePolicy action.
  - In the case of CSI snapshots for PVC objects, the snapshot actions are taken by the velero-plugin-for-csi, so we also need to modify the [executeActions()](https://github.com/vmware-tanzu/velero/blob/512fe0dabdcb3bbf1ca68a9089056ae549663bcf/pkg/backup/item_backupper.go#L232) function to account for the `snapshot` VolumePolicy action.
**Note:** The `snapshot` action can be either a native snapshot or a CSI snapshot, as in the current flow where Velero itself makes the decision based on the backup CR.
## Detailed Design
- Update the VolumePolicy action type validation so that `fs-backup` and `snapshot` are accepted as valid VolumePolicy actions (a minimal sketch of such a check follows this list).
- Modifications needed for the `fs-backup` action:
  - Based on whether a volume policy is specified on the backup request, we decide whether to go with the legacy pod-annotation approach or the newer volume-policy-based `fs-backup` action approach.
  - If the backup request carries a volume policy whose action (fs-backup/snapshot) matches a volume, we use the newer volume policy approach to get the list of volumes for the `fs-backup` action.
  - Otherwise, we continue with the legacy annotation-based workflow.
- Modifications needed for the `snapshot` action:
  - In the [takePVSnapshot function](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L508) we check whether the PV fits the volume policy criteria and whether the associated action is `snapshot`.
  - If it is not `snapshot`, we skip the rest of the workflow and avoid taking the snapshot of the PV.
  - Similarly, for a CSI snapshot of a PVC object we make comparable changes in the [executeAction() function](https://github.com/vmware-tanzu/velero/blob/512fe0dabdcb3bbf1ca68a9089056ae549663bcf/pkg/backup/item_backupper.go#L348): we check whether the PVC fits the volume policy criteria and whether the associated action is `snapshot` via the CSI plugin.
  - If it is not `snapshot`, we skip the CSI BIA execute action and avoid taking the snapshot of the PVC by not invoking the CSI plugin action for the PVC.
**Note:**
- When using the `VolumePolicy` approach for backing up volumes, the volume policy criteria and action must be specific and explicit; there is no default behavior. If a volume matches the `fs-backup` action, the `fs-backup` method is used for that volume; similarly, if a volume matches the criteria for the `snapshot` action, the snapshot workflow is used for that volume's backup.
- The workflow proposed in this design uses the legacy `opt-in/opt-out` approach as a fallback. For instance, if the user specifies a VolumePolicy but no action (fs-backup/snapshot) in the policy matches a particular volume included in the backup, the legacy approach is used to back up that volume.
- The relation between the `VolumePolicy` and the backup's legacy parameter `SnapshotVolumes`:
  - A matching `snapshot` action in the `VolumePolicy` has higher priority: when a `snapshot` action matches the selected volume, it is backed up via snapshot regardless of the `backup.Spec.SnapshotVolumes` setting.
  - If no `snapshot` action in the `VolumePolicy` matches the selected volume, the volume is still backed up via snapshot as long as `backup.Spec.SnapshotVolumes` is not set to false.
- The relation between the `VolumePolicy` and the backup's legacy filesystem `opt-in/opt-out` approach:
  - A matching `fs-backup` action in the `VolumePolicy` has higher priority: when an `fs-backup` action matches the selected volume, it is backed up via fs-backup regardless of the `backup.Spec.DefaultVolumesToFsBackup` setting and the pod's `opt-in/opt-out` annotation.
  - If no `fs-backup` action in the `VolumePolicy` matches the selected volume, the volume is backed up via the legacy `opt-in/opt-out` way.
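For illustration, a minimal sketch of what the extended action type validation could look like is shown below. The `skip`, `fs-backup`, and `snapshot` action names come from this design; the `validateActionType` helper, its placement, and the error wording are assumptions for the sketch rather than the actual implementation.
```go
package resourcepolicies

import "fmt"

// Supported volume policy action types in this design: the existing "skip"
// plus the two new actions, "fs-backup" and "snapshot".
const (
    Skip     = "skip"
    FSBackup = "fs-backup"
    Snapshot = "snapshot"
)

// validateActionType is a hypothetical validation helper used here only to
// illustrate accepting the two new action types alongside "skip".
func validateActionType(actionType string) error {
    switch actionType {
    case Skip, FSBackup, Snapshot:
        return nil
    default:
        return fmt.Errorf("invalid action type %q, allowed values are %q, %q and %q",
            actionType, Skip, FSBackup, Snapshot)
    }
}
```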
## Implementation
- The implementation should be included in Velero 1.14.
- We will introduce a `VolumeHelper` interface. It will consist of two methods:
```go
type VolumeHelper interface {
    ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error)
    ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error)
}
```
- The `volumeHelperImpl` struct implements the `VolumeHelper` interface and consists of the functions we use throughout the backup workflow to accommodate volume policies for PVs and PVCs.
```go
type volumeHelperImpl struct {
    volumePolicy             *resourcepolicies.Policies
    snapshotVolumes          *bool
    logger                   logrus.FieldLogger
    client                   crclient.Client
    defaultVolumesToFSBackup bool
    backupExcludePVC         bool
}
```
- We will create an instance of the `volumeHelperImpl` struct in `item_backupper.go`:
```go
itemBackupper := &itemBackupper{
    ...
    volumeHelperImpl: volumehelper.NewVolumeHelperImpl(
        resourcePolicy,
        backupRequest.Spec.SnapshotVolumes,
        log,
        kb.kbClient,
        boolptr.IsSetToTrue(backupRequest.Spec.DefaultVolumesToFsBackup),
        !backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()),
    ),
}
```
#### FS-Backup
- To decide whether the legacy annotation-based approach or the volume-policy-based approach is used for the `fs-backup` action:
  - We use the `vh.ShouldPerformFSBackup()` function from the `volumehelper` package.
  - The function involved in processing the `fs-backup` volume policy action looks roughly like:
```go
func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error) {
    if !v.shouldIncludeVolumeInBackup(volume) {
        v.logger.Debugf("skip fs-backup action for pod %s's volume %s, due to not pass volume check.", pod.Namespace+"/"+pod.Name, volume.Name)
        return false, nil
    }
    if v.volumePolicy != nil {
        pvc, err := kubeutil.GetPVCForPodVolume(&volume, &pod, v.client)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
            return false, err
        }
        pv, err := kubeutil.GetPVForPVC(pvc, v.client)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
            return false, err
        }
        action, err := v.volumePolicy.GetMatchAction(pv)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
            return false, err
        }
        if action != nil {
            if action.Type == resourcepolicies.FSBackup {
                v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to volume policy match",
                    volume.Name, pod.Namespace+"/"+pod.Name)
                return true, nil
            } else {
                v.logger.Infof("Skip fs-backup action for volume %s for pod %s because the action type is %s",
                    volume.Name, pod.Namespace+"/"+pod.Name, action.Type)
                return false, nil
            }
        }
    }
    if v.shouldPerformFSBackupLegacy(volume, pod) {
        v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to opt-in/out way",
            volume.Name, pod.Namespace+"/"+pod.Name)
        return true, nil
    } else {
        v.logger.Infof("Skip fs-backup action for volume %s of pod %s due to opt-in/out way",
            volume.Name, pod.Namespace+"/"+pod.Name)
        return false, nil
    }
}
```
- The function above is called when we encounter Pods during the backup workflow:
```go
for _, volume := range pod.Spec.Volumes {
    shouldDoFSBackup, err := ib.volumeHelperImpl.ShouldPerformFSBackup(volume, *pod)
    if err != nil {
        backupErrs = append(backupErrs, errors.WithStack(err))
    }
    ...
}
```
#### Snapshot (PV)
- To make sure the `snapshot` action is skipped for PVs that do not fit the volume policy criteria, we use `vh.ShouldPerformSnapshot` on the `volumeHelperImpl` (`vh`) receiver.
```go
func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error) {
// check if volume policy exists and also check if the object(pv/pvc) fits a volume policy criteria and see if the associated action is snapshot
// if it is not snapshot then skip the code path for snapshotting the PV/PVC
pvc := new(corev1api.PersistentVolumeClaim)
pv := new(corev1api.PersistentVolume)
var err error
if groupResource == kuberesource.PersistentVolumeClaims {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
return false, err
}
pv, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
return false, err
}
}
if groupResource == kuberesource.PersistentVolumes {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pv); err != nil {
return false, err
}
}
if v.volumePolicy != nil {
action, err := v.volumePolicy.GetMatchAction(pv)
if err != nil {
return false, err
}
// If there is a match action, and the action type is snapshot, return true,
// or the action type is not snapshot, then return false.
// If there is no match action, go on to the next check.
if action != nil {
if action.Type == resourcepolicies.Snapshot {
v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
return true, nil
} else {
v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
return false, nil
}
}
}
// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
pods, err := podvolumeutil.GetPodsUsingPVC(
pv.Spec.ClaimRef.Namespace,
pv.Spec.ClaimRef.Name,
v.client,
)
if err != nil {
v.logger.WithError(err).Errorf("fail to get pod for PV %s", pv.Name)
return false, err
}
for _, pod := range pods {
for _, vol := range pod.Spec.Volumes {
if vol.PersistentVolumeClaim != nil &&
vol.PersistentVolumeClaim.ClaimName == pv.Spec.ClaimRef.Name &&
v.shouldPerformFSBackupLegacy(vol, pod) {
v.logger.Infof("Skipping snapshot of pv %s because it is backed up with PodVolumeBackup.", pv.Name)
return false, nil
}
}
}
}
if !boolptr.IsSetToFalse(v.snapshotVolumes) {
// If the backup.Spec.SnapshotVolumes is not set, or set to true, then should take the snapshot.
v.logger.Infof("performing snapshot action for pv %s as the snapshotVolumes is not set to false", pv.Name)
return true, nil
}
v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
return false, nil
}
```
- The function `ShouldPerformSnapshot` will be used as follows in `takePVSnapshot` function of the backup workflow:
```go
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumes)
if err != nil {
return err
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping volume snapshot for PV %s as it does not fit the volume policy criteria specified by the user for snapshot action", pv.Name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
return nil
}
```
#### Snapshot (PVC)
- To make sure that the `snapshot` action is skipped for PVCs that do not fit the volume policy criteria, we will again use `vh.ShouldPerformSnapshot` from the `volumeHelperImpl (vh)` receiver.
- We will pass the `volumeHelperImpl (vh)` instance into the `executeActions` method so that it is available there.
- The above function will be used as follows in the `executeActions` function of the backup workflow.
- Since the vSphere plugin doesn't support the VolumePolicy yet, the VolumePolicy is not used for the vSphere plugin for now.
```go
if groupResource == kuberesource.PersistentVolumeClaims {
if actionName == csiBIAPluginName {
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumeClaims)
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria specified by the user for snapshot action", namespace+"/"+name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
continue
}
}
}
```
## Future Implementation
It makes sense to add more specific actions in the future, once we deprecate the legacy opt-in/opt-out approach, to keep things simple. Another point of note is that CSI-related actions will be easier to implement once we decide to merge the CSI plugin into the main Velero code flow.
In the future, we envision the following actions that can be implemented:
- `snapshot-native`: only use volume snapshotter (native cloud provider snapshots), do nothing if not present/not compatible
- `snapshot-csi`: only use csi-plugin, don't use volume snapshotter(native cloud provider snapshots), don't use datamover even if snapshotMoveData is true
- `snapshot-datamover`: only use csi with datamover, don't use volume snapshotter (native cloud provider snapshots), use datamover even if snapshotMoveData is false
**Note:** The above actions are just suggestions for future scope; we may not use/implement them as-is. We could merge these suggested actions into the `snapshot` action and use volume policy parameters and criteria to distinguish them, instead of making the user explicitly supply action names at such a granular level.
## Related to Design
[Handle backup of volumes by resources filters](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md)
## Alternatives Considered
Same as the earlier design, as this is an extension of the original VolumePolicies design.

View File

@@ -0,0 +1,370 @@
# Velero Backup performance Improvements and VolumeGroupSnapshot enablement
There are two different goals here, linked by a single primary missing feature in the Velero backup workflow.
The first goal is to enhance backup performance by allowing the primary backup controller to run in multiple threads, enabling Velero to back up multiple items at the same time for a given backup.
The second goal is to enable Velero to eventually support VolumeGroupSnapshots.
For both of these goals, Velero needs a way to determine which items should be backed up together.
This design proposal will include two development phases:
- Phase 1 will refactor the backup workflow to identify blocks of related items that should be backed up together, and then coordinate backup hooks among items in the block.
- Phase 2 will add multiple worker threads for backing up item blocks, so instead of backing up each block as it is identified, the Velero backup workflow will add the block to a channel and one of the workers will pick it up.
- Actual support for VolumeGroupSnapshots is out-of-scope here and will be handled in a future design proposal, but the item block refactor introduced in Phase 1 is a primary building block for this future proposal.
## Background
Currently, during backup processing, the main Velero backup controller runs in a single thread, completely finishing the primary backup processing for one resource before moving on to the next one.
We can improve the overall backup performance by backing up multiple items for a backup at the same time, but before we can do this we must first identify resources that need to be backed up together.
Generally speaking, resources that need to be backed up together are resources with interdependencies -- pods with their PVCs, PVCs with their PVs, groups of pods that form a single application, CRs, pods, and other resources that belong to the same operator, etc.
As part of this initial refactoring, once these "Item Blocks" are identified, an additional change will be to move pod hook processing up to the ItemBlock level.
If there are multiple pods in the ItemBlock, pre-hooks for all pods will be run before backing up the items, followed by post-hooks for all pods.
This change to hook processing is another prerequisite for future VolumeGroupSnapshot support, since supporting this will require backing up the pods and volumes together for any volumes which belong to the same group.
Once we are backing up items by block, the next step will be to create multiple worker threads to process and back up ItemBlocks, so that we can back up multiple ItemBlocks at the same time.
In looking at the different kinds of large backups that Velero must deal with, two obvious scenarios come to mind:
1. Backups with a relatively small number of large volumes
2. Backups with a large number of relatively small volumes.
In case 1, the majority of the time spent on the backup is in the asynchronous phases -- CSI snapshot creation actions after the snapshot handle exists, and DataUpload processing. In that case, parallel item processing will likely have a minimal impact on overall backup completion time.
In case 2, the majority of time spent on the backup will likely be during the synchronous actions. Especially as regards CSI snapshot creation, waiting for the VolumeSnapshotContent's snapshot handle to exist will consume significant time when there are thousands of volumes. This is the sort of use case which will benefit the most from parallel item processing.
## Goals
- Identify groups of related items to back up together (ItemBlocks).
- Manage backup hooks at the ItemBlock level rather than per-item.
- Using worker threads, back up ItemBlocks at the same time.
## Non Goals
- Support VolumeGroupSnapshots: this is a future feature, although certain prerequisites for this enhancement are included in this proposal.
- Process multiple backups in parallel: this is a future feature, although certain prerequisites for this enhancement are included in this proposal.
- Refactoring plugin infrastructure to avoid RPC calls for internal plugins.
- Restore performance improvements: this is potentially a future feature
## High-Level Design
### ItemBlock concept
The updated design is based on a new struct/type called `ItemBlock`.
Essentially, an `ItemBlock` is a group of items that must be backed up together in order to guarantee backup integrity.
When we eventually split item backup across multiple worker threads, `ItemBlocks` will be kept together as the basic unit of backup.
To facilitate this, a new plugin type, `ItemBlockAction` will allow relationships between items to be identified by velero -- any resources that must be backed up with other resources will need IBA plugins defined for them.
Examples of `ItemBlocks` include:
1. A pod, its mounted PVCs, and the bound PVs for those PVCs.
2. A VolumeGroup (related PVCs and PVs) along with any pods mounting these volumes.
3. For a ReadWriteMany PVC, the PVC, its bound PV, and all pods mounting this PVC.
### Phase 1: ItemBlock processing
- A new plugin type, `ItemBlockAction`, will be created
- `ItemBlockAction` will contain the API method `GetRelatedItems`, which will be needed for determining which items to group together into `ItemBlocks`.
- When processing the list of items returned from the item collector, instead of simply calling `BackupItem` on each in turn, we will use the `GetRelatedItems` API call to determine other items to include with the current item in an ItemBlock. Repeat recursively on each item returned.
- Don't include an item in more than one ItemBlock -- if the next item from the item collector is already in a block, skip it.
- Once ItemBlock is determined, call new func `BackupItemBlock` instead of `BackupItem`.
- New func `BackupItemBlock` will call pre hooks for any pods in the block, then back up the items in the block (`BackupItem` will no longer run hooks directly), then call post hooks for any pods in the block.
- The finalize phase will not be affected by the ItemBlock design, since this is just updating resources after async operations are completed on the items and there is no need to run these updates in parallel.
### Phase 2: Process ItemBlocks for a single backup in multiple threads
- Concurrent `BackupItemBlock` operations will be executed by worker threads invoked by the backup controller, which will communicate with the backup controller operation via a shared channel.
- The ItemBlock processing loop implemented in Phase 1 will be modified to send each newly-created ItemBlock to the shared channel rather than calling `BackupItemBlock` inline.
- Users will be able to configure the number of workers available for concurrent `BackupItemBlock` operations.
- Access to the BackedUpItems map must be synchronized
## Detailed Design
### Phase 1: ItemBlock processing
#### New ItemBlockAction plugin type
In order for Velero to identify groups of items to back up together in an ItemBlock, we need a way to identify items which need to be backed up along with the current item. While the current `Execute` BackupItemAction method does return a list of additional items which are required by the current item, we need to know this *before* we start the item backup. To support this, we need a new plugin type, `ItemBlockAction` (IBA) with an API method, `GetRelatedItems` which Velero will call on each item as it processes. The expectation is that the registered IBA plugins will return the same items as returned as additional items by the BIA `Execute` method, with the exception that items which are not created until calling `Execute` should not be returned here, as they don't exist yet.
#### Proto definition (compiled into golang by protoc)
The ItemBlockAction plugin type is defined as follows:
```
service ItemBlockAction {
rpc AppliesTo(ItemBlockActionAppliesToRequest) returns (ItemBlockActionAppliesToResponse);
rpc GetRelatedItems(ItemBlockActionGetRelatedItemsRequest) returns (ItemBlockActionGetRelatedItemsResponse);
}
message ItemBlockActionAppliesToRequest {
string plugin = 1;
}
message ItemBlockActionAppliesToResponse {
ResourceSelector ResourceSelector = 1;
}
message ItemBlockActionGetRelatedItemsRequest {
string plugin = 1;
bytes item = 2;
bytes backup = 3;
}
message ItemBlockActionGetRelatedItemsResponse {
repeated generated.ResourceIdentifier relatedItems = 1;
}
```
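For reference, below is a sketch of what the corresponding Go-side plugin interface could look like. This is an assumed shape mirroring the proto service above and the existing BackupItemAction interface; the actual generated adapters come from protoc and the plugin framework, and the import paths shown are the ones those existing plugin types use today.
```go
package itemblockaction

import (
	"k8s.io/apimachinery/pkg/runtime"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// ItemBlockAction is an assumed Go-side mirror of the ItemBlockAction proto service.
type ItemBlockAction interface {
	// AppliesTo returns a selector identifying which resources this action runs on.
	AppliesTo() (velero.ResourceSelector, error)

	// GetRelatedItems returns the items that must be grouped into the same
	// ItemBlock as the given item.
	GetRelatedItems(item runtime.Unstructured, backup *velerov1api.Backup) ([]velero.ResourceIdentifier, error)

	// Name returns the plugin's registered name.
	Name() string
}
```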
A new PluginKind, `ItemBlockAction`, will be created, and the backup process will be modified to use this plugin kind.
Any BIA plugin whose `Execute()` returns additional items that need to be backed up at the same time as, or sequentially in the same worker thread as, the current item should add a new IBA plugin to return these same items (minus any which won't exist before BIA `Execute()` is called).
This mainly applies to plugins that operate on pods which reference resources which must be backed up along with the pod and are potentially affected by pod hooks or for plugins which connect multiple pods whose volumes should be backed up at the same time.
### Changes to processing item list from the Item Collector
#### New structs BackupItemBlock, ItemBlock, and ItemBlockItem
```go
package backup
type BackupItemBlock struct {
itemblock.ItemBlock
// This is a reference to the shared itemBackupper for the backup
itemBackupper *itemBackupper
}
package itemblock
type ItemBlock struct {
Log logrus.FieldLogger
Items []ItemBlockItem
}
type ItemBlockItem struct {
Gr schema.GroupResource
Item *unstructured.Unstructured
PreferredGVR schema.GroupVersionResource
}
```
#### Current workflow
In the `BackupWithResolvers` func, the current Velero implementation iterates over the list of items for backup returned by the Item Collector. For each item, Velero loads the item from the file created by the Item Collector, calls `backupItem`, updates the GR map if successful, removes the (temporary) file containing item metadata, and updates progress for the backup.
#### Modifications to the loop over ItemCollector results
The `kubernetesResource` struct used by the item collector will be modified to add an `orderedResource` bool which will be set true for all of the resources moved to the beginning for each GroupResource as a result of being ordered resources.
In addition, an `inItemBlock` bool is added to the struct which will be set to true later when processing the list when each item is added to an ItemBlock.
While the item collector already puts ordered resources first for each GR, there is no indication in the list which of these initial items are from the ordered resources list and which are the remaining (unordered) items.
Velero needs to know which resources are ordered because when we process them later, the ordered resources for each GroupResource must be processed sequentially in a single ItemBlock.
The current workflow within each iteration of the ItemCollector.items loop will be replaced with the following:
- (note that some of the below should be pulled out into a helper func to facilitate recursive call to it for items returned from `GetRelatedItems`.)
- Before loop iteration, create a pointer to a `BackupItemBlock` which will represent the current ItemBlock being processed.
- If `item` has `inItemBlock==true`, continue. This one has already been processed.
- If current `itemBlock` is nil, create it.
- Add `item` to `itemBlock`.
- Load item from ItemCollector file. Close/remove file after loading (on error return or not, possibly with similar anonymous func to current impl)
- If other versions of the same item exist (via EnableAPIGroupVersions), add these to the `itemBlock` as well (and load from ItemCollector file)
- Get matching IBA plugins for item, call `GetRelatedItems` for each. For each item returned, get full item content from ItemCollector (if present in item list, pulling from file, removing file when done) or from cluster (if not present in item list), add item to the current block, add item to `itemsInBlock` map, and then recursively apply current step to each (i.e. call IBA method, add to block, etc.)
- If current item and next item are both ordered items for the same GR, then continue to next item, adding to current `itemBlock`.
- Once the full ItemBlock is generated, call the new func `backupItemBlock(block ItemBlock)`.
- Add `backupItemBlock` return values to `backedUpGroupResources` map
#### New func `backupItemBlock`
Method signature for new func `backupItemBlock` is as follows:
```go
func (kb *kubernetesBackupper) backupItemBlock(block BackupItemBlock) []schema.GroupResource
```
The return value is a slice of GRs for resources which were backed up. Velero tracks these to determine which CRDs need to be included in the backup. Note that we need to make sure we include in this not only those resources that were backed up directly, but also those backed up indirectly via additional items BIA execute returns.
In order to handle backup hooks, this func will first take the input item list (`block.items`) and get a list of included pods, filtered to include only those not yet backed up (using `block.itemBackupper.backupRequest.BackedUpItems`). Iterate over this list and execute pre hooks (pulled out of `itemBackupper.backupItemInternal`) for each item.
Now iterate over the full list (`block.items`) and call `backupItem` for each. After the first, the later items should already have been backed up, but calling a second time is harmless, since the first thing Velero does is check the `BackedUpItems` map, exiting if the item is already backed up. We still need this call in case there's a plugin which returns something from `GetRelatedItems` but forgets to return it in the `Execute` additional items return value. If we don't do this, we could end up missing items.
After backing up the items in the block, we now execute post hooks using the same filtered item list we used for pre hooks, again taking the logic from `itemBackupper.backupItemInternal`.
#### `itemBackupper.backupItemInternal` cleanup
After implementing backup hooks in `backupItemBlock`, hook processing should be removed from `itemBackupper.backupItemInternal`.
### Phase 2: Process ItemBlocks for a single backup in multiple threads
#### New input field for number of ItemBlock workers
The velero installer and server CLIs will get a new input field `itemBlockWorkerCount`, which will be passed along to the `backupReconciler`.
The `backupReconciler` struct will also have this new field added.
#### Worker pool for item block processing
A new type, `ItemBlockWorkerPool`, will be added which will manage a pool of worker goroutines that process item blocks, a shared input channel for passing blocks to the workers, and a WaitGroup to shut down cleanly when the reconciler exits.
```go
type ItemBlockWorkerPool struct {
itemBlockChannel chan ItemBlockInput
wg *sync.WaitGroup
logger logrus.FieldLogger
}
type ItemBlockInput struct {
itemBlock *BackupItemBlock
returnChan chan ItemBlockReturn
}
type ItemBlockReturn struct {
itemBlock *BackupItemBlock
resources []schema.GroupResource
err error
}
func (p *ItemBlockWorkerPool) getInputChannel() chan ItemBlockInput
func StartItemBlockWorkerPool(context context.Context, workers int, logger logrus.FieldLogger) ItemBlockWorkerPool
func processItemBlockWorker(context context.Context, itemBlockChannel chan ItemBlockInput, logger logrus.FieldLogger, wg *sync.WaitGroup)
```
The worker pool will be started by calling `StartItemBlockWorkerPool` in `NewBackupReconciler()`, passing in the worker count and reconciler context.
`backupreconciler.prepareBackupRequest` will also add the input channel to the `backupRequest` so that it will be available during backup processing.
The func `StartItemBlockWorkerPool` will create the `ItemBlockWorkerPool` with a shared buffered input channel (fixed buffer size) and start `workers` goroutines, each of which will call `processItemBlockWorker`.
The `processItemBlockWorker` func (run by the worker goroutines) will read from `itemBlockChannel`, call `BackupItemBlock` on the retrieved `ItemBlock`, send the return value to the retrieved `returnChan`, and then process the next block.
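Below is a minimal, self-contained sketch of this channel/WaitGroup wiring. It uses simplified stand-in types (a string payload instead of a real `BackupItemBlock`) and illustrates only the pattern, not the actual Velero implementation:
```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Simplified stand-ins for the ItemBlockInput/ItemBlockReturn types above;
// the payload here is a plain string instead of a *BackupItemBlock.
type itemBlockInput struct {
	name       string
	returnChan chan itemBlockReturn
}

type itemBlockReturn struct {
	name string
	err  error
}

// startWorkerPool starts `workers` goroutines that read from a shared buffered
// input channel until the context is cancelled.
func startWorkerPool(ctx context.Context, workers int, wg *sync.WaitGroup) chan itemBlockInput {
	in := make(chan itemBlockInput, 10)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case input := <-in:
					// The real worker would call BackupItemBlock here.
					input.returnChan <- itemBlockReturn{name: input.name}
				case <-ctx.Done():
					return
				}
			}
		}()
	}
	return in
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	wg := &sync.WaitGroup{}
	in := startWorkerPool(ctx, 3, wg)

	ret := make(chan itemBlockReturn)
	go func() {
		for i := 0; i < 5; i++ {
			in <- itemBlockInput{name: fmt.Sprintf("block-%d", i), returnChan: ret}
		}
	}()
	for i := 0; i < 5; i++ {
		fmt.Println("processed:", (<-ret).name)
	}
	cancel()
	wg.Wait()
}
```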
#### Modify ItemBlock processing loop to send ItemBlocks to the worker pool rather than backing them up directly
The ItemBlock processing loop implemented in Phase 1 will be modified to send each newly-created ItemBlock to the shared channel rather than calling `BackupItemBlock` inline, using a WaitGroup to manage in-process items. A separate goroutine will be created to process returns for this backup. After completion of the ItemBlock processing loop, velero will use the WaitGroup to wait for all ItemBlock processing to complete before moving forward.
A simplified example of what this response goroutine might look like:
```go
// omitting cancel handling, context, etc
ret := make(chan ItemBlockReturn)
wg := &sync.WaitGroup{}
// Handle returns
go func() {
for {
select {
case response := <-ret: // process each BackupItemBlock response
func() {
defer wg.Done()
responses = append(responses, response)
}()
case <-ctx.Done():
return
}
}
}()
// Simplified illustration, looping over and assumed already-determined ItemBlock list
for _, itemBlock := range itemBlocks {
wg.Add(1)
inputChan <- ItemBlockInput{itemBlock: itemBlock, returnChan: ret}
}
done := make(chan struct{})
go func() {
defer close(done)
wg.Wait()
}()
// Wait for all the ItemBlocks to be processed
select {
case <-done:
logger.Info("done processing ItemBlocks")
}
// responses from BackupItemBlock calls are in responses
```
When processing the responses, the main thing is to set `backedUpGroupResources[item.groupResource]=true` for each GR returned, which will give the same result as the current implementation calling items one-by-one and setting that field as needed.
The ItemBlock processing loop described above will be split into two separate iterations. For the first iteration, velero will only process those items at the beginning of the loop identified as `orderedResources` -- when the groups generated from these resources are passed to the worker channel, velero will wait for the response before moving on to the next ItemBlock.
This is to ensure that the ordered resources are processed in the required order. Once the last ordered resource is processed, the remaining ItemBlocks will be processed and sent to the worker channel without waiting for a response, in order to allow these ItemBlocks to be processed in parallel.
The reason we must execute `ItemBlocks` with ordered resources first (and one at a time) is that this is a list of resources identified by the user as resources which must be backed up first, and in a particular order.
#### Synchronize access to the BackedUpItems map
Velero uses a map of BackedUpItems to track which items have already been backed up. This prevents velero from attempting to back up an item more than once, as well as guarding against creating infinite loops due to circular dependencies in the additional items returns. Since velero will now be accessing this map from the parallel goroutines, access to the map must be synchronized with mutexes.
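A minimal sketch of the kind of wrapper that could be used is shown below, assuming the map stays keyed by an item-identity string; the actual field layout on `backupRequest` may differ:
```go
package backup

import "sync"

// backedUpItemsMap is a hypothetical mutex-protected wrapper around the
// existing BackedUpItems map, safe to use from multiple worker goroutines.
type backedUpItemsMap struct {
	mu    sync.Mutex
	items map[string]struct{}
}

// CheckAndMark returns false if the item was already backed up; otherwise it
// records the item and returns true, all under a single lock acquisition.
func (m *backedUpItemsMap) CheckAndMark(key string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, seen := m.items[key]; seen {
		return false
	}
	m.items[key] = struct{}{}
	return true
}
```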
### Backup Finalize phase
The finalize phase will not be affected by the ItemBlock design, since this is just updating resources after async operations are completed on the items and there is no need to run these updates in parallel.
## Alternatives considered
### BackupItemAction v3 API
Instead of adding a new `ItemBlockAction` plugin type, we could add a `GetAdditionalItems` method to BackupItemAction.
This was rejected because the new plugin type provides a cleaner interface, and keeps the function of grouping related items separate from the function of modifying item content for the backup.
### Per-backup worker pool
The current design makes use of a permanent worker pool, started at backup controller startup time. With this design, when we follow on with running multiple backups in parallel, the same set of workers will take ItemBlock inputs from more than one backup. Another approach that was initially considered was a temporary worker pool, created while processing a backup, and deleted upon backup completion.
#### User-visible API differences between the two approaches
The main user-visible difference here is in the configuration API. For the permanent worker approach, the worker count represents the total worker count for all backups. The concurrent backup count represents the number of backups running at the same time. At any given time, though, the maximum number of worker threads backing up items concurrently is equal to the worker count. If worker count is 15 and the concurrent backup count is 3, then there will be, at most, 15 items being processed at the same time, split among up to three running backups.
For the per-backup worker approach, the worker count represents the worker count for each backup. The concurrent backup count, as before, represents the number of backups running at the same time. If worker count is 15 and the concurrent backup count is 3, then there will be, at most, 45 items being processed at the same time, up to 15 for each of up to three running backups.
#### Comparison of the two approaches
- Permanent worker pool advantages:
- This is the more commonly-followed Kubernetes pattern. It's generally better to follow standard practices, unless there are genuine reasons for the use case to go in a different way.
- It's easier for users to understand the maximum number of concurrent items processed, which will have performance impact and impact on the resource requirements for the Velero pod. Users will not have to multiply the config numbers in their heads when working out how many total workers are present.
- It will give us more flexibility for future enhancements around concurrent backups. One possible use case: backup priority. Maybe a user wants scheduled backups to have a lower priority than user-generated backups, since a user is sitting there waiting for completion -- a shared worker pool could react to the priority by taking ItemBlocks for the higher priority backup first, which would allow a large lower-priority backup's items to be preempted by a higher-priority backup's items without needing to explicitly stop the main controller flow for that backup.
- Per-backup worker pool advantages:
- Lower memory consumption than permanent worker pool, but the total memory used by a worker blocked on input will be pretty low, so if we're talking only 10-20 workers, the impact will be minimal.
## Compatibility
### Example IBA implementation for BIA plugins which return additional items
Included below is an example of what might be required for a BIA plugin which returns additional items.
The code is taken from the internal velero `pod_action.go` which identifies the items required for a given pod.
In this particular case, the only function of pod_action is to return additional items, so we can really just convert this plugin to an IBA plugin. If there were other actions, such as modifying the pod content on backup, then we would still need the pod action, and the related items vs. content manipulation functions would need to be separated.
```go
// PodAction implements ItemBlockAction.
type PodAction struct {
log logrus.FieldLogger
}
// NewPodAction creates a new ItemBlockAction for pods.
func NewPodAction(logger logrus.FieldLogger) *PodAction {
return &PodAction{log: logger}
}
// AppliesTo returns a ResourceSelector that applies only to pods.
func (a *PodAction) AppliesTo() (velero.ResourceSelector, error) {
return velero.ResourceSelector{
IncludedResources: []string{"pods"},
}, nil
}
// GetRelatedItems scans the pod's spec.volumes for persistentVolumeClaim volumes and returns a
// ResourceIdentifier list containing references to all of the persistentVolumeClaim volumes used by
// the pod. This ensures that when a pod is backed up, all referenced PVCs are backed up too.
func (a *PodAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup) ([]velero.ResourceIdentifier, error) {
pod := new(corev1api.Pod)
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(item.UnstructuredContent(), pod); err != nil {
return nil, errors.WithStack(err)
}
var relatedItems []velero.ResourceIdentifier
if pod.Spec.PriorityClassName != "" {
a.log.Infof("Adding priorityclass %s to relatedItems", pod.Spec.PriorityClassName)
relatedItems = append(relatedItems, velero.ResourceIdentifier{
GroupResource: kuberesource.PriorityClasses,
Name: pod.Spec.PriorityClassName,
})
}
if len(pod.Spec.Volumes) == 0 {
a.log.Info("pod has no volumes")
return relatedItems, nil
}
for _, volume := range pod.Spec.Volumes {
if volume.PersistentVolumeClaim != nil && volume.PersistentVolumeClaim.ClaimName != "" {
a.log.Infof("Adding pvc %s to relatedItems", volume.PersistentVolumeClaim.ClaimName)
relatedItems = append(relatedItems, velero.ResourceIdentifier{
GroupResource: kuberesource.PersistentVolumeClaims,
Namespace: pod.Namespace,
Name: volume.PersistentVolumeClaim.ClaimName,
})
}
}
return relatedItems, nil
}
// API call
func (a *PodAction) Name() string {
return "PodAction"
}
```
## Implementation
Phase 1 and Phase 2 could be implemented within the same Velero release cycle, but they need not be.
Phase 1 is expected to be implemented in Velero 1.15.
Phase 2 is expected to be implemented in Velero 1.16.

View File

@@ -0,0 +1,94 @@
# Backup PVC Configuration Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective set of modules introduced in [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Exposer**: Exposer is a module that is introduced in [Volume Snapshot Data Movement Design][2]. Velero uses this module to expose the volume snapshots to Velero node-agent pods or node-agent associated pods so as to complete the data movement from the snapshots.
**backupPVC**: The intermediate PVC created by the exposer for VGDP to access data from, see [Volume Snapshot Data Movement Design][2] for more details.
**backupPod**: The pod consumes the backupPVC so that VGDP could access data from the backupPVC, see [Volume Snapshot Data Movement Design][2] for more details.
**sourcePVC**: The PVC to be backed up, see [Volume Snapshot Data Movement Design][2] for more details.
## Background
As elaborated in [Volume Snapshot Data Movement Design][2], a backupPVC may be created by the Exposer and the VGDP reads data from the backupPVC.
In some scenarios, users may need to configure some advanced settings of the backupPVC so that the data movement could work in best performance in their environments. Specifically:
- For some storage providers, creating a read-only volume from a snapshot is very fast, whereas creating a writable volume from the snapshot requires cloning the entire disk data, which is time consuming. If the backupPVC's `accessModes` is set to `ReadOnlyMany`, the volume driver is able to tell the storage to create a read-only volume, which may dramatically shorten the snapshot expose time. On the other hand, `ReadOnlyMany` is not supported by all volumes. Therefore, users should be allowed to configure the `accessModes` for the backupPVC.
- Some storage providers create one or more replicas when creating a volume; the number of replicas is defined in the storage class. However, it doesn't make sense to keep replicas for an intermediate volume that is only used by the backup. Therefore, users should be allowed to configure another storage class specifically used by the backupPVC.
## Goals
- Create a mechanism for users to specify various configurations for backupPVC
## Non-Goals
## Solution
We will use the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` to host the backupPVC configurations.
This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
The node-agent server reads these configurations at startup and uses them to initialize the related Exposer modules. Therefore, users can edit this configMap at any time, but the node-agent server needs to be restarted for the changes to take effect.
Inside the ConfigMap we will add one new kind of configuration as the data in the configMap, named ```backupPVC```.
Users may want to set different backupPVC configurations for different volumes; therefore, we define the configurations as a map and allow users to specify configurations per storage class. Specifically, the key of the map element is the storage class name used by the sourcePVC and the value is the set of configurations for the backupPVC created for the sourcePVC.
The data structure is as below:
```go
type Configs struct {
// LoadConcurrency is the config for data path load concurrency per node.
LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`
// LoadAffinity is the config for data path load affinity.
LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`
// BackupPVC is the config for backupPVC of snapshot data movement.
BackupPVC map[string]BackupPVC `json:"backupPVC,omitempty"`
}
type BackupPVC struct {
// StorageClass is the name of storage class to be used by the backupPVC.
StorageClass string `json:"storageClass,omitempty"`
// ReadOnly sets the backupPVC's access mode as read only.
ReadOnly bool `json:"readOnly,omitempty"`
}
```
### Sample
A sample of the ConfigMap is as below:
```json
{
"backupPVC": {
"storage-class-1": {
"storageClass": "snapshot-storage-class",
"readOnly": true
},
"storage-class-2": {
"storageClass": "snapshot-storage-class"
},
"storage-class-3": {
"readOnly": true
}
}
}
```
To create the configMap, users need to save something like the above sample to a json file and then run the command below:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
### Implementation
The `backupPVC` configuration is passed to the exposer; the exposer sets the related specification and creates the backupPVC.
If `backupPVC.storageClass` doesn't exist or is set to empty, the sourcePVC's storage class will be used.
If `backupPVC.readOnly` is set to true, `ReadOnlyMany` will be the only value set to the backupPVC's `accessModes`, otherwise, `ReadWriteOnce` is used.
Once `backupPVC.storageClass` is set, users must make sure that the specified storage class exists in the cluster and can be used by the backupPVC, otherwise, the corresponding DataUpload CR will stay in `Accepted` phase until the prepare timeout (by default 30min).
Once `backupPVC.readOnly` is set to true, users must make sure that the storage supports creating a `ReadOnlyMany` PVC from a snapshot, otherwise, the corresponding DataUpload CR will stay in `Accepted` phase until the prepare timeout (by default 30min).
When either of the above problems happens, the DataUpload CR is cancelled after the prepare timeout and the backupPVC and backupPod are deleted, so there is no way to tell whether the cause is one of the above problems or something else.
To help with troubleshooting, we can add a diagnostic mechanism to discover the status of the backupPod before deleting it as a result of the prepare timeout.
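For illustration, here is a minimal sketch of how the exposer could derive the backupPVC spec from the sourcePVC and the `BackupPVC` config defined above. The helper name and exact wiring are hypothetical; it only shows the fallback and access-mode behavior described in this section:
```go
package exposer

import corev1 "k8s.io/api/core/v1"

// BackupPVC mirrors the config struct defined earlier in this design.
type BackupPVC struct {
	StorageClass string `json:"storageClass,omitempty"`
	ReadOnly     bool   `json:"readOnly,omitempty"`
}

// makeBackupPVCSpec is a hypothetical helper: the storage class falls back to
// the sourcePVC's class when not overridden, and readOnly switches the access
// mode to ReadOnlyMany so the driver can create a read-only volume.
func makeBackupPVCSpec(source *corev1.PersistentVolumeClaim, cfg BackupPVC) corev1.PersistentVolumeClaimSpec {
	storageClass := source.Spec.StorageClassName
	if cfg.StorageClass != "" {
		sc := cfg.StorageClass
		storageClass = &sc
	}

	accessModes := []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}
	if cfg.ReadOnly {
		accessModes = []corev1.PersistentVolumeAccessMode{corev1.ReadOnlyMany}
	}

	return corev1.PersistentVolumeClaimSpec{
		AccessModes:      accessModes,
		StorageClassName: storageClass,
		Resources:        source.Spec.Resources,
	}
}
```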
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: volume-snapshot-data-movement/volume-snapshot-data-movement.md

View File

@@ -0,0 +1,123 @@
# Backup Repository Configuration Design
## Glossary & Abbreviation
**Backup Storage**: The storage to store the backup data. Check [Unified Repository design][1] for details.
**Backup Repository**: Backup repository is layered between BR data movers and Backup Storage to provide BR related features that is introduced in [Unified Repository design][1].
## Background
According to the [Unified Repository design][1] Velero uses selectable backup repositories for various backup/restore methods, i.e., fs-backup, volume snapshot data movement, etc. To achieve the best performance, backup repositories may need to be configured according to the running environments.
For example, if there are sufficient CPU and memory resources in the environment, users may enable compression feature provided by the backup repository, so as to achieve the best backup throughput.
As another example, if the local disk space is not sufficient, users may want to constrain the backup repository's cache size, so as to prevent the repository from running out of disk space.
Therefore, it is worthwhile to allow users to configure some essential parameters of the backup repositories, and the configuration may vary across backup repositories.
## Goals
- Create a mechanism for users to specify configurations for backup repositories
## Non-Goals
## Solution
### BackupRepository CRD
After a backup repository is initialized, a BackupRepository CR is created to represent the instance of the backup repository. The BackupRepository's spec is a core parameter used by the Unified Repo modules when interacting with the backup repository. Therefore, we can add the configurations to the BackupRepository CR in a field called ```repositoryConfig```.
The configurations may vary between backup repositories; therefore, we will not define each of the configurations explicitly. Instead, we add a map in the BackupRepository's spec to take any configuration to be set to the backup repository.
During various operations on the backup repository, the Unified Repo modules will retrieve the specific configuration required at that time from the map. So even though it is specified, a configuration may not be visited/honored if the operations don't require it for the specific backup repository; this won't cause any issue. When and how a configuration is honored is decided by the configuration itself and should be clarified in the configuration's specification.
Below is the new BackupRepository's spec after adding the configuration map:
```yaml
spec:
description: BackupRepositorySpec is the specification for a BackupRepository.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the BackupStorageLocation
that should contain this repository.
type: string
maintenanceFrequency:
description: MaintenanceFrequency is how often maintenance should
be run.
type: string
repositoryConfig:
additionalProperties:
type: string
description: RepositoryConfig contains configurations for the specific
repository.
type: object
repositoryType:
description: RepositoryType indicates the type of the backend repository
enum:
- kopia
- restic
- ""
type: string
resticIdentifier:
description: |-
ResticIdentifier is the full restic-compatible string for identifying
this repository.
type: string
volumeNamespace:
description: |-
VolumeNamespace is the namespace this backup repository contains
pod volume backups for.
type: string
required:
- backupStorageLocation
- maintenanceFrequency
- resticIdentifier
- volumeNamespace
type: object
```
### BackupRepository configMap
The BackupRepository CR is not created explicitly by a Velero CLI, but created as part of the backup/restore/maintenance operation if the CR doesn't exist. As a result, users don't have any way to specify the configurations before the BackupRepository CR is created.
Therefore, a BackupRepository configMap is introduced as a template of the configurations to be applied to the backup repository CR.
When the backup repository CR is created by the BackupRepository controller, the configurations in the configMap are copied to the ```repositoryConfig``` field.
For an existing BackupRepository CR, the configMap is never visited, if users want to modify the configuration value, they should directly edit the BackupRepository CR.
The BackupRepository configMap is created by users in the velero installation namespace. The configMap name must be specified in the velero server parameter ```--backup-repository-configmap```, otherwise, it won't take effect.
If the configMap name is specified but the configMap doesn't exist by the time a backup repository is created, the configMap name is ignored.
If, for any reason, the configMap doesn't take effect, nothing is specified to the backup repository CR, so the Unified Repo modules use the hard-coded values to configure the backup repository.
The BackupRepository configMap supports backup repository type specific configurations, even though users can only specify one configMap.
So in the configMap struct, multiple entries are supported, indexed by the backup repository type. During the backup repository creation, the configMap is searched by the repository type.
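To make the lookup concrete, here is a small standalone sketch of reading the per-repository-type entry from the configMap's data and unmarshalling it. The helper and field names are illustrative only; the actual Velero code may be structured differently:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// repoConfig mirrors the configurations listed in the next section
// (cacheLimitMB, enableCompression); field names are illustrative.
type repoConfig struct {
	CacheLimitMB      int  `json:"cacheLimitMB"`
	EnableCompression bool `json:"enableCompression"`
}

// configForRepoType looks up the entry for the given repository type in the
// configMap's data and unmarshals it; a missing entry means defaults apply.
func configForRepoType(data map[string]string, repoType string) (*repoConfig, error) {
	raw, ok := data[repoType]
	if !ok {
		return nil, nil
	}
	cfg := &repoConfig{}
	if err := json.Unmarshal([]byte(raw), cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	// data as it would appear in the configMap's .data field
	data := map[string]string{
		"kopia": `{"cacheLimitMB": 2048, "enableCompression": true}`,
	}
	cfg, err := configForRepoType(data, "kopia")
	fmt.Printf("%+v %v\n", cfg, err)
}
```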
### Configurations
With the above mechanisms, any kind of configuration could be added. Here list the configurations defined at present:
```cacheLimitMB```: specifies the size limit(in MB) for the local data cache. The more data is cached locally, the less data may be downloaded from the backup storage, so the better performance may be achieved. Practically, users can specify any size that is smaller than the free space so that the disk space won't run out. This parameter is for each repository connection, that is, users could change it before connecting to the repository. If a backup repository doesn't use local cache, this parameter will be ignored. For Kopia repository, this parameter is supported.
```enableCompression```: specifies whether to enable/disable compression for a backup repository. Most of the backup repositories support the data compression feature; if it is not supported by a backup repository, this parameter is ignored. Most of the backup repositories support dynamically enabling/disabling compression, so this parameter is defined to be used whenever creating a write connection to the backup repository; if dynamic changes are not supported, this parameter will be honored only when initializing the backup repository. For Kopia repository, this parameter is supported and can be dynamically modified.
### Sample
Below is an example of the BackupRepository configMap with the configurations:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: <config-name>
namespace: velero
data:
<repository-type-1>: |
{
"cacheLimitMB": 2048,
"enableCompression": true
}
<repository-type-2>: |
{
"cacheLimitMB": 1,
"enableCompression": false
}
```
To create the configMap, users need to save something like the above sample to a file and then run the command below:
```
kubectl apply -f <yaml file name>
```
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md

View File

@@ -175,7 +175,7 @@ If there are one or more, download the backup tarball from backup storage, untar
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementers to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.

View File

@@ -86,7 +86,7 @@ volumePolicies:
# capacity condition matches the volumes whose capacity falls into the range
capacity: "0,100Gi"
csi:
driver: aws.ebs.csi.driver
driver: ebs.csi.aws.com
fsType: ext4
storageClass:
- gp2
@@ -174,7 +174,7 @@ data:
- conditions:
capacity: "0,100Gi"
csi:
driver: aws.ebs.csi.driver
driver: ebs.csi.aws.com
fsType: ext4
storageClass:
- gp2

View File

@@ -0,0 +1,193 @@
# Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
- [Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](#proposal-to-support-json-merge-patch-and-strategic-merge-patch-in-resource-modifiers)
- [Abstract](#abstract)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [User Stories](#user-stories)
- [Scenario 1](#scenario-1)
- [Scenario 2](#scenario-2)
- [Detailed Design](#detailed-design)
- [How to choose the right patch type](#how-to-choose-the-right-patch-type)
- [New Field MergePatches](#new-field-mergepatches)
- [New Field StrategicPatches](#new-field-strategicpatches)
- [Conditional Patches in ALL Patch Types](#conditional-patches-in-all-patch-types)
- [Wildcard Support for GroupResource](#wildcard-support-for-groupresource)
- [Helper Command to Generate Merge Patch and Strategic Merge Patch](#helper-command-to-generate-merge-patch-and-strategic-merge-patch)
- [Security Considerations](#security-considerations)
- [Compatibility](#compatibility)
- [Implementation](#implementation)
- [Future Enhancements](#future-enhancements)
- [Open Issues](#open-issues)
## Abstract
Velero introduced the concept of Resource Modifiers in v1.12.0. This feature allows the user to specify a configmap with a set of rules to modify the resources during restore. The user can specify the filters to select the resources and then specify the JSON Patch to apply on the resource. This feature is currently limited to the operations supported by JSON Patch RFC.
This proposal is to add support for JSON Merge Patch and Strategic Merge Patch in the Resource Modifiers. This will allow the user to use the same configmap to apply JSON Merge Patch and Strategic Merge Patch on the resources during restore.
## Goals
- Allow the user to specify a JSON patch, JSON Merge Patch or Strategic Merge Patch for modification.
- Allow the user to specify multiple JSON Patch, JSON Merge Patch or Strategic Merge Patch.
- Allow the user to specify mixed JSON Patch, JSON Merge Patch and Strategic Merge Patch in the same configmap.
## Non Goals
- Deprecating the existing RestoreItemAction plugins for standard substitutions(like changing the namespace, changing the storage class, etc.)
## User Stories
### Scenario 1
- Alice has some Pods and part of them have an annotation `{"foo": "bar"}`.
- Alice wishes to restore these Pods to a different cluster without this annotation.
- Alice can use this feature to remove this annotation during restore.
### Scenario 2
- Bob has a Pod with several containers, and one container named nginx has the image `repo1/nginx`.
- Bob wishes to restore this Pod to a different cluster, but the new cluster cannot access repo1, so he pushes the image to repo2.
- Bob can use this feature to update the image of container nginx to `repo2/nginx` during restore.
## Detailed Design
- The design and approach is inspired by kubectl patch command and [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
- New fields `MergePatches` and `StrategicPatches` will be added to the `ResourceModifierRule` struct to support all three patch types.
- Only one of the three patch types can be specified in a single `ResourceModifierRule`.
- Add wildcard support for `groupResource` in `conditions` struct.
- The workflow to create Resource Modifier ConfigMap and reference it in RestoreSpec will remain the same as described in document [Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/restore-resource-modifiers.md).
### How to choose the right patch type
- [JSON Merge Patch](https://datatracker.ietf.org/doc/html/rfc7386) is a naively simple format, with limited usability. Probably it is a good choice if you are building something small, with very simple JSON Schema.
- [JSON Patch](https://datatracker.ietf.org/doc/html/rfc6902) is a more complex format, but it is applicable to any JSON documents. For a comparison of JSON patch and JSON merge patch, see [JSON Patch and JSON Merge Patch](https://erosb.github.io/post/json-patch-vs-merge-patch/).
- Strategic Merge Patch is a Kubernetes defined patch type, mainly used to process resources of type list. You can replace/merge a list, add/remove items from a list by key, change the order of items in a list, etc. Strategic merge patch is not supported for custom resources. For more details, see [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
### New Field MergePatches
MergePatches is a list to specify the merge patches to be applied on the resource. The merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches.
Example of MergePatches in ResourceModifierRule
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: pods
namespaces:
- ns1
mergePatches:
- patchData: |
{
"metadata": {
"annotations": {
"foo": null
}
}
}
```
- The above configmap will apply the Merge Patch to all the pods in namespace ns1 and remove the annotation `foo` from the pods.
- Both json and yaml format are supported for the patchData.
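To make the effect concrete, here is a small standalone sketch applying an equivalent merge patch with the `github.com/evanphx/json-patch` library (the library named in the Implementation section below); the document values are illustrative only:
```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Original object metadata, before restore-time modification.
	original := []byte(`{"metadata":{"annotations":{"foo":"bar","keep":"yes"}}}`)
	// Equivalent of the patchData above: setting "foo" to null removes the key (RFC 7386).
	patch := []byte(`{"metadata":{"annotations":{"foo":null}}}`)

	modified, err := jsonpatch.MergePatch(original, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(modified))
	// e.g. {"metadata":{"annotations":{"keep":"yes"}}}
}
```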
### New Field StrategicPatches
StrategicPatches is a list to specify the strategic merge patches to be applied on the resource. The strategic merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches.
Example of StrategicPatches in ResourceModifierRule
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: pods
resourceNameRegex: "^my-pod$"
namespaces:
- ns1
strategicPatches:
- patchData: |
{
"spec": {
"containers": [
{
"name": "nginx",
"image": "repo2/nginx"
}
]
}
}
```
- The above configmap will apply the Strategic Merge Patch to the pod with name my-pod in namespace ns1 and update the image of container nginx to `repo2/nginx`.
- Both json and yaml format are supported for the patchData.
### Conditional Patches in ALL Patch Types
Since JSON Merge Patch and Strategic Merge Patch do not support conditional patches, we will use the `test` operation of JSON Patch to support conditional patches in all patch types by adding it to `Conditions` struct in `ResourceModifierRule`.
Example of test in conditions
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: persistentvolumeclaims.storage.k8s.io
matches:
- path: "/spec/storageClassName"
value: "premium"
mergePatches:
- patchData: |
{
"metadata": {
"annotations": {
"foo": null
}
}
}
```
- The above configmap will apply the Merge Patch to all the PVCs in all namespaces with storageClassName premium and remove the annotation `foo` from the PVCs.
- You can specify multiple rules in the `matches` list. The patch will be applied only if all the matches are satisfied.
### Wildcard Support for GroupResource
The user can specify a wildcard for groupResource in the conditions' struct. This will allow the user to apply the patches for all the resources of a particular group or all resources in all groups. For example, `*.apps` will apply to all the resources in the `apps` group, `*` will apply to all the resources in all groups.
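As an illustration of the intended matching semantics, here is a minimal sketch using the standard library's `path.Match`; the actual implementation may use a dedicated glob library, as noted in the Implementation section:
```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Hypothetical illustration of wildcard matching for groupResource values.
	patterns := []string{"*.apps", "*", "pods"}
	resources := []string{"deployments.apps", "pods", "persistentvolumeclaims"}
	for _, p := range patterns {
		for _, r := range resources {
			matched, _ := path.Match(p, r)
			fmt.Printf("pattern %-8q resource %-26q match=%v\n", p, r, matched)
		}
	}
}
```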
### Helper Command to Generate Merge Patch and Strategic Merge Patch
The patchData of a Strategic Merge Patch is sometimes a bit complex for users to write. We can provide a helper command to generate the patchData for Strategic Merge Patch. The command will take the original resource and the modified resource as input and generate the patchData.
It can also be used in JSON Merge Patch.
Here is a sample code snippet to achieve this:
```go
package main
import (
"fmt"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func main() {
pod := &corev1.Pod{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "web",
Image: "nginx",
},
},
},
}
newPod := pod.DeepCopy()
patch := client.StrategicMergeFrom(pod)
newPod.Spec.Containers[0].Image = "nginx1"
data, _ := patch.Data(newPod)
fmt.Println(string(data))
// Output:
// {"spec":{"$setElementOrder/containers":[{"name":"web"}],"containers":[{"image":"nginx1","name":"web"}]}}
}
```
## Security Considerations
No security impact.
## Compatibility
Compatible with current Resource Modifiers.
## Implementation
- Use "github.com/evanphx/json-patch" to support JSON Merge Patch.
- Use "k8s.io/apimachinery/pkg/util/strategicpatch" to support Strategic Merge Patch.
- Use glob to support wildcard for `groupResource` in `conditions` struct.
- Use `test` operation of JSON Patch to calculate the `matches` in `conditions` struct.
## Future enhancements
- add a Velero subcommand to generate/validate the patchData for Strategic Merge Patch and JSON Merge Patch.
- add jq support for more complex conditions or patches, to meet the situations that the current conditions or patches can not handle. like [this issue](https://github.com/vmware-tanzu/velero/issues/6344)
## Open Issues
N/A

View File

@@ -65,7 +65,7 @@ This page contains a pre-migration checklist for ensuring a repo migration goes
#### Updating Netlify
The settings for Netflify should remain the same, except that it now needs to be installed in the new repo. The instructions on how to install Netlify on the new repo are here: https://www.netlify.com/docs/github-permissions/.
The settings for Netlify should remain the same, except that it now needs to be installed in the new repo. The instructions on how to install Netlify on the new repo are here: https://www.netlify.com/docs/github-permissions/.
#### Communication strategy

View File

@@ -0,0 +1,122 @@
# Multi-arch Build and Windows Build Support
## Background
At present, Velero images could be built for linux-amd64 and linux-arm64. We need to support other platforms, i.e., windows-amd64.
At present, for the linux image build, we leverage Buildkit's `--platform` option to create the image manifest list in one build call. However, this is a limited approach and doesn't fully support all multi-arch scenarios. Specifically, since the build is done in one call with the same parameters, it is impossible to build images with different configurations (e.g., Windows build requires a different Dockerfile).
At present, Velero by default builds images locally, and no image or manifest is pushed to a registry. However, docker doesn't support multi-arch builds locally, so we need to clarify the behavior of local builds.
## Goals
- Refactor the `make container` process to fully support multi-arch build
- Add Windows build to the existing build process
- Clarify the behavior of local build with multi-arch build capabilities
- Don't change the pattern of the final image tag to be used by users
## Non-Goals
- There may be some workarounds to make the multi-arch image/manifest fully available locally. These workarounds will not be adopted, so local builds always produce single-arch images
## Local Build
For local build, two values of `--output` parameter for `docker buildx build` are supported:
- `docker`: a docker format image is built, but the image is only built for the platform (`<os>/<arch>`) that matches the building env. E.g., when building from a linux-amd64 env, a single manifest of linux-amd64 is created regardless of how the input parameters are configured.
- `tar`: one or more images are built as tarballs according to the input platform (`<os>/<arch>`) parameters. Specifically, one tarball is generated for each platform. The build process is the same as the `Build Separate Manifests` step of `Push Build` as detailed below; only the `--output` parameter differs, as `type=tar;dest=<tarball generated path>`. The tarball is generated to the `_output` folder and named with the platform info, e.g., `_output/velero-main-linux-amd64.tar`.
## Push Build
For push build, the `--output` parameter for `docker buildx build` is always `registry`, and the build proceeds according to the input parameters and creates multi-arch manifest lists.
### Step 1: Build Separate Manifests
Instead of specifying multiple platforms (`<os>/<arch>`) to the `--platform` option, we add multiple `container-%` targets in the Makefile and each target builds one platform respectively.
The goal here is to build multiple manifests through the multiple targets. However, `docker buildx build` by default creates a manifest list even though there is only one element in `--platform`. Therefore, two flags `--provenance=false` and `--sbom=false` will be set additionally to force `docker buildx build` to create manifests.
Each manifest has a unique tag; the OS type and arch are added to the tag, in the pattern `$(REGISTRY)/$(BIN):$(VERSION)-$(OS)-$(ARCH)`. For example, `velero/velero:main-linux-amd64`.
All the created manifests will be pushed to registry so that the all-in-one manifest list could be created.
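For illustration, a sketch of what one `container-%` target's build call might look like is shown below; the actual Makefile recipe, Dockerfile path, and build args may differ:
```
docker buildx build --output=type=registry \
    --platform linux/amd64 \
    --provenance=false --sbom=false \
    -t $(REGISTRY)/$(BIN):$(VERSION)-linux-amd64 \
    -f Dockerfile .
```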
### Step 2: Create All-In-One Manifest List
The next step is to create a manifest list that includes all the created manifests. This is done by the `docker manifest create` command; the tags created and pushed at Step 1 are passed to this command.
A tag is also created for the manifest list, in the pattern `$(REGISTRY)/$(BIN):$(VERSION)`. For example, `velero/velero:main`.
### Step 3: Push All-In-One Manifest List
The created manifest list will be pushed to the registry by the `docker manifest push` command.
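For illustration, assuming manifests for linux-amd64, linux-arm64 and windows-amd64 were pushed at Step 1, Steps 2 and 3 roughly translate to the following commands (exact tags depend on the input parameters):
```
docker manifest create $(REGISTRY)/$(BIN):$(VERSION) \
    $(REGISTRY)/$(BIN):$(VERSION)-linux-amd64 \
    $(REGISTRY)/$(BIN):$(VERSION)-linux-arm64 \
    $(REGISTRY)/$(BIN):$(VERSION)-windows-ltsc2022-amd64
docker manifest push $(REGISTRY)/$(BIN):$(VERSION)
```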
## Input Parameters
Below are the input parameters that are configurable to meet different build purposes during Dev and release cycle:
- BUILD_OUTPUT_TYPE: the type of output for the build, i.e., `docker`, `tar`, or `registry`; `docker` and `tar` are for local builds, while `registry` means push build. Default value is `docker`
- BUILD_OS: which OS types should be built for. Multiple values are accepted, e.g., `linux,windows`. Default value is `linux`
- BUILD_ARCH: which architectures should be built for. Multiple values are accepted, e.g., `amd64,arm64`. Default value is `amd64`
- BUILDX_INSTANCE: an existing buildx instance to be used by the build. Default value is <empty>, which indicates that the build creates a new buildx instance
## Windows Build
Windows container images vary by Windows OS version, e.g., `ltsc2022` for Windows Server 2022 and `1809` for Windows Server 2019. Images for different OS versions should be built separately.
Therefore, separate build targets are added for each OS version, like `container-windows-%`.
For the same reason, a new input parameter is added, `BUILD_WINDOWS_VERSION`. The default value is `ltsc2022`. Windows Server 2022 is the only base image we will deliver officially; Windows Server 2019 is not supported. In the future, we may need to support the Windows Server 2025 base image.
For a local build to tar, the Windows OS version is also added to the name of the tarball, e.g., `_output/velero-main-windows-ltsc2022-amd64.tar`.
At present, Windows container images only support `amd64` as the architecture, so `BUILD_ARCH` is ignored for Windows.
The Windows manifests need to be annotated with OS type, arch, and OS version. This will be done through the `docker manifest annotate` command.
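For illustration, the annotation might look like the following; the `--os-version` value is the Windows build number of the base image and is only a placeholder here:
```
docker manifest annotate --os windows --arch amd64 --os-version <windows build number> \
    $(REGISTRY)/$(BIN):$(VERSION) $(REGISTRY)/$(BIN):$(VERSION)-windows-ltsc2022-amd64
```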
## Use Multi-arch Images
In order to use the images, the manifest list's tag should be provided to the `velero install` command or helm; the individual manifests are covered by the manifest list. At launch time, the container engine will load the right image to the container according to the platform of the running node.
## Build Samples
**Local build to docker**
```
make container
```
The built image could be listed by `docker image ls`.
**Local build for linux-amd64 and windows-amd64 to tar**
```
BUILD_OUTPUT_TYPE=tar BUILD_OS=linux,windows make container
```
Under the `_output` directory, the following files are generated:
```
velero-main-linux-amd64.tar
velero-main-windows-ltsc2022-amd64.tar
```
**Local build for linux-amd64, linux-arm64 and windows-amd64 to tar**
```
BUILD_OUTPUT_TYPE=tar BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```
Under the `_output` directory, the following files are generated:
```
velero-main-linux-amd64.tar
velero-main-linux-arm64.tar
velero-main-windows-ltsc2022-amd64.tar
```
**Push build for linux-amd64 and windows-amd64**
Prerequisite: login to registry, e.g., through `docker login`
```
BUILD_OUTPUT_TYPE=registry REGISTRY=<registry> BUILD_OS=linux,windows make container
```
Nothing is available locally; in the registry, 3 tags are available:
```
velero/velero:main
velero/velero:main-windows-ltsc2022-amd64
velero/velero:main-linux-amd64
```
**Push build for linux-amd64, linux-arm64 and windows-amd64**
Prerequisite: login to registry, e.g., through `docker login`
```
BUILD_OUTPUT_TYPE=registry REGISTRY=<registry> BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```
Nothing is available locally; in the registry, 4 tags are available:
```
velero/velero:main
velero/velero:main-windows-ltsc2022-amd64
velero/velero:main-linux-amd64
velero/velero:main-linux-arm64
```

View File

@@ -0,0 +1,132 @@
# Node-agent Load Affinity Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective of modules introduced in [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Exposer**: Exposer is a module that is introduced in [Volume Snapshot Data Movement Design][2]. Velero uses this module to expose the volume snapshots to Velero node-agent pods or node-agent associated pods so as to complete the data movement from the snapshots.
## Background
Velero node-agent is a daemonset hosting controllers and VGDP modules to complete the concrete work of backups/restores, i.e., PodVolume backup/restore, Volume Snapshot Data Movement backup/restore.
Specifically, node-agent runs DataUpload controllers to watch DataUpload CRs for Volume Snapshot Data Movement backups, so there is one controller instance in each node. One controller instance takes a DataUpload CR and then launches a VGDP instance, which initializes an uploader instance and the backup repository connection, to finish the data transfer. The VGDP instance runs inside a node-agent pod or in a pod associated to the node-agent pod in the same node.
Depending on the data size, data complexity, and resource availability, VGDP may take a long time and consume considerable resources (CPU, memory, network bandwidth, etc.).
Technically, VGDP instances are able to run in any node that allows pod scheduling. On the other hand, users may want to constrain the nodes where VGDP instances run, for various reasons; below are some examples:
- Prevent VGDP instances from running in specific nodes because users have more critical workloads in the nodes
- Constrain VGDP instances to run in specific nodes because these nodes have more resources than others
- Constrain VGDP instances to run in specific nodes because the storage allows volume/snapshot provisions in these nodes only
Therefore, in order to improve the compatibility, it is worthwhile to make the node affinity of VGDP configurable, especially for backups, for which VGDP instances run frequently and centrally.
## Goals
- Define the behaviors of node affinity of VGDP instances in node-agent for volume snapshot data movement backups
- Create a mechanism for users to specify the node affinity of VGDP instances for volume snapshot data movement backups
## Non-Goals
- It would also be beneficial to support VGDP instance affinity for PodVolume backup/restore; however, it is not possible since VGDP instances for PodVolume backup/restore must always run in the node where the source/target pods are created.
- It would also be beneficial to support VGDP instance affinity for data movement restores; however, it is not possible in some cases. For example, when the `volumeBindingMode` in the StorageClass is `WaitForFirstConsumer`, the restore volume must be mounted in the node where the target pod is scheduled, so the VGDP instance must run in the same node. Moreover, considering that restores may not run frequently or centrally, we will not support data movement restores.
- As elaborated in the [Volume Snapshot Data Movement Design][2], the Exposer may take different ways to expose snapshots, i.e., through backup pods (this is the only way supported at present). The implementation section below only considers this approach; if a new expose method is introduced in the future, the definition of the affinity configurations and behaviors should still work, but we may need a new implementation.
## Solution
We will use the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` to host the node affinity configurations.
This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
The node-agent server checks these configurations at startup time and uses them to initialize the related VGDP modules. Therefore, users can edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
Inside the ConfigMap we will add one new kind of configuration as data in the configMap; its name is ```loadAffinity```.
Users may want to set different LoadAffinity configurations according to different conditions (i.e., for different storages represented by StorageClass, CSI driver, etc.), so we define ```loadAffinity``` as an array. This is for extensibility; at present, we don't implement support for multiple configurations, so if there are multiple configurations, we always take the first one in the array.
The data structure is as below:
```go
type Configs struct {
// LoadConcurrency is the config for load concurrency per node.
LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`
// LoadAffinity is the config for data path load affinity.
LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`
}
type LoadAffinity struct {
// NodeSelector specifies the label selector to match nodes
NodeSelector metav1.LabelSelector `json:"nodeSelector"`
}
```
### Affinity
Affinity configuration means allowing VGDP instances to run on the specified nodes. There are two ways to define it:
- It could be defined by `MatchLabels` of `metav1.LabelSelector`. The labels defined in `MatchLabels` imply a `LabelSelectorOpIn` operation by default, so in the current context they are treated as affinity rules.
- It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpIn` or `LabelSelectorOpExists`.
### Anti-affinity
Anti-affinity configuration means preventing VGDP instances from running on the specified nodes. Below is the way to define it:
- It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpNotIn` or `LabelSelectorOpDoesNotExist`.
### Sample
A sample of the ConfigMap is as below:
```json
{
"loadAffinity": [
{
"nodeSelector": {
"matchLabels": {
"beta.kubernetes.io/instance-type": "Standard_B4ms"
},
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"values": [
"node-1",
"node-2",
"node-3"
],
"operator": "In"
},
{
"key": "xxx/critial-workload",
"operator": "DoesNotExist"
}
]
}
}
]
}
```
This sample showcases two affinity configurations:
- matchLabels: VGDP instances will run only on nodes with label key `beta.kubernetes.io/instance-type` and value `Standard_B4ms`
- matchExpressions: VGDP instances will run on nodes `node-1`, `node-2` and `node-3` (selected by the `kubernetes.io/hostname` label)
This sample showcases one anti-affinity configuration:
- matchExpressions: VGDP instances will not run on nodes with label key `xxx/critical-workload`
To create the configMap, users need to save something like the above sample to a json file and then run below command:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
### Implementation
As mentioned in the [Volume Snapshot Data Movement Design][2], the exposer decides where to launch the VGDP instances. At present, for volume snapshot data movement backups, the exposer creates backupPods and the VGDP instances will be initiated in the nodes where backupPods are scheduled. So the loadAffinity will be translated (from `metav1.LabelSelector` to `corev1.Affinity`) and set to the backupPods.
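As an illustration only (not the actual Velero code), the translation could look like the sketch below; the package and function names are hypothetical:
```go
package exposer

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// toNodeAffinity translates the configured label selector into a required node
// affinity for the backupPod. The four LabelSelector operators (In, NotIn,
// Exists, DoesNotExist) are a subset of the NodeSelector operators, so a direct
// cast of the operator string is safe.
func toNodeAffinity(selector metav1.LabelSelector) *corev1.Affinity {
	reqs := []corev1.NodeSelectorRequirement{}

	// matchLabels entries are treated as "In" requirements.
	for k, v := range selector.MatchLabels {
		reqs = append(reqs, corev1.NodeSelectorRequirement{
			Key:      k,
			Operator: corev1.NodeSelectorOpIn,
			Values:   []string{v},
		})
	}

	// matchExpressions map one-to-one to node selector requirements.
	for _, exp := range selector.MatchExpressions {
		reqs = append(reqs, corev1.NodeSelectorRequirement{
			Key:      exp.Key,
			Operator: corev1.NodeSelectorOperator(exp.Operator),
			Values:   exp.Values,
		})
	}

	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{MatchExpressions: reqs}},
			},
		},
	}
}
```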
It is possible that node-agent pods, as a daemonset, don't run on every worker node; users can achieve this by specifying `nodeSelector` or `nodeAffinity` in the node-agent daemonset spec. On the other hand, at present, VGDP instances must be assigned to nodes where node-agent pods are running. Therefore, if there is any node selection for node-agent pods, users must take it into account in the load affinity configuration, so as to guarantee that VGDP instances are always assigned to nodes where node-agent pods are available. This is left to users; we don't inherit any node selection configuration from the node-agent daemonset, because the daemonset scheduler works differently from the plain pod scheduler, and simply inheriting all the configurations may cause unexpected results for backupPod scheduling.
Otherwise, if a backupPod is scheduled to a node where the node-agent pod is absent, the corresponding DataUpload CR will stay in the `Accepted` phase until the prepare timeout (by default 30min).
At present, as part of the expose operations, the exposer creates a volume, represented by a backupPVC, from the snapshot. The backupPVC uses the same storageClass as the source volume. If the `volumeBindingMode` in the storageClass is `Immediate`, the volume is immediately allocated from the underlying storage without waiting for the backupPod. On the other hand, the loadAffinity is set to the backupPod's affinity. If the backupPod is scheduled to a node where the snapshot volume is not accessible, e.g., because of storage topologies, the backupPod won't get into the Running state; consequently, the data movement won't complete.
Once this problem happens, the backupPod stays in the `Pending` phase, and the corresponding DataUpload CR stays in the `Accepted` phase until the prepare timeout (by default 30min). Below is an example of the backupPod's status when the problem happens:
```
status:
conditions:
- lastProbeTime: null
message: '0/2 nodes are available: 1 node(s) didn''t match Pod''s node affinity/selector,
1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available:
2 Preemption is not helpful for scheduling..'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
```
On the other hand, the backupPod is deleted after the prepare timeout, so there is no way to tell whether the cause is one of the above problems or something else.
To help the troubleshooting, we can add a diagnostic mechanism to discover the status of the backupPod and the node-agent on the same node before deleting it as a result of the prepare timeout.
[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: volume-snapshot-data-movement/volume-snapshot-data-movement.md

View File

@@ -0,0 +1,131 @@
# Node-agent Concurrency Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective of modules introduced in [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
## Background
Velero node-agent is a daemonset hosting controllers and VGDP modules to complete the concrete work of backups/restores, i.e., PodVolume backup/restore, Volume Snapshot Data Movement backup/restore.
For example, node-agent runs DataUpload controllers to watch DataUpload CRs for Volume Snapshot Data Movement backups, so there is one controller instance in each node. One controller instance takes a DataUpload CR and then launches a VGDP instance, which initializes an uploader instance and the backup repository connection, to finish the data transfer. The VGDP instance runs inside the node-agent pod or in a pod associated to the node-agent pod in the same node.
Depending on the data size, data complexity, and resource availability, VGDP may take a long time and consume considerable resources (CPU, memory, network bandwidth, etc.).
Technically, VGDP instances are able to run concurrently regardless of the requesters. For example, a VGDP instance for a PodVolume backup could run in parallel with another VGDP instance for a DataUpload. The two VGDP instances then share the same resources if they are running in the same node.
Therefore, in order to gain optimized performance with limited resources, it is worthwhile to make the number of concurrent VGDP instances per node configurable. When the resources in the nodes are sufficient, users can set a large concurrent number so as to reduce the backup/restore time; otherwise, the concurrency should be reduced, or the backup/restore may encounter problems, e.g., time lagging, hangs, or OOM kills.
## Goals
- Define the behaviors of concurrent VGDP instances in node-agent
- Create a mechanism for users to specify the concurrent number of VGDP per node
## Non-Goals
- VGDP instances from different nodes always run concurrently since in most common cases the resources are isolated. For special cases where some resources are shared across nodes, there is no support at present
- In practice, restores run in prioritized scenarios, e.g., disaster recovery. However, the current design doesn't consider this difference; a VGDP instance for a restore is blocked if it reaches the limit of the concurrency, even if the ones blocking it are for backups. If users do meet problems here, they should consider stopping the backups first
- Sometimes users want to totally block backups/restores from running in a specific node; this is out of the scope of the current design. To achieve this, more modules need to be considered (i.e., exposers of data movers); simply blocking VGDP (e.g., by setting its concurrent number to 0) doesn't work. E.g., for a fs backup, the VGDP instance must run in the node where the source pod is running; if we simply block the VGDP instance, the PodVolumeBackup CR is still submitted but never processed
## Solution
We introduce a ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` for users to specify the node-agent related configurations. This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
The node-agent server checks these configurations at startup time and uses them to initialize the related VGDP modules. Therefore, users can edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
The ConfigMap may be used for other purposes of configuring node-agent in the future; at present, there is only one kind of configuration as data in the configMap, and its name is ```loadConcurrency```.
The data structure is as below:
```go
type Configs struct {
// LoadConcurrency is the config for load concurrency per node.
LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`
}
type LoadConcurrency struct {
// GlobalConfig specifies the concurrency number to all nodes for which per-node config is not specified
GlobalConfig int `json:"globalConfig,omitempty"`
// PerNodeConfig specifies the concurrency number to nodes matched by rules
PerNodeConfig []RuledConfigs `json:"perNodeConfig,omitempty"`
}
type RuledConfigs struct {
// NodeSelector specifies the label selector to match nodes
NodeSelector metav1.LabelSelector `json:"nodeSelector"`
// Number specifies the number value associated to the matched nodes
Number int `json:"number"`
}
```
### Global concurrent number
We allow users to specify a concurrent number that will be applied to all nodes if the per-node number is not specified. This number is set through ```globalConfig```.
The number starts from 1, which means there is no concurrency and only one instance of VGDP is allowed. There is no upper limit.
If this number is not specified or not valid, a hard-coded default value will be used; the value is set to 1.
### Per-node concurrent number
We allow users to specify different concurrent numbers per node; for example, users can set 3 concurrent instances in Node-1, 2 instances in Node-2 and 1 instance in Node-3. This is for the below considerations:
- The resources may be different among nodes. Users could then specify a smaller concurrent number for nodes with fewer resources and a larger number for the ones with more resources
- Help users to isolate critical environments. Users may run some critical workloads in some specified nodes; since VGDP instances may take large resource consumption, users may want to run fewer instances in the nodes with critical workloads
The range of the Per-node concurrent number is the same as the Global concurrent number.
The Per-node concurrent number takes precedence over the Global concurrent number, so it will overwrite the Global concurrent number for that node.
The Per-node concurrent number is implemented through the ```perNodeConfig``` field.
```perNodeConfig``` is a list of ```RuledConfigs```, each item of which matches one or more nodes by label selectors and specifies the concurrent number for the matched nodes. This means the nodes are identified by labels.
For example, ```perNodeConfig``` could have the below elements:
```
"nodeSelector: kubernetes.io/hostname=node1; number: 3"
"nodeSelector: beta.kubernetes.io/instance-type=Standard_B4ms; number: 5"
```
The first element means the node with host name ```node1``` gets the Per-node concurrent number of 3.
The second element means all the nodes with label ```beta.kubernetes.io/instance-type``` of value ```Standard_B4ms``` get the Per-node concurrent number of 5.
At least one node is expected to match the label specified by a ```RuledConfigs``` element (rule). If no node has this label, the Per-node rule has no effect.
If one node falls into more than one rule, e.g., if node1 also has the label ```beta.kubernetes.io/instance-type=Standard_B4ms```, the smallest number (3) will be used.
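As an illustration only (not the actual Velero code), resolving the effective number for one node could look like the sketch below; the package and function names are hypothetical:
```go
package nodeagent

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// effectiveConcurrency resolves the concurrent number for one node: per-node
// rules take precedence over the global number, and the smallest matching
// per-node number wins when several rules match the same node.
func effectiveConcurrency(cfg LoadConcurrency, nodeLabels labels.Set) int {
	number := cfg.GlobalConfig
	if number < 1 {
		number = 1 // hard-coded default when the global number is missing or invalid
	}

	matched := 0
	for _, rule := range cfg.PerNodeConfig {
		selector, err := metav1.LabelSelectorAsSelector(&rule.NodeSelector)
		if err != nil || !selector.Matches(nodeLabels) {
			continue
		}
		if rule.Number >= 1 && (matched == 0 || rule.Number < matched) {
			matched = rule.Number
		}
	}

	if matched > 0 {
		return matched
	}
	return number
}
```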
### Sample
A sample of the ConfigMap is as below:
```json
{
"loadConcurrency": {
"globalConfig": 2,
"perNodeConfig": [
{
"nodeSelector": {
"matchLabels": {
"kubernetes.io/hostname": "node1"
}
},
"number": 3
},
{
"nodeSelector": {
"matchLabels": {
"beta.kubernetes.io/instance-type": "Standard_B4ms"
}
},
"number": 5
}
]
}
}
```
To create the configMap, users need to save something like the above sample to a json file and then run below command:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
### Global data path manager
As for the code implementation, the data path manager maintains the total number of running VGDP instances and ensures the limit is not exceeded. At present, there is one data path manager instance per controller; as a result, the concurrent numbers are calculated separately for each controller. This doesn't help to limit the concurrency among different requesters.
Therefore, we need to create one global data path manager instance server-wide and pass it to the different controllers. The instance will be created at node-agent server startup.
The concurrent number is required to initiate a data path manager; the number comes from either the Per-node concurrent number or the Global concurrent number.
Below are some prototypes related to data path manager:
```go
func NewManager(cocurrentNum int) *Manager
func (m *Manager) CreateFileSystemBR(jobName string, requestorType string, ctx context.Context, client client.Client, namespace string, callbacks Callbacks, log logrus.FieldLogger) (AsyncBR, error)
```
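As an illustration of the gating behavior only (not the actual Velero implementation), a single server-wide manager could use a buffered channel as a semaphore:
```go
// Manager gates the number of VGDP instances that may run concurrently on a node.
type Manager struct {
	slots chan struct{}
}

func NewManager(concurrentNum int) *Manager {
	return &Manager{slots: make(chan struct{}, concurrentNum)}
}

// TryAcquire reserves a slot for a new VGDP instance; it returns false when the
// per-node concurrent number has been reached and the request must wait.
func (m *Manager) TryAcquire() bool {
	select {
	case m.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

// Release frees a slot after a VGDP instance completes.
func (m *Manager) Release() {
	<-m.slots
}
```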
[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md

View File

@@ -241,7 +241,7 @@ In cases where the methods signatures remain the same, the adaptation layer will
Examples where an adaptation may be safe:
- A method signature is being changed to add a new parameter but the parameter could be optional (for example, adding a context parameter). The adaptation could call through to the method provided in the previous version but omit the parameter.
- A method signature is being changed to remove a parameter, but it is safe to pass a default value to the previous version. The adaptation could call through to the method provided in the previous version but use a default value for the parameter.
- A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to [wait for additional items to be ready](https://github.com/vmware-tanzu/velero/blob/main/design/wait-for-additional-items.md)). The adaptation would return a value which allows the existing behaviour to be performed.
- A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to [wait for additional items to be ready](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/wait-for-additional-items.md)). The adaptation would return a value which allows the existing behaviour to be performed.
- A method is being deleted as it is no longer used. The adaptation would call through to any methods which are still included but would omit the deleted method in the adaptation.
Examples where an adaptation may not be safe:

View File

@@ -0,0 +1,186 @@
# PersistentVolume backup information design
## Abstract
Create a new metadata file in the backup repository's backup name sub-directory to store the PVC and PV information included in the backup. The information includes the way the PVC and PV data is backed up, snapshot information, and status. The needed snapshot status can also be recorded there, but the Velero-Native snapshot plugin doesn't provide a way to get the snapshot size from the API, so it's possible that not all snapshot size information is available.
This new additional metadata file is needed when:
* Get a summary of the backup's PVC and PV information, including how the data in them is backed up, or whether the data in them is skipped from backup.
* Find out how the PVC and PV should be restored in the restore process.
* Retrieve the PV's snapshot information for backup.
## Background
There is already a [PR](https://github.com/vmware-tanzu/velero/pull/6496) to track the skipped PVC in the backup. This design will depend on it and go further to get a summary of PVC and PV information, then persist into a metadata file in the backup repository.
In the restore process, the Velero server needs to decide how the PV resource should be restored according to how the PV is backed up. The current logic is to check whether it's backed up by a Velero-native snapshot, by file-system backup, or whether it has `DeletionPolicy` set to `Delete`.
The checks are based on the backup-generated PVBs or Snapshots. There is no generic way to find this information, and the CSI backup and Snapshot data movement backup are not covered.
Another thing that needs noticing is when describing the backup, there is no generic way to find the PV's snapshot information.
## Goals
- Create a new metadata file to store the backup's PVC and PV information and the volume data backing-up method. The file can be used to let downstream consumers generate a summary.
- Create a generic way to let the Velero server know how the PV resources are backed up.
- Create a generic way to let the Velero server find the PV corresponding snapshot information.
## Non Goals
- Unify how to get snapshot size information for all PV backing-up methods, and all other currently not ready PVs' information.
## High-Level Design
Create _backup-name_-volumes-info.json metadata file in the backup's repository. This file will be encoded to contain all the PVC and PV information included in the backup. The information covers whether the PV or PVC's data is skipped during backup, how its data is backed up, and the backed-up detail information.
Please notice that the new metadata file includes all skipped volume information. This is used to address [the second phase needs of skipped volumes information](https://github.com/vmware-tanzu/velero/issues/5834#issuecomment-1526624211).
The `restoreItem` function can decode the _backup-name_-volumes-info.json file to determine how to handle the PV resource.
## Detailed Design
### The VolumeInfo structure
The _backup-name_-volumes-info.json file contains an array of the structure `VolumeInfo`.
``` golang
type VolumeInfo struct {
PVCName string // The PVC's name.
PVCNamespace string // The PVC's namespace.
PVName string // The PV name.
BackupMethod string // The way the volume data is backed up. The valid value includes `VeleroNativeSnapshot`, `PodVolumeBackup` and `CSISnapshot`.
SnapshotDataMoved bool // Whether the volume's snapshot data is moved to specified storage.
Skipped bool // Whether the volume is skipped in this backup.
SkippedReason string // The reason the volume is skipped in the backup.
StartTimestamp *metav1.Time // Snapshot start timestamp.
OperationID string // The Async Operation's ID.
CSISnapshotInfo CSISnapshotInfo
SnapshotDataMovementInfo SnapshotDataMovementInfo
NativeSnapshotInfo VeleroNativeSnapshotInfo
PVBInfo PodVolumeBackupInfo
PVInfo PVInfo
}
// CSISnapshotInfo is used for displaying the CSI snapshot status
type CSISnapshotInfo struct {
SnapshotHandle string // It's the storage provider's snapshot ID for CSI.
Size int64 // The snapshot corresponding volume size.
Driver string // The name of the CSI driver.
VSCName string // The name of the VolumeSnapshotContent.
}
// SnapshotDataMovementInfo is used for displaying the snapshot data mover status.
type SnapshotDataMovementInfo struct {
DataMover string // The data mover used by the backup. The valid values are `velero` and `` (empty, which equals `velero`).
UploaderType string // The type of the uploader that uploads the snapshot data. The valid values are `kopia` and `restic`.
RetainedSnapshot string // The name or ID of the snapshot associated object (SAO). SAO is used to support local snapshots for the snapshot data mover, e.g., it could be a VolumeSnapshot for CSI snapshot data movement.
SnapshotHandle string // It's the filesystem repository's snapshot ID.
}
// VeleroNativeSnapshotInfo is used for displaying the Velero native snapshot status.
type VeleroNativeSnapshotInfo struct {
SnapshotHandle string // It's the storage provider's snapshot ID for the Velero-native snapshot.
VolumeType string // The cloud provider snapshot volume type.
VolumeAZ string // The cloud provider snapshot volume's availability zones.
IOPS string // The cloud provider snapshot volume's IOPS.
}
// PodVolumeBackupInfo is used for displaying the PodVolumeBackup snapshot status.
type PodVolumeBackupInfo struct {
SnapshotHandle string // It's the file-system uploader's snapshot ID for PodVolumeBackup.
Size int64 // The snapshot corresponding volume size.
UploaderType string // The type of the uploader that uploads the data. The valid values are `kopia` and `restic`.
VolumeName string // The PVC's corresponding volume name used by Pod: https://github.com/kubernetes/kubernetes/blob/e4b74dd12fa8cb63c174091d5536a10b8ec19d34/pkg/apis/core/types.go#L48
PodName string // The Pod name mounting this PVC.
PodNamespace string // The Pod namespace.
NodeName string // The PVB-taken k8s node's name.
}
// PVInfo is used to store some PV information modified after creation.
// Those information are lost after PV recreation.
type PVInfo struct {
ReclaimPolicy string // ReclaimPolicy of PV. It could be different from the referenced StorageClass.
Labels map[string]string // The PV's labels should be kept after recreation.
}
```
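For illustration, one entry of the metadata file might look like the following; the serialized field names shown here are assumptions and depend on the final JSON tags:
```json
{
  "pvcName": "nginx-logs",
  "pvcNamespace": "nginx-app",
  "pvName": "pvc-e320d75b-a788-41a3-b6ba-267a553efa5e",
  "backupMethod": "CSISnapshot",
  "snapshotDataMoved": false,
  "skipped": false,
  "csiSnapshotInfo": {
    "snapshotHandle": "snap-01a3b21a5e9f85528",
    "size": 2147483648,
    "driver": "ebs.csi.aws.com",
    "vscName": "velero-nginx-logs-abcde"
  }
}
```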
### How the VolumeInfo array is generated.
The function `persistBackup` has `backup *pkgbackup.Request` in parameters.
From it, the `VolumeSnapshots`, `PodVolumeBackups`, `CSISnapshots`, `itemOperationsList`, and `SkippedPVTracker` can be read. All of them will be iterated and merged into the `VolumeInfo` array, and then persisted into the backup repository in the function `persistBackup`.
Please notice that changes that happen in async operations are not reflected in the new metadata file. The file only covers the volume changes that happen within the Velero server process scope.
A new method is added to BackupStore to download the VolumeInfo metadata file.
Uploading the metadata file is covered in the existing `PutBackup` method.
``` golang
type BackupStore interface {
...
GetVolumeInfos(name string) ([]*VolumeInfo, error)
...
}
```
### How the VolumeInfo array is used.
#### Generate the PVC backed-up information summary
The downstream tools can use this VolumeInfo array to format and display their volume information. This is not in the scope of this feature.
#### Retrieve volume backed-up information for `velero backup describe` command
The `velero backup describe` command can also use this VolumeInfo array structure to display the volume information. The snapshot data mover volume should use this structure first; then the Velero native snapshot, CSI snapshot, and PodVolumeBackup can also use this structure. The detailed implementation is also not in this feature's scope.
#### Let restore know how to restore the PV
The function `restoreItem` determines whether to restore the PV resource by checking it in the Velero native snapshots list, the PodVolumeBackup list, and its DeletionPolicy. This logic is still kept. It will be used when the new `VolumeInfo` metadata cannot be found, to support backward compatibility.
``` golang
if groupResource == kuberesource.PersistentVolumes {
	switch {
	case hasSnapshot(name, ctx.volumeSnapshots):
		...
	case hasPodVolumeBackup(obj, ctx):
		...
	case hasDeleteReclaimPolicy(obj.Object):
		...
	default:
		...
```
After introducing the VolumeInfo array, the following logic will be added.
``` golang
if groupResource == kuberesource.PersistentVolumes {
	volumeInfo := GetVolumeInfo(pvName)
	switch volumeInfo.BackupMethod {
	case VeleroNativeSnapshot:
		...
	case PodVolumeBackup:
		...
	case CSISnapshot:
		...
	default:
		// Need to check whether the volume's snapshot data is moved by the snapshot data mover.
		if volumeInfo.SnapshotDataMoved { ... }
		// Check whether the Velero server should restore the PV depending on the DeletionPolicy setting.
		if volumeInfo.Skipped { ... }
```
### How the VolumeInfo metadata file is deleted
_backup-name_-volumes-info.json file is deleted during backup deletion.
## Alternatives Considered
The restore process needs more information about how the PVs are backed up to determine whether a PV should be restored. The released branches also need a similar function, but backporting a new feature into previous releases may not be a good idea, so per [Anshul Ahuja's suggestion](https://github.com/vmware-tanzu/velero/issues/6595#issuecomment-1731081580), more cases are added here to support checking PVs backed up by the CSI plugin and the CSI snapshot data mover: https://github.com/vmware-tanzu/velero/blob/5ff5073cc3f364bafcfbd26755e2a92af68ba180/pkg/restore/restore.go#L1206-L1324.
## Security Considerations
There should be no security impact introduced by this design.
## Compatibility
After this design is implemented, there should be no impact on the existing [skipped PVC summary feature](https://github.com/vmware-tanzu/velero/pull/6496).
To support backups from older versions, which don't have the VolumeInfo metadata file, the old logic, which checks the Velero native snapshots list, the PodVolumeBackup list, and the PVC DeletionPolicy, is still kept, and logic supporting CSI snapshots and the snapshot data mover will be added too.
## Implementation
This will be implemented in the Velero v1.13 development cycle.
## Open Issues
There are no open issues identified by now.

View File

@@ -0,0 +1,143 @@
# Volume information for restore design
## Background
Velero has different ways to handle data in the volumes during restore. Users want more clarity in terms of how the volumes are handled in the restore process, via either the Velero CLI or other downstream products which consume Velero.
## Goals
- Create new metadata to store the information of the restored volumes, which will have the same life-cycle as the restore CR.
- Consume the metadata in the velero CLI to enable it to display more details for volumes in the output of `velero restore describe --details`
## Non Goals
- Provide finer grained control of the volume restore process. The focus of the design is to enable displaying more details.
- Persist additional metadata like podvolumes, datadownloads, etc. to the restore folder in the backup-location.
## Design
### Structure of the restore volume info
The restore volume info will be stored in a file named like `${restore_name}-vol-info.json`. The content of the file will be a list of volume info objects, each of which maps to a volume that is restored and contains information like the name of the restored PV/PVC, the restore method, and related objects providing details depending on the way it's restored. It will look like this:
```
[
{
"pvcName": "nginx-logs-2",
"pvcNamespace": "nginx-app-restore",
"pvName": "pvc-e320d75b-a788-41a3-b6ba-267a553efa5e",
"restoreMethod": "PodVolumeRestore",
"snapshotDataMoved": false,
"pvrInfo": {
"snapshotHandle": "81973157c3a945a5229285c931b02c68",
"uploaderType": "kopia",
"volumeName": "nginx-logs",
"podName": "nginx-deployment-79b56c644b-mjdhp",
"podNamespace": "nginx-app-restore"
}
},
{
"pvcName": "nginx-logs-1",
"pvcNamespace": "nginx-app-restore",
"pvName": "pvc-98c151f4-df47-4980-ba6d-470842f652cc",
"restoreMethod": "CSISnapshot",
"snapshotDataMoved": false,
"csiSnapshotInfo": {
"snapshotHandle": "snap-01a3b21a5e9f85528",
"size": 2147483648,
"driver": "ebs.csi.aws.com",
"vscName": "velero-velero-nginx-logs-1-jxmbg-hx9x5"
}
}
......
]
```
Each field will have the same meaning as the corresponding field in the backup volume info. It will not have the fields
that were introduced to help with the backup process, like `pvInfo`, `dataupload` etc.
### How the restore volume info is generated
Two steps are involved in generating the restore volume info: the first is "collection", which gathers the information for the restoration of the volumes; the second is "generation", which iterates through the data collected in the first step and generates the volume info list as described above.
Unlike backup, the CR objects created during the restore process will not be persisted to the backup storage location. Therefore, to gather the information needed to generate volume information, we either need to collect the CRs in the middle of the restore process, or retrieve the objects based on the `resource-list.json` of the restore via the API server.
The information to be collected are:
- **PV/PVC mapping relationship:** It will be collected via the `restore-resource-list.json`, b/c at the time the json is ready, all
PVCs and PVs are already created.
- **Native snapshot information:** It will be collected in the restore workflow when each snapshot is restored.
- **podvolumerestore CRs:** It will be collected in the restore workflow after each pvr is created.
- **volumesnapshot CRs for CSI snapshot:** It will be collected in the step of collecting PVC info, by reading the `dataSource`
field in the spec of the PVC.
- **datadownload CRs:** It will be collected in the phase of collecting PVC info, by querying the API-server to list the datadownload
CRs labeled with the restore name.
After the collection step, the generation step is relatively straightforward, as we have all the information needed in the data structures.
The whole collection and generation steps will be done in a "best-effort" manner, i.e., if there are any failures we will only log the error in the restore log, rather than failing the whole restore process. We will not put these errors or warnings into the `result.json`, b/c they won't impact the restored resources.
Depending on the number of restored PVCs, the "collection" step may involve many API calls, but it's considered acceptable b/c at that time the resources are already created, so the actual RTO is not impacted. By using the client of controller-runtime we can make the collection step more efficient by using the cache of the API server. We may consider making improvements if we observe performance issues, like using multiple go-routines in the collection.
### Implementation
Because the restore volume info shares the same data structures with the backup volume info, we will refactor the code in
package `internal/volume` to make the sub-components in backup volume info shared by both backup and restore volume info.
We'll introduce a struct called `RestoreVolumeInfoTracker` which encapsulates the logic of collecting and generating the restore volume info:
```
// RestoreVolumeInfoTracker is used to track the volume information during restore.
// It is used to generate the RestoreVolumeInfo array.
type RestoreVolumeInfoTracker struct {
*sync.Mutex
restore *velerov1api.Restore
log logrus.FieldLogger
client kbclient.Client
pvPvc *pvcPvMap
// map of PV name to the NativeSnapshotInfo from which the PV is restored
pvNativeSnapshotMap map[string]NativeSnapshotInfo
// map of PV name to the CSISnapshot object from which the PV is restored
pvCSISnapshotMap map[string]snapshotv1api.VolumeSnapshot
datadownloadList *velerov2alpha1.DataDownloadList
pvrs []*velerov1api.PodVolumeRestore
}
```
The `RestoreVolumeInfoTracker` will be created when the restore request is initialized, and it will be passed to the `restoreContext`
and carried over the whole restore process.
The `client` in this struct is used to query the resources in the restored namespaces, while the current client in the restore reconciler only watches the resources in the namespace where velero is installed. Therefore, we need to introduce the `CrClient`, which has the same life-cycle as the velero server, to the restore reconciler, because this is the client that watches all the resources on the cluster.
In addition to that, we will make small changes in the restore workflow to collect the information needed. We'll make the changes unintrusive and make sure not to change the logic of the restore, to avoid breaking changes or regressions.
We'll also introduce routine changes in the package `pkg/persistence` to persist the restore volume info to the backup storage location.
Last but not least, the `velero restore describe --details` will be updated to display the volume info in the output.
## Alternatives Considered
There used to be a suggestion that, to provide more details about volumes, we could query the `backup-vol-info.json` with the resource identifiers in `restore-resource-list.json`. This will not work when there are resource modifiers involved in the restore process, which may change the metadata of the PVC/PV. In addition, we may add more detailed restore-specific information about the volumes that is not available in the `backup-vol-info.json`. Therefore, the `restore-vol-info.json` is a better approach.
## Security Considerations
There should be no security impact introduced by this design.
## Compatibility
The restore volume info will be consumed by the Velero CLI and downstream products for displaying details. So the functionality of backup and restore will not be impacted for restores created by older versions of Velero which do not have the restore volume info metadata. The client should properly handle the case when the restore volume info does not exist.
The data structures referenced by the volume info are shared between both restore and backup and are not versioned, so in the future we must make sure there will only be incremental changes to the metadata, such that no breaking change will be introduced to the client.
## Open Issues
https://github.com/vmware-tanzu/velero/issues/7546
https://github.com/vmware-tanzu/velero/issues/6478

View File

@@ -0,0 +1,311 @@
# Repository maintenance job configuration design
## Abstract
Add this design so that the repository maintenance job can read configuration from a dedicated ConfigMap and the Job's necessary parts are configurable, e.g. `PodSpec.Affinity` and `PodSpec.Resources`.
## Background
Repository maintenance is split from the Velero server to a k8s Job in v1.14 by design [repository maintenance job](Implemented/repository-maintenance.md).
The repository maintenance Job configuration was read from the Velero server CLI parameters, and it inherited most of the Velero server Deployment's PodSpec to fill un-configured fields.
This design introduces a new way to let the user customize the repository maintenance behavior instead of inheriting from the Velero server Deployment or reading from `velero server` CLI parameters.
The configurations added in this design include the resource limitations and node selection.
It's possible new configurations will be introduced in future releases based on this design.
For the node selection, the repository maintenance Job also inherited from the Velero server deployment before, but the Job may last for a while and consume non-negligible resources, especially memory.
Users need to choose which k8s node to run the maintenance Job on.
This design reuses the data structure introduced by design [node-agent affinity configuration](Implemented/node-agent-affinity.md) so that the repository maintenance job can choose which nodes to run on.
## Goals
- Unify the repository maintenance Job configuration in one place.
- Let users choose which nodes the repository maintenance Job runs on.
## Non Goals
- There was an [issue](https://github.com/vmware-tanzu/velero/issues/7911) requiring the whole Job's PodSpec to be configurable. That's not in the scope of this design.
- Please notice this new configuration is dedicated to the repository maintenance. The repository's own configuration is not covered.
## Compatibility
v1.14 uses the `velero server` CLI's parameter to pass the repository maintenance job configuration.
In v1.15, those parameters are still kept, including `--maintenance-job-cpu-request`, `--maintenance-job-mem-request`, `--maintenance-job-cpu-limit`, `--maintenance-job-mem-limit`, and `--keep-latest-maintenance-jobs`.
But the parameters read from the ConfigMap specified by `velero server` CLI parameter `--repo-maintenance-job-configmap` introduced by this design have a higher priority.
If `--repo-maintenance-job-configmap` is not specified, then the `velero server` parameters are used if provided.
If the `velero server` parameters are not specified either, then the default values are used.
* `--keep-latest-maintenance-jobs` default value is 3.
* `--maintenance-job-cpu-request` default value is 0.
* `--maintenance-job-mem-request` default value is 0.
* `--maintenance-job-cpu-limit` default value is 0.
* `--maintenance-job-mem-limit` default value is 0.
## Deprecation
Propose to deprecate the `velero server` parameters `--maintenance-job-cpu-request`, `--maintenance-job-mem-request`, `--maintenance-job-cpu-limit`, `--maintenance-job-mem-limit`, and `--keep-latest-maintenance-jobs` in release-1.15.
That means those parameters will be deleted in release-1.17.
After deletion, those resources-related parameters are replaced by the ConfigMap specified by `velero server` CLI's parameter `--repo-maintenance-job-configmap`.
`--keep-latest-maintenance-jobs` is deleted from `velero server` CLI. It turns into a non-configurable internal parameter, and its value is 3.
Please check [issue 7923](https://github.com/vmware-tanzu/velero/issues/7923) for more information on why this parameter is deleted.
## Design
This design introduces a new ConfigMap specified by `velero server` CLI parameter `--repo-maintenance-job-configmap` as the source of the repository maintenance job configuration. The specified ConfigMap is read from the namespace where Velero is installed.
If the ConfigMap doesn't exist, the internal default values are used.
Example of using the parameter `--repo-maintenance-job-configmap`:
```
velero server \
...
--repo-maintenance-job-configmap repo-job-config
...
```
**Notice**
* Velero doesn't own this ConfigMap. If the user wants to customize the repository maintenance job, the user needs to create this ConfigMap.
* Velero reads this ConfigMap content when starting a new repository maintenance job, so a ConfigMap change will not take effect until the next created job.
### Structure
The data structure is as below:
```go
type Configs struct {
// LoadAffinity is the config for data path load affinity.
LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`
// PodResources is the config for the CPU and memory resources setting.
PodResources *kube.PodResources `json:"podResources,omitempty"`
}
type LoadAffinity struct {
// NodeSelector specifies the label selector to match nodes
NodeSelector metav1.LabelSelector `json:"nodeSelector"`
}
type PodResources struct {
CPURequest string `json:"cpuRequest,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
}
```
The ConfigMap content is a map.
If there is a key named `global` in the map, its value is applied to all BackupRepository maintenance jobs that cannot find their own specific configuration in the ConfigMap.
The other keys in the map are the combination of three elements of a BackupRepository:
* The namespace in which BackupRepository backs up volume data.
* The BackupRepository referenced BackupStorageLocation's name.
* The BackupRepository's type. Possible values are `kopia` and `restic`.
Those three keys can identify a [unique BackupRepository](https://github.com/vmware-tanzu/velero/blob/2fc6300f2239f250b40b0488c35feae59520f2d3/pkg/repository/backup_repo_op.go#L32-L37).
If there is a key matching a BackupRepository, the key's value is applied to that BackupRepository's maintenance jobs.
In this way, it's possible to let users configure the jobs before the BackupRepository is created.
This is especially convenient for administrators configuring during the Velero installation.
For example, the following BackupRepository's key should be `test-default-kopia`.
``` yaml
- apiVersion: velero.io/v1
kind: BackupRepository
metadata:
generateName: test-default-kopia-
labels:
velero.io/repository-type: kopia
velero.io/storage-location: default
velero.io/volume-namespace: test
name: test-default-kopia-kgt6n
namespace: velero
spec:
backupStorageLocation: default
maintenanceFrequency: 1h0m0s
repositoryType: kopia
resticIdentifier: gs:jxun:/restic/test
volumeNamespace: test
```
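As an illustration only (a hypothetical helper, not the actual Velero code), the key could be composed from the BackupRepository spec like this:
```go
// repoJobConfigKey composes the ConfigMap key for a BackupRepository:
// <volume namespace>-<BSL name>-<repository type>, e.g. "test-default-kopia".
func repoJobConfigKey(repo *velerov1api.BackupRepository) string {
	return repo.Spec.VolumeNamespace + "-" + repo.Spec.BackupStorageLocation + "-" + repo.Spec.RepositoryType
}
```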
The `LoadAffinity` structure is reused from design [node-agent affinity configuration](Implemented/node-agent-affinity.md).
It's possible that users want to choose nodes that match condition A or condition B to run the job.
For example, the user may want nodes of a specified machine type, or nodes located in the us-central1-x zones, to run the job.
This can be done by adding multiple entries in the `LoadAffinity` array.
### Affinity Example
A sample of the ConfigMap is as below:
``` bash
cat <<EOF > repo-maintenance-job-config.json
{
"global": {
"podResources": {
"cpuRequest": "100m",
"cpuLimit": "200m",
"memoryRequest": "100Mi",
"memoryLimit": "200Mi"
},
"loadAffinity": [
{
"nodeSelector": {
"matchExpressions": [
{
"key": "cloud.google.com/machine-family",
"operator": "In",
"values": [
"e2"
]
}
]
}
},
{
"nodeSelector": {
"matchExpressions": [
{
"key": "topology.kubernetes.io/zone",
"operator": "In",
"values": [
"us-central1-a",
"us-central1-b",
"us-central1-c"
]
}
]
}
}
]
}
}
EOF
```
This sample showcases two affinity configurations:
- matchExpressions: the maintenance job runs on nodes with label key `cloud.google.com/machine-family` and value `e2`.
- matchExpressions: the maintenance job runs on nodes located in zones `us-central1-a`, `us-central1-b` and `us-central1-c`.
The nodes matching one of the two conditions are selected.
To create the configMap, users need to save something like the above sample to a json file and then run below command:
```
kubectl create cm repo-maintenance-job-config -n velero --from-file=repo-maintenance-job-config.json
```
### Value assigning rules
If the Velero BackupRepositoryController cannot find the introduced ConfigMap, the following default values are used for repository maintenance job:
``` go
config := Configs {
// LoadAffinity is the config for data path load affinity.
LoadAffinity: nil,
// Resources is the config for the CPU and memory resources setting.
PodResources: &kube.PodResources{
// The repository maintenance job CPU request setting
CPURequest: "0m",
// The repository maintenance job memory request setting
MemoryRequest: "0Mi",
// The repository maintenance job CPU limit setting
CPULimit: "0m",
// The repository maintenance job memory limit setting
MemoryLimit: "0Mi",
},
}
```
If the Velero BackupRepositoryController finds the introduced ConfigMap with only the `global` element, the `global` value is used.
If the Velero BackupRepositoryController finds the introduced ConfigMap with only an element matching the BackupRepository, the matched element's value is used.
If the Velero BackupRepositoryController finds the introduced ConfigMap with both the `global` element and an element matching the BackupRepository, the values defined in the matched element overwrite the `global` values, and the `global` values are still used for fields the matched element does not define.
For example, the ConfigMap content has two elements.
``` json
{
"global": {
"loadAffinity": [
{
"nodeSelector": {
"matchExpressions": [
{
"key": "cloud.google.com/machine-family",
"operator": "In",
"values": [
"e2"
]
}
]
}
},
],
"podResources": {
"cpuRequest": "100m",
"cpuLimit": "200m",
"memoryRequest": "100Mi",
"memoryLimit": "200Mi"
}
},
"ns1-default-kopia": {
"podResources": {
"memoryRequest": "400Mi",
"memoryLimit": "800Mi"
}
}
}
```
The config value used for a BackupRepository backing up volume data in namespace `ns1`, referencing BSL `default`, with type `kopia`:
``` go
config := Configs {
// LoadAffinity is the config for data path load affinity.
LoadAffinity: []*kube.LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "cloud.google.com/machine-family",
Operator: metav1.LabelSelectorOpIn,
Values: []string{"e2"},
},
},
},
},
},
PodResources: &kube.PodResources{
// The repository maintenance job CPU request setting
CPURequest: "",
// The repository maintenance job memory request setting
MemoryRequest: "400Mi",
// The repository maintenance job CPU limit setting
CPULimit: "",
// The repository maintenance job memory limit setting
MemoryLimit: "800Mi",
}
}
```
### Implementation
When the Velero repository controller starts to maintain a repository, it calls the repository manager's `PruneRepo` function to build the maintenance Job.
The ConfigMap specified by the `velero server` CLI parameter `--repo-maintenance-job-configmap` is read to reinitialize the repository's `MaintenanceConfig` setting.
``` go
jobConfig, err := getMaintenanceJobConfig(
context.Background(),
m.client,
m.log,
m.namespace,
m.repoMaintenanceJobConfig,
repo,
)
if err != nil {
log.Infof("Cannot find the ConfigMap %s with error: %s. Use default value.",
m.namespace+"/"+m.repoMaintenanceJobConfig,
err.Error(),
)
}
log.Info("Start to maintenance repo")
maintenanceJob, err := m.buildMaintenanceJob(
jobConfig,
param,
)
if err != nil {
return errors.Wrap(err, "error to build maintenance job")
}
```
## Alternatives Considered
Another option is creating a separate ConfigMap for each BackupRepository.
This is not ideal for scenarios with a lot of BackupRepositories in the cluster.

View File

@@ -0,0 +1,318 @@
# Design for repository maintenance job
## Abstract
This design proposal aims to decouple repository maintenance from the Velero server by launching a maintenance job when needed, to mitigate the impact on the Velero server during backups.
## Background
During backups, Velero performs periodic maintenance on the repository. This operation may consume significant CPU and memory resources in some cases, leading to potential issues such as the Velero server being killed by OOM. This proposal addresses these challenges by separating repository maintenance from the Velero server.
## Goals
1. **Independent Repository Maintenance**: Decouple maintenance from Velero's main logic to reduce the impact on the Velero server pod.
2. **Configurable Resources Usage**: Make the resources used by the maintenance job configurable.
3. **No API Changes**: Retain existing APIs and workflow in the backup repository controller.
## Non Goals
We have several concerns over parallel maintenance, which would currently increase the complexity of our design:
- Non-blocking maintenance job: it may conflict with updates to the same `backuprepositories` CR when maintenance runs in parallel.
- Maintenance job concurrency control: there is no suitable mechanism in Kubernetes to control the concurrency of different jobs.
- Parallel maintenance: maintaining the same repo with multiple jobs at the same time may not be supported by some providers.
Because of the concerns above, parallel maintenance is currently not a priority, and improving maintenance efficiency is not the primary focus at this stage.
## High-Level Design
1. **Add Maintenance Subcommand**: Introduce a new Velero server subcommand for repository maintenance.
2. **Create Jobs by Repository Manager**: Modify the backup repository controller to create a maintenance job instead of directly calling the multiple chain calls for Kopia or Restic maintenance.
3. **Update Maintenance Job Result in BackupRepository CR**: Retrieve the result of the maintenance job and update the status of the `BackupRepository` CR accordingly.
4. **Add Setting for Maintenance Job**: Introduce a configuration option to set maintenance jobs, including resource limits (CPU and memory), keeping the latest N maintenance jobs for each repository.
## Detailed Design
### 1. Add Maintenance sub-command
A new command will be added to the Velero CLI; it is designed for use inside the pod of a maintenance job.
Our CLI command is designed as follows:
```shell
$ velero repo-maintenance --repo-name $repo-name --repo-type $repo-type --backup-storage-location $bsl
```
Unlike other CLI commands, the maintenance command is used inside the pod of a maintenance job rather than by users directly, and the job should expose the result of the maintenance after it finishes.
Here we write the error message into one specific file that can be read by the maintenance job.
On the whole, we record two kinds of output:
- the log output of the intermediate maintenance process: this log, including error logs, can be retrieved via the Kubernetes API server.
- the result of the command, which indicates whether the execution failed: the result is redirected to a file that the maintenance job itself can read, and the file only contains the error message.
We write the error message into the `/dev/termination-log` file if the execution fails.
The main maintenance logic would be using the repository provider to do the maintenance.
```golang
func checkError(err error, file *os.File) {
if err != nil {
if err != context.Canceled {
if _, errWrite := file.WriteString(fmt.Sprintf("An error occurred: %v", err)); errWrite != nil {
fmt.Fprintf(os.Stderr, "Failed to write error to termination log file: %v\n", errWrite)
}
file.Close()
os.Exit(1) // indicate the command executed failed
}
}
}
func (o *Options) Run(f veleroCli.Factory) {
logger := logging.DefaultLogger(o.LogLevelFlag.Parse(), o.FormatFlag.Parse())
logger.SetOutput(os.Stdout)
errorFile, err := os.Create("/dev/termination-log")
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to create termination log file: %v\n", err)
return
}
defer errorFile.Close()
...
err = o.runRepoPrune(cli, f.Namespace(), logger)
checkError(err, errorFile)
...
}
func (o *Options) runRepoPrune(cli client.Client, namespace string, logger logrus.FieldLogger) error {
...
var repoProvider provider.Provider
if o.RepoType == velerov1api.BackupRepositoryTypeRestic {
repoProvider = provider.NewResticRepositoryProvider(credentialFileStore, filesystem.NewFileSystem(), logger)
} else {
repoProvider = provider.NewUnifiedRepoProvider(
credentials.CredentialGetter{
FromFile: credentialFileStore,
FromSecret: credentialSecretStore,
}, o.RepoType, cli, logger)
}
...
err = repoProvider.BoostRepoConnect(context.Background(), para)
if err != nil {
return errors.Wrap(err, "failed to boost repo connect")
}
err = repoProvider.PruneRepo(context.Background(), para)
if err != nil {
return errors.Wrap(err, "failed to prune repo")
}
return nil
}
```
### 2. Create Jobs by Repository Manager
Currently, the backup repository controller will call the repository manager to do the `PruneRepo`, and Kopia or Restic maintenance is then finally called through multiple chain calls.
We will keep using the `PruneRepo` function in the repository manager, but we cut off the multiple chain calls by creating a maintenance job.
The job definition would be like below:
```yaml
apiVersion: v1
items:
- apiVersion: batch/v1
kind: Job
metadata:
# labels or affinity or topology settings would inherit from the velero deployment
labels:
# label the job name for later list jobs by name
job-name: nginx-example-default-kopia-pqz6c
name: nginx-example-default-kopia-pqz6c
namespace: velero
spec:
# Not retry it again
backoffLimit: 1
# Only have one job one time
completions: 1
# Not parallel running job
parallelism: 1
template:
metadata:
labels:
job-name: nginx-example-default-kopia-pqz6c
name: kopia-maintenance-job
spec:
containers:
# arguments for repo maintenance job
- args:
- repo-maintenance
- --repo-name=nginx-example
- --repo-type=kopia
- --backup-storage-location=default
# inherit from Velero server
- --log-level=debug
command:
- /velero
# inherit environment variables from the velero deployment
env:
- name: AZURE_CREDENTIALS_FILE
value: /credentials/cloud
# inherit image from the velero deployment
image: velero/velero:main
imagePullPolicy: IfNotPresent
name: kopia-maintenance-container
# resource limitation set by Velero server configuration
# if not specified, it would apply best effort resources allocation strategy
resources: {}
# error message would be written to /dev/termination-log
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
# inherit volume mounts from the velero deployment
volumeMounts:
- mountPath: /credentials
name: cloud-credentials
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
# inherit service account from the velero deployment
serviceAccount: velero
serviceAccountName: velero
volumes:
# inherit cloud credentials from the velero deployment
- name: cloud-credentials
secret:
defaultMode: 420
secretName: cloud-credentials
# ttlSecondsAfterFinished set the job expired seconds
ttlSecondsAfterFinished: 86400
status:
# which contains the result after maintenance
message: ""
lastMaintenanceTime: ""
```
Now, the backup repository controller will call the repository manager to create one maintenance job and wait for the job to complete. The Kopia or Restic maintenance multiple chains are called by the job.
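A minimal sketch of how the repository manager could wait for the Job to complete, assuming a controller-runtime client; the function name, polling interval, and error wrapping are assumptions of this sketch:
```go
// waitForJobComplete polls the maintenance Job until it succeeds or fails.
func waitForJobComplete(ctx context.Context, cli client.Client, ns, name string) error {
	return wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
		job := &batchv1.Job{}
		if err := cli.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, job); err != nil {
			return false, err
		}
		if job.Status.Succeeded > 0 {
			return true, nil
		}
		if job.Status.Failed > 0 {
			return true, errors.Errorf("maintenance job %s/%s failed", ns, name)
		}
		return false, nil
	})
}
```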
### 3. Update the Result of the Maintenance Job into BackupRepository CR
The backup repository controller will update the result of the maintenance job into the backup repository CR.
For how to get the result of the maintenance job, refer to the Kubernetes documentation on [termination messages](https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#writing-and-reading-a-termination-message).
After the maintenance job is finished, we could get the result of maintenance by getting the terminated message from the related pod:
```golang
func GetContainerTerminatedMessage(pod *v1.Pod) string {
...
for _, containerStatus := range pod.Status.ContainerStatuses {
if containerStatus.LastTerminationState.Terminated != nil {
return containerStatus.LastTerminationState.Terminated.Message
}
}
...
return ""
}
```
Then we could update the status of backupRepository CR with the message.
### 4. Add Setting for Resource Usage of Maintenance
Add one configuration for setting the resource limit of maintenance jobs as below:
```shell
velero server --maintenance-job-cpu-request $cpu-request --maintenance-job-mem-request $mem-request --maintenance-job-cpu-limit $cpu-limit --maintenance-job-mem-limit $mem-limit
```
Our default value is 0, which means we don't limit the resources, and the resource allocation strategy would be [best effort](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#besteffort).
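As an illustration, a hedged sketch of building the job's resource requirements so that zero or empty values leave the corresponding fields unset (yielding BestEffort); `buildResourceRequirements` is a hypothetical helper, not the actual implementation:
```go
// buildResourceRequirements only sets quantities that are explicitly non-zero,
// so an all-zero configuration yields empty requirements (BestEffort QoS).
func buildResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string) (corev1.ResourceRequirements, error) {
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{},
		Limits:   corev1.ResourceList{},
	}
	set := func(list corev1.ResourceList, name corev1.ResourceName, value string) error {
		if value == "" {
			return nil
		}
		q, err := resource.ParseQuantity(value)
		if err != nil {
			return err
		}
		if q.IsZero() {
			// "0", "0m", "0Mi" all mean "do not constrain this resource".
			return nil
		}
		list[name] = q
		return nil
	}
	if err := set(res.Requests, corev1.ResourceCPU, cpuRequest); err != nil {
		return res, err
	}
	if err := set(res.Requests, corev1.ResourceMemory, memRequest); err != nil {
		return res, err
	}
	if err := set(res.Limits, corev1.ResourceCPU, cpuLimit); err != nil {
		return res, err
	}
	if err := set(res.Limits, corev1.ResourceMemory, memLimit); err != nil {
		return res, err
	}
	return res, nil
}
```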
### 5. Automatic Cleanup for Finished Maintenance Jobs
Add configuration for clean up maintenance jobs:
- keep-latest-maintenance-jobs: the number of latest maintenance jobs to keep for each repository.
```shell
velero server --keep-latest-maintenance-jobs $num
```
We would check and keep the latest N jobs after a new job is finished.
```golang
func deleteOldMaintenanceJobs(cli client.Client, repo string, keep int) error {
// Get the maintenance job list by label
jobList := &batchv1.JobList{}
err := cli.List(context.TODO(), jobList, client.MatchingLabels(map[string]string{RepositoryNameLabel: repo}))
if err != nil {
return err
}
// Delete old maintenance jobs
if len(jobList.Items) > keep {
sort.Slice(jobList.Items, func(i, j int) bool {
return jobList.Items[i].CreationTimestamp.Before(&jobList.Items[j].CreationTimestamp)
})
for i := 0; i < len(jobList.Items)-keep; i++ {
err = cli.Delete(context.TODO(), &jobList.Items[i], client.PropagationPolicy(metav1.DeletePropagationBackground))
if err != nil {
return err
}
}
}
return nil
}
```
### 6. Velero Install with Maintenance Options
All the above maintenance options should be supported by the `velero install` command.
### 7. Observability and Debuggability
Some monitoring metrics are added for backup repository maintenance:
- repo_maintenance_total
- repo_maintenance_success_total
- repo_maintenance_failed_total
- repo_maintenance_duration_seconds
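A sketch of how these metrics might be registered with the Prometheus client library; the metric names follow the list above, while the `repo` label and the default histogram buckets are assumptions of this sketch:
```go
var (
	repoMaintenanceTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "repo_maintenance_total", Help: "Total number of repository maintenance runs."},
		[]string{"repo"},
	)
	repoMaintenanceSuccessTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "repo_maintenance_success_total", Help: "Total number of successful repository maintenance runs."},
		[]string{"repo"},
	)
	repoMaintenanceFailedTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "repo_maintenance_failed_total", Help: "Total number of failed repository maintenance runs."},
		[]string{"repo"},
	)
	repoMaintenanceDurationSeconds = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "repo_maintenance_duration_seconds", Help: "Duration of repository maintenance runs in seconds."},
		[]string{"repo"},
	)
)

// registerMaintenanceMetrics makes the metrics visible on the server's metrics endpoint.
func registerMaintenanceMetrics(reg prometheus.Registerer) {
	reg.MustRegister(repoMaintenanceTotal, repoMaintenanceSuccessTotal, repoMaintenanceFailedTotal, repoMaintenanceDurationSeconds)
}
```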
We will keep the latest N maintenance jobs for each repo, and users can get the log from the job. The job log level is inherited from the Velero server setting.
Also, we would integrate maintenance job logs and `backuprepositories` CRs into `velero debug`.
Roughly, the process is as follows:
1. The backup repository controller will check the BackupRepository request in the queue periodically.
2. If the maintenance period of the repository checked by `runMaintenanceIfDue` in `Reconcile` is due, the backup repository controller will call the repository manager to execute `PruneRepo`.
3. The `PruneRepo` of the repository manager will create one maintenance job; the resource limitation, environment variables, service account, images, etc. are inherited from the Velero server pod. Also, a cleanup TTL is set on the maintenance job.
4. The maintenance job executes the Velero maintenance command, waits for the maintenance to finish, and writes the maintenance result into the terminationMessagePath file of the related pod.
5. Kubernetes can show the result in the status of the pod by reading the termination message of the pod.
6. The backup repository controller waits for the maintenance job to finish, reads the status of the maintenance job, and then updates the message field and phase in the status of the `backuprepositories` CR accordingly.
7. Old maintenance jobs are cleaned up, keeping only the N latest for each repository.
### 8. Codes Refinement
Once the status of a `backuprepositories` CR is modified, the CR is re-queued for reconciliation and the reconcile logic re-executes shortly afterwards, ignoring the re-queue frequency configured by `repoSyncPeriod`.
In the abnormal scenario where the maintenance job fails, the status of the `backuprepositories` CR is updated and the CR is re-queued immediately; if the new maintenance job still fails, it is re-queued again, turning the `backuprepositories` re-queue logic into an endless loop.
So we change the predicate logic in the controller manager so that the CR is only re-queued if its Spec changes.
```golang
ctrl.NewControllerManagedBy(mgr).For(&velerov1api.BackupRepository{}, builder.WithPredicates(kube.SpecChangePredicate{}))
```
This change alters the previous behavior: errors that occur in the maintenance job are retried in the next reconciliation period instead of immediately.
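For reference, a minimal sketch of what such a Spec-change predicate could look like with controller-runtime; the reflection-based comparison is an assumption of this sketch, not necessarily the exact `kube.SpecChangePredicate` implementation:
```go
// SpecChangePredicate re-queues an object only when its Spec changes, so
// status-only updates (e.g. after a failed maintenance job) do not trigger
// an immediate reconcile.
type SpecChangePredicate struct {
	predicate.Funcs
}

func (SpecChangePredicate) Update(e event.UpdateEvent) bool {
	if e.ObjectOld == nil || e.ObjectNew == nil {
		return false
	}
	oldSpec := reflect.ValueOf(e.ObjectOld).Elem().FieldByName("Spec")
	newSpec := reflect.ValueOf(e.ObjectNew).Elem().FieldByName("Spec")
	if !oldSpec.IsValid() || !newSpec.IsValid() {
		// No Spec field found; fall back to reconciling on every update.
		return true
	}
	return !reflect.DeepEqual(oldSpec.Interface(), newSpec.Interface())
}
```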
## Prospects for Future Work
Future work may focus on improving the efficiency of Velero maintenance through non-blocking parallel modes. Potential areas for enhancement include:
**Non-blocking Mode**: Explore the implementation of a non-blocking mode for parallel maintenance to enhance overall efficiency.
**Concurrency Control**: Investigate mechanisms for better concurrency control of different maintenance jobs.
**Provider Support for Parallel Maintenance**: Evaluate the feasibility of parallel maintenance for different providers and address any compatibility issues.
**Efficiency Improvements**: Investigate strategies to optimize maintenance efficiency without compromising reliability.
By considering these areas, future iterations of Velero may benefit from enhanced parallelization and improved resource utilization during repository maintenance.

View File

@@ -0,0 +1,113 @@
# Allow Object-Level Resource Status Restore in Velero
## Abstract
This design proposes a way to enhance Velero's restore functionality by enabling object-level resource status restoration through annotations.
Currently, Velero allows restoring resource statuses only at the resource type level, which lacks the granularity to restore the status of specific resources.
By introducing an annotation that controllers can set on individual resource objects, this design aims to improve flexibility and autonomy for users/resource-controllers, providing a more granular way to enable resource status restore.
## Background
Velero provides the `restoreStatus` field in the Restore API to specify resource types for status restoration. However, this feature is limited to resource types as a whole, lacking the granularity needed to restore specific objects of a resource type. Resource controllers, especially those managing custom resources with external dependencies, may need to restore status on a per-object basis based on internal logic and dependencies.
This design adds an annotation-based approach to allow controllers to specify status restoration at the object level, enabling Velero to handle status restores more flexibly.
## Goals
- Provide a mechanism to specify the restoration of a resource's status at the object level.
- Maintain backwards compatibility with existing functionality, allowing gradual adoption of this feature.
- Integrate the new annotation-based object-level status restore with Velero's existing resource-type-level `restoreStatus` configuration.
## Non-Goals
- Alter Velero's existing resource type-level status restoration mechanism for resources without annotations.
## Use-Cases/Scenarios
1. Controller managing specific Resources
   - A resource controller identifies that a specific object of a resource should have its status restored due to particular dependencies.
- The controller automatically sets the `velero.io/restore-status: true` annotation on the resource.
- During restore, Velero restores the status of this object, while leaving other resources unaffected.
- The status for the annotated object will be restored regardless of its inclusion/exclusion in `restoreStatus.includedResources`
2. A specific object must not have its status restored even if it's included in `restoreStatus.includedResources`
- A user specifies a resource type in the `restoreStatus.includedResources` field within the Restore custom resource.
- A particular object of that resource type is annotated with `velero.io/restore-status: false` by the user.
   - The status of the annotated object will not be restored even though it's included in `restoreStatus.includedResources`, because the annotation is `false` and takes precedence.
3. Default Behavior for Objects Without the Annotation
- Objects without the `velero.io/restore-status` annotation behave as they currently do: Velero skips their status restoration unless the resource type is specified in the `restoreStatus.includedResources` field.
## High-Level Design
- Object-Level Status Restore Annotation: We are introducing the `velero.io/restore-status` annotation at the resource object level to mark specific objects for status restoration.
- `true`: Indicates that the status should be restored for this object
- `false`: Skip restoring status for this specific object
  - Invalid or missing annotations defer to the existing resource-type-level logic.
- Restore logic precedence:
- Annotations take precedence when they exist with valid values (`true` or `false`).
- Restore spec `restoreStatus.includedResources` is only used when annotations are invalid or missing.
- Velero Restore Logic Update: During a restore operation, Velero will:
- Extend the existing restore logic to parse and prioritize annotations introduced in this design.
- Update resource objects accordingly based on their annotation values or fallback configuration.
## Detailed Design
- Annotation for object-Level Status Restore: The `velero.io/restore-status` annotation will be set on individual resource objects by users/controllers as needed:
```yaml
metadata:
annotations:
velero.io/restore-status: "true"
```
- Restore Logic Modifications: During the restore operation, the restore controller will follow these steps:
- Parse the `restoreStatus.includedResources` spec to determine resource types eligible for status restoration.
- For each resource object:
- Check for the `velero.io/restore-status` annotation.
- If the annotation value is:
- `true`: Restore the status of the object
- `false`: Skip restoring the status of the object
- If the annotation is invalid or missing:
- Default to the `restoreStatus.includedResources` configuration
## Implementation
We are targeting the implementation of this design for Velero 1.16 release.
Current restoreStatus logic resides here: https://github.com/vmware-tanzu/velero/blob/32a8c62920ad96c70f1465252c0197b83d5fa6b6/pkg/restore/restore.go#L1652
The modified logic would look somewhat like:
```go
// Determine whether to restore status from resource type configuration
shouldRestoreStatus := ctx.resourceStatusIncludesExcludes != nil && ctx.resourceStatusIncludesExcludes.ShouldInclude(groupResource.String())
// Check for object-level annotation
annotations := obj.GetAnnotations()
objectAnnotation := annotations["velero.io/restore-status"]
annotationValid := objectAnnotation == "true" || objectAnnotation == "false"
// Determine restore behavior based on annotation precedence
shouldRestoreStatus = (annotationValid && objectAnnotation == "true") || (!annotationValid && shouldRestoreStatus)
ctx.log.Debugf("status field for %s: exists: %v, should restore: %v (by annotation: %v)", newGR, statusFieldExists, shouldRestoreStatus, annotationValid)
if shouldRestoreStatus && statusFieldExists {
if err := unstructured.SetNestedField(obj.Object, objStatus, "status"); err != nil {
ctx.log.Errorf("Could not set status field %s: %v", kube.NamespaceAndName(obj), err)
errs.Add(namespace, err)
return warnings, errs, itemExists
}
obj.SetResourceVersion(createdObj.GetResourceVersion())
updated, err := resourceClient.UpdateStatus(obj, metav1.UpdateOptions{})
if err != nil {
ctx.log.Infof("Status field update failed %s: %v", kube.NamespaceAndName(obj), err)
warnings.Add(namespace, err)
} else {
createdObj = updated
}
}
```

View File

@@ -0,0 +1,120 @@
# Design for Adding Finalization Phase in Restore Workflow
## Abstract
This design proposes adding the finalization phase to the restore workflow. The finalization phase would be entered after all item restoration and plugin operations have been completed, similar to the way the backup process proceeds. Its purpose is to perform any wrap-up work necessary before transitioning the restore process to a terminal phase.
## Background
Currently, the restore process enters a terminal phase once all item restoration and plugin operations have been completed. However, there are some wrap-up works that need to be performed after item restoration and plugin operations have been fully executed. There is no suitable opportunity to perform them at present.
To address this, a new finalization phase should be added to the existing restore workflow. In this phase, all plugin operations and item restoration have been fully completed, which provides a clean opportunity to perform any wrap-up work before termination, improving the overall restore process.
Wrap-up tasks in Velero can serve several purposes:
- Post-restore modification - Velero can modify restored data that was temporarily changed for some purpose but must finally be changed back, or data that was newly created but is missing some information. For example, [issue6435](https://github.com/vmware-tanzu/velero/issues/6435) indicates that some custom settings (like labels and the reclaim policy) on restored PVs were lost because those PVs were newly dynamically provisioned. Velero can address this by patching the PVs' custom settings back in the finalization phase.
- Clean up unused data - Velero can identify and delete any data that is no longer needed after a successful restore in the finalization phase.
- Post-restore validation - Velero can validate the state of restored data and report any errors to help users locate the issue in the finalization phase.
The uses of wrap-up tasks are not limited to these examples. Additional needs may be addressed as they develop over time.
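As an illustration of the first item above, a hedged sketch (not the actual implementation) of a wrap-up task that patches a restored PV's reclaim policy back; how the desired policy is tracked (here a plain map) is an assumption of this sketch:
```
// patchReclaimPolicy restores the reclaim policy that dynamically provisioned
// PVs lost during restore, as described for issue 6435.
func patchReclaimPolicy(ctx context.Context, cli client.Client, desired map[string]corev1.PersistentVolumeReclaimPolicy) error {
	for pvName, policy := range desired {
		pv := &corev1.PersistentVolume{}
		if err := cli.Get(ctx, client.ObjectKey{Name: pvName}, pv); err != nil {
			return err
		}
		if pv.Spec.PersistentVolumeReclaimPolicy == policy {
			continue
		}
		original := pv.DeepCopy()
		pv.Spec.PersistentVolumeReclaimPolicy = policy
		if err := cli.Patch(ctx, pv, client.MergeFrom(original)); err != nil {
			return err
		}
	}
	return nil
}
```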
## Goals
- Add the finalization phase and the corresponding controller to restore workflow.
## Non Goals
- Implement the specific wrap-up work.
## High-Level Design
- The finalization phase will be added to current restore workflow.
- The logic for handling current phase transition in restore and restore operations controller will be modified with the introduction of the finalization phase.
- A new restore finalizer controller will be implemented to handle the finalization phase.
## Detailed Design
### phase transition
Two new phases related to finalization will be added to the restore workflow: `FinalizingPartiallyFailed` and `Finalizing`. The new phase transitions will be similar to the backup workflow, proceeding as follows:
![image](restore-phases-transition.png)
### restore finalizer controller
The new restore finalizer controller will be implemented to watch for restores in `FinalizingPartiallyFailed` and `Finalizing` phases. Any wrap-up work that needs to wait for the completion of item restoration and plugin operations will be executed by this controller, and the phase will be set to either `Completed` or `PartiallyFailed` based on the results of these tasks.
Points worth noting about the new restore finalizer controller:
A new structure `finalizerContext` will be created to facilitate the implementation of any wrap-up tasks. It includes all the dependencies the tasks require as well as a function `execute()` that runs the task logic in order.
```
// finalizerContext includes all the dependencies required by wrap-up tasks
type finalizerContext struct {
.......
restore *velerov1api.Restore
log logrus.FieldLogger
.......
}
// execute executes all the wrap-up tasks and return the result
func (ctx *finalizerContext) execute() (results.Result, results.Result) {
// execute task1
.......
// execute task2
.......
// the task execution logic will be expanded as new tasks are included
.......
}
// newFinalizerContext returns a finalizerContext object, the parameters will be added as new tasks are included.
func newFinalizerContext(restore *velerov1api.Restore, log logrus.FieldLogger, ...) *finalizerContext{
return &finalizerContext{
.......
restore: restore,
log: log,
.......
}
}
```
The finalizer controller is responsible for collecting all dependencies and creating a `finalizerContext` object using those dependencies. It then invokes the `execute` function.
```
func (r *restoreFinalizerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
.......
// collect all dependencies required by wrap-up tasks
.......
// create a finalizerContext object and invoke execute()
finalizerCtx := newFinalizerContext(restore, log, ...)
warnings, errs := finalizerCtx.execute()
.......
}
```
After completing all necessary tasks, the result metadata in object storage will be updated if any errors or warnings occurred during execution. This behavior breaks the feature of keeping metadata files in object storage immutable. However, we believe the tradeoff is justified because it gives users access to the error/warning details when the wrap-up tasks go wrong.
```
// UpdateResults updates the result metadata in object storage if necessary
func (r *restoreFinalizerReconciler) UpdateResults(restore *api.Restore, newWarnings *results.Result, newErrs *results.Result, backupStore persistence.BackupStore) error {
originResults, err := backupStore.GetRestoreResults(restore.Name)
if err != nil {
return errors.Wrap(err, "error getting restore results")
}
warnings := originResults["warnings"]
errs := originResults["errors"]
warnings.Merge(newWarnings)
errs.Merge(newErrs)
m := map[string]results.Result{
"warnings": warnings,
"errors": errs,
}
if err := putResults(restore, m, backupStore); err != nil {
return errors.Wrap(err, "error putting restore results")
}
return nil
}
```
## Compatibility
The new finalization phases are added without modifying the existing phases in the restore workflow. Both new and ongoing restore processes will continue to eventually transition to a terminal phase from any prior phase, ensuring backward compatibility.
## Implementation
This will be implemented during the Velero 1.14 development cycle.

Binary file not shown.


View File

@@ -29,7 +29,7 @@ During restore, the proposal is that Velero will determine if the `APIGroupVersi
The proposed code starts with creating three lists for each backed up resource. The three lists will be created by
(1) reading the directory names in the backup tarball file and seeing which API group versions were backed up from the source cluster,
(2) looking at the target cluster and determining which API group versions are supported, and
(3) getting config maps from the target cluster in order to get user-defined prioritization of versions.
(3) getting ConfigMaps from the target cluster in order to get user-defined prioritization of versions.
The three lists will be used to create a map of chosen versions for each resource to restore. If there is a user-defined list of priority versions, the versions will be checked against the supported versions lists. The highest user-defined priority version that is/was supported by both target and source clusters will be the chosen version for that resource. If none of the user-specified versions are supported by either the target or the source cluster, the versions will be logged and the restore will continue with other prioritizations.

View File

@@ -0,0 +1,111 @@
# Backup Restore Status Patch Retrying Configuration
## Abstract
When a backup/restore completes, we want to ensure that the custom resource progresses to the correct status.
If a patch call fails to update status to completion, it should be retried up to a certain time limit.
This design proposes a way to configure timeout for this retry time limit.
## Background
Original Issue: https://github.com/vmware-tanzu/velero/issues/7207
Velero was performing a restore when the API server was rolling out to a new version.
It had trouble connecting to the API server, but eventually, the restore was successful.
However, since the API server was still in the middle of rolling out, Velero failed to update the restore CR status and gave up.
After the connection was restored, it didn't attempt to update, causing the restore CR to be stuck at "In progress" indefinitely.
This can lead to incorrect decisions for other components that rely on the backup/restore CR status to determine completion.
## Goals
- Make timeout configurable for retry patching by reusing existing [`--resource-timeout` server flag](https://github.com/vmware-tanzu/velero/blob/d9ca14747925630664c9e4f85a682b5fc356806d/pkg/cmd/server/server.go#L245)
## Non Goals
- Create a new timeout flag
- Refactor backup/restore workflow
## High-Level Design
We will add retries with timeout to existing patch calls that move a backup/restore from InProgress to a different status phase, such as
- FailedValidation (final)
- Failed (final)
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
- Finalizing
- FinalizingPartiallyFailed
and from the above non-final phases to
- Completed
- PartiallyFailed
Once a backup/restore is in one of the following phases, it is already reconciled periodically and does not need additional retries:
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
## Detailed Design
Relevant reconcilers will have `resourceTimeout time.Duration` added to their structs and to the parameters of the New[Backup|Restore]XReconciler functions.
In pkg/cmd/server/server.go, `func (s *server) runControllers(..) error` will also update the New[Backup|Restore]XReconciler calls with the added duration parameter, using the value from the existing `--resource-timeout` server flag.
Current calls to kube.PatchResource involving status patches will be replaced with kube.PatchResourceWithRetriesOnErrors, added to package `kube` below.
Calls of the form ...client.Patch() will be wrapped with client.RetriesPhasePatchFuncOnErrors(), added to package `client` below.
pkg/util/kube/client.go
```go
// PatchResourceWithRetries patches the original resource with the updated resource, retrying when the provided retriable function returns true.
func PatchResourceWithRetries(maxDuration time.Duration, original, updated client.Object, kbClient client.Client, retriable func(error) bool) error {
return veleroPkgClient.RetryOnRetriableMaxBackOff(maxDuration, func() error { return PatchResource(original, updated, kbClient) }, retriable)
}
// PatchResourceWithRetriesOnErrors patches the original resource with the updated resource, retrying when the operation returns an error.
func PatchResourceWithRetriesOnErrors(maxDuration time.Duration, original, updated client.Object, kbClient client.Client) error {
return PatchResourceWithRetries(maxDuration, original, updated, kbClient, func(err error) bool {
// retry using DefaultBackoff to resolve connection refused error that may occur when the server is under heavy load
// TODO: consider using a more specific error type to retry, for now, we retry on all errors
// specific errors:
// - connection refused: https://pkg.go.dev/syscall#:~:text=Errno(0x67)-,ECONNREFUSED,-%3D%20Errno(0x6f
return err != nil
})
}
```
pkg/client/retry.go
```go
// CapBackoff provides a backoff with a set backoff cap
func CapBackoff(cap time.Duration) wait.Backoff {
if cap < 0 {
cap = 0
}
return wait.Backoff{
Steps: math.MaxInt,
Duration: 10 * time.Millisecond,
Cap: cap,
Factor: retry.DefaultBackoff.Factor,
Jitter: retry.DefaultBackoff.Jitter,
}
}
// RetryOnRetriableMaxBackOff accepts a patch function param, retrying when the provided retriable function returns true.
func RetryOnRetriableMaxBackOff(maxDuration time.Duration, fn func() error, retriable func(error) bool) error {
return retry.OnError(CapBackoff(maxDuration), func(err error) bool { return retriable(err) }, fn)
}
// RetryOnErrorMaxBackOff accepts a patch function param, retrying when the error is not nil.
func RetryOnErrorMaxBackOff(maxDuration time.Duration, fn func() error) error {
return RetryOnRetriableMaxBackOff(maxDuration, fn, func(err error) bool { return err != nil })
}
```
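A hedged usage sketch of how a reconciler might wrap its final status patch with these helpers; the reconciler field names (`resourceTimeout`, `kbClient`) mirror the design text and are assumptions here:
```go
// Move the backup to its final phase, retrying transient API errors for up
// to r.resourceTimeout before giving up.
original := backup.DeepCopy()
backup.Status.Phase = velerov1api.BackupPhaseCompleted
if err := kube.PatchResourceWithRetriesOnErrors(r.resourceTimeout, original, backup, r.kbClient); err != nil {
	log.WithError(err).Error("error updating backup's final status")
}
```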
## Alternatives Considered
- Requeuing InProgress backups that are not known by the current velero instance to still be in progress, and marking them as failed (attempted in [#7863](https://github.com/vmware-tanzu/velero/pull/7863))
  - This was deemed to make the backup/restore flow hard to enhance with future reconciler updates such as adding cancellation or parallel backups.
## Security Considerations
None
## Compatibility
Retries only apply to a restore or backup that is already in progress and is not being patched successfully by the current instance. Prior InProgress backups/restores will not be re-processed and will remain stuck InProgress until there is another velero server (re)start.
## Implementation
There is a past implementation in [#7845](https://github.com/vmware-tanzu/velero/pull/7845/) that the implementation of this design will be based upon.

View File

@@ -0,0 +1,161 @@
# Schedule Skip Immediately Config Design
## Abstract
When unpausing a schedule, a backup could be due immediately.
New schedules also create a new backup immediately.
This design allows users to *skip an **immediately due** backup run upon unpausing or schedule creation*.
## Background
Currently, by default, when a schedule's `.Status.LastBackup` is nil or a backup is due immediately after unpausing, a backup will be created. This may not be desired by all users (https://github.com/vmware-tanzu/velero/issues/6517).
Users want the ability to skip the first immediately due backup when a schedule is unpaused or created.
If you create a schedule with cron "45 * * * *", pause it at, say, the 43rd minute and then unpause it at, say, the 50th minute, a backup gets triggered (since .Status.LastBackup is nil or >60min ago).
With this design, users can skip the first immediately due backup when a schedule is unpaused or created.
## Goals
- Add an option so users can choose not to create a backup immediately when unpausing a schedule (when a backup is immediately due) or creating a new schedule.
## Non Goals
- Changing the default behavior
## High-Level Design
Add a new field to the schedule spec, along with new CLI flags for the install, server, and schedule commands, allowing users to skip an immediately due backup upon unpausing or schedule creation.
If the CLI flag is specified during schedule unpause, velero will update the schedule spec accordingly, overriding the prior value of `skipImmediately`.
## Detailed Design
### CLI Changes
`velero schedule unpause` will now take an optional bool flag `--skip-immediately` to allow user to override the behavior configured for velero server (see `velero server` below).
`velero schedule unpause schedule-1 --skip-immediately=false` will unpause the schedule but not skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will be run at the next cron schedule.
`velero schedule unpause schedule-1 --skip-immediately=true` will unpause the schedule and skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will also be run at the next cron schedule.
`velero schedule unpause schedule-1` will check `.spec.SkipImmediately` in the schedule to determine behavior. This field will default to false to maintain prior behavior.
`velero server` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set.
`velero install` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set.
### API Changes
`pkg/apis/velero/v1/schedule_types.go`
```diff
// ScheduleSpec defines the specification for a Velero schedule
type ScheduleSpec struct {
// Template is the definition of the Backup to be run
// on the provided schedule
Template BackupSpec `json:"template"`
// Schedule is a Cron expression defining when to run
// the Backup.
Schedule string `json:"schedule"`
// UseOwnerReferencesBackup specifies whether to use
// OwnerReferences on backups created by this Schedule.
// +optional
// +nullable
UseOwnerReferencesInBackup *bool `json:"useOwnerReferencesInBackup,omitempty"`
// Paused specifies whether the schedule is paused or not
// +optional
Paused bool `json:"paused,omitempty"`
+ // SkipImmediately specifies whether to skip backup if schedule is due immediately from `Schedule.Status.LastBackup` timestamp when schedule is unpaused or if schedule is new.
+ // If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time.
+ // If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time.
+ // If empty, will follow server configuration (default: false).
+ // +optional
+	SkipImmediately *bool `json:"skipImmediately,omitempty"`
}
```
**Note:** The Velero server automatically patches the `skipImmediately` field back to `false` after it's been used. This is because `skipImmediately` is designed to be a one-time operation rather than a persistent state. When the controller detects that `skipImmediately` is set to `true`, it:
1. Sets the flag back to `false`
2. Records the current time in `schedule.Status.LastSkipped`
This "consume and reset" pattern ensures that after skipping one immediate backup, the schedule returns to normal behavior for subsequent runs. The `LastSkipped` timestamp is then used to determine when the next backup should run.
```go
// From pkg/controller/schedule_controller.go
if schedule.Spec.SkipImmediately != nil && *schedule.Spec.SkipImmediately {
*schedule.Spec.SkipImmediately = false
schedule.Status.LastSkipped = &metav1.Time{Time: c.clock.Now()}
}
```
`LastSkipped` will be added to `ScheduleStatus` struct to track the last time a schedule was skipped.
```diff
// ScheduleStatus captures the current state of a Velero schedule
type ScheduleStatus struct {
// Phase is the current phase of the Schedule
// +optional
Phase SchedulePhase `json:"phase,omitempty"`
// LastBackup is the last time a Backup was run for this
// Schedule schedule
// +optional
// +nullable
LastBackup *metav1.Time `json:"lastBackup,omitempty"`
+ // LastSkipped is the last time a Schedule was skipped
+ // +optional
+ // +nullable
+ LastSkipped *metav1.Time `json:"lastSkipped,omitempty"`
// ValidationErrors is a slice of all validation errors (if
// applicable)
// +optional
ValidationErrors []string `json:"validationErrors,omitempty"`
}
```
The `LastSkipped` field is crucial for the schedule controller to determine the next run time. When a backup is skipped, this timestamp is used instead of `LastBackup` to calculate when the next backup should occur, ensuring the schedule maintains its intended cadence even after skipping a backup.
When `schedule.spec.SkipImmediately` is `true`, `LastSkipped` will be set to the current time, and `schedule.spec.SkipImmediately` will be set back to `false` so it can be used again.
The `getNextRunTime()` function below is updated so `LastSkipped` which is after `LastBackup` will be used to determine next run time.
```go
func getNextRunTime(schedule *velerov1.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) {
var lastBackupTime time.Time
if schedule.Status.LastBackup != nil {
lastBackupTime = schedule.Status.LastBackup.Time
} else {
lastBackupTime = schedule.CreationTimestamp.Time
}
if schedule.Status.LastSkipped != nil && schedule.Status.LastSkipped.After(lastBackupTime) {
lastBackupTime = schedule.Status.LastSkipped.Time
}
nextRunTime := cronSchedule.Next(lastBackupTime)
return asOf.After(nextRunTime), nextRunTime
}
```
When a schedule is unpaused and `Schedule.Status.LastBackup` is not nil, a backup will not be created if `Schedule.Status.LastSkipped` is recent.
When a schedule is unpaused or newly created with `Schedule.Status.LastBackup` set to nil, a backup would normally be created immediately; if `Schedule.Status.LastSkipped` is recent, a backup will not be created.
The backup will run at the next cron occurrence computed from LastBackup or LastSkipped, whichever is more recent.
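Putting the pieces together, a sketch of the reconcile-time decision, reusing the controller snippet and `getNextRunTime` above; the surrounding field names are assumptions:
```go
// Decide whether to submit a backup for a due schedule, consuming the
// one-shot SkipImmediately flag when it is set.
if due, _ := getNextRunTime(schedule, cronSchedule, c.clock.Now()); due {
	if schedule.Spec.SkipImmediately != nil && *schedule.Spec.SkipImmediately {
		*schedule.Spec.SkipImmediately = false
		schedule.Status.LastSkipped = &metav1.Time{Time: c.clock.Now()}
		// No backup now; the next run time is computed from LastSkipped.
	} else {
		// Submit a backup built from schedule.Spec.Template.
	}
}
```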
## Alternatives Considered
N/A
## Security Considerations
None
## Compatibility
Upon upgrade, the new field will be added to the schedule spec automatically and will default to the prior behavior of running a backup when a schedule is unpaused and a backup is due based on .Status.LastBackup, or when the schedule is new.
Since this is a new field, it will be ignored by older versions of velero.
## Implementation
TBD
## Open Issues
N/A

View File

@@ -0,0 +1,84 @@
# Adding Support For VolumeAttributes in Resource Policy
## Abstract
Currently, [Velero Resource policies](https://velero.io/docs/main/resource-filtering/#creating-resource-policies) only support filtering on "Driver" for [CSI volume conditions](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources_validator.go#L28).
If users want to skip certain CSI volumes based on other volume attributes like protocol, SKU, etc., they can't do it with the current Velero resource policies. It would be convenient if Velero resource policies could be extended to filter on volume attributes along with the existing driver filter in the resource policies' `conditions`, so the backup of volumes can be handled by specific volume attribute conditions.
## Background
As of today, the Velero resource policy already provides a way to filter volumes based on the `driver` name. But it's not enough to handle volumes based on other volume attributes like protocol, SKU, etc.
## Example:
- Provision Azure NFS: Define the Storage class with `protocol: nfs` under storage class parameters to provision [CSI NFS Azure File Shares](https://learn.microsoft.com/en-us/azure/aks/azure-files-csi#nfs-file-shares).
- A user wants to back up AFS (Azure file shares) but only wants to back up `SMB`-type file share volumes and not `NFS` file share volumes.
## Goals
- We are only adding support in the resource policy for handling volumes during backup.
- Introducing support for `VolumeAttributes` filter along with `driver` filter in CSI volume conditions to handle volumes.
## Non-Goals
- Currently, this only handles volumes and does not support other resources.
## Use-cases/Scenarios
### Skip backup volumes by some volume attributes:
Users want to skip PV with the requirements:
- an option to skip specified PVs based on volume attributes (like protocol: NFS, SMB, etc.)
### Sample Storage Class Used to create such Volumes
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: azurefile-csi-nfs
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
protocol: nfs
```
## High-Level Design
Modify the existing Resource Policies code for [csiVolumeSource](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources_validator.go#L28C6-L28C22) to add the new `VolumeAttributes` filter for CSI volumes, and add validation in the existing [csiCondition](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources.go#L150) to match the volume attributes in the conditions from the Resource Policy ConfigMap against the original persistent volume.
## Detailed Design
The volume resource policies should contain a list of policies, each a combination of conditions and a related `action`; when target volumes meet the conditions, the related `action` takes effect.
Below is the API Design for the user configuration:
### API Design
```go
type csiVolumeSource struct {
Driver string `yaml:"driver,omitempty"`
// [NEW] CSI volume attributes
VolumeAttributes map[string]string `yaml:"volumeAttributes,omitempty"`
}
```
The policies YAML config file would look like this:
```yaml
version: v1
volumePolicies:
- conditions:
csi:
driver: disk.csi.azure.com
action:
type: skip
- conditions:
csi:
driver: file.csi.azure.com
volumeAttributes:
protocol: nfs
action:
type: skip
```
### New Supported Conditions
#### VolumeAttributes
The existing CSI volume condition can now include `volumeAttributes`, which are key/value pairs.
They specify details for the related volume source (currently only the csi driver filter is supported).
```yaml
csi: # match volumes using `file.csi.azure.com` with volumeAttributes protocol set to nfs
driver: file.csi.azure.com
volumeAttributes:
protocol: nfs
```
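A minimal sketch of how the `csiCondition` match could be extended to honor `volumeAttributes`; the function and parameter names stand in for the PV's CSI source and are assumptions of this sketch:
```go
// matchCSICondition reports whether a PV's CSI source satisfies the condition:
// the driver must match, and every configured volume attribute must be present
// on the volume with the same value.
func matchCSICondition(cond *csiVolumeSource, volumeDriver string, volumeAttributes map[string]string) bool {
	if cond == nil {
		return true
	}
	if cond.Driver != "" && cond.Driver != volumeDriver {
		return false
	}
	for k, v := range cond.VolumeAttributes {
		if volumeAttributes[k] != v {
			return false
		}
	}
	return true
}
```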

View File

@@ -433,23 +433,24 @@ spec:
volume: nginx-log
```
We will add the flag for both CLI installation and Helm Chart Installation. Specifically:
- Helm Chart Installation: add the "--pod-volume-backup-uploader" flag into its value.yaml and then generate the deployments according to the value. Value.yaml is the user-provided configuration file, therefore, users could set this value at the time of installation. The changes in Value.yaml are as below:
- Helm Chart Installation: add the "--uploaderType" and "--default-volumes-to-fs-backup" flag into its value.yaml and then generate the deployments according to the value. Value.yaml is the user-provided configuration file, therefore, users could set this value at the time of installation. The changes in Value.yaml are as below:
```
command:
- /velero
args:
- server
{{- with .Values.configuration }}
{{- if .pod-volume-backup-uploader "restic" }}
- --legacy
{{- end }}
- --uploader-type={{ default "restic" .uploaderType }}
{{- if .defaultVolumesToFsBackup }}
- --default-volumes-to-fs-backup
{{- end }}
```
- CLI Installation: add the "--pod-volume-backup-uploader" flag into the installation command line, and then create the two deployments accordingly. Users could change the option at the time of installation. The CLI is as below:
```velero install --pod-volume-backup-uploader=restic```
```velero install --pod-volume-backup-uploader=kopia```
- CLI Installation: add the "--uploaderType" and "--default-volumes-to-fs-backup" flag into the installation command line, and then create the two deployments accordingly. Users could change the option at the time of installation. The CLI is as below:
```velero install --uploader-type=restic --default-volumes-to-fs-backup --use-node-agent```
```velero install --uploader-type=kopia --default-volumes-to-fs-backup --use-node-agent```
## Upgrade
For upgrade, we allow users to change the path by specifying "--pod-volume-backup-uploader" flag in the same way as the fresh installation. Therefore, the flag change should be applied to the Velero server after upgrade. Additionally, We need to add a label to Velero server to indicate the current path, so as to provide an easy for querying it.
For upgrade, we allow users to change the path by specifying "--uploader-type" flag in the same way as the fresh installation. Therefore, the flag change should be applied to the Velero server after upgrade. Additionally, we need to add a label to the Velero server to indicate the current path, so as to provide an easy way to query it.
Moreover, if users upgrade from the old release, we need to change the existing Restic Daemonset name to VeleroNodeAgent daemonSet. The name change should be applied after upgrade.
The recommended way for upgrade is to modify the related Velero resource directly through kubectl, the above changes will be applied in the same way. We need to modify the Velero doc for all these changes.
@@ -459,7 +460,7 @@ Below Velero CLI or its output needs some changes:
- ```Velero restore describe```: the output should indicate the path
- ```Velero restic repo get```: the name of this CLI should be changed to a generic one, for example, "Velero repo get"; the output of this CLI should print all the backup repository if Restic repository and Unified Repository exist at the same time
At present, we don't have a requirement for selecting the path during backup, so we don't change the ```Velero backup create``` CLI for now. If there is a requirement in future, we could simply add a flag similar to "--pod-volume-backup-uploader" to select the path.
At present, we don't have a requirement for selecting the path during backup, so we don't change the ```Velero backup create``` CLI for now. If there is a requirement in future, we could simply add a flag similar to "--uploader-type" to select the path.
## CR Example
Below sample files demonstrate complete CRs with all the changes mentioned above:

View File

@@ -0,0 +1,230 @@
# Velero Uploader Configuration Integration and Extensibility
## Abstract
This design proposal aims to make the Velero Uploader configurable by introducing a structured approach for managing Uploader settings. We will define and standardize a data structure to facilitate future additions to Uploader configurations. This enhancement provides a template for extending Uploader-related options and also includes examples of adding sub-options to the Uploader configuration.
## Background
Velero is widely used for backing up and restoring Kubernetes clusters. In various scenarios, optimizing the backup process is essential, and future needs may arise for adding more configuration options related to the Uploader component, especially when dealing with large datasets. Therefore, a standardized configuration template is required.
## Goals
1. **Extensible Uploader Configuration**: Provide an extensible approach to manage Uploader configurations, making it easy to add and modify configuration options related to the Velero uploader.
2. **User-friendliness**: Ensure that the new Uploader configuration options are easy to understand and use for Velero users without introducing excessive complexity.
## Non Goals
1. Expanding to other Velero components: The primary focus of this design is Uploader configuration and does not include extending to other components or modules within Velero. Configuration changes for other components may require separate design and implementation.
## High-Level Design
To achieve extensibility in Velero Uploader configurations, the following key components and changes are proposed:
### UploaderConfig Structure
Two new data structures, `UploaderConfigForBackup` and `UploaderConfigForRestore`, will be defined to store Uploader configurations. These structures will include the configuration options related to backup and restore for Uploader:
```go
type UploaderConfigForBackup struct {
}
type UploaderConfigForRestore struct {
}
```
### Integration with Backup & Restore CRD
The Velero CLI will support an uploader configuration-related flag, allowing users to set the value when creating backups or restores. This value will be stored in the `UploaderConfig` field within the `Backup` CRD and `Restore` CRD:
```go
type BackupSpec struct {
// UploaderConfig specifies the configuration for the uploader.
// +optional
// +nullable
UploaderConfig *UploaderConfigForBackup `json:"uploaderConfig,omitempty"`
}
type RestoreSpec struct {
// UploaderConfig specifies the configuration for the restore.
// +optional
// +nullable
UploaderConfig *UploaderConfigForRestore `json:"uploaderConfig,omitempty"`
}
```
### Configuration Propagated to Different CRDs
The configuration specified in `UploaderConfig` needs to take effect for backup and restore both via the file system path and the data-mover path.
Therefore, the `UploaderConfig` field value from the `Backup` CRD should be propagated to the `PodVolumeBackup` and `DataUpload` CRDs.
We aim for the configurations in PodVolumeBackup to originate not only from UploaderConfig in Backup but also potentially from other sources such as the server or a ConfigMap. At the same time, to align with DataUpload's `DataMoverConfig map[string]string`, we define an `UploaderSettings map[string]string` here to record the configurations in PodVolumeBackup.
```go
type PodVolumeBackupSpec struct {
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
```
`UploaderConfig` will be stored in DataUpload's `DataMoverConfig map[string]string` field.
Also the `UploaderConfig` field value from the `Restore` CRD should be propagated to `PodVolumeRestore` and `DataDownload` CRDs:
```go
type PodVolumeRestoreSpec struct {
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
```
Likewise, `UploaderConfig` will be stored in DataDownload's `DataMoverConfig map[string]string` field.
### Store and Get Configuration
We need to store and retrieve configurations in the PodVolumeBackup and DataUpload structs. This involves converting each configuration value to a string for storage in a map[string]string, and converting it back from the map on retrieval.
PodVolumeRestore and DataDownload are similar.
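A hedged sketch of that conversion for the `ParallelFilesUpload` sub-option introduced in the next section; the key name and helper functions are illustrative, not the exact keys used by Velero:
```go
const parallelFilesUploadKey = "ParallelFilesUpload"

// storeBackupConfig flattens UploaderConfigForBackup into the map stored in
// PodVolumeBackup's UploaderSettings / DataUpload's DataMoverConfig.
func storeBackupConfig(cfg *UploaderConfigForBackup) map[string]string {
	settings := map[string]string{}
	if cfg != nil && cfg.ParallelFilesUpload > 0 {
		settings[parallelFilesUploadKey] = strconv.Itoa(cfg.ParallelFilesUpload)
	}
	return settings
}

// getParallelFilesUpload reads the value back out of the map for the uploader.
func getParallelFilesUpload(settings map[string]string) (int, error) {
	v, ok := settings[parallelFilesUploadKey]
	if !ok {
		return 0, nil
	}
	return strconv.Atoi(v)
}
```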
## Sub-options in UploaderConfig
The fields added above in the CRDs can accommodate any future additions to Uploader configurations by adding new fields to the `UploaderConfigForBackup` or `UploaderConfigForRestore` structures.
### Parallel Files Upload
This section focuses on enabling the configuration for the number of parallel file uploads during backups.
Below are the key steps that should be added to support this new feature.
#### Velero CLI
The Velero CLI will support a `--parallel-files-upload` flag, allowing users to set the `ParallelFilesUpload` value when creating backups.
#### UploaderConfig
Below, the sub-option `ParallelFilesUpload` is added into UploaderConfig:
```go
// UploaderConfigForBackup defines the configuration for the uploader when doing backup.
type UploaderConfigForBackup struct {
// ParallelFilesUpload is the number of files parallel uploads to perform when using the uploader.
// +optional
ParallelFilesUpload int `json:"parallelFilesUpload,omitempty"`
}
```
#### Kopia Parallel Upload Policy
Velero Uploader can set upload policies when calling Kopia APIs. In the Kopia codebase, the structure for upload policies is defined as follows:
```go
// UploadPolicy describes the policy to apply when uploading snapshots.
type UploadPolicy struct {
...
MaxParallelFileReads *OptionalInt `json:"maxParallelFileReads,omitempty"`
}
```
Velero can set the `MaxParallelFileReads` parameter for Kopia's upload policy as follows:
```go
curPolicy := getDefaultPolicy()
if parallelUpload > 0 {
curPolicy.UploadPolicy.MaxParallelFileReads = newOptionalInt(parallelUpload)
}
```
#### Restic Parallel Upload Policy
As Restic does not support parallel file uploads, the configuration would not take effect, so we should output a warning when the user sets the `ParallelFilesUpload` value while using Restic for a backup.
```go
if parallelFilesUpload > 0 {
log.Warnf("ParallelFilesUpload is set to %d, but Restic does not support parallel file uploads. Ignoring", parallelFilesUpload)
}
```
Roughly, the process is as follows:
1. Users pass the ParallelFilesUpload parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Backup CR.
2. When users perform file system backups, UploaderConfig is passed to the PodVolumeBackup CR. When users use the data mover for backups, it is passed to the DataUpload CR.
3. The configuration is stored in the map[string]string field of the CR.
4. Each respective controller for the CRs calls the uploader, and the ParallelFilesUpload value from the map in the CR is passed to the uploader.
5. When the uploader subsequently calls the Kopia API, it can use the ParallelFilesUpload value to set the MaxParallelFileReads parameter; if the uploader calls the Restic command, it outputs a warning log because Restic does not support this feature.
### Sparse Option For Kopia & Restic Restore
Many files contain numerous zero bytes or empty blocks that occupy physical storage space. Sparse restore employs a more intelligent approach, handling empty blocks appropriately while still achieving the correct system state. This write-sparse-files mechanism aims to enhance restore efficiency while maintaining restoration accuracy.
Below are the key steps that should be added to support this new feature.
#### Velero CLI
The Velero CLI will support a `--write-sparse-files` flag, allowing users to set the `WriteSparseFiles` value when creating restores with Restic or Kopia uploader.
#### UploaderConfig
Below, the sub-option `WriteSparseFiles` is added to UploaderConfig:
```go
// UploaderConfigForRestore defines the configuration for the restore.
type UploaderConfigForRestore struct {
// WriteSparseFiles is a flag to indicate whether to write files sparsely or not.
// +optional
// +nullable
WriteSparseFiles *bool `json:"writeSparseFiles,omitempty"`
}
```
#### Enable Sparse in Restic
For Restic, this feature can be enabled by passing the `--sparse` flag when creating the restore:
```bash
restic restore --sparse --target $targetDir $snapshotID
```
#### Enable Sparse in Kopia
For Kopia, this feature can be enabled through the `WriteSparseFiles` field in the [FilesystemOutput](https://pkg.go.dev/github.com/kopia/kopia@v0.13.0/snapshot/restore#FilesystemOutput).
```go
fsOutput := &restore.FilesystemOutput{
WriteSparseFiles: uploaderutil.GetWriteSparseFiles(uploaderCfg),
}
```
Roughly, the process is as follows:
1. Users pass the WriteSparseFiles parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Restore CR.
2. When users perform file system restores, UploaderConfig is passed to the PodVolumeRestore CR. When users use the Data-mover for restores, it is passed to the DataDownload CR.
3. The configuration will be stored in map[string]string type of field in CR.
4. Each respective controller within the CRs calls the uploader, and the WriteSparseFiles from map in CRs is passed to the uploader.
5. When the uploader subsequently calls the Kopia API, it can use the WriteSparseFiles to set the WriteSparseFiles parameter; if the uploader calls the Restic command, it appends the `--sparse` flag to the restore command (see the sketch below).
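For step 5 with Restic, the extra flag could be appended roughly as below; the map key name is hypothetical:
```go
// Illustrative only: append --sparse to the Restic restore command's extra flags
// when the WriteSparseFiles sub-option is set in the flattened uploader config.
func appendSparseFlag(extraFlags []string, uploaderCfg map[string]string) []string {
	// "writeSparseFiles" is a hypothetical key name for the flattened sub-option.
	if uploaderCfg["writeSparseFiles"] == "true" {
		extraFlags = append(extraFlags, "--sparse")
	}
	return extraFlags
}
```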
### Parallel Restore
Setting the parallelism of restore operations can improve the efficiency and speed of the restore process, especially when dealing with large amounts of data.
#### Velero CLI
The Velero CLI will support a `--parallel-files-download` flag, allowing users to set the parallelism value when creating restores. When no value is specified, it defaults to the number of CPUs of the node on which the node-agent pod is running.
```bash
velero restore create --parallel-files-download $num
```
#### UploaderConfig
Below, the sub-option `ParallelFilesDownload` is added to UploaderConfig:
```go
type UploaderConfigForRestore struct {
// ParallelFilesDownload is the number of parallel file downloads to perform during the restore.
// +optional
ParallelFilesDownload int `json:"parallelFilesDownload,omitempty"`
}
```
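A minimal sketch of the defaulting behaviour described above, assuming the value is resolved on the node that runs the node-agent pod (function name is illustrative):
```go
import "runtime"

// resolveParallelFilesDownload falls back to the CPU count of the node the
// node-agent pod runs on when the user did not set a value.
func resolveParallelFilesDownload(configured int) int {
	if configured > 0 {
		return configured
	}
	return runtime.NumCPU()
}
```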
#### Kopia Parallel Restore Policy
Velero Uploader can set restore options when calling Kopia APIs. Velero reads the concurrency from the uploader config and sets the `Parallel` field of Kopia's `restore.Options` as follows:
```go
// first get concurrency from uploader config
restoreConcurrency, _ := uploaderutil.GetRestoreConcurrency(uploaderCfg)
// set restore concurrency into restore options
restoreOpt := restore.Options{
Parallel: restoreConcurrency,
}
// do restore with restore option
restore.Entry(..., restoreOpt)
```
#### Restic Parallel Restore Policy
Configurable parallel restore is not supported by Restic, so we return an error if the option is configured.
```go
restoreConcurrency, err := uploaderutil.GetRestoreConcurrency(uploaderCfg)
if err != nil {
return extraFlags, errors.Wrap(err, "failed to get uploader config")
}
if restoreConcurrency > 0 {
return extraFlags, errors.New("restic does not support parallel restore")
}
```
## Alternatives Considered
To enhance extensibility further, the option of storing `UploaderConfig` in a Kubernetes ConfigMap could be explored. This approach would allow the addition and modification of configuration options without the need to modify the CRDs.

# VGDP Micro Service For Volume Snapshot Data Movement
## Glossary & Abbreviation
**VGDP**: Velero Generic Data Path. The collective of modules that is introduced in [Unified Repository design][1]. Velero uses these modules to finish data transmission for various purposes. It includes uploaders and the backup repository.
**Volume Snapshot Data Movement**: The backup/restore method introduced in [Volume Snapshot Data Movement design][2]. It backs up snapshot data from the volatile and limited production environment into the durable, heterogeneous and scalable backup storage.
**VBDM**: Velero Built-in Data Mover as introduced in [Volume Snapshot Data Movement design][2], it is the built-in data mover shipped along with Velero.
**Exposer**: Exposer is introduced in [Volume Snapshot Data Movement design][2] and is used to expose the volume snapshots/target volumes for VGDP to access locally.
## Background
As per the architecture introduced in the [Volume Snapshot Data Movement design][2], VGDP instances run inside the node-agent pods. However, more and more use cases require running the VGDP instances in dedicated pods, in other words, making them micro services. The benefits are as below:
- This avoids VGDP accessing volume data through the host path; host path access involves privilege escalation in some environments (e.g., the pods must run under privileged mode), which is a challenge for users.
- This enables users to control resource (i.e., CPU, memory) requests/limits in a granular manner, e.g., per backup/restore of a volume.
- This increases resilience: a crash of one VGDP activity won't affect others.
- In cases where the backup storage must be represented by a Kubernetes persistent volume (i.e., NFS storage, [COSI][3]), this avoids dynamically mounting the persistent volumes to node-agent pods and causing node-agent pods to restart (this is not acceptable since node-agent loses its current state after its pods restart).
- This prevents unnecessary full backups. Velero's fs uploaders support file-level incremental backup by comparing file names and metadata. However, at present the files are visited through the host path, and the pod and PVC IDs are part of the host path, so once the pod is recreated, the same file is regarded as a different file since the pod's ID has changed. If the fs uploader is in a dedicated pod and files are visited through the pod's volume path, the files' full paths do not change after the pod restarts, so incremental backups can continue.
## Goals
- Create a solution to make VGDP instances as micro services
- Modify the VBDM to offload the VGDP work from node-agent to the VGDP micro service
- Create the mechanism for VBDM to control and monitor the VGDP micro services in various scenarios
## Non-Goals
- The current solution covers the Volume Snapshot Data Movement backup/restore type only, even though VGDP is also used by pod volume backup. It is less feasible to do this for pod volume backup, since it must run inside the source workload pods.
- The current solution covers VBDM only. 3rd data movers still follow the **Replacement** section of [Volume Snapshot Data Movement design][2]. That is, 3rd data movers handle the DUCR/DDCR on their own and they are free to make themselves micro service style or monolith service style.
## Overview
The solution is based on the [Volume Snapshot Data Movement design][2]; the architecture is followed as is and existing components are not changed unless necessary.
Below lists the changed components, why and how:
**Exposer**: The Exposer exposes the snapshot/target volume as a path/device name/endpoint that is recognizable by VGDP. Depending on the type of snapshot/target volume, a pod may be created as part of the expose. Now, since we run the VGDP instance in a separate pod, a pod is created anyway, so we assume the exposer creates a pod every time and makes the appropriate exposing configurations to the pod so that the VGDP instance can access the snapshot/target volume locally inside the pod. The pod is still called the backupPod or restorePod.
Then we need to change the command the backupPod/restorePod is running, the command launches VGDP-MS (VGDP Micro Service, see below) when the container starts up.
For CSI snapshot, the backupPod/restorePod is created as the result of expose, the only thing left is to change the backupPod/restorePod's image.
**VBDM**: VBDM contains the data mover controller, while the controller calls the Exposer and launches the VGDP instances. Now, since the VGDP instance is launched by the backupPod/restorePod, the controller should not launch the VGDP instance again. However, the controller still needs to monitor and control the VGDP instance. Moreover, in order to avoid any contest situations, the controller is still the only place to update DUCRs and DDCRs.
Besides the changes to above existing components, we need to add below new components:
**VGDP Watcher**: We create a new module to help the data mover controller to watch activities of the VGDP instance in the backupPod/restorePod. VGDP Watcher is a part of VBDM.
**VGDP-MS**: VGDP Micro Service is the binary for the command backupPod/restorePod runs. It accepts the parameters and then launches the VGDP instance according to the request type, specifically, backup or restore. VGDP-MS also runs other modules to sync-up with the data mover controller. VGDP-MS is also a part of VBDM.
Below diagram shows how these components work together:
![vgdp-ms-1.png](vgdp-ms-1.png)
The [Node-agent concurrency][4] is still used to control the concurrency of VGDP micro services. When there are too many volumes in the backup/restore, which would take too many computing resources (CPU, memory, etc.) or Kubernetes resources (pods, PVCs, PVs, etc.), users can set the concurrency in each node so as to control the total number of concurrent VGDP micro services in the cluster.
## Detailed Design
### Exposer
At present, the exposer creates backupPod/restorePod and sets ```velero-helper pause``` as the command run by backupPod/restorePod.
Now, the VGDP-MS command will be used, and the ```velero``` image will run inside the backupPod/restorePod. The command is like below:
```velero data-mover backup --volume-path xxx --volume-mode xxx --data-upload xxx --resource-timeout xxx --log-format xxx --log-level xxx```
Or:
```velero data-mover restore --volume-path xxx --volume-mode xxx --data-download xxx --resource-timeout xxx --log-format xxx --log-level xxx```
The first one is for backup and the other one is for restore.
Below are the parameters of the commands:
**volume-path**: Deliver the full path inside the backupPod/restorePod for the volume to be backed up/restored.
**volume-mode**: Deliver the mode of the volume to be backed up/restored, at present either ```Filesystem``` mode or ```Block``` mode.
**data-upload**: DUCR for this backup.
**data-download**: DDCR for this restore.
**resource-timeout**: resource-timeout is used to control the timeout for operations related to resources. It has the same meaning as the resource-timeout for node-agent.
**log-format** and **log-level**: This is to control the behavior of log generation inside VGDP-MS.
In order to have the same capability and permission with node-agent, below pod configurations are inherited from node-agent and set to backupPod/restorePod's spec:
- Volumes: Some configMaps are mapped as volumes into node-agent, so we add the same volumes from node-agent to the backupPod/restorePod
- Environment Variables
- Security Contexts
We may not actually need all the capabilities of the node-agent in VGDP-MS. At present, we just duplicate all of them; if we find any problem in the future, we can filter out the capabilities that are not required by VGDP-MS.
The backupPod/restorePod does not run in privileged mode, as it is not required since the volumes are visited by the pod path.
The root user is still required, especially by the restore (in order to restore the file system attributes, owners, etc.), so we will use root user for backupPod/restorePod.
We set backupPod/restorePod's ```RestartPolicy``` to ```RestartPolicyNever```, so that once VGDP-MS terminates for any reason, the backupPod/restorePod won't restart and the DUCR/DDCR is marked as one of the terminal phases (Completed/Failed/Cancelled) accordingly.
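A rough sketch of how the exposer could assemble the backupPod spec from the node-agent pod's settings; the helper name, container name and selected fields are illustrative, not the exact VBDM code:
```
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildBackupPod (illustrative) inherits volumes, environment variables and the
// security context from the node-agent pod and never restarts once VGDP-MS ends.
func buildBackupPod(nodeAgent *corev1.Pod, image string, args []string) *corev1.Pod {
	agentContainer := nodeAgent.Spec.Containers[0]
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "backup-pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // never restart after VGDP-MS terminates
			Volumes:       nodeAgent.Spec.Volumes,    // inherit configMap-backed volumes
			Containers: []corev1.Container{{
				Name:            "velero-data-mover",
				Image:           image,
				Command:         []string{"velero", "data-mover", "backup"},
				Args:            args,
				Env:             agentContainer.Env,             // inherit environment variables
				SecurityContext: agentContainer.SecurityContext, // inherit security context
			}},
		},
	}
}
```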
### VGDP Watcher
#### Dual mode event watch
The primary task of VGDP Watcher is to watch the status change from backupPod/restorePod or the VGDP instance, so as to inform the data mover controller in below situations:
- backupPod/restorePod starts
- VGDP instance starts
- Progress update
- VGDP instance completes/fails/cancelled
- backupPod/restorePod stops
We use two mechanisms for the watch:
**Pod Phases**: VGDP Watcher watches the backupPod/restorePod's phases updated by Kubernetes. That is, VGDP Watcher creates an informer to watch the pod resource for the backupPod/restorePod and detects that the pod reaches one of the terminated phases (i.e., PodSucceeded, PodFailed). We also check the availability & status of the backupPod/restorePod at the beginning of the watch so as to detect the start of the backupPod/restorePod.
**Custom Kubernetes Events**: VGDP-MS generates Kubernetes events and associates them to the DUCR/DDCR at the time of VGDP instance starting/stopping and progress update, then VGDP Watcher creates another informer to watch the Event resource associated to the DUCR/DDCR.
The Pod Phases watch covers the entire lifecycle of the backupPod/restorePod, but we don't know the status of the VGDP instance through it, and it can only deliver information at the end of the pod lifecycle.
The Custom Event watch delivers details of the VGDP instance, and the events can be generated at any time; but it cannot generate notifications before VGDP starts, or in the case that VGDP crashes or shuts down abnormally.
Therefore, we adopt both mechanisms in VGDP Watcher. In the end, there are two sources generating the result of VGDP-MS:
- The termination message of backupPod/restorePod
- The message along with the VGDP Instance Completes/Fails/Cancelled event
On the one hand, in some cases only the backupPod/restorePod's termination message is available, e.g., the backupPod/restorePod crashes or quits before the VGDP instance is started, so we rely on the first mechanism to get the notifications.
On the other hand, if they are both available, we have the results from them for mutual verification.
In conclusion, with the help of VGDP Watcher, the data mover controller starts VGDP-MS in a controlled manner and waits until VGDP-MS ends under any circumstances.
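A simplified sketch of the pod-phase half of the watch, using a plain client-go watch on the backupPod/restorePod; the real VGDP Watcher uses informers and also watches the custom events, so this is only an illustration:
```
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodTerminated (illustrative) blocks until the backupPod/restorePod reaches
// a terminal phase, after which its termination message can be read.
func waitPodTerminated(ctx context.Context, client kubernetes.Interface, namespace, name string) (*corev1.Pod, error) {
	watcher, err := client.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return nil, err
	}
	defer watcher.Stop()

	for event := range watcher.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return pod, nil
		}
	}
	return nil, ctx.Err()
}
```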
#### AsyncBR adapter
VGDP Watcher needs to notify the data mover controller when one of the watched events happens, so that the controller can perform the same operations as if it received the equivalent callbacks from VGDP directly, as in the current behavior. In order not to break the existing code logic of the data mover controllers, we make VGDP Watcher an adapter of AsyncBR, which is the interface implemented by VGDP and called by the data mover controller.
Since the parameters for calling VGDP Watcher are different from the ones for calling VGDP, we change the AsyncBR interface to hide some parameters from one another. The new interface is as below:
```
type AsyncBR interface {
// Init initializes an asynchronous data path instance
Init(ctx context.Context, res *exposer.ExposeResult, param interface{}) error
// StartBackup starts an asynchronous data path instance for backup
StartBackup(dataMoverConfig map[string]string, param interface{}) error
// StartRestore starts an asynchronous data path instance for restore
StartRestore(snapshotID string, dataMoverConfig map[string]string) error
// Cancel cancels an asynchronous data path instance
Cancel()
// Close closes an asynchronous data path instance
Close(ctx context.Context)
}
```
Some parameters are hidden into ```param```, but the functions and calling logics are not changed.
VGDP Watcher should be launched by the data mover controller before the VGDP instance starts; otherwise, several corner-case problems may occur. E.g., VGDP-MS may run the VGDP instance immediately after the backupPod/restorePod is launched and complete it before the data mover controller starts VGDP Watcher; as a result, multiple notifications from VGDP Watcher would be missed.
Therefore, the controller launches VGDP Watcher first and then sets the DUCR/DDCR to ```InProgress```; on the other hand, VGDP-MS waits until the DUCR/DDCR turns to ```InProgress``` before running the VGDP instance.
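On the VGDP-MS side, the wait could be sketched as below, assuming a hypothetical helper that fetches the current phase of the DUCR/DDCR:
```
import (
	"context"
	"time"
)

// waitInProgress (illustrative) blocks until the DUCR/DDCR reaches InProgress,
// which guarantees the data mover controller has already started VGDP Watcher.
func waitInProgress(ctx context.Context, getPhase func(ctx context.Context) (string, error)) error {
	for {
		phase, err := getPhase(ctx)
		if err != nil {
			return err
		}
		if phase == "InProgress" {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```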
### VGDP-MS
VGDP-MS is represented by the ```velero data-mover``` subcommand and has its own subcommands ```backup``` and ```restore```.
Below diagram shows the VGDP-MS workflow:
![vgdp-ms-2.png](vgdp-ms-2.png)
**Start DUCR/DDCR Watcher**: VGDP-MS needs to watch the corresponding DUCR/DDCR so as to react on some events happening to the DUCR/DDCR. E.g., when the data movement is cancelled, a ```Cancel``` flag is set to the DUCR/DDCR, by watching the DUCR/DDCR, VGDP-MS is able to see it and cancel the VGDP instance.
**Wait DUCR/DDCR InProgress**: As mentioned above, VGDP-MS won't start the VGDP instance until DUCR/DDCR turns to ```InProgress```, by which time VGDP Watcher has been started.
**Record VGDP Starts**: This generates the VGDP Instance Starts event.
**VGDP Callbacks**: When VGDP comes to one of the terminal states (i.e., completed, failed, cancelled), the corresponding callback is called.
**Record VGDP Ends**: This generates the VGDP Instance Completes/Fails/Cancelled event, and also generates backupPod/restorePod termination message.
**Record VGDP Progress**: This periodically generates/updates the Progress event with totalBytes/bytesDone to indicate the progress of the data movement.
**Set VGDP Output**: This writes the termination message to the backupPod/restorePod's termination log (by default, it is written to ```/dev/termination-log```).
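Writing the termination message could look roughly like this; ```/dev/termination-log``` is the Kubernetes default termination message path, and the marshalled payload is one of the result structures listed below:
```
import (
	"encoding/json"
	"os"
)

// setTerminationMessage (illustrative) marshals the VGDP result and writes it to
// the pod's termination message file so the watcher can read it after the pod ends.
func setTerminationMessage(result interface{}) error {
	data, err := json.Marshal(result)
	if err != nil {
		return err
	}
	// /dev/termination-log is the default terminationMessagePath of the container.
	return os.WriteFile("/dev/termination-log", data, 0644)
}
```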
If VGDP completes, the VGDP Instance Completes event and the backupPod/restorePod termination share the same message as below:
```
type BackupResult struct {
SnapshotID string `json:"snapshotID"`
EmptySnapshot bool `json:"emptySnapshot"`
Source exposer.AccessPoint `json:"source,omitempty"`
}
```
```
type RestoreResult struct {
Target exposer.AccessPoint `json:"target,omitempty"`
}
```
```
type AccessPoint struct {
ByPath string `json:"byPath"`
VolMode uploader.PersistentVolumeMode `json:"volumeMode"`
}
```
The existing VGDP result structures are actually being reused; we just add the JSON tags so that they can be marshalled.
As mentioned above, once VGDP-MS ends in any way, the backupPod/restorePod terminates and never restarts, so the end of VGDP-MS means the end of DU/DD.
For Progress update, the existing Progress structure is being reused:
```
type Progress struct {
TotalBytes int64 `json:"totalBytes,omitempty"`
BytesDone int64 `json:"doneBytes,omitempty"`
}
```
### Log Collection
During the running of VGDP instance, some logs are generated which are important for troubleshooting. This includes all the logs generated by the uploader and repository. Therefore, it is important to collect these logs.
On the other hand, the logs are now generated in the backupPod/restorePod, while the backupPod/restorePod is deleted immediately after the data movement completes. Therefore, by default, ```velero debug``` is not able to collect these logs.
As a solution, we use logrus's hook mechanism to redirect the backupPod/restorePod's logs into node-agent's log, so that ```velero debug``` could collect VGDP logs as is without any changes.
Below diagram shows how VGDP logs are redirected:
![vgdp-ms-3.png](vgdp-ms-3.png)
This log redirecting mechanism is thread safe since the hook acquires the write lock before writing the log buffer, so it guarantees that the node-agent log is not corrupted after redirection, and the redirected logs and the original node-agent logs are not interleaved into each other.
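A simplified sketch of such a hook, assuming the redirected entries are re-logged into the node-agent logger under a write lock; this illustrates the mechanism rather than the exact Velero hook:
```
import (
	"sync"

	"github.com/sirupsen/logrus"
)

// redirectHook (illustrative) forwards every entry from the VGDP logger into the
// node-agent logger, serialized by a mutex so the node-agent log is not corrupted.
type redirectHook struct {
	mu     sync.Mutex
	target *logrus.Logger // node-agent's logger
}

func (h *redirectHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *redirectHook) Fire(entry *logrus.Entry) error {
	h.mu.Lock()
	defer h.mu.Unlock()
	// Re-log the entry with its original level and fields.
	h.target.WithFields(entry.Data).Log(entry.Level, entry.Message)
	return nil
}
```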
### Resource Control
The CPU/memory resource of backupPod/restorePod is configurable, which means users are allowed to configure resources per volume backup/restore.
By default, the [Best Effort policy][5] is used, and users are allowed to change it through the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap`. Specifically, we add below structures to the ConfigMap:
```
type Configs struct {
// PodResources is the resource config for various types of pods launched by node-agent, i.e., data mover pods.
PodResources *PodResources `json:"podResources,omitempty"`
}
type PodResources struct {
CPURequest string `json:"cpuRequest,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
}
```
The string values must match Kubernetes Quantity expressions; for each resource, the "request" value must not be larger than the "limit" value. Otherwise, if any one of the values fails validation, all the resource configurations will be ignored.
The configurations are loaded by node-agent at start time, so users can change the values in the configMap at any time, but the changes won't take effect until node-agent restarts.
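The quantity validation could be sketched as below with ```resource.ParseQuantity``` from ```k8s.io/apimachinery``` (illustrative; the real node-agent code also handles the memory pair and empty fields):
```
import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validateCPUResources (illustrative) parses the CPU quantities and makes sure the
// request does not exceed the limit; any failure means the whole config is ignored.
func validateCPUResources(podResources *PodResources) error {
	request, err := resource.ParseQuantity(podResources.CPURequest)
	if err != nil {
		return fmt.Errorf("invalid cpuRequest %q: %w", podResources.CPURequest, err)
	}
	limit, err := resource.ParseQuantity(podResources.CPULimit)
	if err != nil {
		return fmt.Errorf("invalid cpuLimit %q: %w", podResources.CPULimit, err)
	}
	if request.Cmp(limit) > 0 {
		return fmt.Errorf("cpuRequest %s is larger than cpuLimit %s", podResources.CPURequest, podResources.CPULimit)
	}
	return nil
}
```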
## node-agent
node-agent is still required. Even though VGDP is now not running inside node-agent, node-agent still hosts the data mover controller which reconciles DUCR/DDCR and operates DUCR/DDCR in other steps before the VGDP instance is started, i.e., Accept, Expose, etc.
Privileged mode and root user are not required for node-agent anymore by Volume Snapshot Data Movement; however, they are still required by PVB (PodVolumeBackup) and PVR (PodVolumeRestore). Therefore, we will keep the node-agent daemonset as is; any users who don't use PVB/PVR and have concerns about privileged mode/the root user need to manually modify the daemonset spec to remove the dependencies.
## CRD Changes
There are no changes to any CRD.
## Installation Changes
No changes to installation, the backupPod/restorePod's configurations are all inherited from node-agent.
## Upgrade
Upgrade is not impacted.
## CLI
CLI is not changed.
[1]: ../unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: ../volume-snapshot-data-movement/volume-snapshot-data-movement.md
[3]: https://kubernetes.io/blog/2022/09/02/cosi-kubernetes-object-storage-management/
[4]: ../node-agent-concurrency.md
[5]: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/

# Add Label Selector as a criteria for Volume Policy
## Abstract
Velero's volume policies currently support several criteria (such as capacity, storage class, and volume source type) to select volumes for backup. This update extends the design by allowing users to specify required labels on the associated PersistentVolumeClaim (PVC) via a simple key/value map. At runtime, Velero looks up the PVC (when a PV has a ClaimRef), extracts its labels, and compares them with the user-specified map. If all key/value pairs match, the volume qualifies for backup.
## Background
PersistentVolumes (PVs) in Kubernetes are typically bound to PersistentVolumeClaims (PVCs) that include labels (for example, indicating environment, application, or region). Basing backup policies on these PVC labels enables more precise control over which volumes are processed.
## Goals
- Allow users to specify a simple key/value mapping in the volume policy YAML so that only volumes whose associated PVCs contain those labels are selected.
- Support policies that target volumes based on criteria such as environment=production or region=us-west.
## Non-Goals
- No changes will be made to the actions (skip, snapshot, fs-backup) of the volume policy engine. This update focuses solely on how volumes are selected.
- The design does not support other label selector operations (e.g., NotIn, Exists, DoesNotExist) and only allows for exact key/value matching.
## Use-cases/scenarios
1. Environment-Specific Backup:
- A user wishes to back up only those volumes whose associated PVCs have labels such as `environment=production` and `app=database`.
- The volume policy specifies a pvcLabels map with those key/value pairs; only volumes whose PVCs match are processed.
```yaml
volumePolicies:
- conditions:
pvcLabels:
environment: production
app: database
action:
type: snapshot
```
2. Region-Specific Backup:
- A user operating in multiple regions wants to back up only volumes in the `us-west` region.
- The policy includes `pvcLabels: { region: us-west }`, so only PVs bound to PVCs with that label are selected.
```yaml
volumePolicies:
- conditions:
pvcLabels:
region: us-west
action:
type: snapshot
```
3. Automated Label-Based Backups:
- An external system automatically labels new PVCs (for example, `backup: true`).
- A volume policy with `pvcLabels: { backup: "true" }` ensures that any new volume whose PVC contains that label is included in backup operations.
```yaml
version: v1
volumePolicies:
- conditions:
pvcLabels:
backup: "true"
action:
type: snapshot
```
## High-Level Design
1. Extend Volume Policy Schema:
- The YAML schema for volume conditions is extended to include an optional field pvcLabels of type `map[string]string`.
2. Implement New Condition Type:
- A new condition, `pvcLabelsCondition`, is created. It implements the `volumeCondition` interface and simply compares the user-specified key/value pairs with the actual PVC labels (populated at runtime).
3. Update Structured Volume:
- The internal representation of a volume (`structuredVolume`) is extended with a new field `pvcLabels map[string]string` to store the labels from the associated PVC.
- A new helper function (or an updated parsing function) is used to perform a PVC lookup when a PV has a ClaimRef, populating the pvcLabels field.
4. Integrate with Policy Engine:
- The policy builder is updated to create and add a `pvcLabelsCondition` if the policy YAML contains a `pvcLabels` entry.
- The matching entry point uses the updated `structuredVolume` (populated with PVC labels) to evaluate all conditions, including the new PVC labels condition.
## Detailed Design
1. Update Volume Conditions Schema: Define the conditions struct with a simple map for PVC labels:
```go
// volumeConditions defines the current format of conditions we parse.
type volumeConditions struct {
Capacity string `yaml:"capacity,omitempty"`
StorageClass []string `yaml:"storageClass,omitempty"`
NFS *nFSVolumeSource `yaml:"nfs,omitempty"`
CSI *csiVolumeSource `yaml:"csi,omitempty"`
VolumeTypes []SupportedVolume `yaml:"volumeTypes,omitempty"`
// New field: pvcLabels for simple exact-match filtering.
PVCLabels map[string]string `yaml:"pvcLabels,omitempty"`
}
```
2. New Condition: `pvcLabelsCondition`: Implement a condition that compares expected labels with those on the PVC:
```go
// pvcLabelsCondition defines a condition that matches if the PVC's labels contain all the specified key/value pairs.
type pvcLabelsCondition struct {
labels map[string]string
}
func (c *pvcLabelsCondition) match(v *structuredVolume) bool {
if len(c.labels) == 0 {
return true // No label condition specified; always match.
}
if v.pvcLabels == nil {
return false // No PVC labels found.
}
for key, expectedVal := range c.labels {
if actualVal, exists := v.pvcLabels[key]; !exists || actualVal != expectedVal {
return false
}
}
return true
}
func (c *pvcLabelsCondition) validate() error {
// No extra validation needed for a simple map.
return nil
}
```
3. Update `structuredVolume`: Extend the internal volume representation with a field for PVC labels:
```go
// structuredVolume represents a volume with parsed fields.
type structuredVolume struct {
capacity resource.Quantity
storageClass string
// New field: pvcLabels stores labels from the associated PVC.
pvcLabels map[string]string
nfs *nFSVolumeSource
csi *csiVolumeSource
volumeType SupportedVolume
}
```
4. Update PVC Lookup `parsePVWithPVC`: Modify the PV parsing function to perform a PVC lookup:
```go
func (s *structuredVolume) parsePVWithPVC(pv *corev1.PersistentVolume, client crclient.Client) error {
s.capacity = *pv.Spec.Capacity.Storage()
s.storageClass = pv.Spec.StorageClassName
if pv.Spec.NFS != nil {
s.nfs = &nFSVolumeSource{
Server: pv.Spec.NFS.Server,
Path: pv.Spec.NFS.Path,
}
}
if pv.Spec.CSI != nil {
s.csi = &csiVolumeSource{
Driver: pv.Spec.CSI.Driver,
VolumeAttributes: pv.Spec.CSI.VolumeAttributes,
}
}
s.volumeType = getVolumeTypeFromPV(pv)
// If the PV is bound to a PVC, look it up and store its labels.
if pv.Spec.ClaimRef != nil {
pvc := &corev1.PersistentVolumeClaim{}
err := client.Get(context.Background(), crclient.ObjectKey{
Namespace: pv.Spec.ClaimRef.Namespace,
Name: pv.Spec.ClaimRef.Name,
}, pvc)
if err != nil {
return errors.Wrap(err, "failed to get PVC for PV")
}
s.pvcLabels = pvc.Labels
}
return nil
}
```
5. Update the Policy Builder: Add the new condition to the policy if pvcLabels is provided:
```go
func (p *Policies) BuildPolicy(resPolicies *ResourcePolicies) error {
for _, vp := range resPolicies.VolumePolicies {
con, err := unmarshalVolConditions(vp.Conditions)
if err != nil {
return errors.WithStack(err)
}
volCap, err := parseCapacity(con.Capacity)
if err != nil {
return errors.WithStack(err)
}
var volP volPolicy
volP.action = vp.Action
volP.conditions = append(volP.conditions, &capacityCondition{capacity: *volCap})
volP.conditions = append(volP.conditions, &storageClassCondition{storageClass: con.StorageClass})
volP.conditions = append(volP.conditions, &nfsCondition{nfs: con.NFS})
volP.conditions = append(volP.conditions, &csiCondition{csi: con.CSI})
volP.conditions = append(volP.conditions, &volumeTypeCondition{volumeTypes: con.VolumeTypes})
// If a pvcLabels map is provided, add the pvcLabelsCondition.
if con.PVCLabels != nil && len(con.PVCLabels) > 0 {
volP.conditions = append(volP.conditions, &pvcLabelsCondition{labels: con.PVCLabels})
}
p.volumePolicies = append(p.volumePolicies, volP)
}
p.version = resPolicies.Version
return nil
}
```
6. Update the Matching Entry Point: Use the updated PV parsing that performs a PVC lookup:
```go
func (p *Policies) GetMatchAction(res interface{}, client crclient.Client) (*Action, error) {
volume := &structuredVolume{}
switch obj := res.(type) {
case *corev1.PersistentVolume:
if err := volume.parsePVWithPVC(obj, client); err != nil {
return nil, errors.Wrap(err, "failed to parse PV with PVC lookup")
}
case *corev1.Volume:
volume.parsePodVolume(obj)
default:
return nil, errors.New("failed to convert object")
}
return p.match(volume), nil
}
```
Note: The matching loop (p.match(volume)) iterates over all conditions (including our new pvcLabelsCondition) and returns the corresponding action if all conditions match.
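For illustration, the PVC label lookup can be exercised against a fake controller-runtime client roughly as below (test-style sketch using the types defined above; object names and labels are examples):
```go
import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// examplePVCLabelLookup shows a PV bound to a labeled PVC being resolved through
// a fake client; the returned labels are what pvcLabelsCondition matches against.
func examplePVCLabelLookup() (map[string]string, error) {
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "data-pvc",
			Namespace: "prod",
			Labels:    map[string]string{"environment": "production", "app": "database"},
		},
	}
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-1"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Gi")},
			ClaimRef: &corev1.ObjectReference{Namespace: "prod", Name: "data-pvc"},
		},
	}
	client := fake.NewClientBuilder().WithObjects(pvc).Build()

	volume := &structuredVolume{}
	if err := volume.parsePVWithPVC(pv, client); err != nil {
		return nil, err
	}
	return volume.pvcLabels, nil // {"environment": "production", "app": "database"}
}
```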
