Compare commits

...

448 Commits

Author SHA1 Message Date
Wenkai Yin(尹文开)
4d961fb6fe Merge pull request #7652 from ywk253100/240410_changelog
Add changelog for v1.13.2
2024-04-11 10:39:20 +08:00
Wenkai Yin(尹文开)
17da80ff6a Add changelog for v1.13.2
Add changelog for v1.13.2

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-11 09:51:49 +08:00
Wenkai Yin(尹文开)
8f7121d471 Merge pull request #7606 from blackpiglet/bump_golang_version
Bump Golang version, and bump protobuf version.
2024-04-11 09:21:40 +08:00
Xun Jiang
2400651557 Bump Golang version, and bump protobuf version.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-10 18:30:22 +08:00
qiuming
35177cdf46 Merge pull request #7644 from ywk253100/240409_list
[cherry-pick]Empty the list before next round of listing
2024-04-10 10:56:37 +08:00
Wenkai Yin(尹文开)
27a4bfc7ba Empty the list before next round of listing
Empty the list before next round of listing

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-09 17:35:32 +08:00
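    [Editor's note] A minimal sketch of the pattern behind this fix, assuming a controller-runtime client; names are illustrative, not Velero's actual code. Each round starts from a freshly emptied list so items from the previous page cannot carry over:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func listAllPods(ctx context.Context, c client.Client) ([]corev1.Pod, error) {
        var pods []corev1.Pod
        continueToken := ""
        for {
            podList := &corev1.PodList{} // empty the list before the next round of listing
            if err := c.List(ctx, podList, client.Limit(100), client.Continue(continueToken)); err != nil {
                return nil, err
            }
            pods = append(pods, podList.Items...)
            continueToken = podList.Continue
            if continueToken == "" {
                return pods, nil
            }
        }
    }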
Xun Jiang/Bruce Jiang
2c57ed8cbf Merge pull request #7645 from ywk253100/240409_action
[cherry-pick]Upgrade codecov action to v4
2024-04-09 17:34:39 +08:00
Wenkai Yin(尹文开)
c35fd60d2b Upgrade codecov action to v4
Upgrade codecov action to v4

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-09 17:21:30 +08:00
qiuming
9f9464c5fd Merge pull request #7586 from Lyndon-Li/release-1.13
[1.13] Issue 7535: add the MustHave resource check during item collection and item filter for restore
2024-03-29 12:20:45 +08:00
lyndon-li
6bcd5bee7c Merge branch 'release-1.13' into release-1.13 2024-03-29 10:58:26 +08:00
Lyndon-Li
c9d7708cd9 issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 10:55:24 +08:00
Lyndon-Li
420a123105 issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 10:52:49 +08:00
Daniel Jiang
4142722b29 Merge pull request #7577 from ywk253100/240328_bump
[cherry-pick]Bump up the versions of several Kubernetes-related libs
2024-03-28 16:26:32 +08:00
Wenkai Yin(尹文开)
7b95d58d1a Bump up the versions of several Kubernetes-related libs
Bump up the versions of several Kubernetes-related libs

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-28 15:00:23 +08:00
Wenkai Yin(尹文开)
ea5a89f83b Merge pull request #7500 from ywk253100/240307_1.13.1
Generate the changelog for release 1.13.1
2024-03-08 13:03:11 +08:00
Wenkai Yin(尹文开)
642924d2bd Generate the changelog for release 1.13.1
Generate the changelog for release 1.13.1

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-07 11:23:07 +08:00
lyndon-li
8dca539314 Merge pull request #7468 from blackpiglet/7464_fix_release_1.13
[release-1.13]Modify the label used by the restore CLI to filter the PVR.
2024-03-01 09:47:55 +08:00
Xun Jiang
a6a6da5a72 Modify the label used by the restore CLI to filter the PVR.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-29 10:21:57 +08:00
danfeng
99376a3de6 Merge pull request #7461 from danfengliu/bumpup-upgrade-path
bump up upgrade path to 1.13
2024-02-27 14:51:41 +08:00
danfeng
eed1c383c8 Merge branch 'release-1.13' into bumpup-upgrade-path 2024-02-27 14:39:48 +08:00
Xun Jiang/Bruce Jiang
941ad1a993 Merge pull request #7450 from allenxu404/release-1.13
[cherry-pick]adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time
2024-02-26 10:04:06 +08:00
allenxu404
02d229cd06 Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-26 09:26:04 +08:00
danfengl
c859f7bf11 bump up upgrade path to 1.13
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-23 06:42:29 +00:00
lyndon-li
e1222ffd74 Merge pull request #7459 from Lyndon-Li/release-1.13
[1.13] Issue 7308: change the data path requeue time to 5 second
2024-02-22 16:17:52 +08:00
Lyndon-Li
9cdaeadef3 issue 7308: change the data path requeue time to 5 second
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-22 16:02:35 +08:00
Wenkai Yin(尹文开)
cb7211d997 Merge pull request #7453 from ywk253100/240221_credential
[cherry-pick]Don't return error when no credential file found
2024-02-21 16:58:22 +08:00
Wenkai Yin(尹文开)
df08980618 Don't return error when no credential file found
Don't return error when no credential file found

Fixes #7395

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-21 16:05:15 +08:00
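    [Editor's note] A hypothetical helper illustrating the behavior change (names assumed, not Velero's actual API): a missing credential file is treated as "no credentials configured" rather than surfaced as an error.

    package credentials

    import (
        "os"

        "github.com/pkg/errors"
    )

    func readCredentialFile(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil, nil // no credential file: fall back to ambient credentials
        }
        if err != nil {
            return nil, errors.Wrapf(err, "error reading credential file %s", path)
        }
        return data, nil
    }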
lyndon-li
51a90e7d2f Merge pull request #7399 from kaovilai/restic-recreate-repo-vel1.13
release-1.13: BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
2024-02-20 11:13:46 +08:00
lyndon-li
62a531785f Merge branch 'release-1.13' into restic-recreate-repo-vel1.13 2024-02-20 10:50:18 +08:00
qiuming
5dd1d3bfe5 Merge pull request #7407 from blackpiglet/fix_velero_repo_get_bug_1.13
[cherry-pick][release-1.13]Fix the `velero repo get` nil pointer issue.
2024-02-19 10:53:44 +08:00
Xun Jiang
701e786150 Fix the velero repo get nil pointer issue.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-08 14:31:59 +08:00
Tiger Kaovilai
170fcc53ba BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
* Add BackupRepositories invalidation on BSL Create
Simplify comments

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* Simplify

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-06 16:35:40 -05:00
Xun Jiang/Bruce Jiang
44aa6a7c6b Merge pull request #7372 from blackpiglet/add_uploader_config_for_schedule_v1.13
Add `ParallelFilesUpload` for schedule creation.
2024-01-31 15:42:04 +08:00
Xun Jiang
2a9f4fa576 Add ParallelFilesUpload for schedule creation.
Modify restore-helper print information.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-31 13:35:10 +08:00
Wenkai Yin(尹文开)
4d27ca99c1 Merge pull request #7369 from qiuming-best/release-1.13
[Cherry-Pick] Fix server start failure when no default BSL
2024-01-30 17:10:45 +08:00
Ming Qiu
8914c7209b Fix server start failure when no default BSL
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-30 08:33:54 +00:00
Wenkai Yin(尹文开)
76670e940c Merge pull request #7351 from ywk253100/240124_log
Log the error details
2024-01-24 13:54:27 +08:00
Wenkai Yin(尹文开)
25d977e5bc Log the error details
Log the error details

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-24 12:43:59 +08:00
qiuming
94c7d4b6d4 Merge pull request #7346 from ywk253100/240122_changelog
Check whether the API resource exists before creating the informer cache
2024-01-24 10:47:16 +08:00
Wenkai Yin(尹文开)
09401c8454 Check whether the API resource exists before creating the informer cache
Check whether the API resource exists before creating the informer cache

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 17:19:09 +08:00
qiuming
981d64a1b8 Merge pull request #7338 from ywk253100/240122_changelog
Move unreleased changelogs to 1.13 changelog
2024-01-23 10:19:56 +08:00
Wenkai Yin(尹文开)
16b8b8da72 Move unreleased changelogs to 1.13 changelog
Move unreleased changelogs to 1.13 changelog

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 10:06:15 +08:00
lyndon-li
9fd73b2d13 Merge pull request #7339 from ywk253100/240122_log_erro
Log the error got from the discovery helper
2024-01-22 14:11:38 +08:00
Wenkai Yin(尹文开)
c377e472e8 Log the error got from the discovery helper
Log the error got from the discovery helper

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-22 11:12:00 +08:00
Wenkai Yin(尹文开)
f5714cb636 [cherry-pick]Do not attempt restore resource with no available GVK in cluster (#7336)
* Specify the Kind explicitly in the API resource

Specify the Kind explicitly in the API resource to avoid wrong Kind conversion


* Do not attempt restore resource with no available GVK in cluster (#7322)

Check for GVK before attempting restore.


---------

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-01-22 10:51:36 +08:00
Wenkai Yin(尹文开)
5ffa12189b Merge pull request #7328 from ywk253100/240118_release_node
Add release note for the informer cache memory consumption
2024-01-18 15:27:43 +08:00
Wenkai Yin(尹文开)
1882be763e Add release note for the informer cache memory consumption
Add release note for the informer cache memory consumption

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-18 13:47:34 +08:00
Wenkai Yin(尹文开)
42bbf87197 Merge pull request #7325 from ywk253100/240116_informer
Create informer per resource to avoid huge memory consumption
2024-01-18 10:44:15 +08:00
Wenkai Yin(尹文开)
8aa6a8e59d Create informer per resource to avoid huge memory consumption
Create informer per resource to avoid huge memory consumption

Fixes #7323

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-17 22:37:49 +08:00
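    [Editor's note] A sketch of the technique, using client-go's dynamicinformer package (the function name is illustrative): informers are registered lazily, one per resource, so only the types a restore actually touches get cached in memory instead of every API resource in the cluster.

    package restore

    import (
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic/dynamicinformer"
        "k8s.io/client-go/tools/cache"
    )

    func listerFor(factory dynamicinformer.DynamicSharedInformerFactory,
        gvr schema.GroupVersionResource, stop <-chan struct{}) cache.GenericLister {
        informer := factory.ForResource(gvr) // created lazily, on first use for this GVR
        factory.Start(stop)                  // no-op for informers already running
        factory.WaitForCacheSync(stop)       // block until the new cache is warm
        return informer.Lister()
    }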
Xun Jiang/Bruce Jiang
fdb29819b4 Merge pull request #7304 from blackpiglet/fix_7268_release_1.13
Add detail for parameter s3ForcePathStyle in MinIO page.
2024-01-15 13:31:30 +08:00
Xun Jiang
74f225037c Add detail for parameter s3ForcePathStyle in MinIO page.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-12 16:55:38 +08:00
Wenkai Yin(尹文开)
6e90e628aa Merge pull request #7303 from ywk253100/240110_pin
Pin the version of Golang and base image
2024-01-10 17:52:51 +08:00
Wenkai Yin(尹文开)
46f64f2f98 Pin the version of Golang and base image
Pin the version of Golang and base image

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 17:35:28 +08:00
Wenkai Yin(尹文开)
09af92c54f Merge pull request #7300 from ywk253100/240110_changelog
Add changelog for v1.13.0
2024-01-10 16:03:47 +08:00
Wenkai Yin(尹文开)
ac4c9ed919 Add changelog for v1.13.0
Add changelog for v1.13.0

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 13:36:51 +08:00
danfeng
b39d91aea3 Merge pull request #7296 from danfengliu/fix-nightly-informer-cache-param-issue
Fix nightly issue of missing param WithoutDisableInformerCacheParam during Velero installation
2024-01-10 13:04:09 +08:00
danfengl
a9c820c9d6 Fix nightly issue of missing param WithoutDisableInformerCacheParam during Velero installation
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-10 02:57:44 +00:00
Daniel Jiang
3b82395ee1 Merge pull request #7294 from ywk253100/240109_informer_cache
Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message
2024-01-10 10:57:21 +08:00
Wenkai Yin(尹文开)
9a1be6f53f Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message
Make "disable-informer-cache" option false(enabled) by default to keep it consi
stent with the help message

Fixes #7264

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 09:49:54 +08:00
Xun Jiang/Bruce Jiang
e65ef28948 Merge pull request #7272 from danfengliu/bumpup-plugins-matrix-for-1.13
Bump up E2E test plugins matrix for v1.13
2024-01-09 10:38:36 +08:00
danfengl
1b22a49d22 Bump up E2E test plugins matrix for v1.13
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-09 02:26:53 +00:00
lyndon-li
72f2da92b7 Merge pull request #7282 from Lyndon-Li/issue-fix-6928
Issue 6928: remove snapshot deletion timeout for PVB
2024-01-08 12:58:43 +08:00
Lyndon-Li
200fd80448 issue 6928: remove snapshot deletion timeout for PVB
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-01-08 11:28:23 +08:00
danfeng
c2177c24e8 Merge pull request #7277 from danfengliu/add-disable-informer-cache-param
Add test for disable informer cache param of velero installation
2024-01-05 15:34:26 +08:00
danfengl
fdca488209 Add param disable informer cache for velero installation
Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-05 07:22:50 +00:00
Daniel Jiang
3401db47f9 Merge pull request #7274 from reasonerjt/fix-7263
Do not set "targetNamespace" to namepsace items
2024-01-05 14:41:27 +08:00
Daniel Jiang
a5d08ac5f0 Do not set "targetNamespace" to namespace items
fixes #7263
This commit makes the data structures more consistent: namespaces, as a
cluster-scoped resource, will not have "targetNamespace" set in the
"restoreableItem" instance.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2024-01-05 14:01:16 +08:00
qiuming
e84a51deec Merge pull request #7262 from qiuming-best/intermediate-pv-delete
Fix intermediate PV delete for data mover
2024-01-04 15:45:32 +08:00
lyndon-li
c3c4c97914 Merge pull request #7265 from Lyndon-Li/change-node-agent-config-name
Change node-agent-config name
2024-01-04 15:43:43 +08:00
Ming Qiu
92fdf407c7 Fix intermediate pv delete for data mover
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-04 03:26:47 +00:00
Lyndon-Li
58ead55fd1 change node-agent-config name
Signed-off-by: Lyndon-Li <yonghui.li@broadcom.com>
2024-01-03 22:02:04 +08:00
Xun Jiang/Bruce Jiang
6b632affe8 Merge pull request #7255 from ywk253100/240102_doc
Generate docs for v1.13
2024-01-03 14:13:55 +08:00
Daniel Jiang
6e641f44b9 Merge pull request #7260 from blackpiglet/rename_volumeinfo_metadata_file
Rename volumeinfo metadata file.
2024-01-03 13:33:01 +08:00
Xun Jiang
08dedd8b66 Rename volumeinfo metadata file.
Change from <backup-name>-volumeinfos.json.gz to
<backup-name>-volumeinfo.json.gz.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2024-01-03 11:22:49 +08:00
qiuming
f6dfa8e7b2 Merge pull request #7176 from danfengliu/fix-issue-of-hiiting-snapshot-limit
Add sleep to avoid snapshot limitation issue
2024-01-02 17:09:43 +08:00
Wenkai Yin(尹文开)
d8dba993d3 Generate docs for v1.13
Generate docs for v1.13

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-02 13:54:28 +08:00
danfengl
b25578d6e1 Add sleep to avoid snapshot limitation issue and skip retain PV on vSphere pipeline
1. Add sleep to avoid the snapshot limitation issue https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html#:~:text=SnapshotCreationPerVolumeRateExceeded;
2. Move the InstallVelero variable out of the VeleroConfig struct and make it global, since it does not control any individual case;
3. Unskip the migration test case on the AWS pipeline: we added a new EKS pipeline and deleted the TKG AWS pipeline in the internal E2E tests, so this restriction for the TKG AWS pipeline no longer exists;
4. Skip the retainPV test on the vSphere pipeline due to the PV taking a long time to bind;
5. Fix the failure to get CSI snapshots from EC2: snapshots taken by CSI have no backup-name label.

Signed-off-by: danfengl <danfengl@vmware.com>
2024-01-02 05:53:03 +00:00
qiuming
f109f38a72 Merge pull request #7253 from learner0810/fix-pvc-assignment
Fix pvc assignment
2024-01-02 13:25:37 +08:00
zhongjun.li
8c84836644 Fix pvc assignment
Signed-off-by: zhongjun.li <zhongjun.li@daocloud.io>
2023-12-29 15:09:41 +08:00
Shubham Pampattiwar
78bd67aa1d Merge pull request #7248 from rajats22/main
Adopter update for Azure Backup for AKS
2023-12-22 10:46:03 -08:00
rajats22
29997a3bfb <commit message>
Signed-off-by: rajats22 <111422846+rajats22@users.noreply.github.com>
2023-12-22 15:16:11 +05:30
lyndon-li
f5e36c12ad Merge pull request #7245 from Lyndon-Li/issue-fix-7244
Issue 7244: delete incomplete snapshot automatically for kopia uploader
2023-12-22 16:56:53 +08:00
Lyndon-Li
60d2c62c1a issue 7244: delete incomplete snapshot automatically for kopia uploader
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-22 16:44:00 +08:00
Qi Xu
ee345cf281 Adjust the newline output of resource list in restore describer (#7238)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-22 10:53:29 +05:30
Xun Jiang/Bruce Jiang
7d2c749abf Merge pull request #7231 from blackpiglet/update_volumeinfo_json_tag
Don't generate empty structure.
2023-12-21 16:32:58 +08:00
Xun Jiang
9be8eb0c6d Don't generate empty structure.
VolumeInfo contains several sub-structures. They are filled for
different scenarios. Do not generate empty structure for the
not filled sub-structures.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-21 14:53:03 +08:00
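    [Editor's note] Illustrative types (not Velero's real definitions) showing the mechanism: encoding/json's omitempty never fires for value structs, so unfilled sub-structures serialize as {} unless they are pointers that stay nil.

    package volumeinfo

    type VolumeInfo struct {
        PVName        string         `json:"pvName,omitempty"`
        SnapshotInfo  *SnapshotInfo  `json:"snapshotInfo,omitempty"`  // nil unless a snapshot was taken
        PodVolumeInfo *PodVolumeInfo `json:"podVolumeInfo,omitempty"` // nil unless fs-backup was used
    }

    type SnapshotInfo struct {
        SnapshotHandle string `json:"snapshotHandle,omitempty"`
    }

    type PodVolumeInfo struct {
        UploaderType string `json:"uploaderType,omitempty"`
    }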
lyndon-li
b4f2469145 Merge pull request #7240 from Lyndon-Li/issue-fix-7237
Issue 7237: add pvc namespace to backup describe
2023-12-21 13:25:33 +08:00
Lyndon-Li
210838267f issue 7237: add pvc namespace to backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-21 10:02:27 +08:00
lyndon-li
e6b248ccc0 Merge pull request #7236 from Lyndon-Li/remove-csi-feature-check-from-backup-describe
Remove csi feature check from backup describe
2023-12-20 15:46:58 +08:00
Lyndon-Li
0da01842ad remove csi feature check from backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-20 14:51:21 +08:00
qiuming
79f0541574 Merge pull request #7234 from blackpiglet/bump_restic_golang_library_version
Bump Golang library versions for v1.13 Restic to fix CVEs.
2023-12-20 13:06:30 +08:00
Xun Jiang
3dc202d30a Bump Golang library versions for v1.13 Restic to fix CVEs.
Bump golang.org/x/crypto version to v0.17.0.
Bump google.golang.org/grpc version to v1.56.3.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-20 10:31:48 +08:00
qiuming
a44cd4be33 Merge pull request #7222 from qiuming-best/adjust-bsl-setting-logic
Adjust velero server side default backup location setting logic
2023-12-20 10:29:59 +08:00
Wenkai Yin(尹文开)
970af1ddfd Merge pull request #7225 from vmware-tanzu/dependabot/go_modules/golang.org/x/crypto-0.17.0
Bump golang.org/x/crypto from 0.14.0 to 0.17.0
2023-12-19 17:43:53 +08:00
Daniel Jiang
4fd40f19c7 Merge pull request #7229 from allenxu404/remove-newline
Remove the redundant newline in backup describe output
2023-12-19 16:22:58 +08:00
qiuming
93e29f13aa Merge pull request #7228 from qiuming-best/upload-config-doc
Update uploader configuration design doc
2023-12-19 15:42:30 +08:00
Ming Qiu
236c271cd4 Update uploader configuration design doc
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-19 07:34:48 +00:00
allenxu404
8f6d46be87 Remove the redundant newline in backup describe output
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-19 15:25:37 +08:00
lyndon-li
89cbdac0a3 Merge pull request #7226 from ywk253100/231219_upgrade_doc
Add upgrade doc for v1.13
2023-12-19 13:55:09 +08:00
Ming Qiu
7d2be128ae Move velero server side default backup location setting logic to server startup
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-19 05:43:29 +00:00
Wenkai Yin(尹文开)
5b403c57b9 Add upgrade doc for v1.13
Add upgrade doc for v1.13

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-12-19 13:09:00 +08:00
Wenkai Yin(尹文开)
d99ad5cb7a Merge pull request #7220 from ywk253100/231218_doc
Update k8s matrix and move implemented designs
2023-12-19 10:57:25 +08:00
dependabot[bot]
ddb4889301 Bump golang.org/x/crypto from 0.14.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 23:36:41 +00:00
Xun Jiang/Bruce Jiang
ee879fdcc3 Merge pull request #7221 from blackpiglet/schedule_cli_fix
Fix schedule get and describe CLI nil pointer issue
2023-12-18 20:44:03 +08:00
Xun Jiang
6222891d5b Fix schedule get and describe CLI issue.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-18 16:41:10 +08:00
lyndon-li
71b947ab5b Merge pull request #7218 from Lyndon-Li/issue-fix-7214
Issue 7214: data mover backup describe for legacy backups
2023-12-18 14:18:51 +08:00
Wenkai Yin(尹文开)
b57cdb8f96 Update k8s matrix and move implemented designs
Update k8s matrix and move implemented designs

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-12-18 14:09:20 +08:00
Lyndon-Li
0313c2add0 issue 7214: data mover backup describe for legacy backups
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-18 11:07:01 +08:00
Shubham Pampattiwar
ea6c8ca127 fix finalizer typo in logs (#7204)
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-12-13 11:46:21 -05:00
lyndon-li
5f14628d69 Merge pull request #7201 from Lyndon-Li/issue-fix-7189
Issue 7189: generic restore - don't assume the first volume as the restore volume
2023-12-12 12:47:25 +08:00
Lyndon-Li
cf7d27c4bc issue 7189: generic restore - don't assume the first volume as the restore volume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-12 10:04:31 +08:00
Shubham Pampattiwar
2bd9bf2903 Merge pull request #7076 from shubham-pampattiwar/update-backup-log
Update backup log to reflect appropriate backup phase
2023-12-11 12:49:06 -08:00
Daniel Jiang
804b9a8d91 Merge pull request #7171 from kaovilai/tests-explicit-enableCSI
Add explicit enableCSI to TestProcessBackupCompletions
2023-12-11 14:11:37 +08:00
Wenkai Yin(尹文开)
c0613f1cf6 Merge pull request #7195 from reasonerjt/fix-7190
Use a new variable for resource path
2023-12-11 10:47:19 +08:00
Daniel Jiang
0f49935720 Use a new variable for resource path
This commit avoids mistakes when checking the type of the resource
Fixes #7190

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-12-10 23:19:52 +08:00
qiuming
52d3fca652 Merge pull request #7191 from qiuming-best/uploader-configmapkey
Modify uploader config map key
2023-12-08 13:49:34 +08:00
Ming Qiu
df82691097 Modify uploader config map key
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-08 03:07:13 +00:00
Wenkai Yin(尹文开)
fa73bcdd22 Merge pull request #7169 from kaovilai/schedule-skip-immediately
Add `--skip-immediately` to schedule CLI/API, and related options to server and install commands
2023-12-08 11:06:29 +08:00
Tiger Kaovilai
eaba99b92e Add test skipImmediately is switched to false after reconcile
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
9e016c568a Address requeue feedback
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
e4bd59727f Schedule SkipImmediately
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:12:08 +07:00
Tiger Kaovilai
544c8481cc Schedule Skip Immediately Config Design
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

switch from "unpause triggers" to "skip immediately" for clarity

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Apply suggestions from code review

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Uncomment velero server option

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Backup will also be triggered at the next cron schedule.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Clarify: unpauseTriggers trigger based on lastBackup timestamp; CRD default blocks server flags

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

`velero schedule unpause schedule-1` will check `.spec.UnpauseTriggers`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add `LastUnpaused` to ScheduleStatus

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

Add `velero install`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-08 09:10:25 +07:00
lyndon-li
4070934f85 Merge pull request #7125 from Lyndon-Li/issue-fix-6695
Issue fix 6695: add describe for data mover backups
2023-12-07 16:23:30 +08:00
Xun Jiang/Bruce Jiang
759e8a9c63 Merge pull request #7184 from blackpiglet/7163_fix
Update CSIVolumeSnapshotsCompleted in backup's status and the metric
2023-12-07 11:14:28 +08:00
Xun Jiang
edb0860dd2 Fix issue #7163.
Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-07 09:43:10 +08:00
lyndon-li
099acd2527 Merge pull request #7141 from qiuming-best/support-restore-sparse
Allow sparse option for Kopia & Restic restore
2023-12-06 18:25:34 +08:00
Daniel Jiang
10bd5b14e4 Merge pull request #7136 from davidhulick/fix-kubectl-port-forwarding-docs-link
docs: fix link to kubectl port forwarding docs
2023-12-06 18:15:38 +08:00
Ming Qiu
1a237d3e4c Update API
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-06 08:59:12 +00:00
danfengliu
49e3e545be Merge pull request #7048 from danfengliu/add-readme-for-e2e-test
Update E2E README file to latest
2023-12-06 16:53:13 +08:00
Lyndon-Li
72fcd84a51 csi data mover backup describe
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-06 10:53:09 +08:00
lyndon-li
8d8d68d649 Merge pull request #7175 from blackpiglet/download_request
Refactor DownloadRequest Stream function
2023-12-06 10:28:44 +08:00
qiuming
ea04a86eb2 Merge pull request #6771 from qiuming-best/bsl-fix
Fix default BSL setting not working
2023-12-05 19:09:50 +08:00
Xun Jiang/Bruce Jiang
6093e651cb Merge pull request #7161 from Lyndon-Li/node-agent-config-doc
Add node-agent concurrency doc
2023-12-05 16:52:29 +08:00
Lyndon-Li
ac5d030ab4 Merge branch 'main' into issue-fix-6695 2023-12-05 16:46:31 +08:00
qiuming
2fa785a3dd Merge pull request #7052 from qiuming-best/data-mover-fail-early
Make data mover fail early
2023-12-05 16:33:46 +08:00
Lyndon-Li
434e073c67 csi data mover backup describe, support legacy backups
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-12-05 15:49:35 +08:00
Xun Jiang/Bruce Jiang
45ae68575d Merge pull request #7153 from allenxu404/hooktracker-update
Enhance hooks tracker by adding a returned error to record function
2023-12-05 13:43:38 +08:00
Xun Jiang
c8e76f4602 Fix the DownloadRequest context error.
Clean the DownloadRequest Stream function.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-05 13:29:23 +08:00
allenxu404
6051b3cbe0 Enhance hooks tracker by adding a returned error to record function
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-12-05 12:56:42 +08:00
Daniel Jiang
f2ba625229 Merge pull request #7138 from blackpiglet/6595_volumeinfo_restore
Use VolumeInfo to help restore the PV.
2023-12-05 10:19:16 +08:00
Xun Jiang
28df14d9d5 Modify restore logic.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-05 10:01:16 +08:00
Xun Jiang/Bruce Jiang
3b42abd139 Merge pull request #7174 from reasonerjt/snapshot-flag-skip-csi
Make sure the PVs skipped by CSI plugin due to settings in backup spec are tracked
2023-12-05 09:31:21 +08:00
Daniel Jiang
905de8cab1 Merge pull request #7167 from yanggangtony/fix-design-for-unified-repo
Discard --pod-volume-backup-uploader in unified-repo design doc.
2023-12-05 08:59:36 +08:00
Xun Jiang
c77bec73bb Move VolumesInformation to an independent package.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-04 08:33:37 +08:00
Xun Jiang
ca97248f2a Use VolumeInfo to help restore the PV.
Add VolumeInfo for left PVs during backup.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-12-04 08:33:37 +08:00
Tiger Kaovilai
2132506e8c Add explicit enableCSI to TestProcessBackupCompletions
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-12-01 14:22:40 -05:00
Daniel Jiang
266ea5d55a Make sure the PVs skipped by CSI plugin due to settings in backup spec
are tracked

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-12-01 14:19:54 +08:00
Shashank Singh
a318e1da99 Fix floatation of error/message in the backup result. (#7159)
* Fix floatation of error/message in the backup/restore result

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

* fix for checkgates

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

* refactoring

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>

---------

Signed-off-by: Shashank Singh <shashank1306s@gmail.com>
2023-12-01 09:50:01 +05:30
Ming Qiu
c6cba300fb Fix default BSL setting not working
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-12-01 02:06:35 +00:00
Ming Qiu
0afaa70e9b Merge branch 'main' of https://github.com/qiuming-best/velero into support-restore-sparse 2023-11-30 10:55:55 +00:00
yanggang
fcf59376c1 Discard --pod-volume-backup-uploader in unified-repo design doc.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-30 08:50:59 +00:00
Daniel Jiang
5cbfd9fffd Merge pull request #7150 from Lyndon-Li/issue-fix-7135
Issue 7135: check pod status before checking node-agent pod status
2023-11-29 15:47:23 +08:00
Lyndon-Li
81183f683e Merge branch 'main' into issue-fix-6695 2023-11-29 15:12:21 +08:00
Xun Jiang/Bruce Jiang
f5bbe82e78 Merge pull request #7152 from reasonerjt/track-skipped-SnapshotVolumes-false
Track the skipped PV when SnapshotVolumes set as false
2023-11-29 14:46:23 +08:00
Lyndon-Li
33b570d5cd Merge branch 'main' into node-agent-config-doc 2023-11-29 14:45:20 +08:00
Lyndon-Li
8968ae5ec4 add node-agent concurrency doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-29 14:33:51 +08:00
Lyndon-Li
e416b20148 issue 7135: check pod status before checking node-agent pod status
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-29 13:46:50 +08:00
lyndon-li
4d21e29d9d Merge pull request #7151 from blackpiglet/linter_part2
Linter part2
2023-11-29 13:17:59 +08:00
Xun Jiang
f5c159ce56 Resolve linter issues.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:15:43 +08:00
Xun Jiang
d70535b6d2 Add nolintlint linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
ec03d1ebce Add noctx linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
dbd1a12d9f Add nilerr and ginkgolinter linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
cddc11e000 Enable linter errchkjson.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Xun Jiang
3805a470a9 Enable dupword linter.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-29 11:13:46 +08:00
Ming
03dff100a3 Make data mover fail early
Signed-off-by: Ming <mqiu@vmware.com>
2023-11-29 03:03:53 +00:00
Daniel Jiang
b8604b6a89 Treat namespace as a regular restorable item (#7143)
Fixes #1970

Namespaces will be handled as a cluster-scoped resource, but for
consistency they will still be created via the "Ensure namespace" flow.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-28 11:20:36 -05:00
Daniel Jiang
b759877f5b Track the skipped PV when SnapshotVolumes set as false
This commit makes sure that if a PV is not snapshotted because the flag
SnapshotVolumes is set to false in a backup CR, the PV is also
tracked as skipped in the tracker.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-28 22:52:17 +08:00
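    [Editor's note] A hypothetical sketch of the tracking logic described above; the tracker type and its method are stand-ins for Velero's internal tracker, and only Backup.Spec.SnapshotVolumes (*bool) is the real API field.

    package backup

    import velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"

    type skippedPVTracker struct{ reasons map[string]string }

    func newSkippedPVTracker() *skippedPVTracker {
        return &skippedPVTracker{reasons: map[string]string{}}
    }

    func (t *skippedPVTracker) Track(pvName, reason string) { t.reasons[pvName] = reason }

    func trackIfSnapshotsDisabled(t *skippedPVTracker, backup *velerov1.Backup, pvName string) {
        if backup.Spec.SnapshotVolumes != nil && !*backup.Spec.SnapshotVolumes {
            t.Track(pvName, "snapshotVolumes is set to false in the backup spec")
        }
    }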
Ming Qiu
b57dde1572 Allow sparse option for Kopia & Restic restore
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-11-28 13:48:09 +00:00
Daniel Jiang
85482aefaf Merge pull request #7117 from allenxu404/issue6567
Add hook status to backup/restore CR
2023-11-28 16:54:11 +08:00
allenxu404
5d1a632be4 Add hook status to backup/restore CR
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-11-28 14:47:31 +08:00
Wenkai Yin(尹文开)
6ac7ff1230 Merge pull request #7130 from qiuming-best/data-mover-recoverbility
Node agent restart enhancement
2023-11-28 14:25:47 +08:00
Ming Qiu
98a56eb5c7 Node agent restart enhancement
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-11-28 05:50:46 +00:00
qiuming
f6ed4558bf Merge pull request #7149 from yanggangtony/fix-test-VeleroInstall
Fix wrong test code for VeleroInstall
2023-11-28 09:59:53 +08:00
Yang Gang
402a61481d [docs] Fix all typos in plugins docs. (#7129)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-27 13:03:01 -05:00
yanggang
9ccb5a14bb Fix wrong test code for VeleroInstall
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-27 11:13:52 +00:00
qiuming
3fdb3ec7c5 Merge pull request #7069 from 27149chen/imporve-discovery-refresh
improve discoveryHelper.Refresh() in restore
2023-11-27 18:02:36 +08:00
lou
179faf3e33 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-11-27 17:39:37 +08:00
Xun Jiang/Bruce Jiang
d336e2812e Merge pull request #6958 from blackpiglet/5156_list_option_fix
Change controller-runtime List option from MatchingFields to ListOpti…
2023-11-27 17:38:12 +08:00
Lyndon-Li
8ab0c017a9 issue 6695: add backup description for data mover
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-27 16:19:34 +08:00
qiuming
ccd3f220ad Merge pull request #7090 from qiuming-best/perf-test-0
Enhance perf test
2023-11-27 16:10:26 +08:00
Ming
507157f812 Add perf test namespace mapping when restoring
Signed-off-by: Ming <mqiu@vmware.com>
2023-11-27 02:11:13 +00:00
Lyndon-Li
1815c1691f Merge branch 'main' into issue-fix-6695 2023-11-27 09:46:22 +08:00
danfengl
4590579105 Update E2E README file to latest
Signed-off-by: danfengl <danfengl@vmware.com>
2023-11-25 12:37:21 +00:00
danfengliu
7320bb7674 Merge pull request #7122 from danfengliu/add-csi-retain-policy-e2e-test
Add E2E test for taking CSI snapshot to PV with retain reclaim policy
2023-11-22 17:35:35 +08:00
qiuming
b276564b95 Merge pull request #7000 from qiuming-best/kopia-parallelism
Make Kopia file parallelism configurable
2023-11-22 12:13:14 +08:00
Ming Qiu
c2d4495efe Merge branch 'main' of https://github.com/qiuming-best/velero into kopia-parallelism 2023-11-22 03:52:20 +00:00
Wenkai Yin(尹文开)
5c958d820d Merge pull request #7100 from blackpiglet/6595_volumeinfo_generate
6595 volumeinfo generate
2023-11-22 11:14:36 +08:00
Ming Qiu
fea22bbbc9 Merge branch 'main' of https://github.com/qiuming-best/velero into kopia-parallelism 2023-11-22 01:42:39 +00:00
Xun Jiang
7f52321772 Generate VolumeInfo.
Remove the CSI VolumeSnapshot lister and the informer.
Add downloading of the VolumeInfo metadata for backup.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-22 09:40:38 +08:00
David Hulick
5e3b5317cd docs: fix link to kubectl port forwarding docs
Signed-off-by: David Hulick <dave.hulick@gmail.com>
2023-11-21 16:38:37 -05:00
danfengl
55a465a941 Add E2E test for taking CSI snapshot to PV with retain reclaim policy
Signed-off-by: danfengl <danfengl@vmware.com>
2023-11-21 07:11:22 +00:00
Tiger Kaovilai
a68ddd458c Close stale issue with not-planned status (#7128)
Instead of closing as completed which would signify work has been done.

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-21 09:24:43 +05:30
Anshul Ahuja
0e53cd0916 RM support for Escaped bool, float, null (#7118)
* RM support for Escaped bool, float, null

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

* fix ci

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

---------

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
2023-11-21 09:18:34 +05:30
Shubham Pampattiwar
e58a7808e0 Merge pull request #7116 from adux6991/fix-docs-typo
Fix typo in documentation
2023-11-20 06:01:42 -08:00
qiuming
b8a5859fe7 Merge pull request #7091 from anshulahuja98/recoverplugin
Don't fail backup/restore on velero server restart in PhaseWaitingFor…
2023-11-20 14:49:15 +08:00
Daniel Jiang
e0edc8ee93 Merge pull request #7107 from yanggangtony/update-configmaps
Fix docs: Use camel case for API objects: configmaps and secrets
2023-11-20 14:48:47 +08:00
Wenkai Yin(尹文开)
e3fb94833d Merge pull request #7115 from reasonerjt/wrap-bia-err
Include plugin name in the error message by operations
2023-11-20 14:48:18 +08:00
Daniel Jiang
ca57756ff6 Include plugin name in the error message by operations
fixes #6512

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-11-20 12:12:02 +08:00
Lyndon-Li
4e4f0aa1da issue 6695: add backup describe for CSI snapshot data movement 02
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-20 12:11:21 +08:00
Lyndon-Li
582be97a63 Merge branch 'main' into issue-fix-6695 2023-11-18 00:12:25 +08:00
Lyndon-Li
b99ac448ae issue 6695: add backup describe for CSI snapshot data movement
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-18 00:11:29 +08:00
Wenkai Yin(尹文开)
939dd7149a Merge pull request #7070 from blackpiglet/6595_interface
Add VolumeInfo metadata structures.
2023-11-17 19:31:29 +08:00
Xun Jiang
b440a4f53f Add VolumeInfo metadata structures and object get method.
Modify design according to comments.
Add PVInfo structure.
Add backup VolumeInfo's object storage's put and get methods.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-17 17:23:47 +08:00
xuda
9c0c7a2a77 Fix typo in documentation 2023-11-17 15:37:24 +08:00
Xun Jiang/Bruce Jiang
c283edf4a5 Merge pull request #7032 from deefdragon/main
Add check for owner references in backup sync, removing if missing
2023-11-17 09:32:50 +08:00
yanggang
c78e8980d8 Use camel case for API objects: configmaps and secrets.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-16 22:17:35 +00:00
Jeffrey Koehler
292aa34a48 move filtering code to separate method, add tests
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-11-16 03:57:36 -06:00
Jeffrey Koehler
8eec6865d1 Check only schedules, and verify UIDs are the same
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-11-16 02:29:56 -06:00
Wenkai Yin(尹文开)
d42505ddd0 Merge pull request #7102 from Lyndon-Li/issue-fix-7068-2
Issue 7068: add a finalizer to protect retained VSC
2023-11-15 17:13:44 +08:00
Lyndon-Li
067984b13c Issue 7068: add a finalizer to protect retained VSC
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-15 16:04:07 +08:00
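    [Editor's note] A sketch of the standard finalizer mechanism this fix relies on, via controller-runtime's controllerutil (the finalizer name is illustrative; the bool return requires controller-runtime >= v0.12): while the finalizer is present, deleting the VolumeSnapshotContent only sets a deletion timestamp, so the retained VSC cannot disappear mid-operation.

    package csi

    import (
        "context"

        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    )

    const vscFinalizer = "velero.io/vsc-protect" // illustrative name

    func protectVSC(ctx context.Context, c client.Client, vsc client.Object) error {
        if controllerutil.AddFinalizer(vsc, vscFinalizer) {
            return c.Update(ctx, vsc)
        }
        return nil
    }

    func releaseVSC(ctx context.Context, c client.Client, vsc client.Object) error {
        if controllerutil.RemoveFinalizer(vsc, vscFinalizer) {
            return c.Update(ctx, vsc)
        }
        return nil
    }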
Wenkai Yin(尹文开)
d345bda3a1 Merge pull request #7081 from ywk253100/231110_sync
Skip syncing the backup which doesn't contain backup metadata
2023-11-15 16:00:06 +08:00
Wenkai Yin(尹文开)
2a533d01bf Merge pull request #7046 from kaovilai/backup-patch-status-unittest
Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize
2023-11-15 15:32:51 +08:00
Wenkai Yin(尹文开)
9b5678f32a Merge pull request #7096 from Lyndon-Li/issue-fix-7094
Issue 7094: fallback to full backup if previous snapshot is not found
2023-11-14 11:45:32 +08:00
Lyndon-Li
50f8acda79 issue 7094: fallback to full backup if previous snapshot is not found
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-14 11:28:09 +08:00
Wenkai Yin(尹文开)
dde06472e5 Merge pull request #7095 from Lyndon-Li/issue-fix-7068
Issue 7068: add a finalizer to protect retained VSC
2023-11-14 10:44:47 +08:00
Lyndon-Li
cb651d0436 issue 7068: add a finalizer to protect retained VSC
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-14 10:18:07 +08:00
Daniel Jiang
e826b70327 Merge pull request #7086 from yanggangtony/fix-design-wrong-reference-link
Fix wrong reference link in design docs.
2023-11-13 14:34:44 +08:00
Anshul Ahuja
dd6ab8c32a Don't fail backup/restore on velero server restart in PhaseWaitingForPluginOperation
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-11-13 11:13:32 +05:30
lyndon-li
a0b8a503c8 Merge pull request #7077 from Lyndon-Li/issue-fix-6693
Issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
2023-11-13 10:30:24 +08:00
yanggang
7fd692eb68 Fix wrong reference link in design docs.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-11-10 22:57:13 +00:00
Lyndon-Li
efc5319c1c Issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-10 12:40:41 +08:00
Wenkai Yin(尹文开)
84c96047b9 Skip syncing the backup which doesn't contain backup metadata
Skip syncing the backup which doesn't contain backup metadata

Fixes #6849

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-11-10 10:22:27 +08:00
Lyndon-Li
2841be7681 Merge branch 'main' into issue-fix-6693 2023-11-10 10:04:27 +08:00
Xun Jiang/Bruce Jiang
cb5ffe2753 Merge pull request #7061 from blackpiglet/6595_backward_compatability
Add DataUpload Result and CSI VolumeSnapshot check for restore PV.
2023-11-10 09:37:19 +08:00
Rémi Verchère
3fa7d29573 doc: add resourcePolicy for schedule (#7079)
Signed-off-by: Rémi Verchère <remi@verchere.fr>
2023-11-09 11:45:58 -05:00
Shubham Pampattiwar
ea7f249e90 Update backup log to reflect appropriate backup phase
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use infof instead of sprintf

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-11-09 04:55:24 -08:00
Lyndon-Li
873197ff50 issue 6693: partially fail restore if CSI snapshot is involved but CSI feature is not ready
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-09 17:37:23 +08:00
qiuming
76e89f7dc5 Merge pull request #7059 from Lyndon-Li/issue-fix-6663
Issue 6663: changes for configurable data path concurrency
2023-11-09 14:37:28 +08:00
Lyndon-Li
db43200cc8 configurable data path concurrency: all in one json
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-08 12:02:02 +08:00
Lyndon-Li
c638ca557e Merge branch 'main' into issue-fix-6663 2023-11-08 10:45:40 +08:00
qiuming
5f7e16b98b Merge pull request #7072 from ywk253100/231108_truncate
[cherry-pick]Truncate the credential file to avoid the change of secret content messing it up
2023-11-08 10:43:17 +08:00
Wenkai Yin(尹文开)
5a10f9090a Truncate the credential file to avoid the change of secret content messing it up
Truncate the credential file to avoid the change of secret content messing it up

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-11-08 09:33:56 +08:00
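    [Editor's note] A minimal sketch, assuming the file is rewritten wholesale: opening with O_TRUNC discards the previous content first, so when the secret shrinks, no trailing bytes of the old credentials survive in the file.

    package credentials

    import "os"

    func writeCredentialFile(path string, content []byte) error {
        f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = f.Write(content)
        return err
    }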
Wenkai Yin(尹文开)
866fbb5cdb Merge pull request #6950 from Lyndon-Li/issue-fix-6663-design
Design for node-agent concurrency
2023-11-08 09:04:05 +08:00
lou
ebb21303ab add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-11-07 19:50:35 +08:00
lou
70483ded90 improve discoveryHelper.Refresh() in restore
Signed-off-by: lou <alex1988@outlook.com>
2023-11-07 19:12:30 +08:00
Xun Jiang
1fb0529d98 Add DataUpload Result and CSI VolumeSnapshot check for restore PV.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-06 22:40:03 +08:00
Lyndon-Li
68579448d6 configurable data path concurrency: UT
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-06 20:29:33 +08:00
Lyndon-Li
262f10ff49 Merge branch 'main' into issue-fix-6663 2023-11-06 16:52:41 +08:00
Lyndon-Li
04a9851ee9 configurable data path concurrency: all in cm
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-06 16:46:13 +08:00
Anshul Ahuja
6b7ce6655d Merge pull request #7022 from allenxu404/i6721
Fix inconsistent behavior of Backup and Restore hook execution
2023-11-06 14:01:30 +05:30
lyndon
11938f9a5e Merge pull request #7051 from blackpiglet/6190_part_3
Remove dependency of generated client part 3
2023-11-06 15:22:02 +08:00
Xun Jiang
56b5e982d9 Remove dependency of generated client part 3
Replace generated discovery client with client-go client.
Remove generated client from PVR action.
Remove generated client from pkg/cmd directory.
Delete the Velero generated client from the client factory.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-06 11:34:39 +08:00
lyndon
d6146ecff4 Merge pull request #7041 from blackpiglet/6190_part_2
Remove dependency of generated client part 2
2023-11-03 17:43:10 +08:00
Xun Jiang
a221a88945 Remove dependency of generated client part 2
Remove dependency of generated client from pkg/cmd/cli/snapshotLocation.
Remove the Velero generated informer from PVB and PVR.
Remove dependency of generated client from pkg/podvolume directory.
Replace generated codec with runtime codec.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-03 17:11:36 +08:00
Tiger Kaovilai
8c727429c4 revert test changes
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 17:06:19 -04:00
Tiger Kaovilai
cd0ad74d31 make update
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:46:15 -04:00
Tiger Kaovilai
6896a1ffe4 update changelog to reflect removed waits
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:22:30 -04:00
Tiger Kaovilai
1c138b8f55 CSIFeatureFlag enable check
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:20:46 -04:00
Tiger Kaovilai
18acf005d6 remove waiting during finalize
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:16:27 -04:00
Tiger Kaovilai
f9e716a8c9 skip this if SnapshotMoveData
https://github.com/vmware-tanzu/velero/pull/7046/files#r1380708644
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 16:14:55 -04:00
Tiger Kaovilai
10245b05de restore: Use warning when Create IsAlreadyExist and Get error (#7004)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 15:53:47 -04:00
Tiger Kaovilai
9311a4269b refactor backup snapshot status updates into UpdateBackupSnapshotsStatus() and run in backup_finalizer_controller
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-02 15:30:35 -04:00
allenxu404
3a3527553a Fix inconsistent behavior of Backup and Restore hook execution
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-11-02 12:31:53 +08:00
lyndon
166a58bddc Merge pull request #6962 from blackpiglet/6595_design
Add the PV backup information design document.
2023-11-02 10:50:56 +08:00
Wenkai Yin(尹文开)
73c948d6bd Merge pull request #6917 from 27149chen/rm-improvement
support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
2023-11-02 10:36:40 +08:00
Xun Jiang
23b9484370 Add the PV backup information design document.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-11-02 10:14:16 +08:00
Tiger Kaovilai
886e074b55 Add PatchResource unit test for backup status
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-11-01 15:28:56 -04:00
Shubham Pampattiwar
705a3bc355 fix typo in documentation (#7043)
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-11-01 11:26:14 -04:00
lou
e30937550e update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-11-01 21:53:30 +08:00
Lyndon-Li
a0edad94db design for node-agent concurrency
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-11-01 11:35:06 +08:00
qiuming
38e1ae0405 Merge pull request #7034 from ywk253100/231030_cred
Read information from the credential specified by BSL
2023-11-01 09:41:25 +08:00
qiuming
e17751fd09 Merge pull request #7038 from Lyndon-Li/issue-fix-7027
Issue 7027: backup exposer -- don't assume first volume as the backup volume
2023-11-01 09:39:09 +08:00
Lyndon-Li
8e442407c3 issue 7027: backup exposer -- don't assume first volume as the backup volume
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-31 12:11:34 +08:00
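    [Editor's note] A sketch of the safer volume lookup (names illustrative): select the backup volume by its PVC claim name rather than taking pod.Spec.Volumes[0], which breaks as soon as another volume (e.g. a projected service-account token) is injected ahead of it.

    package exposer

    import (
        "github.com/pkg/errors"
        corev1 "k8s.io/api/core/v1"
    )

    func findBackupVolume(pod *corev1.Pod, pvcName string) (*corev1.Volume, error) {
        for i := range pod.Spec.Volumes {
            vol := &pod.Spec.Volumes[i]
            if vol.PersistentVolumeClaim != nil && vol.PersistentVolumeClaim.ClaimName == pvcName {
                return vol, nil
            }
        }
        return nil, errors.Errorf("volume for PVC %s not found in pod %s", pvcName, pod.Name)
    }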
Shubham Pampattiwar
03e582cb6c Merge pull request #6995 from kaovilai/kopias3profilecred
kopia/repository/config/aws.go: Set session.Options profile from config
2023-10-30 09:11:15 -07:00
Wenkai Yin(尹文开)
49a85e1636 Read information from the credential specified by BSL
Read information from the credential specified by BSL

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-10-30 17:28:10 +08:00
qiuming
1fcdc20d75 Merge pull request #7003 from mateusoliveira43/fix/make-verify-command
fix: make verify permission error
2023-10-30 16:28:07 +08:00
qiuming
6e703b81ff Merge pull request #7029 from yanggangtony/fix-docs-for-tencent-config
Fix the wrong url for Tencent COS.
2023-10-30 14:30:03 +08:00
Jeffrey Koehler
929af4f734 Add check for owner reference in backup sync, removing if missing
Signed-off-by: Jeffrey Koehler <koehler@streem.tech>
2023-10-29 22:06:14 -05:00
Shubham Pampattiwar
23921e5d29 add description markers for dataupload and datadownload CRDs (#7028)
add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-10-27 11:05:10 -04:00
yanggang
5691371899 Fix the wrong url for Tencent COS.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-27 12:55:40 +01:00
Lyndon-Li
0f765ceef2 Merge branch 'main' into issue-fix-6663 2023-10-27 17:44:17 +08:00
Lyndon-Li
c44a9b8956 issue 6663: changes for configurable data path concurrency
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-27 17:37:29 +08:00
Xun Jiang/Bruce Jiang
9ff4b1e079 Merge pull request #7026 from blackpiglet/6376_fix
Add HealthCheckNodePort deletion logic in Service restore
2023-10-27 16:40:04 +08:00
Xun Jiang
a94918026c Add HealthCheckNodePort deletion logic in Service restore.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-27 14:13:52 +08:00
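    [Editor's note] A minimal sketch of the deletion mechanics using apimachinery's unstructured helpers (function name illustrative; the conditions under which Velero keeps the port, which depend on the service's external traffic policy, are not shown): dropping spec.healthCheckNodePort lets the target cluster allocate a fresh port instead of rejecting one that is already taken.

    package actions

    import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

    func removeHealthCheckNodePort(svc *unstructured.Unstructured) {
        unstructured.RemoveNestedField(svc.Object, "spec", "healthCheckNodePort")
    }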
Shubham Pampattiwar
1e0fc77e4d Fix issue 6913 (#6914)
add changelog file

keep canceling phase const

fix data download as well

address PR feedback

minor fixes

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-10-26 09:39:38 -04:00
Anshul Ahuja
20a1118acf Make configmapref check case insensitive (#6804)
* Make configmapref check case insensitive

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>

* update resourcemodfier test case to validate case

Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>

---------

Signed-off-by: Anshul Ahuja <anshul.ahu@gmail.com>
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-10-26 15:30:21 +05:30
lyndon
638647cb7a Merge pull request #7018 from vmware-tanzu/dependabot/go_modules/google.golang.org/grpc-1.58.3
Bump google.golang.org/grpc from 1.58.2 to 1.58.3
2023-10-26 11:25:30 +08:00
Ming
481cb60493 Make Kopia file parallelism configurable
Signed-off-by: Ming <mqiu@vmware.com>
2023-10-26 02:28:36 +00:00
qiuming
3b22ff3358 Merge pull request #7005 from qiuming-best/kopia-parallelism-design
Design for Velero uploader configuration integration and extensibility
2023-10-26 10:01:55 +08:00
dependabot[bot]
8be1f4beff Bump google.golang.org/grpc from 1.58.2 to 1.58.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.58.2 to 1.58.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.58.2...v1.58.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-25 21:43:35 +00:00
Xun Jiang/Bruce Jiang
45ed3bf613 Record platform limitation of the Kopia block mode uploader in docs. (#7013)
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-25 19:43:46 +05:30
Mateus Oliveira
3bc23aeb84 fixup! fix: make verify permission error
Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-25 08:12:41 -03:00
Mateus Oliveira
cbf849ab4c fix: make verify permission error
Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-25 08:12:41 -03:00
lou
f66016d416 update docs
Signed-off-by: lou <alex1988@outlook.com>
2023-10-25 17:54:20 +08:00
lyndon
30bf6bd28c Merge pull request #7011 from Lyndon-Li/issue-fix-6964-2
Issue 6964: use preparingTimeout for snapshot readiness wait
2023-10-25 11:11:27 +08:00
Lyndon-Li
0eade6c615 issue 6964: use preparingTimeout for snapshot readiness wait
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-25 10:51:08 +08:00
Tiger Kaovilai
d5f238c83c kopia/repository/config/aws.go: Set session.Options profile from config
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-10-24 14:05:47 -04:00
Daniel Jiang
941dd0039f Merge pull request #6968 from blackpiglet/6585_fix
Check whether the action is a CSI action and whether CSI feature is
2023-10-25 00:39:58 +08:00
Daniel Jiang
317db25d20 Merge pull request #6923 from reasonerjt/aws-sdk-v2
Bump up aws sdk to aws-sdk-go-v2
2023-10-24 23:53:16 +08:00
lou
4ead4d6976 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-24 21:44:14 +08:00
Daniel Jiang
b71d2b3898 Bump up aws sdk to aws-sdk-go-v2
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-10-24 17:01:26 +08:00
Wenkai Yin(尹文开)
61d333a31a Merge pull request #6989 from blackpiglet/support_windows_build_main
[cherry-pick][main]Make Windows build skip BlockMode code.
2023-10-24 16:58:03 +08:00
Xun Jiang
908e2c63ba Check whether the action is a CSI action and whether CSI feature is
enabled, before executing the action.

The DeleteItemAction is not checked, because the DIA doesn't have a
method to get the action's plugin name.
This should be OK, because the CSI will check whether the VS and VSC
have a backup name annotation. If the VS and VSC are not handled by
the CSI plugin, then they don't have the annotation.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-24 16:54:38 +08:00
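    [Editor's note] A hypothetical sketch of the gating this commit describes; the plugin-name match is illustrative, not Velero's exact rule, and features.IsEnabled is assumed to be Velero's feature-flag check.

    package framework

    import (
        "strings"

        "github.com/vmware-tanzu/velero/pkg/features"
    )

    func shouldRunAction(pluginName string) bool {
        isCSI := strings.Contains(pluginName, "velero.io/csi")
        return !isCSI || features.IsEnabled("EnableCSI")
    }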
lyndon
e2ec855c4a Merge pull request #6983 from danfengliu/fix-resource-groupname-issue
Fix failure to get backup repo due to missing API group name issue
2023-10-24 15:26:55 +08:00
Ming
a86b3943fe Velero Uploader Configuration Integration and Extensibility
Signed-off-by: Ming <mqiu@vmware.com>
2023-10-24 06:10:03 +00:00
lyndon
27f301cb89 Merge pull request #7001 from Lyndon-Li/bump-to-kopia-0.15.0
Bump kopia to 0.15.0
2023-10-24 08:40:46 +08:00
Orlix
107c55813f Revert PR #6907 as site is not deploying (#6981)
Signed-off-by: OrlinVasilev <ovasilev@vmware.com>
2023-10-23 12:14:26 -04:00
Lyndon-Li
d3a1a83c6d bump to kopia 0.15.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-23 12:03:21 +08:00
Shubham Pampattiwar
b85dc271ef Merge pull request #6978 from yanggangtony/fix-tiny-errors
Fix wrong logs, add missing license file.
2023-10-22 20:52:18 -07:00
Daniel Jiang
5fe53daf21 Merge pull request #6990 from Lyndon-Li/udmrepo-use-region-from-bsl
Issue 6988: udmrepo use region specified in BSL when s3URL is empty
2023-10-20 20:15:36 +08:00
Lyndon-Li
3d841dd8f1 udmrepo use region specified in BSL when s3URL is empty
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-20 19:58:54 +08:00
Xun Jiang
ecc6e1621e Make Windows build skip BlockMode code.
PVC block mode backup and restore introduced some OS-specific
system calls. Those calls are not available on Windows, so
add both non-Windows and Windows versions of the code, and
return an error for block mode on the Windows platform.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-20 19:39:44 +08:00
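    [Editor's note] A sketch of the standard Go build-constraint technique this describes, split across two files (file and function names illustrative): the same function compiles to the real logic everywhere except Windows, and to an error stub on Windows.

    // block_mode_other.go — compiled everywhere except Windows
    //go:build !windows

    package uploader

    func checkBlockModeSupport() error { return nil } // OS-specific system calls live here

    // block_mode_windows.go — compiled only on Windows
    //go:build windows

    package uploader

    import "github.com/pkg/errors"

    func checkBlockModeSupport() error {
        return errors.New("block mode is not supported on the Windows platform")
    }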
danfengl
d2fc9fa1a9 Fix failure to get backup repo due to missing API group name issue
Signed-off-by: danfengl <danfengl@vmware.com>
2023-10-20 01:50:24 +00:00
yanggang
1efd533d0d Fix wrong logs in markDataDownloadsCancel() and add missing license file.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-19 14:04:41 +01:00
Xun Jiang
79c75718ca Change controller-runtime List option from MatchingFields to ListOptions.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-19 17:09:12 +08:00
qiuming
fd8350f919 Merge pull request #6976 from Lyndon-Li/issue-fix-6964
Issue 6964: get volume size from source PVC if it is invalid in VS
2023-10-19 13:53:57 +08:00
Lyndon-Li
329c128279 issue 6964: get volume size from source PVC if it is invalid in VS
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-19 11:50:28 +08:00
lou
d1f5219cbb update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-18 17:05:00 +08:00
Wenkai Yin(尹文开)
19f38f9623 Merge pull request #6947 from 0x113/SGLAB-CLOUDCASA-oidc-auth
Issue #6933: Import auth provider plugins
2023-10-18 16:01:50 +08:00
Sebastian Glab
265d285b1d Import auth provider plugins
Signed-off-by: Sebastian Glab <sglab@catalogicsoftware.com>
2023-10-18 08:53:35 +02:00
qiuming
5ff5073cc3 Add volume types filter in resource policies (#6863)
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-10-16 17:36:54 -04:00
Yang Gang
7ca33f8f12 Add MSI Support for Azure plugin. (#6938)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-16 09:47:53 +05:30
Xun Jiang/Bruce Jiang
b4fb2d9644 Merge pull request #6918 from Ripolin/main
Add WaitForReady flag to check container readiness state before executing a hook
2023-10-15 13:27:34 +08:00
Wenkai Yin(尹文开)
ed441de43c Merge pull request #6953 from blackpiglet/bump_golang
Bump golang version.
2023-10-13 18:23:11 +08:00
Xun Jiang
a726329e82 Bump golang version.
Bump golang version to v1.21.
Bump golang.org/x/net version to v0.17.0 in Velero and Restic.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-13 16:30:23 +08:00
Xun Jiang/Bruce Jiang
9606df624f Merge pull request #6784 from yanggangtony/node-agent-metrics-addr
Fix node-agent missing metrics-addr params to define the server start. #6784
2023-10-13 14:28:45 +08:00
yanggang
069c280f03 Fix node-agent missing metrics-addr params to define the server start.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-13 03:33:18 +01:00
Ripolin
e5af7f5cea Add WaitForReady flag to check container readiness state before executing a hook
Signed-off-by: Ripolin <florent.david@gmail.com>
2023-10-12 20:31:36 +02:00
Shubham Pampattiwar
ad114f8f65 Merge pull request #6723 from sseago/restore-get-perf 2023-10-12 07:57:40 -07:00
Wenkai Yin(尹文开)
84734f1040 Merge pull request #6937 from blackpiglet/release_choco
Update the Velero chocolatey package release procedure.
2023-10-12 15:47:26 +08:00
lyndon
741b696180 Merge pull request #6946 from Lyndon-Li/issue-fix-6668
Issue fix 6668: add a limitation for fs restore parallelism with other types of restore
2023-10-12 14:53:29 +08:00
Lyndon-Li
b14bd2cd75 issue 6668: add a limitation for fs restore parallelism with other types of restores
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-12 11:58:26 +08:00
Shubham Pampattiwar
74ed994e5e Merge pull request #6830 from sseago/retry-generateName
issue #6807: Retry failed create when using generateName
2023-10-11 08:50:14 -07:00
Scott Seago
7750e12151 Perf improvements for existing resource restore
Use informer cache with dynamic client for Get calls on restore
When enabled, also make the Get call before create.

Add server and install parameter to allow disabling this feature,
but enable by default

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-11 10:51:39 -04:00
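    [Editor's note] A sketch of the fast path, assuming a warmed dynamic informer cache (names illustrative, not Velero's actual code): existence checks are answered from the cache, and the API server is reached only when the object must actually be created.

    package restore

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/client-go/tools/cache"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func getOrCreate(ctx context.Context, lister cache.GenericLister,
        c client.Client, obj *unstructured.Unstructured) error {
        _, err := lister.ByNamespace(obj.GetNamespace()).Get(obj.GetName())
        if err == nil {
            return nil // already present in the cluster; no API-server round trip needed
        }
        if !apierrors.IsNotFound(err) {
            return err
        }
        return c.Create(ctx, obj)
    }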
lou
6d89780fb2 add more tests
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 22:33:35 +08:00
Xun Jiang
79e176086c Add some configurations to avoid ArgoCD pruning backups generated from schedule.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 21:06:48 +08:00
Xun Jiang
dbc3ad7453 Update the Velero chocolatey package release procedure.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 20:29:29 +08:00
lou
a607810b13 update design
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 19:11:43 +08:00
lou
19d5bee572 Merge branch 'main' into rm-improvement 2023-10-10 19:02:16 +08:00
lou
65082f33a4 add deserialization tests
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 18:59:45 +08:00
lyndon
b31610157d Merge pull request #6927 from blackpiglet/restricted_rbac
Add a working example for rbac.md.
2023-10-10 16:52:30 +08:00
lou
5932e263c9 update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-10 16:00:46 +08:00
Wenkai Yin(尹文开)
5f71a662a4 Merge pull request #6907 from kaovilai/vmain
Resolve netlify site publish issues due to missing directory `site/site/public`
2023-10-10 15:24:19 +08:00
Xun Jiang
98a383d94a Add a working example for rbac.md.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-10 13:56:50 +08:00
Wenkai Yin(尹文开)
5961253768 Merge pull request #6926 from Lyndon-Li/backup-pod-spread-evenly
Issue 6734: spread backup pod evenly
2023-10-10 10:05:41 +08:00
Lyndon-Li
0a6c89abc6 Merge branch 'main' into backup-pod-spread-evenly 2023-10-10 09:45:52 +08:00
Scott Seago
09be1f7995 issue #6807: Retry failed create when using generateName
When creating resources with generateName, apimachinery
does not guarantee uniqueness when it appends the random
suffix to the generateName stub, so if it fails with an
already-exists error, we need to retry.

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-09 17:38:37 -04:00
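The retry described above amounts to looping on the create while the error is AlreadyExists; a hedged sketch using the dynamic client (the attempt count is arbitrary):

```go
import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// createWithGenerateName retries a create that uses metadata.generateName,
// since the server-side random suffix can collide with an existing name.
func createWithGenerateName(ctx context.Context, c dynamic.ResourceInterface, obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
	const maxAttempts = 5
	var created *unstructured.Unstructured
	var err error
	for i := 0; i < maxAttempts; i++ {
		created, err = c.Create(ctx, obj, metav1.CreateOptions{})
		if err == nil || !apierrors.IsAlreadyExists(err) {
			break // success, or an error that retrying cannot fix
		}
	}
	return created, err
}
```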
Shubham Pampattiwar
541425ba97 Merge pull request #6844 from sseago/pr-standards 2023-10-09 14:33:06 -07:00
Mateus Oliveira
1c1054dedc doc: Alert that plugins run as separate processes, when turning on debug logs (#6882)
* doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

* fixup! doc: Alert that plugins run as binaries when turning on debug logs

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>

---------

Signed-off-by: Mateus Oliveira <msouzaol@redhat.com>
2023-10-09 11:12:36 -04:00
Yang Gang
e5e99c75a0 Fix dep package description and CI word spelling. (#6924)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-09 12:12:14 +05:30
Lyndon-Li
d8d66381e7 issue 6734: spread backup pod evenly
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-08 20:01:12 +08:00
lou
e880c0d01b update after review
Signed-off-by: lou <alex1988@outlook.com>
2023-10-07 16:33:33 +08:00
Raghuram Devarakonda
b7cc62d077 Document about item action plugin ordering. (#6719)
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2023-10-06 16:11:24 -04:00
Shubham Pampattiwar
0d4e61eb24 Merge pull request #6649 from sseago/orphaned-partially-failed 2023-10-06 10:35:57 -07:00
Scott Seago
cd7e2d6fcc Expanded PR section of code standards doc
Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-04 18:07:02 -04:00
lou
58d8425952 fix lint
Signed-off-by: lou <alex1988@outlook.com>
2023-10-05 01:19:05 +08:00
lou
06ed9dcc71 add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-10-04 16:02:23 +08:00
Guang Jiong Lou
7f73acab16 Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797)
* Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers

Signed-off-by: lou <alex1988@outlook.com>

* add changelog

Signed-off-by: lou <alex1988@outlook.com>

* add conditional patches

Signed-off-by: lou <alex1988@outlook.com>

* update design

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-10-04 09:29:09 +05:30
Shubham Pampattiwar
5ab66728e2 Merge pull request #6843 from yanggangtony/clean-and-addlicenses
Add missing file licenses and do some cleanup.
2023-10-02 12:11:28 -07:00
Tiger Kaovilai
09f7744e33 remove site/ prefix from publish
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-10-02 15:07:40 -04:00
Shubham Pampattiwar
cf1aebea04 Merge pull request #6901 from kaovilai/dcosignoff
Fix code-standards url rendering for `https://developercertificate.org/)`
2023-10-02 10:12:38 -07:00
Raghuram Devarakonda
13019b943a Document pod volume host path setting for Nutanix. (#6902)
Signed-off-by: Raghuram Devarakonda <draghuram@gmail.com>
2023-10-02 09:57:11 -04:00
Tiger Kaovilai
c51b599845 Fix code-standards url rendering for https://developercertificate.org/)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-29 13:55:32 -04:00
Yang Gang
fd67ecb688 Code cleanup for backup cmd client. (#6750)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-29 12:23:12 -04:00
Wenkai Yin(尹文开)
0d79afe049 Replace the base image with paketobuildpacks image (#6883)
Replace the base image with paketobuildpacks image

Fixes #6851

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-29 12:19:51 -04:00
yanggang
11745809c4 Add missing file licenses and do some cleanup.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-29 04:25:01 +01:00
David Zaninovic
8e01d1b9be Add support for block volumes (#6680)
Signed-off-by: David Zaninovic <dzaninovic@catalogicsoftware.com>
2023-09-28 09:44:46 -04:00
danfengliu
a22f28e876 Merge pull request #6895 from blackpiglet/fix_main_push_action_failure
Add go clean in Dockerfile and action.
2023-09-28 21:16:33 +08:00
Xun Jiang
64595cc0f7 Add go clean in Dockerfile and action.
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-28 20:30:05 +08:00
qiuming
c6191797b4 Merge pull request #6884 from ywk253100/230928_repo_init
Create the backup repository only when it doesn't exist
2023-09-28 17:36:44 +08:00
qiuming
dffe4f85ce Merge pull request #6893 from Lyndon-Li/fix-main-ci-out-of-space-problem
Fix CI out of disk space problem
2023-09-28 17:36:14 +08:00
Lyndon-Li
24e37c5115 fix CI out of disk space problem
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-28 17:13:27 +08:00
Wenkai Yin(尹文开)
61a6c1ba2a Create the backup repository only when it doesn't exist
When preparing a backup repository, Velero tries to connect to it and, if that fails, creates it. The repository status always records the error reported by creation, but the real cause may be the connect operation. This is confusing and hard to debug

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-28 14:53:59 +08:00
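The fixed flow reads roughly like the following sketch; `connect`, `create`, and `isNotFound` are hypothetical stand-ins for the repository provider calls:

```go
import "context"

// ensureRepo connects first and only creates the repository when the
// connect failure indicates it does not exist, so the recorded error
// comes from the operation that actually failed.
func ensureRepo(ctx context.Context) error {
	err := connect(ctx) // hypothetical provider call
	if err == nil {
		return nil // repository exists and is reachable
	}
	if !isNotFound(err) { // hypothetical error classifier
		return err // a real connect problem; don't mask it with a create error
	}
	if err := create(ctx); err != nil { // hypothetical provider call
		return err
	}
	return connect(ctx)
}
```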
lyndon
af43d96ac9 Merge pull request #6885 from Lyndon-Li/issue-fix-6880
Issue 6880: set ParallelUploadAboveSize as MaxInt64
2023-09-28 14:24:08 +08:00
Lyndon-Li
3e3ffec7cd issue 6880: set ParallelUploadAboveSize as MaxInt64
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-28 12:34:30 +08:00
lyndon
73ea00b477 issue 6861: fill repoIdentifier only for restic repo (#6872)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-27 16:49:35 -04:00
Wenkai Yin(尹文开)
563f1ccee1 Merge pull request #6475 from nilesh-akhade/main
Add `--or-selector` for backup and restore command
2023-09-27 20:09:07 +08:00
lyndon
b6b320c85b Merge pull request #6875 from Lyndon-Li/issue-fix-6859
Issue 6859: move plugin depending podvolume functions to util pkg
2023-09-27 11:21:24 +08:00
Xun Jiang/Bruce Jiang
66f8e4fc68 Merge pull request #6874 from OrlinVasilev/dave-emeratus
Move Dave Smith-Uchida to Emeritus Maintainer
2023-09-27 03:02:26 +08:00
Lyndon-Li
2e71cffe0e issue: move plugin depending podvolume functions to util pkg
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-26 16:39:33 +08:00
OrlinVasilev
df0c6724c6 Move Dave Smith-Uchida to Emeritus Maintainer
Signed-off-by: OrlinVasilev <ovasilev@vmware.com>
2023-09-26 10:58:31 +03:00
Shubham Pampattiwar
c3ec7b71c5 Merge pull request #6715 from nilesh-akhade/metric
Remove schedule-related metrics on schedule delete
2023-09-25 10:24:04 -07:00
lou
d8b9328310 support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
Signed-off-by: lou <alex1988@outlook.com>
2023-09-25 18:00:18 +08:00
Xun Jiang/Bruce Jiang
4bf87c01ea Add some description of the update existing policy, stating that it works in a best-effort way. (#6856)
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-22 14:18:42 -04:00
Wenkai Yin(尹文开)
d3e5bb7451 Merge pull request #6838 from yanggangtony/fix-metrics-backup_last_status
Change the default value of the velero_backup_last_status metrics.
2023-09-20 10:18:52 +08:00
lyndon
b42fb23991 Merge pull request #6839 from Lyndon-Li/multiple-snapshot-class-doc
Doc for multiple snapshot class
2023-09-19 16:24:52 +08:00
Lyndon-Li
f73d9dcaed doc for multiple snapshot class
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-19 16:09:35 +08:00
yanggang
cda722cf9d Fix the backup_last_status metric not reporting the right value when the schedule goes down unexpectedly.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-19 15:25:21 +08:00
Wenkai Yin(尹文开)
63c6a48f92 Merge pull request #6686 from ywk253100/230612_kopia
Make Kopia support Azure AD
2023-09-19 14:31:14 +08:00
Wenkai Yin(尹文开)
b598150cd1 Support setting CA cert for BSL
Support setting CA cert for BSL

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-19 11:28:05 +08:00
Wenkai Yin(尹文开)
3a291e368a Make Kopia support Azure AD
This commit introduces our own Azure storage provider by wrapping Kopia's implementation rather than contributing to upstream based on the following considerations:
1. Velero needs the capability to interact with the repository concurrently while Kopia doesn't; supporting this upstream would increase the complexity of Kopia
2. The configuration items provided by Velero and Kopia conflict, e.g. Velero supports customizing the storage account URI, which is a full path, while Kopia supports customizing the storage account domain, which is part of the URI. We would need to consider backward compatibility and the upgrade case if we contributed to upstream, which needs extra effort
3. Contributing to upstream is a longer cycle when we need to introduce new changes. With this commit, we no longer depend on upstream for the Azure storage provider part, and it is easy for us to maintain

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-19 11:28:04 +08:00
lyndon
5af664d361 bump kopia to v0.14 (#6833)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-18 21:05:21 +08:00
Daniel Jiang
cf3cb9c4ed Merge pull request #6712 from kaovilai/jobs-label-k8s1.27
On restore, delete Kubernetes 1.27 job controller uid label
2023-09-18 16:49:50 +08:00
lyndon
8481b4c035 Merge pull request #6816 from yanggangtony/fix-docs
Fix some typos in the docs.
2023-09-18 15:07:43 +08:00
lyndon
b3df028e83 Merge pull request #6815 from AgustinRamiroDiaz/main
Typo: remove double space
2023-09-18 12:06:27 +08:00
lyndon
c85638ddb6 Merge pull request #6827 from Lyndon-Li/issue-fix-6786
Issue 6786: always delete VSC regardless of the deletion policy
2023-09-15 14:18:38 +08:00
Lyndon-Li
53489b10ad issue 6786: always delete VSC regardless of the deletion policy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-15 12:10:20 +08:00
Wenkai Yin(尹文开)
185a95585a Set data mover related properties for schedule (#6824)
Set data mover related properties for schedule

Fixes #6820

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-14 18:14:06 +08:00
lyndon
3d4d184a8d Merge pull request #6822 from reasonerjt/update-kopia-repo
Switch the kopia repo to new org
2023-09-14 11:53:05 +08:00
Daniel Jiang
b7bc9a31cb Switch the kopia repo to new org
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-09-14 11:18:11 +08:00
yanggang
4d1c23adfa Fix some typos in the docs.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-13 23:04:08 +08:00
Agustín Díaz
ff45be6fdd Typo: remove double space
Signed-off-by: Agustín Díaz <agustin.ramiro.diaz@gmail.com>
2023-09-13 10:46:28 -03:00
Qi Xu
558a0eef03 Add doc changes after rc1 to v1.12 docs (#6812)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-09-13 18:01:01 +08:00
Clever Hu
9b1cffc007 check pod status before hook (#5211)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
Co-authored-by: cleverhu <shouping.hu@daocloud.io>
2023-09-13 14:49:46 +08:00
qiuming
402703f226 [Cherry-Pick] Optimize removing the finalizer regardless of whether the dataupload/datadownload CR has been deleted or not (#6808)
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-09-12 11:33:33 -04:00
qiuming
8a366c6924 Merge pull request #6798 from yanggangtony/clean-some-code
Fix issue #6781, and some code cleanup.
2023-09-12 14:56:27 +08:00
qiuming
c9fde84586 Merge pull request #6779 from yanggangtony/fix-log-ns-name
Keep the ns/name log info consistent with other modules.
2023-09-12 14:55:56 +08:00
Yang Gang (成都)
ec11a5a4cc code cleanup for repository (#6768)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-12 14:43:28 +08:00
yanggang
c97b31363d Fix some wrong logs and clean up code.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-11 13:38:32 +08:00
Guang Jiong Lou
246831de7b use old namespace in resource modifier (#6724)
* use old namespace in resource modifier

Signed-off-by: lou <alex1988@outlook.com>

* add changelog

Signed-off-by: lou <alex1988@outlook.com>

* update docs

Signed-off-by: lou <alex1988@outlook.com>

* updated after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-09-08 15:29:46 +05:30
lyndon
a4b5b0a79e add csi snapshot data mover doc (#6637)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 17:17:42 +08:00
lyndon
2348099a73 Merge pull request #6788 from Lyndon-Li/issue-fix-6748-3
Fix issue 6748 [2]
2023-09-08 14:57:14 +08:00
lyndon
682422772a Merge pull request #6790 from Lyndon-Li/issue-fix-6785
Fix issue 6785
2023-09-08 14:48:41 +08:00
Lyndon-Li
13d61c27a6 fix issue 6785
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 12:34:12 +08:00
Lyndon-Li
9895428765 fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 09:14:30 +08:00
lyndon
cddc89ea92 Merge pull request #6783 from kaovilai/patch-1
Show yaml example of repository password: file-system-backup.md
2023-09-07 17:44:57 +08:00
Tiger Kaovilai
d714c3c237 Show yaml example of repository password: file-system-backup.md
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-06 16:32:36 -04:00
yanggang
76b6077683 Keep the ns/name log info consistent with other modules.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-06 18:40:10 +08:00
Xun Jiang/Bruce Jiang
f72afc8a5a Merge pull request #6760 from blackpiglet/6752_fix
Fix #6752: add namespace exclude check.
2023-09-06 15:44:20 +08:00
Xun Jiang
79b810ed25 Fix #6752: add namespace exclude check.
Add PSA audit and warn labels.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-06 14:44:30 +08:00
Daniel Jiang
a6d61ec5f6 Merge pull request #6770 from ywk253100/230906_restore
[cherry-pick]Update restore controller logic for restore deletion
2023-09-06 12:06:04 +08:00
qiuming
49bb998e59 Merge pull request #6765 from Lyndon-Li/issue-fix-6748
Fix issue 6748
2023-09-06 11:12:44 +08:00
Wenkai Yin(尹文开)
da6ac026d1 Update restore controller logic for restore deletion
1. Skip deleting the restore files from storage if the backup/BSL is not found
2. Allow deleting the restore files from storage even when the BSL is read-only

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-06 09:19:42 +08:00
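As a control-flow sketch of the two rules above (getBackup, getBSL, and deleteRestoreFiles are hypothetical helpers, not Velero's actual controller code):

```go
import (
	"context"

	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// deleteRestoreFilesFor skips storage cleanup when the backup or BSL is
// gone, and proceeds even for a read-only BSL.
func deleteRestoreFilesFor(ctx context.Context, restore *velerov1.Restore) error {
	backup, err := getBackup(ctx, restore.Spec.BackupName) // hypothetical helper
	if apierrors.IsNotFound(err) {
		return nil // backup is gone; nothing to locate in storage
	} else if err != nil {
		return err
	}
	bsl, err := getBSL(ctx, backup.Spec.StorageLocation) // hypothetical helper
	if apierrors.IsNotFound(err) {
		return nil // BSL is gone; skip the file deletion too
	} else if err != nil {
		return err
	}
	// Deliberately no check of bsl.Spec.AccessMode here: deletion is
	// allowed even when the BSL is read-only.
	return deleteRestoreFiles(ctx, bsl, restore.Name) // hypothetical helper
}
```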
lyndon
8cb04d4f69 Merge pull request #6751 from Lyndon-Li/issue-fix-6647
Fix issue 6647
2023-09-06 09:03:00 +08:00
Lyndon-Li
d13a23364f fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-05 19:29:28 +08:00
lyndon
c9e1ade1f7 fix issue 6753 (#6757)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-05 10:58:28 +08:00
Lyndon-Li
778feba3ae fix issue 6647
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-04 16:55:36 +08:00
Daniel Jiang
8d3a67544d Merge pull request #6726 from yanggangtony/add-license-velero-helper
Add license notes for velero-helper.
2023-09-04 14:55:51 +08:00
Anshul Ahuja
24abbdcc02 Add anshulahuja98 maintainer details (#6737)
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-09-04 14:54:06 +08:00
Yang Gang (成都)
25898305ef delete unused schema package and params. (#6716)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-04 14:50:10 +08:00
lyndon
b9b2c88c5b Merge pull request #6738 from Lyndon-Li/issue-fix-6733
Fix issue 6733
2023-09-01 17:10:29 +08:00
lyndon
1615cfd7f3 fix issue 6709 (#6741)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 16:52:24 +08:00
qiuming
f26ec9043a Fix kopia snapshot policy not working (#6739)
Signed-off-by: Ming <mqiu@vmware.com>
2023-09-01 16:21:43 +08:00
Lyndon-Li
c4443d506c fix issue 6733
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 15:49:13 +08:00
qiuming
0e5022254f [Cherry-pick Main] Fix velero uninstall bug (#6729)
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-31 16:15:24 +08:00
yanggang
f408b9f6c4 Add license notes for velero-helper.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-08-31 14:07:34 +08:00
Guang Jiong Lou
5dd7c5cd46 add label selector in Resource Modifiers (#6704)
* add label selector in resource modifier

Signed-off-by: lou <alex1988@outlook.com>

* add ut

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
2023-08-31 10:36:59 +05:30
Xun Jiang/Bruce Jiang
db6784aa81 Merge pull request #6674 from danfengliu/monitor-velero-info
monitor velero logs and fix E2E issues
2023-08-29 10:30:51 +08:00
qiuming
499ee7c5d1 Merge pull request #6717 from qiuming-best/main
[Cherry-Pick main] make velero uninstall backward compatible
2023-08-29 10:19:59 +08:00
Ming
85d5785d68 [Cherry-Pick main] make velero uninstall backward compatible
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-29 01:07:41 +00:00
Nilesh Akhade
c7c441364c Remove schedule-related metrics on schedule delete
Signed-off-by: Nilesh Akhade <nakhade@catalogicsoftware.com>
2023-08-28 20:52:32 +05:30
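With client_golang, removing per-schedule series on delete is a matter of deleting the label values; the vector names below are illustrative, not Velero's exact metric definitions:

```go
import "github.com/prometheus/client_golang/prometheus"

// removeScheduleMetrics drops the series labeled with the deleted
// schedule's name so stale values stop being exported.
func removeScheduleMetrics(attempts *prometheus.CounterVec, lastStatus *prometheus.GaugeVec, scheduleName string) {
	attempts.DeleteLabelValues(scheduleName)
	lastStatus.DeleteLabelValues(scheduleName)
}
```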
Tiger Kaovilai
c5aad9e488 Remove legacy label version check, to be added back when version is known
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 11:08:44 -04:00
Tiger Kaovilai
f6e8c208ad changelog
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 10:45:55 -04:00
Tiger Kaovilai
7d3d818f93 Handle 1.27 k8s job label changes
per  0e86fa5115/CHANGELOG/CHANGELOG-1.27.md (L1768)

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-08-28 10:42:09 -04:00
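A sketch of the cleanup on a restored Job (not necessarily Velero's exact restore-item action): both the legacy label and the `batch.kubernetes.io/`-prefixed one introduced in Kubernetes 1.27 are removed so the job controller can regenerate them:

```go
import batchv1 "k8s.io/api/batch/v1"

// stripJobControllerUID removes the controller-uid labels from a Job's
// labels, selector, and pod template, letting the job controller set
// fresh values after restore.
func stripJobControllerUID(job *batchv1.Job) {
	for _, key := range []string{"controller-uid", "batch.kubernetes.io/controller-uid"} {
		delete(job.Labels, key) // delete on a nil map is a no-op
		if job.Spec.Selector != nil {
			delete(job.Spec.Selector.MatchLabels, key)
		}
		delete(job.Spec.Template.Labels, key)
	}
}
```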
danfengl
15be42f47b monitor velero logs and fix E2E issues
1. Capture Velero pod log and K8S cluster event;
2. Fix the wrong storageclass yaml file path issue caused by the perf test;
3. Fix the change-storageclass test issue that there is no SC named 'default' in EKS clusters;
4. Support AWS credential as config format;
5. Support more E2E script input parameters like standby cluster plugins and provider.

Signed-off-by: danfengl <danfengl@vmware.com>
2023-08-28 05:53:32 +00:00
lyndon
831be07dd3 fix issue 6391 (#6702)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-08-25 16:36:41 +08:00
qiuming
164431b2b3 Merge pull request #6689 from qiuming-best/uninstall-fix
Fix failure to delete dataupload/datadownload when uninstalling Velero
2023-08-25 11:09:47 +08:00
Xun Jiang/Bruce Jiang
497543774c Merge pull request #6618 from shubham-pampattiwar/restic-pass-doc
Add note for backup repository password configuration
2023-08-24 14:56:32 +08:00
Shubham Pampattiwar
c7422a207a add note for backup repository password configuration
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

address PR feedback

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

reword the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

change FS backups to normal backups in the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-08-23 20:40:08 -07:00
Ming
7f3b7fe853 Fix failure to delete dataupload/datadownload when uninstalling Velero
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-24 03:30:28 +00:00
Daniel Jiang
3e613862e6 Merge pull request #6635 from 27149chen/skip-subresource
skip subresource in resource discovery
2023-08-22 13:39:12 +08:00
Xun Jiang/Bruce Jiang
8d0a8bac34 Update changelogs/unreleased/6649-sseago
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2023-08-22 10:37:59 +08:00
Xun Jiang/Bruce Jiang
a62f2fa1a3 Merge pull request #6653 from yanggangtony/fix-backup-controller-err-check
fix backup_controller when credentials to volume snapshot location sh…
2023-08-21 17:12:17 +08:00
yanggang
46ef54e80a fix backup_controller when credentials for the volume snapshot location show an error.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-08-15 19:36:07 +08:00
Scott Seago
441a32a861 Deal with PartiallyFailed orphaned backups as well as Completed ones
Fixes https://github.com/vmware-tanzu/velero/issues/6648

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-08-14 13:40:32 -04:00
lou
0f9e582fd9 add changelog
Signed-off-by: lou <alex1988@outlook.com>
2023-08-11 10:05:23 +08:00
lou
dc83981871 skip subresource in resource discovery
Signed-off-by: lou <alex1988@outlook.com>
2023-08-10 19:13:25 +08:00
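In discovery output, subresources show up as names containing a slash (e.g. `pods/status`), so skipping them can be sketched as a simple filter; `filterSubresources` is illustrative:

```go
import (
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// filterSubresources keeps only top-level resources from a discovery
// list; entries like "pods/status" or "deployments/scale" are skipped.
func filterSubresources(list *metav1.APIResourceList) []metav1.APIResource {
	out := make([]metav1.APIResource, 0, len(list.APIResources))
	for _, r := range list.APIResources {
		if strings.Contains(r.Name, "/") {
			continue // a subresource, not independently backed up
		}
		out = append(out, r)
	}
	return out
}
```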
Nilesh Akhade
d9a7e2b6ca Add 'orLabelSelector' for backup, restore command
Signed-off-by: Nilesh Akhade <nakhade@catalogicsoftware.com>
2023-07-19 16:16:35 +05:30
566 changed files with 30688 additions and 5266 deletions

View File

@@ -16,6 +16,7 @@ reviewers:
- qiuming-best
- shubham-pampattiwar
- Lyndon-Li
- anshulahuja98
tech-writer:
- sseago

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.9'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.9'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
@@ -72,7 +72,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.9'
id: go
- name: Check out the code
uses: actions/checkout@v2

View File

@@ -10,7 +10,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.9'
id: go
- name: Check out the code
uses: actions/checkout@v2
@@ -24,7 +24,7 @@ jobs:
- name: Make ci
run: make ci
- name: Upload test coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out

View File

@@ -18,7 +18,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.9'
id: go
- uses: actions/checkout@v3
@@ -48,7 +48,10 @@ jobs:
version: latest
- name: Build
run: make local
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test
@@ -73,7 +76,7 @@ jobs:
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker image prune -a --force
docker system prune -a --force
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}

View File

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3
- uses: actions/stale@v6.0.1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."

View File

@@ -16,6 +16,7 @@ If you're using Velero and want to add your organization to this list,
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/static/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
## Success Stories
Below is a list of adopters of Velero in **production environments** that have
@@ -64,6 +65,9 @@ Replicated uses the Velero open source project to enable snapshots in [KOTS][101
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a simple, scalable, cloud-native solution providing data protection and disaster recovery as a service. This solution is built using Kubernetes for protecting Kubernetes clusters.<br>
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
@@ -118,3 +122,6 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[103]: https://cloudcasa.io/
[104]: https://www.catalogicsoftware.com/
[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview

View File

@@ -1,7 +1,9 @@
## Current release:
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.13.md][23]
## Older releases:
* [CHANGELOG-1.12.md][22]
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.10.md][20]
* [CHANGELOG-1.9.md][19]
* [CHANGELOG-1.8.md][18]
@@ -24,6 +26,8 @@
* [CHANGELOG-0.3.md][1]
[23]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.13.md
[22]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.12.md
[21]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.11.md
[20]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.10.md
[19]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.9.md

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.20-bullseye as velero-builder
FROM --platform=$BUILDPLATFORM golang:1.21.9-bookworm as velero-builder
ARG GOPROXY
ARG BIN
@@ -43,10 +43,11 @@ RUN mkdir -p /output/usr/bin && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.20-bullseye as restic-builder
FROM --platform=$BUILDPLATFORM golang:1.21.9-bookworm as restic-builder
ARG BIN
ARG TARGETOS
@@ -65,10 +66,11 @@ COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
# Velero image packing section
FROM gcr.io/distroless/base-nossl-debian11:nonroot
FROM paketobuildpacks/run-jammy-tiny:0.2.19
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
@@ -76,5 +78,5 @@ COPY --from=velero-builder /output /
COPY --from=restic-builder /output /
USER nonroot:nonroot
USER cnb:cnb

View File

@@ -4,16 +4,16 @@
## Maintainers
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|-------------------------------------------|
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [Kasten](https://github.com/kastenhq/) |
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -25,12 +25,12 @@
* Carlisia Thompson ([carlisia](https://github.com/carlisia))
* Bridget McErlean ([zubron](https://github.com/zubron))
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Architect | Dave Smith-Uchida [dsu-igeek](https://github.com/dsu-igeek) |
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |

View File

@@ -42,6 +42,7 @@ The following is a list of the supported Kubernetes versions for each Velero ver
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|----------------------------------------|
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
| 1.10 | 1.18-latest | 1.22.5, 1.23.8, 1.24.6 and 1.25.1 |

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.20 as tilt-helper
FROM golang:1.21.9 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -100,7 +100,7 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Enable staticcheck linter. (#5788, @blackpiglet)
* Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best)
* Bump up Restic version to 0.15.0 (#5784, @qiuming-best)
* Add File system backup related matrics to Grafana dashboard
* Add File system backup related metrics to Grafana dashboard
- Add metrics backup_warning_total for record of total warnings
- Add metrics backup_last_status for record of last status of the backup (#5779, @allenxu404)
* Design for Handling backup of volumes by resources filters (#5773, @qiuming-best)

View File

@@ -0,0 +1,209 @@
## v1.13.2
### 2024-04-17
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.2
### Container Image
`velero/velero:v1.13.2`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### All changes
* Bump up the versions of several Kubernetes-related libs (#7577, @ywk253100)
* Fix issue #7535, add the MustHave resource check during item collection and item filter for restore (#7586, @Lyndon-Li)
* Bump Golang version, and bump protobuf version (#7606, @blackpiglet)
## v1.13.1
### 2024-03-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.1
### Container Image
`velero/velero:v1.13.1`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### All changes
* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7459, @Lyndon-Li)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7399, @kaovilai)
* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
## v1.13
### 2024-01-10
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.0
### Container Image
`velero/velero:v1.13.0`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### Highlights
#### Resource Modifier Enhancement
Velero introduced the Resource Modifiers in v1.12.0. This feature allows users to specify a ConfigMap with a set of rules to modify the resources during restoration. However, only JSON Patch is supported when creating the rules, and JSON Patch has some limitations, so it cannot cover all use cases. In v1.13.0, Velero adds new support for JSON Merge Patch and Strategic Merge Patch, which provide more power and flexibility and allow users to use the same ConfigMap to apply patches on the resources. More design details can be found in the [Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/merge-patch-and-strategic-in-resource-modifier.md) design. For instructions on how to use the feature, please refer to the [Resource Modifiers](https://velero.io/docs/v1.13/restore-resource-modifiers/) doc.
#### Node-Agent Concurrency
Velero data movement activities from fs-backups and CSI snapshot data movements run in the Velero node-agent, so they may be hosted by every node in the cluster and consume resources (i.e. CPU, memory, network bandwidth) there. With v1.13, users are allowed to configure how many data movement activities (a.k.a. loads) run in each node, globally or per node, so that users can better balance the performance of Velero data movement activities against the resource consumption in the cluster. For more information, check the [Node-Agent Concurrency](https://velero.io/docs/v1.13/node-agent-concurrency/) document.
#### Parallel Files Upload Options
Velero now supports configurable options for parallel files upload when using the Kopia uploader to do fs-backups or CSI snapshot data movements, which makes speeding up backups possible.
For more information, please check [Here](https://velero.io/docs/v1.13/backup-reference/#parallel-files-upload).
#### Write Sparse Files Options
If using fs-restore or CSI snapshot data movements, writing sparse files during restore is supported. For more information, please check [Here](https://velero.io/docs/v1.13/restore-reference/#write-sparse-files).
#### Backup Describe
In v1.13, a Backup Volumes section is added to the velero backup describe command output. The Backup Volumes section describes information for all the volumes included in the backup across the various backup types, i.e. native snapshot, fs-backup, CSI snapshot, and CSI snapshot data movement. In particular, velero backup describe now supports showing the information of CSI snapshot data movements, which is not supported in v1.12.
Additionally, the backup describe command will not check the EnableCSI feature gate from the client side, so if a backup has volumes with CSI snapshots or CSI snapshot data movement, the backup describe command always shows the corresponding information in its output.
#### Backup's new VolumeInfo metadata
Create a new metadata file in the backup repository's backup name sub-directory to store information about the PVCs and PVs included in the backup. The information includes the backing-up method of the PVC and PV data, snapshot information, and status. The VolumeInfo metadata file determines how the PV resource should be restored. Velero downstream software can also use this metadata file to get a summary of the backup's volume data information.
#### Enhancement for CSI Snapshot Data Movements when Velero Pod Restart
When performing backup and restore operations, enhancements have been implemented so that, for Velero server pods or node agents, the current backup or restore process is not stuck or interrupted after a restart caused by certain exceptional circumstances.
#### New status fields added to show hook execution details
Hook execution status is now included in the backup/restore CR status and displayed in the backup/restore describe command output. Specifically, it will show the number of hooks which attempted to execute under the HooksAttempted field and the number of hooks which failed to execute under the HooksFailed field.
#### AWS SDK Bump Up
Bump up AWS SDK for Go to version 2, which offers significant performance improvements in CPU and memory utilization over version 1.
#### Azure AD/Workload Identity Support
Azure AD/Workload Identity is the recommended approach to do the authentication with Azure services/AKS. Velero has introduced support for Azure AD/Workload Identity on the Velero Azure plugin side in previous releases, and in v1.13.0 Velero adds new support for Kopia operations (file system backup/data mover/etc.) with Azure AD/Workload Identity.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.21.6.
* Bump several dependent libraries to new versions.
* Bump Kopia to v0.15.0.
### Breaking changes
* Backup describe command: due to the backup describe output enhancement, some existing information (i.e. the output for native snapshot, CSI snapshot, and fs-backup) has been moved to the Backup Volumes section with some format changes.
* API type changes: changes the field [DataMoverConfig](https://github.com/vmware-tanzu/velero/blob/v1.13.0/pkg/apis/velero/v2alpha1/data_upload_types.go#L54) in DataUploadSpec from `*map[string]string` to `map[string]string`
* Velero install command: due to the issue [#7264](https://github.com/vmware-tanzu/velero/issues/7264), v1.13.0 introduces a breaking change that makes the informer cache enabled by default, to keep the actual behavior consistent with the help message (the informer cache was disabled by default before the change).
### Limitations/Known issues
* The backup's VolumeInfo metadata doesn't have the information updated in the async operations. This function could be supported in the v1.14 release.
### Note
* Velero introduces the informer cache which is enabled by default. The informer cache improves the restore performance but may cause higher memory consumption. Increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero if you get the OOM error.
### Deprecation announcement
* The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They are put in the Velero repository's pkg/generated directory. According to the n+2 support policy, the deprecated code is kept for two more releases. The pkg/generated directory should be deleted in the v1.15 release.
* After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support the backup generated by the older version of Velero, the old logic is also kept. The support for the backup without the VolumeInfo metadata file will be kept for two releases. The support logic will be deleted in the v1.15 release.
### All Changes
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck (#7336, @kaovilai)
* Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
* Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)
* Fix issue #7244. By the end of the upload, check the outstanding incomplete snapshots and delete them by calling ApplyRetentionPolicy (#7245, @Lyndon-Li)
* Adjust the newline output of resource list in restore describer (#7238, @allenxu404)
* Remove the redundant newline in backup describe output (#7229, @allenxu404)
* Fix issue #7189, data mover generic restore - don't assume the first volume as the restore volume (#7201, @Lyndon-Li)
* Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content. (#7184, @blackpiglet)
* Refactor DownloadRequest Stream function (#7175, @blackpiglet)
* Add `--skip-immediately` flag to schedule commands; `--schedule-skip-immediately` server and install (#7169, @kaovilai)
* Add node-agent concurrency doc and change the config name from dataPathConcurrency to loadConcurrency (#7161, @Lyndon-Li)
* Enhance hooks tracker by adding a returned error to record function (#7153, @allenxu404)
* Track the skipped PV when SnapshotVolumes set as false (#7152, @reasonerjt)
* Add more linters part 2. (#7151, @blackpiglet)
* Fix issue #7135, check pod status before checking node-agent pod status (#7150, @Lyndon-Li)
* Treat namespace as a regular restorable item (#7143, @reasonerjt)
* Allow sparse option for Kopia & Restic restore (#7141, @qiuming-best)
* Use VolumeInfo to help restore the PV. (#7138, @blackpiglet)
* Node agent restart enhancement (#7130, @qiuming-best)
* Fix issue #6695, add describe for data mover backups (#7125, @Lyndon-Li)
* Add hooks status to backup/restore CR (#7117, @allenxu404)
* Include plugin name in the error message by operations (#7115, @reasonerjt)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7102, @Lyndon-Li)
* Generate VolumeInfo for backup. (#7100, @blackpiglet)
* Fix issue #7094, fallback to full backup if previous snapshot is not found (#7096, @Lyndon-Li)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7095, @Lyndon-Li)
* Skip syncing the backup which doesn't contain backup metadata (#7081, @ywk253100)
* Fix issue #6693, partially fail restore if CSI snapshot is involved but CSI feature is not ready, i.e., CSI feature gate is not enabled or CSI plugin is not installed. (#7077, @Lyndon-Li)
* Truncate the credential file to avoid the change of secret content messing it up (#7072, @ywk253100)
* Add VolumeInfo metadata structures. (#7070, @blackpiglet)
* improve discoveryHelper.Refresh() in restore (#7069, @27149chen)
* Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7061, @blackpiglet)
* Add the implementation for design #6950, configurable data path concurrency (#7059, @Lyndon-Li)
* Make data mover fail early (#7052, @qiuming-best)
* Remove dependency of generated client part 3. (#7051, @blackpiglet)
* Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7046, @kaovilai)
* Remove the Velero generated client. (#7041, @blackpiglet)
* Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7038, @Lyndon-Li)
* Read information from the credential specified by BSL (#7034, @ywk253100)
* Fix #6857. Added check for matching Owner References when synchronizing backups, removing references that are not found/have mismatched uid. (#7032, @deefdragon)
* Add description markers for dataupload and datadownload CRDs (#7028, @shubham-pampattiwar)
* Add HealthCheckNodePort deletion logic for Service restore. (#7026, @blackpiglet)
* Fix inconsistent behavior of Backup and Restore hook execution (#7022, @allenxu404)
* Fix #6964. Don't use csiSnapshotTimeout (10 min) for waiting snapshot to readyToUse for data mover, so as to make the behavior complied with CSI snapshot backup (#7011, @Lyndon-Li)
* restore: Use warning when Create IsAlreadyExist and Get error (#7004, @kaovilai)
* Bump kopia to 0.15.0 (#7001, @Lyndon-Li)
* Make Kopia file parallelism configurable (#7000, @qiuming-best)
* Fix unified repository (kopia) s3 credentials profile selection (#6995, @kaovilai)
* Fix #6988, always get region from BSL if it is not empty (#6990, @Lyndon-Li)
* Limit PVC block mode logic to non-Windows platform. (#6989, @blackpiglet)
* It is a valid case that the Status.RestoreSize field in VolumeSnapshot is not set, if so, get the volume size from the source PVC to create the backup PVC (#6976, @Lyndon-Li)
* Check whether the action is a CSI action and whether CSI feature is enabled, before executing the action. (#6968, @blackpiglet)
* Add the PV backup information design document. (#6962, @blackpiglet)
* Change controller-runtime List option from MatchingFields to ListOptions (#6958, @blackpiglet)
* Add the design for node-agent concurrency (#6950, @Lyndon-Li)
* Import auth provider plugins (#6947, @0x113)
* Fix #6668, add a limitation for file system restore parallelism with other types of restores (CSI snapshot restore, CSI snapshot movement restore) (#6946, @Lyndon-Li)
* Add MSI Support for Azure plugin. (#6938, @yanggangtony)
* Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6926, @Lyndon-Li)
* Bump up aws sdk to aws-sdk-go-v2 (#6923, @reasonerjt)
* Optional check if targeted container is ready before executing a hook (#6918, @Ripolin)
* Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6917, @27149chen)
* Fix issue 6913: Velero Built-in Datamover: Backup stucks in phase WaitingForPluginOperations when Node Agent pod gets restarted (#6914, @shubham-pampattiwar)
* Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6885, @Lyndon-Li)
* Replace the base image with paketobuildpacks image (#6883, @ywk253100)
* Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies to unnecessary repository packages like kopia, azure, etc. (#6875, @Lyndon-Li)
* Fix #6861. Only Restic path requires repoIdentifier, so for non-restic path, set the repoIdentifier fields as empty in PVB and PVR and also remove the RepoIdentifier column in the get output of PVBs and PVRs (#6872, @Lyndon-Li)
* Add volume types filter in resource policies (#6863, @qiuming-best)
* change the metrics backup_attempt_total default value to 1. (#6838, @yanggangtony)
* Bump kopia to v0.14 (#6833, @Lyndon-Li)
* Retry failed create when using generateName (#6830, @sseago)
* Fix issue #6786, always delete VSC regardless of the deletion policy (#6827, @Lyndon-Li)
* Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797, @27149chen)
* Fix the node-agent missing metrics-address defines. (#6784, @yanggangtony)
* Fix default BSL setting not work (#6771, @qiuming-best)
* Update restore controller logic for restore deletion (#6770, @ywk253100)
* Fix #6752: add namespace exclude check. (#6760, @blackpiglet)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6757, @Lyndon-Li)
* Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6751, @Lyndon-Li)
* Use the old (original) namespace in resource modifier conditions in case the namespace may change during restore (#6724, @27149chen)
* Perf improvements for existing resource restore (#6723, @sseago)
* Remove schedule-related metrics on schedule delete (#6715, @nilesh-akhade)
* Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid are deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6712, @kaovilai)
* This PR made some improvements in Resource Modifiers: 1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen)
* Make Kopia support Azure AD (#6686, @ywk253100)
* Add support for block volumes with Kopia (#6680, @dzaninovic)
* Delete PartiallyFailed orphaned backups as well as Completed ones (#6649, @sseago)
* Add CSI snapshot data movement doc (#6637, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6635, @27149chen)
* Add `orLabelSelectors` for backup, restore commands (#6475, @nilesh-akhade)
* fix running preHook and postHook on completed pods (#5211, @cleverhu)

View File

@@ -61,7 +61,7 @@ in progress for 1.9.
* Add rbac and annotation test cases (#4455, @mqiu)
* remove --crds-version in velero install command. (#4446, @jxun)
* Upgrade e2e test vsphere plugin (#4440, @mqiu)
* Fix e2e test failures for the inappropriate optimaze of velero install (#4438, @mqiu)
* Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu)
* Limit backup namespaces on test resource filtering cases (#4437, @mqiu)
* Bump up Go to 1.17 (#4431, @reasonerjt)
* Added `<backup name>`-itemsnapshots.json.gz to the backup format. This file exists

View File

@@ -1,3 +1,19 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (

View File

@@ -66,10 +66,10 @@ func done() bool {
doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
if _, err := os.Stat(doneFile); os.IsNotExist(err) {
fmt.Printf("Not found: %s\n", doneFile)
fmt.Printf("The filesystem restore done file %s is not found yet. Retry later.\n", doneFile)
return false
} else if err != nil {
fmt.Fprintf(os.Stderr, "ERROR looking for %s: %s\n", doneFile, err)
fmt.Fprintf(os.Stderr, "ERROR looking filesystem restore done file %s: %s\n", doneFile, err)
return false
}

View File

@@ -477,6 +477,15 @@ spec:
description: TTL is a time.Duration-parseable string describing how
long the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the uploader.
nullable: true
properties:
parallelFilesUpload:
description: ParallelFilesUpload is the number of files parallel
uploads to perform when using the uploader.
type: integer
type: object
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names of
VolumeSnapshotLocations associated with this backup.
@@ -535,6 +544,22 @@ spec:
description: FormatVersion is the backup format version, including
major, minor, and patch version.
type: string
hookStatus:
description: HookStatus contains information about the status of the
hooks.
nullable: true
properties:
hooksAttempted:
description: HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks
that failed to execute and the number of hooks that executed
successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
with an error
type: integer
type: object
phase:
description: Phase is the current state of the Backup.
enum:

View File

@@ -53,6 +53,7 @@ spec:
- RestoreItemOperations
- CSIBackupVolumeSnapshots
- CSIBackupVolumeSnapshotContents
- BackupVolumeInfos
type: string
name:
description: Name is the name of the Kubernetes resource with

View File

@@ -35,10 +35,6 @@ spec:
jsonPath: .spec.volume
name: Volume
type: string
- description: Backup repository identifier for this backup
jsonPath: .spec.repoIdentifier
name: Repository ID
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
@@ -125,6 +121,13 @@ spec:
description: Tags are a map of key-value pairs that should be applied
to the volume backup as tags.
type: object
uploaderSettings:
additionalProperties:
type: string
description: UploaderSettings are a map of key-value pairs that should
be applied to the uploader configuration.
nullable: true
type: object
uploaderType:
description: UploaderType is the type of the uploader to handle the
data transfer.

View File

@@ -119,6 +119,13 @@ spec:
description: SourceNamespace is the original namespace for namaspace
mapping.
type: string
uploaderSettings:
additionalProperties:
type: string
description: UploaderSettings are a map of key-value pairs that should
be applied to the uploader configuration.
nullable: true
type: object
uploaderType:
description: UploaderType is the type of the uploader to handle the
data transfer.

View File

@@ -186,6 +186,12 @@ spec:
- Continue
- Fail
type: string
waitForReady:
description: WaitForReady ensures command will
be launched when container is Ready instead
of Running.
nullable: true
type: boolean
waitTimeout:
description: WaitTimeout defines the maximum amount
of time Velero should wait for the container
@@ -412,6 +418,16 @@ spec:
restore from the most recent successful backup created from this
schedule.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the restore.
nullable: true
properties:
writeSparseFiles:
description: WriteSparseFiles is a flag to indicate whether write
files sparsely or not.
nullable: true
type: boolean
type: object
required:
- backupName
type: object
@@ -434,6 +450,22 @@ spec:
description: FailureReason is an error that caused the entire restore
to fail.
type: string
hookStatus:
description: HookStatus contains information about the status of the
hooks.
nullable: true
properties:
hooksAttempted:
description: HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks
that failed to execute and the number of hooks that executed
successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
with an error
type: integer
type: object
phase:
description: Phase is the current state of the Restore
enum:

View File

@@ -61,6 +61,16 @@ spec:
description: Schedule is a Cron expression defining when to run the
Backup.
type: string
skipImmediately:
description: 'SkipImmediately specifies whether to skip backup if
schedule is due immediately from `schedule.status.lastBackup` timestamp
when schedule is unpaused or if schedule is new. If true, backup
will be skipped immediately when schedule is unpaused if it is due
based on .Status.LastBackupTimestamp or schedule is new, and will
run at next schedule time. If false, backup will not be skipped
immediately when schedule is unpaused, but will run at next schedule
time. If empty, will follow server configuration (default: false).'
type: boolean
template:
description: Template is the definition of the Backup to be run on
the provided schedule
@@ -514,6 +524,16 @@ spec:
description: TTL is a time.Duration-parseable string describing
how long the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the
uploader.
nullable: true
properties:
parallelFilesUpload:
description: ParallelFilesUpload is the number of files parallel
uploads to perform when using the uploader.
type: integer
type: object
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.
@@ -539,6 +559,11 @@ spec:
format: date-time
nullable: true
type: string
lastSkipped:
description: LastSkipped is the last time a Schedule was skipped
format: date-time
nullable: true
type: string
phase:
description: Phase is the current phase of the Schedule
enum:

File diff suppressed because one or more lines are too long

View File

@@ -48,6 +48,8 @@ spec:
name: v2alpha1
schema:
openAPIV3Schema:
description: DataDownload acts as the protocol between data mover plugins
and data mover controller for the datamover restore operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation

View File

@@ -49,6 +49,8 @@ spec:
name: v2alpha1
schema:
openAPIV3Schema:
description: DataUpload acts as the protocol between data mover plugins and
data mover controller for the datamover backup operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation

File diff suppressed because one or more lines are too long

View File

@@ -49,6 +49,9 @@ spec:
- mountPath: /host_pods
mountPropagation: HostToContainer
name: host-pods
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
- mountPath: /scratch
name: scratch
- mountPath: /credentials
@@ -60,6 +63,9 @@ spec:
- hostPath:
path: /var/lib/kubelet/pods
name: host-pods
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
- emptyDir: {}
name: scratch
- name: cloud-credentials

View File

@@ -175,7 +175,7 @@ If there are one or more, download the backup tarball from backup storage, untar
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementers to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.

View File

@@ -26,7 +26,7 @@ Currently velero supports substituting certain values in the K8s resources durin
<!-- ## Background -->
## Goals
- Allow the user to specify a GroupKind, Name(optional), JSON patch for modification.
- Allow the user to specify a GroupResource, Name(optional), JSON patch for modification.
- Allow the user to specify multiple JSON patch.
## Non Goals
@@ -74,7 +74,7 @@ velero restore create --from-backup backup-1 --resource-modifier-configmap resou
### Resource Modifier ConfigMap Structure
- User first needs to provide details on which resources the JSON Substitutions need to be applied.
- For this the user will provide 4 inputs - Namespaces(for NS Scoped resources), GroupKind (kind.group format similar to includeResources field in velero) and Name Regex(optional).
- For this the user will provide 4 inputs - Namespaces(for NS Scoped resources), GroupResource (resource.group format similar to includeResources field in velero) and Name Regex(optional).
- If the user does not provide the Name, the JSON Substitutions will be applied to all the resources of the given Group and Kind under the given namespaces.
- Further, the user will specify the JSON Patch using the structure of kubectl's "JSON Patch" based inputs.
@@ -83,7 +83,7 @@ velero restore create --from-backup backup-1 --resource-modifier-configmap resou
version: v1
resourceModifierRules:
- conditions:
groupKind: persistentvolumeclaims
groupResource: persistentvolumeclaims
resourceNameRegex: "mysql.*"
namespaces:
- bar
@@ -96,6 +96,7 @@ resourceModifierRules:
path: "/metadata/labels/test"
```
- The above configmap will apply the JSON Patch to all the PVCs in the namespaces bar and foo with name starting with mysql. The JSON Patch will replace the storageClassName with "premium" and remove the label "test" from the PVCs.
- Note that the Namespace here is the original namespace of the backed up resource, not the new namespace where the resource is going to be restored.
- The user can specify multiple JSON Patches for a particular resource. The patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches.
- The user can specify multiple resourceModifierRules in the configmap. The rules will be applied in the order specified in the configmap.
@@ -119,7 +120,7 @@ kubectl create cm <configmap-name> --from-file <yaml-file> -n velero
version: v1
resourceModifierRules:
- conditions:
groupKind: persistentvolumeclaims.storage.k8s.io
groupResource: persistentvolumeclaims.storage.k8s.io
resourceNameRegex: ".*"
namespaces:
- bar

View File

@@ -0,0 +1,193 @@
# Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers
- [Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](#proposal-to-support-json-merge-patch-and-strategic-merge-patch-in-resource-modifiers)
- [Abstract](#abstract)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [User Stories](#user-stories)
- [Scenario 1](#scenario-1)
- [Scenario 2](#scenario-2)
- [Detailed Design](#detailed-design)
- [How to choose the right patch type](#how-to-choose-the-right-patch-type)
- [New Field MergePatches](#new-field-mergepatches)
- [New Field StrategicPatches](#new-field-strategicpatches)
- [Conditional Patches in ALL Patch Types](#conditional-patches-in-all-patch-types)
- [Wildcard Support for GroupResource](#wildcard-support-for-groupresource)
- [Helper Command to Generate Merge Patch and Strategic Merge Patch](#helper-command-to-generate-merge-patch-and-strategic-merge-patch)
- [Security Considerations](#security-considerations)
- [Compatibility](#compatibility)
- [Implementation](#implementation)
- [Future Enhancements](#future-enhancements)
- [Open Issues](#open-issues)
## Abstract
Velero introduced the concept of Resource Modifiers in v1.12.0. This feature allows the user to specify a configmap with a set of rules to modify the resources during restore. The user can specify the filters to select the resources and then specify the JSON Patch to apply on the resource. This feature is currently limited to the operations supported by JSON Patch RFC.
This proposal is to add support for JSON Merge Patch and Strategic Merge Patch in the Resource Modifiers. This will allow the user to use the same configmap to apply JSON Merge Patch and Strategic Merge Patch on the resources during restore.
## Goals
- Allow the user to specify a JSON patch, JSON Merge Patch or Strategic Merge Patch for modification.
- Allow the user to specify multiple JSON Patch, JSON Merge Patch or Strategic Merge Patch.
- Allow the user to specify mixed JSON Patch, JSON Merge Patch and Strategic Merge Patch in the same configmap.
## Non Goals
- Deprecating the existing RestoreItemAction plugins for standard substitutions(like changing the namespace, changing the storage class, etc.)
## User Stories
### Scenario 1
- Alice has some Pods and part of them have an annotation `{"foo": "bar"}`.
- Alice wishes to restore these Pods to a different cluster without this annotation.
- Alice can use this feature to remove this annotation during restore.
### Scenario 2
- Bob has a Pod with several containers and one container with name nginx has an image `repo1/nginx`.
- Bob wishes to restore this Pod to a different cluster, but the new cluster cannot access repo1, so he pushes the image to repo2.
- Bob can use this feature to update the image of container nginx to `repo2/nginx` during restore.
## Detailed Design
- The design and approach is inspired by kubectl patch command and [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
- New fields `MergePatches` and `StrategicPatches` will be added to the `ResourceModifierRule` struct to support all three patch types.
- Only one of the three patch types can be specified in a single `ResourceModifierRule`.
- Add wildcard support for `groupResource` in `conditions` struct.
- The workflow to create Resource Modifier ConfigMap and reference it in RestoreSpec will remain the same as described in document [Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/restore-resource-modifiers.md).
### How to choose the right patch type
- [JSON Merge Patch](https://datatracker.ietf.org/doc/html/rfc7386) is a simple format with limited expressiveness. It is a good choice if you are building something small with a very simple JSON schema.
- [JSON Patch](https://datatracker.ietf.org/doc/html/rfc6902) is a more complex format, but it is applicable to any JSON documents. For a comparison of JSON patch and JSON merge patch, see [JSON Patch and JSON Merge Patch](https://erosb.github.io/post/json-patch-vs-merge-patch/).
- Strategic Merge Patch is a Kubernetes defined patch type, mainly used to process resources of type list. You can replace/merge a list, add/remove items from a list by key, change the order of items in a list, etc. Strategic merge patch is not supported for custom resources. For more details, see [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
### New Field MergePatches
MergePatches is a list to specify the merge patches to be applied on the resource. The merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches.
Example of MergePatches in ResourceModifierRule
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: pods
namespaces:
- ns1
mergePatches:
- patchData: |
{
"metadata": {
"annotations": {
"foo": null
}
}
}
```
- The above configmap will apply the Merge Patch to all the pods in namespace ns1 and remove the annotation `foo` from the pods.
- Both JSON and YAML formats are supported for the patchData.
### New Field StrategicPatches
StrategicPatches is a list to specify the strategic merge patches to be applied on the resource. The strategic merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches.
Example of StrategicPatches in ResourceModifierRule
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: pods
resourceNameRegex: "^my-pod$"
namespaces:
- ns1
strategicPatches:
- patchData: |
{
"spec": {
"containers": [
{
"name": "nginx",
"image": "repo2/nginx"
}
]
}
}
```
- The above configmap will apply the Strategic Merge Patch to the pod with name my-pod in namespace ns1 and update the image of container nginx to `repo2/nginx`.
- Both JSON and YAML formats are supported for the patchData.
### Conditional Patches in ALL Patch Types
Since JSON Merge Patch and Strategic Merge Patch do not support conditional patches, we will use the `test` operation of JSON Patch to support conditional patches in all patch types by adding it to `Conditions` struct in `ResourceModifierRule`.
Example of test in conditions
```yaml
version: v1
resourceModifierRules:
- conditions:
groupResource: persistentvolumeclaims.storage.k8s.io
matches:
- path: "/spec/storageClassName"
value: "premium"
mergePatches:
- patchData: |
{
"metadata": {
"annotations": {
"foo": null
}
}
}
```
- The above configmap will apply the Merge Patch to all the PVCs in all namespaces with storageClassName premium and remove the annotation `foo` from the PVCs.
- You can specify multiple rules in the `matches` list. The patch will be applied only if all the matches are satisfied.
### Wildcard Support for GroupResource
The user can specify a wildcard for groupResource in the `conditions` struct. This allows the user to apply the patches to all the resources of a particular group, or to all resources in all groups. For example, `*.apps` will apply to all the resources in the `apps` group, and `*` will apply to all the resources in all groups.
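As a rough illustration of this matching, the sketch below uses Go's `path/filepath.Match` as a stand-in for the glob library mentioned in the Implementation section; the function name is illustrative, not Velero's:
```go
package main

import (
	"fmt"
	"path/filepath"
)

// matchGroupResource reports whether a groupResource condition, which may
// contain a wildcard such as "*.apps" or "*", matches a concrete resource.
func matchGroupResource(pattern, groupResource string) bool {
	ok, err := filepath.Match(pattern, groupResource)
	return err == nil && ok
}

func main() {
	fmt.Println(matchGroupResource("*.apps", "deployments.apps"))  // true
	fmt.Println(matchGroupResource("*", "persistentvolumeclaims")) // true
	fmt.Println(matchGroupResource("*.apps", "pods"))              // false
}
```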
### Helper Command to Generate Merge Patch and Strategic Merge Patch
The patchData of Strategic Merge Patch is sometimes a bit complex for users to write. We can provide a helper command to generate the patchData for Strategic Merge Patch. The command will take the original resource and the modified resource as input and generate the patchData.
It can also be used for JSON Merge Patch.
Here is a sample code snippet to achieve this:
```go
package main
import (
"fmt"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func main() {
pod := &corev1.Pod{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "web",
Image: "nginx",
},
},
},
}
newPod := pod.DeepCopy()
patch := client.StrategicMergeFrom(pod)
newPod.Spec.Containers[0].Image = "nginx1"
data, _ := patch.Data(newPod)
fmt.Println(string(data))
// Output:
// {"spec":{"$setElementOrder/containers":[{"name":"web"}],"containers":[{"image":"nginx1","name":"web"}]}}
}
```
## Security Considerations
No security impact.
## Compatibility
Compatible with current Resource Modifiers.
## Implementation
- Use "github.com/evanphx/json-patch" to support JSON Merge Patch.
- Use "k8s.io/apimachinery/pkg/util/strategicpatch" to support Strategic Merge Patch.
- Use glob to support wildcard for `groupResource` in `conditions` struct.
- Use `test` operation of JSON Patch to calculate the `matches` in `conditions` struct.
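The following minimal sketch shows how the two libraries named above apply a JSON Merge Patch and a Strategic Merge Patch; the inputs are illustrative, and this is a demonstration of the libraries rather than Velero's actual wiring:
```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// JSON Merge Patch: setting a key to null removes it.
	original := []byte(`{"metadata":{"annotations":{"foo":"bar","keep":"me"}}}`)
	mergePatch := []byte(`{"metadata":{"annotations":{"foo":null}}}`)
	merged, err := jsonpatch.MergePatch(original, mergePatch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged)) // {"metadata":{"annotations":{"keep":"me"}}}

	// Strategic Merge Patch: list items are merged by their patch key
	// (the container name), so only the matching container is updated.
	pod := []byte(`{"spec":{"containers":[{"name":"nginx","image":"repo1/nginx"}]}}`)
	strategic := []byte(`{"spec":{"containers":[{"name":"nginx","image":"repo2/nginx"}]}}`)
	patched, err := strategicpatch.StrategicMergePatch(pod, strategic, corev1.Pod{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patched)) // the nginx container's image becomes repo2/nginx
}
```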
## Future Enhancements
- Add a Velero subcommand to generate/validate the patchData for Strategic Merge Patch and JSON Merge Patch.
- Add jq support for more complex conditions or patches, to handle situations that the current conditions or patches cannot, like [this issue](https://github.com/vmware-tanzu/velero/issues/6344).
## Open Issues
N/A

View File

@@ -67,12 +67,12 @@ The Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has th
metadata:
name: backup-1
annotations:
velero.io/csi-volumesnapshot-class/csi.cloud.disk.driver: csi-diskdriver-snapclass
velero.io/csi-volumesnapshot-class/csi.cloud.file.driver: csi-filedriver-snapclass
velero.io/csi-volumesnapshot-class/<driver name>: csi-snapclass
velero.io/csi-volumesnapshot-class_csi.cloud.disk.driver: csi-diskdriver-snapclass
velero.io/csi-volumesnapshot-class_csi.cloud.file.driver: csi-filedriver-snapclass
velero.io/csi-volumesnapshot-class_<driver name>: csi-snapclass
```
To query the annotations on a backup: "velero.io/csi-volumesnapshot-class/'driver name'" - where the driver name comes from the PVC's driver.
To query the annotations on a backup: "velero.io/csi-volumesnapshot-class_'driver name'" - where the driver name comes from the PVC's driver.
2. **Support VolumeSnapshotClass selection at PVC level**
The user can annotate the PVCs with driver and VolumeSnapshotClass name. The CSI plugin will use the VolumeSnapshotClass specified in the annotation. If the annotation is not present, the CSI plugin will use the default VolumeSnapshotClass for the driver. If the VolumeSnapshotClass provided is of a different driver, the CSI plugin will use the default VolumeSnapshotClass for the driver.

View File

@@ -0,0 +1,131 @@
# Node-agent Concurrency Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective set of modules introduced in the [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
## Background
Velero node-agent is a daemonset hosting controllers and VGDP modules to complete the concrete work of backups/restores, i.e., PodVolume backup/restore, Volume Snapshot Data Movement backup/restore.
For example, node-agent runs DataUpload controllers to watch DataUpload CRs for Volume Snapshot Data Movement backups, so there is one controller instance on each node. A controller instance takes a DataUpload CR and then launches a VGDP instance, which initializes an uploader instance and the backup repository connection, to finish the data transfer. The VGDP instance runs inside the node-agent pod or in a pod associated with the node-agent pod on the same node.
Depending on the data size, data complexity, and resource availability, VGDP may take a long time and consume significant resources (CPU, memory, network bandwidth, etc.).
Technically, VGDP instances are able to run concurrently regardless of the requester. For example, a VGDP instance for a PodVolume backup could run in parallel with another VGDP instance for a DataUpload. The two VGDP instances then share the same resources if they run on the same node.
Therefore, in order to gain optimal performance with limited resources, it is worthwhile to make the number of concurrent VGDP instances per node configurable. When node resources are sufficient, users can set a large concurrent number so as to reduce the backup/restore time; otherwise, the concurrency should be reduced, or the backup/restore may encounter problems, i.e., time lagging, hangs, or OOM kills.
## Goals
- Define the behaviors of concurrent VGDP instances in node-agent
- Create a mechanism for users to specify the concurrent number of VGDP per node
## Non-Goals
- VGDP instances from different nodes always run concurrently since in most common cases the resources are isolated. For special cases where some resources are shared across nodes, there is no support at present
- In practice, restores run in prioritized scenarios, e.g., disaster recovery. However, the current design doesn't consider this difference: a VGDP instance for a restore is blocked if it reaches the concurrency limit, even if the instances blocking it are for backups. If users do meet problems here, they should consider stopping the backups first
- Sometimes, users want to totally block backups/restores from running on a specific node; this is out of the scope of the current design. To achieve this, more modules need to be considered (i.e., exposers of data movers); simply blocking VGDP (e.g., by setting its concurrent number to 0) doesn't work. E.g., for a fs backup, the VGDP instance must run on the node the source pod is running on; if we simply block the VGDP instance, the PodVolumeBackup CR is still submitted but never processed
## Solution
We introduce a configMap named ```node-agent-config``` for users to specify the node-agent related configurations. This configMap is not created by Velero; users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace, which applies to the node-agent in that namespace only.
The node-agent server checks these configurations at startup time and uses them to initialize the related VGDP modules. Therefore, users could edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
The ```node-agent-config``` configMap may be used for other node-agent configuration purposes in the future; at present, there is only one kind of configuration as the data in the configMap, named ```loadConcurrency```.
The data structure for ```node-agent-config``` is as below:
```go
type Configs struct {
// LoadConcurrency is the config for load concurrency per node.
LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`
}
type LoadConcurrency struct {
// GlobalConfig specifies the concurrency number to all nodes for which per-node config is not specified
GlobalConfig int `json:"globalConfig,omitempty"`
// PerNodeConfig specifies the concurrency number to nodes matched by rules
PerNodeConfig []RuledConfigs `json:"perNodeConfig,omitempty"`
}
type RuledConfigs struct {
// NodeSelector specifies the label selector to match nodes
NodeSelector metav1.LabelSelector `json:"nodeSelector"`
// Number specifies the number value associated to the matched nodes
Number int `json:"number"`
}
```
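As an illustration only (the function name and client wiring here are assumptions, not necessarily how node-agent's code is organized), the configMap could be read at server startup like this:
```go
import (
	"context"
	"encoding/json"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// loadConfigs reads the optional node-agent-config configMap from the Velero
// namespace; each data value is expected to hold the JSON body of Configs.
func loadConfigs(ctx context.Context, client kubernetes.Interface, namespace string) (*Configs, error) {
	cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, "node-agent-config", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil, nil // the configMap is optional
	}
	if err != nil {
		return nil, err
	}
	configs := &Configs{}
	for _, data := range cm.Data {
		if err := json.Unmarshal([]byte(data), configs); err != nil {
			return nil, err
		}
	}
	return configs, nil
}
```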
### Global concurrent number
We allow users to specify a concurrent number that will be applied to all nodes if the per-node number is not specified. This number is set through ```globalConfig```.
The number starts from 1, which means no concurrency: only one VGDP instance is allowed at a time. There is no upper limit.
If this number is not specified or not valid, a hard-coded default value will be used; the value is 1.
### Per-node concurrent number
We allow users to specify a different concurrent number per node; for example, users can allow 3 concurrent instances on Node-1, 2 instances on Node-2 and 1 instance on Node-3. This is for the below considerations:
- The resources may differ among nodes. Users could then specify a smaller concurrent number for nodes with fewer resources and a larger number for the ones with more resources
- Help users to isolate critical environments. Users may run critical workloads on some specific nodes; since VGDP instances may consume large amounts of resources, users may want to run fewer instances on the nodes with critical workloads
The range of the Per-node concurrent number is the same as that of the Global concurrent number.
The Per-node concurrent number is preferred to the Global concurrent number, so it overwrites the Global concurrent number for that node.
The Per-node concurrent number is implemented through the ```perNodeConfig``` field.
```perNodeConfig``` is a list of ```RuledConfigs```, each item of which matches one or more nodes by label selectors and specifies the concurrent number for the matched nodes. This means the nodes are identified by labels.
For example, the ```perNodeConfig``` could have the below elements:
```
"nodeSelector: kubernetes.io/hostname=node1; number: 3"
"nodeSelector: beta.kubernetes.io/instance-type=Standard_B4ms; number: 5"
```
The first element means the node with host name ```node1``` gets the Per-node concurrent number of 3.
The second element means all the nodes with label ```beta.kubernetes.io/instance-type``` of value ```Standard_B4ms``` get the Per-node concurrent number of 5.
At least one node is expected to match each ```RuledConfigs``` element (rule) by its label selector. If no node has a matching label, the Per-node rule has no effect.
If one node falls into more than one rule, e.g., if node1 also has the label ```beta.kubernetes.io/instance-type=Standard_B4ms```, the smallest number (3) will be used.
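A minimal sketch of this resolution, reusing the `LoadConcurrency` and `RuledConfigs` types defined above (the function name is illustrative):
```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// resolveConcurrency picks the effective concurrency for one node: every
// perNodeConfig rule whose selector matches the node's labels is considered
// and the smallest matching number wins; otherwise the global value is used.
func resolveConcurrency(cfg LoadConcurrency, nodeLabels labels.Set) int {
	effective := cfg.GlobalConfig
	matched := false
	for _, rule := range cfg.PerNodeConfig {
		selector, err := metav1.LabelSelectorAsSelector(&rule.NodeSelector)
		if err != nil || !selector.Matches(nodeLabels) {
			continue
		}
		if !matched || rule.Number < effective {
			effective = rule.Number
			matched = true
		}
	}
	if effective < 1 {
		effective = 1 // fall back to the hard-coded default
	}
	return effective
}
```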
### Sample
A sample of the ```node-agent-config``` configMap is as below:
```json
{
"loadConcurrency": {
"globalConfig": 2,
"perNodeConfig": [
{
"nodeSelector": {
"matchLabels": {
"kubernetes.io/hostname": "node1"
}
},
"number": 3
},
{
"nodeSelector": {
"matchLabels": {
"beta.kubernetes.io/instance-type": "Standard_B4ms"
}
},
"number": 5
}
]
}
}
```
To create the configMap, users need to save something like the above sample to a json file and then run the below command:
```
kubectl create cm node-agent-config -n velero --from-file=<json file name>
```
### Global data path manager
As for the code implementation, the data path manager maintains the total number of running VGDP instances and ensures the limit is not exceeded. At present, there is one data path manager instance per controller; as a result, the concurrent numbers are calculated separately for each controller. This doesn't help to limit the concurrency among different requesters.
Therefore, we need to create one global data path manager instance server-wide and pass it to the different controllers. The instance will be created at node-agent server startup.
The concurrent number is required to initiate a data path manager; the number comes from either the Per-node concurrent number or the Global concurrent number.
Below are some prototypes related to data path manager:
```go
func NewManager(concurrentNum int) *Manager
func (m *Manager) CreateFileSystemBR(jobName string, requestorType string, ctx context.Context, client client.Client, namespace string, callbacks Callbacks, log logrus.FieldLogger) (AsyncBR, error)
```
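As a rough illustration of how the manager could enforce the limit (a sketch, not Velero's actual implementation), a buffered channel can serve as a counting semaphore:
```go
// Manager caps the number of concurrently running VGDP instances on a node.
type Manager struct {
	slots chan struct{} // buffered channel used as a counting semaphore
}

func NewManager(concurrentNum int) *Manager {
	if concurrentNum < 1 {
		concurrentNum = 1 // fall back to the hard-coded default
	}
	return &Manager{slots: make(chan struct{}, concurrentNum)}
}

// TryAcquire reports whether a free slot was taken; the caller must Release it.
func (m *Manager) TryAcquire() bool {
	select {
	case m.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

// Release frees a slot taken by TryAcquire.
func (m *Manager) Release() { <-m.slots }
```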
[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md

View File

@@ -0,0 +1,186 @@
# PersistentVolume backup information design
## Abstract
Create a new metadata file in the backup repository's backup name sub-directory to store information about the PVCs and PVs included in the backup. The information includes the way the PVC and PV data is backed up, snapshot information, and status. The needed snapshot status can also be recorded there, but the Velero-Native snapshot plugin doesn't provide a way to get the snapshot size from the API, so it's possible that not all snapshot size information is available.
This new additional metadata file is needed when:
* Get a summary of the backup's PVC and PV information, including how the data in them is backed up, or whether the data in them is skipped from backup.
* Find out how the PVC and PV should be restored in the restore process.
* Retrieve the PV's snapshot information for backup.
## Background
There is already a [PR](https://github.com/vmware-tanzu/velero/pull/6496) to track the skipped PVC in the backup. This design will depend on it and go further to get a summary of PVC and PV information, then persist into a metadata file in the backup repository.
In the restore process, the Velero server needs to decide how the PV resource should be restored according to how the PV is backed up. The current logic is to check whether it's backed up by Velero-native snapshot, by file-system backup, or having `DeletionPolicy` set as `Delete`.
The checks are made by the backup-generated PVBs or Snapshots. There is no generic way to find this information, and the CSI backup and Snapshot data movement backup are not covered.
Another thing worth noticing is that when describing the backup, there is no generic way to find the PV's snapshot information.
## Goals
- Create a new metadata file to store the backup's PVC and PV information and the volume data backing-up method. The file can be used to let downstream consumers generate a summary.
- Create a generic way to let the Velero server know how the PV resources are backed up.
- Create a generic way to let the Velero server find the PV corresponding snapshot information.
## Non Goals
- Unify how to get snapshot size information for all PV backing-up methods, and all other currently not ready PVs' information.
## High-Level Design
Create _backup-name_-volumes-info.json metadata file in the backup's repository. This file will be encoded to contain all the PVC and PV information included in the backup. The information covers whether the PV or PVC's data is skipped during backup, how its data is backed up, and the backed-up detail information.
Please notice that the new metadata file includes all skipped volume information. This is used to address [the second phase needs of skipped volumes information](https://github.com/vmware-tanzu/velero/issues/5834#issuecomment-1526624211).
The `restoreItem` function can decode the _backup-name_-volumes-info.json file to determine how to handle the PV resource.
## Detailed Design
### The VolumeInfo structure
The _backup-name_-volumes-info.json file contains an array of `VolumeInfo` structures.
``` golang
type VolumeInfo struct {
PVCName string // The PVC's name.
PVCNamespace string // The PVC's namespace.
PVName string // The PV name.
BackupMethod string // The way the volume data is backed up. The valid values include `VeleroNativeSnapshot`, `PodVolumeBackup` and `CSISnapshot`.
SnapshotDataMoved bool // Whether the volume's snapshot data is moved to specified storage.
Skipped bool // Whether the volume is skipped in this backup.
SkippedReason string // The reason why the volume is skipped in the backup.
StartTimestamp *metav1.Time // Snapshot starts timestamp.
OperationID string // The Async Operation's ID.
CSISnapshotInfo CSISnapshotInfo
SnapshotDataMovementInfo SnapshotDataMovementInfo
NativeSnapshotInfo VeleroNativeSnapshotInfo
PVBInfo PodVolumeBackupInfo
PVInfo PVInfo
}
// CSISnapshotInfo is used for displaying the CSI snapshot status
type CSISnapshotInfo struct {
SnapshotHandle string // It's the storage provider's snapshot ID for CSI.
Size int64 // The snapshot corresponding volume size.
Driver string // The name of the CSI driver.
VSCName string // The name of the VolumeSnapshotContent.
}
// SnapshotDataMovementInfo is used for displaying the snapshot data mover status.
type SnapshotDataMovementInfo struct {
DataMover string // The data mover used by the backup. The valid values are `velero` and the empty string (which equals `velero`).
UploaderType string // The type of the uploader that uploads the snapshot data. The valid values are `kopia` and `restic`.
RetainedSnapshot string // The name or ID of the snapshot associated object(SAO). SAO is used to support local snapshots for the snapshot data mover, e.g. it could be a VolumeSnapshot for CSI snapshot data movement.
SnapshotHandle string // It's the filesystem repository's snapshot ID.
}
// VeleroNativeSnapshotInfo is used for displaying the Velero native snapshot status.
type VeleroNativeSnapshotInfo struct {
SnapshotHandle string // It's the storage provider's snapshot ID for the Velero-native snapshot.
VolumeType string // The cloud provider snapshot volume type.
VolumeAZ string // The cloud provider snapshot volume's availability zones.
IOPS string // The cloud provider snapshot volume's IOPS.
}
// PodVolumeBackupInfo is used for displaying the PodVolumeBackup snapshot status.
type PodVolumeBackupInfo struct {
SnapshotHandle string // It's the file-system uploader's snapshot ID for PodVolumeBackup.
Size int64 // The snapshot corresponding volume size.
UploaderType string // The type of the uploader that uploads the data. The valid values are `kopia` and `restic`.
VolumeName string // The PVC's corresponding volume name used by Pod: https://github.com/kubernetes/kubernetes/blob/e4b74dd12fa8cb63c174091d5536a10b8ec19d34/pkg/apis/core/types.go#L48
PodName string // The Pod name mounting this PVC.
PodNamespace string // The Pod namespace.
NodeName string // The PVB-taken k8s node's name.
}
// PVInfo is used to store some PV information modified after creation.
// That information is lost after PV recreation.
type PVInfo struct {
ReclaimPolicy string // ReclaimPolicy of PV. It could be different from the referenced StorageClass.
Labels map[string]string // The PV's labels should be kept after recreation.
}
```
### How the VolumeInfo array is generated.
The function `persistBackup` has `backup *pkgbackup.Request` among its parameters.
From it, the `VolumeSnapshots`, `PodVolumeBackups`, `CSISnapshots`, `itemOperationsList`, and `SkippedPVTracker` can be read. All of them will be iterated and merged into the `VolumeInfo` array, and then persisted into the backup repository in the function `persistBackup`.
Please note that changes that happen in async operations are not reflected in the new metadata file. The file only covers the volume changes that happen within the Velero server process scope.
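A simplified, self-contained sketch of that merging; the input types below are stand-ins for the real lists carried by the backup request:
``` golang
package main

import "fmt"

// Simplified stand-ins for the per-volume sources on the backup request.
type nativeSnapshot struct{ PVName string }
type podVolumeBackup struct{ PVName string }

type VolumeInfo struct {
	PVName       string
	BackupMethod string
}

// buildVolumeInfos merges the sources into one VolumeInfo entry per PV.
func buildVolumeInfos(snaps []nativeSnapshot, pvbs []podVolumeBackup) []*VolumeInfo {
	byPV := map[string]*VolumeInfo{}
	get := func(name string) *VolumeInfo {
		if byPV[name] == nil {
			byPV[name] = &VolumeInfo{PVName: name}
		}
		return byPV[name]
	}
	for _, s := range snaps {
		get(s.PVName).BackupMethod = "VeleroNativeSnapshot"
	}
	for _, b := range pvbs {
		get(b.PVName).BackupMethod = "PodVolumeBackup"
	}
	infos := make([]*VolumeInfo, 0, len(byPV))
	for _, vi := range byPV {
		infos = append(infos, vi)
	}
	return infos
}

func main() {
	infos := buildVolumeInfos(
		[]nativeSnapshot{{PVName: "pv-1"}},
		[]podVolumeBackup{{PVName: "pv-2"}},
	)
	for _, vi := range infos {
		fmt.Println(vi.PVName, vi.BackupMethod)
	}
}
```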
A new method is added to BackupStore to download the VolumeInfo metadata file.
Uploading the metadata file is covered by the existing `PutBackup` method.
``` golang
type BackupStore interface {
...
GetVolumeInfos(name string) ([]*VolumeInfo, error)
...
}
```
### How the VolumeInfo array is used.
#### Generate the PVC backed-up information summary
The downstream tools can use this VolumeInfo array to format and display their volume information. This is not in the scope of this feature.
#### Retrieve volume backed-up information for `velero backup describe` command
The `velero backup describe` can also use this VolumeInfo array structure to display the volume information. The snapshot data mover volume should use this structure at first, then the Velero native snapshot, CSI snapshot, and PodVolumeBackup can also use this structure. The detailed implementation is also not in this feature's scope.
#### Let restore know how to restore the PV
The function `restoreItem` determines whether to restore the PV resource by checking it against the Velero native snapshots list, the PodVolumeBackup list, and its DeletionPolicy. This logic is still kept. The logic will be used when the new `VolumeInfo` metadata cannot be found, to support backward compatibility.
``` golang
if groupResource == kuberesource.PersistentVolumes {
switch {
case hasSnapshot(name, ctx.volumeSnapshots):
...
case hasPodVolumeBackup(obj, ctx):
...
case hasDeleteReclaimPolicy(obj.Object):
...
default:
...
```
After introducing the VolumeInfo array, the following logic will be added.
``` golang
if groupResource == kuberesource.PersistentVolumes {
volumeInfo := GetVolumeInfo(pvName)
switch volumeInfo.BackupMethod {
case VeleroNativeSnapshot:
...
case PodVolumeBackup:
...
case CSISnapshot:
...
default:
// Need to check whether the volume is backed up by the snapshot data mover.
if volumeInfo.SnapshotDataMoved {
...
}
// Check whether the Velero server should restore the PV depending on the DeletionPolicy setting.
if volumeInfo.Skipped {
...
}
```
### How the VolumeInfo metadata file is deleted
The _backup-name_-volumes-info.json file is deleted during backup deletion.
## Alternatives Considered
The restore process needs more information about how the PVs are backed up to determine whether a PV should be restored. The released branches also need a similar function, but backporting a new feature into previous releases may not be a good idea, so, according to [Anshul Ahuja's suggestion](https://github.com/vmware-tanzu/velero/issues/6595#issuecomment-1731081580), more cases are added here to support checking PVs backed up by the CSI plugin and the CSI snapshot data mover: https://github.com/vmware-tanzu/velero/blob/5ff5073cc3f364bafcfbd26755e2a92af68ba180/pkg/restore/restore.go#L1206-L1324.
## Security Considerations
There should be no security impact introduced by this design.
## Compatibility
After this design is implemented, there should be no impact on the existing [skipped PVC summary feature](https://github.com/vmware-tanzu/velero/pull/6496).
To support backups from older versions, which don't have the VolumeInfo metadata file, the old logic, which checks the Velero native snapshots list, the PodVolumeBackup list, and the PV DeletionPolicy, is still kept, and logic supporting CSI snapshots and the snapshot data mover will be added too.
## Implementation
This will be implemented in the Velero v1.13 development cycle.
## Open Issues
There are no open issues identified by now.

View File

@@ -29,7 +29,7 @@ During restore, the proposal is that Velero will determine if the `APIGroupVersi
The proposed code starts with creating three lists for each backed up resource. The three lists will be created by
(1) reading the directory names in the backup tarball file and seeing which API group versions were backed up from the source cluster,
(2) looking at the target cluster and determining which API group versions are supported, and
(3) getting config maps from the target cluster in order to get user-defined prioritization of versions.
(3) getting ConfigMaps from the target cluster in order to get user-defined prioritization of versions.
The three lists will be used to create a map of chosen versions for each resource to restore. If there is a user-defined list of priority versions, the versions will be checked against the supported versions lists. The highest user-defined priority version that is/was supported by both target and source clusters will be the chosen version for that resource. If none of the user-specified versions are supported by either the target or the source cluster, the versions will be logged and the restore will continue with the other prioritizations.

View File

@@ -0,0 +1,145 @@
# Schedule Skip Immediately Config Design
## Abstract
When unpausing a schedule, a backup could be due immediately.
New schedules also create a new backup immediately.
This design allows the user to *skip an **immediately due** backup run upon unpausing or schedule creation*.
## Background
Currently, the default behavior is that when a schedule's `.Status.LastBackup` is nil or a backup is due immediately after unpausing, a backup will be created. This may not be desired by all users (https://github.com/vmware-tanzu/velero/issues/6517)
Users want the ability to skip the first immediately due backup when a schedule is unpaused or created.
If you create a schedule with cron "45 * * * *" and pause it at, say, the 43rd minute and then unpause it at, say, the 50th minute, a backup gets triggered (since .Status.LastBackup is nil or >60min ago).
With this design, users can skip the first immediately due backup when the schedule is unpaused or created.
## Goals
- Add an option so that the user can, when unpausing a schedule (when a backup is immediately due) or creating a new schedule, choose not to create a backup immediately.
## Non Goals
- Changing the default behavior
## High-Level Design
Add a new field to the schedule spec, and new CLI flags for the install, server, and schedule commands, allowing the user to skip an immediately due backup when unpausing or creating a schedule.
If the CLI flag is specified during schedule unpause, velero will update the schedule spec accordingly, overriding the prior spec value for `skipImmediately`.
## Detailed Design
### CLI Changes
`velero schedule unpause` will now take an optional bool flag `--skip-immediately` to allow the user to override the behavior configured for the velero server (see `velero server` below).
`velero schedule unpause schedule-1 --skip-immediately=false` will unpause the schedule but not skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will be run at the next cron schedule.
`velero schedule unpause schedule-1 --skip-immediately=true` will unpause the schedule and skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will also be run at the next cron schedule.
`velero schedule unpause schedule-1` will check `.spec.SkipImmediately` in the schedule to determine behavior. This field will default to false to maintain prior behavior.
`velero server` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set.
`velero install` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set.
### API Changes
`pkg/apis/velero/v1/schedule_types.go`
```diff
// ScheduleSpec defines the specification for a Velero schedule
type ScheduleSpec struct {
// Template is the definition of the Backup to be run
// on the provided schedule
Template BackupSpec `json:"template"`
// Schedule is a Cron expression defining when to run
// the Backup.
Schedule string `json:"schedule"`
// UseOwnerReferencesBackup specifies whether to use
// OwnerReferences on backups created by this Schedule.
// +optional
// +nullable
UseOwnerReferencesInBackup *bool `json:"useOwnerReferencesInBackup,omitempty"`
// Paused specifies whether the schedule is paused or not
// +optional
Paused bool `json:"paused,omitempty"`
+ // SkipImmediately specifies whether to skip backup if schedule is due immediately from `Schedule.Status.LastBackup` timestamp when schedule is unpaused or if schedule is new.
+ // If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time.
+ // If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time.
+ // If empty, will follow server configuration (default: false).
+ // +optional
+ SkipImmediately *bool `json:"skipImmediately,omitempty"`
}
```
`LastSkipped` will be added to `ScheduleStatus` struct to track the last time a schedule was skipped.
```diff
// ScheduleStatus captures the current state of a Velero schedule
type ScheduleStatus struct {
// Phase is the current phase of the Schedule
// +optional
Phase SchedulePhase `json:"phase,omitempty"`
// LastBackup is the last time a Backup was run for this
// Schedule schedule
// +optional
// +nullable
LastBackup *metav1.Time `json:"lastBackup,omitempty"`
+ // LastSkipped is the last time a Schedule was skipped
+ // +optional
+ // +nullable
+ LastSkipped *metav1.Time `json:"lastSkipped,omitempty"`
// ValidationErrors is a slice of all validation errors (if
// applicable)
// +optional
ValidationErrors []string `json:"validationErrors,omitempty"`
}
```
When `schedule.spec.SkipImmediately` is `true`, `LastSkipped` will be set to the current time, and `schedule.spec.SkipImmediately` will be set to nil so it can be used again.
The `getNextRunTime()` function below is updated so that `LastSkipped`, when it is after `LastBackup`, is used to determine the next run time.
```go
func getNextRunTime(schedule *velerov1.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) {
var lastBackupTime time.Time
if schedule.Status.LastBackup != nil {
lastBackupTime = schedule.Status.LastBackup.Time
} else {
lastBackupTime = schedule.CreationTimestamp.Time
}
if schedule.Status.LastSkipped != nil && schedule.Status.LastSkipped.After(lastBackupTime) {
lastBackupTime = schedule.Status.LastSkipped.Time
}
nextRunTime := cronSchedule.Next(lastBackupTime)
return asOf.After(nextRunTime), nextRunTime
}
```
When a schedule is unpaused and `Schedule.Status.LastBackup` is not nil, a backup will not be created if `Schedule.Status.LastSkipped` is more recent.
When a schedule is unpaused with `Schedule.Status.LastBackup` set to nil, or the schedule is newly created, normally a backup will be created immediately; if `Schedule.Status.LastSkipped` is recent, a backup will not be created.
The backup will be run at the next cron schedule based on `LastBackup` or `LastSkipped`, whichever is more recent. For example, with cron "45 * * * *", unpausing with `--skip-immediately=true` at the 50th minute sets `LastSkipped`, so the next backup runs at the following 45th minute.
## Alternatives Considered
N/A
## Security Considerations
None
## Compatibility
Upon upgrade, the new field will be added to the schedule spec automatically and will default to the prior behavior: running a backup when the schedule is unpaused if one is due based on `.Status.LastBackup`, or when the schedule is new.
Since this is a new field, it will be ignored by older versions of velero.
## Implementation
TBD
## Open Issues
N/A

View File

@@ -433,23 +433,24 @@ spec:
volume: nginx-log
```
We will add the flag for both CLI installation and Helm Chart Installation. Specifically:
- Helm Chart Installation: add the "--pod-volume-backup-uploader" flag into its value.yaml and then generate the deployments according to the value. Value.yaml is the user-provided configuration file, therefore, users could set this value at the time of installation. The changes in Value.yaml are as below:
- Helm Chart Installation: add the "--uploaderType" and "--default-volumes-to-fs-backup" flag into its value.yaml and then generate the deployments according to the value. Value.yaml is the user-provided configuration file, therefore, users could set this value at the time of installation. The changes in Value.yaml are as below:
```
command:
- /velero
args:
- server
{{- with .Values.configuration }}
{{- if .pod-volume-backup-uploader "restic" }}
- --legacy
{{- end }}
- --uploader-type={{ default "restic" .uploaderType }}
{{- if .defaultVolumesToFsBackup }}
- --default-volumes-to-fs-backup
{{- end }}
```
- CLI Installation: add the "--pod-volume-backup-uploader" flag into the installation command line, and then create the two deployments accordingly. Users could change the option at the time of installation. The CLI is as below:
```velero install --pod-volume-backup-uploader=restic```
```velero install --pod-volume-backup-uploader=kopia```
- CLI Installation: add the "--uploaderType" and "--default-volumes-to-fs-backup" flag into the installation command line, and then create the two deployments accordingly. Users could change the option at the time of installation. The CLI is as below:
```velero install --uploader-type=restic --default-volumes-to-fs-backup --use-node-agent```
```velero install --uploader-type=kopia --default-volumes-to-fs-backup --use-node-agent```
## Upgrade
For upgrade, we allow users to change the path by specifying the "--pod-volume-backup-uploader" flag in the same way as for a fresh installation. Therefore, the flag change should be applied to the Velero server after upgrade. Additionally, we need to add a label to the Velero server to indicate the current path, so as to provide an easy way of querying it.
For upgrade, we allow users to change the path by specifying the "--uploader-type" flag in the same way as for a fresh installation. Therefore, the flag change should be applied to the Velero server after upgrade. Additionally, we need to add a label to the Velero server to indicate the current path, so as to provide an easy way of querying it.
Moreover, if users upgrade from an old release, we need to change the existing Restic daemonset name to the VeleroNodeAgent daemonset. The name change should be applied after upgrade.
The recommended way to upgrade is to modify the related Velero resources directly through kubectl; the above changes will be applied in the same way. We need to modify the Velero docs for all these changes.
@@ -459,7 +460,7 @@ Below Velero CLI or its output needs some changes:
- ```Velero restore describe```: the output should indicate the path
- ```Velero restic repo get```: the name of this CLI should be changed to a generic one, for example, "Velero repo get"; the output of this CLI should print all the backup repositories if the Restic repository and the Unified Repository exist at the same time
At present, we don't have a requirement for selecting the path during backup, so we don't change the ```Velero backup create``` CLI for now. If there is a requirement in future, we could simply add a flag similar to "--pod-volume-backup-uploader" to select the path.
At present, we don't have a requirement for selecting the path during backup, so we don't change the ```Velero backup create``` CLI for now. If there is a requirement in future, we could simply add a flag similar to "--uploader-type" to select the path.
## CR Example
Below sample files demonstrate complete CRs with all the changes mentioned above:

View File

@@ -0,0 +1,181 @@
# Velero Uploader Configuration Integration and Extensibility
## Abstract
This design proposal aims to make the Velero Uploader configurable by introducing a structured approach for managing Uploader settings. We will define and standardize a data structure to facilitate future additions to Uploader configurations. This enhancement provides a template for extending Uploader-related options and also includes examples of adding sub-options to the Uploader configuration.
## Background
Velero is widely used for backing up and restoring Kubernetes clusters. In various scenarios, optimizing the backup process is essential; future needs may arise for adding more configuration options related to the Uploader component, especially when dealing with large datasets. Therefore, a standardized configuration template is required.
## Goals
1. **Extensible Uploader Configuration**: Provide an extensible approach to manage Uploader configurations, making it easy to add and modify configuration options related to the Velero uploader.
2. **User-friendliness**: Ensure that the new Uploader configuration options are easy to understand and use for Velero users without introducing excessive complexity.
## Non Goals
1. Expanding to other Velero components: The primary focus of this design is Uploader configuration and does not include extending to other components or modules within Velero. Configuration changes for other components may require separate design and implementation.
## High-Level Design
To achieve extensibility in Velero Uploader configurations, the following key components and changes are proposed:
### UploaderConfig Structure
Two new data structures, `UploaderConfigForBackup` and `UploaderConfigForRestore`, will be defined to store Uploader configurations. These structures will include the configuration options related to backup and restore for Uploader:
```go
type UploaderConfigForBackup struct {
}
type UploaderConfigForRestore struct {
}
```
### Integration with Backup & Restore CRD
The Velero CLI will support an uploader configuration-related flag, allowing users to set the value when creating backups or restores. This value will be stored in the `UploaderConfig` field within the `Backup` CRD and `Restore` CRD:
```go
type BackupSpec struct {
// UploaderConfig specifies the configuration for the uploader.
// +optional
// +nullable
UploaderConfig *UploaderConfigForBackup `json:"uploaderConfig,omitempty"`
}
type RestoreSpec struct {
// UploaderConfig specifies the configuration for the restore.
// +optional
// +nullable
UploaderConfig *UploaderConfigForRestore `json:"uploaderConfig,omitempty"`
}
```
### Configuration Propagated to Different CRDs
The configuration specified in `UploaderConfig` needs to take effect for backup and restore both via the file-system path and via the data-mover path.
Therefore, the `UploaderConfig` field value from the `Backup` CRD should be propagated to the `PodVolumeBackup` and `DataUpload` CRDs.
We aim for the configurations in PodVolumeBackup to originate not only from UploaderConfig in Backup but also potentially from other sources, such as the server or a configmap. Simultaneously, to align with DataUpload's `DataMoverConfig map[string]string` field, we define an `UploaderSettings map[string]string` here to record the configurations in PodVolumeBackup.
```go
type PodVolumeBackupSpec struct {
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
```
`UploaderConfig` will be stored in DataUpload's `DataMoverConfig map[string]string` field.
Also the `UploaderConfig` field value from the `Restore` CRD should be propagated to `PodVolumeRestore` and `DataDownload` CRDs:
```go
type PodVolumeRestoreSpec struct {
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
```
Also, `UploaderConfig` will be stored in DataDownload's `DataMoverConfig map[string]string` field.
### Store and Get Configuration
We need to store and retrieve configurations in the PodVolumeBackup and DataUpload structs. This involves type conversion based on the configuration type: storing the value in a map[string]string, and performing type conversion back from this map for retrieval.
PodVolumeRestore and DataDownload are similar.
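A minimal sketch of this conversion follows; the key name is illustrative, not necessarily the key Velero uses:
```go
package main

import (
	"fmt"
	"strconv"
)

// Illustrative key under which the setting is stored in the CR's map field.
const parallelFilesUploadKey = "ParallelFilesUpload"

type UploaderConfigForBackup struct {
	ParallelFilesUpload int
}

// storeBackupConfig flattens the typed config into the CR's map[string]string field.
func storeBackupConfig(cfg *UploaderConfigForBackup) map[string]string {
	settings := map[string]string{}
	if cfg != nil && cfg.ParallelFilesUpload > 0 {
		settings[parallelFilesUploadKey] = strconv.Itoa(cfg.ParallelFilesUpload)
	}
	return settings
}

// getParallelFilesUpload converts the value back, defaulting to 0 when unset.
func getParallelFilesUpload(settings map[string]string) int {
	if v, ok := settings[parallelFilesUploadKey]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return 0
}

func main() {
	settings := storeBackupConfig(&UploaderConfigForBackup{ParallelFilesUpload: 8})
	fmt.Println(getParallelFilesUpload(settings)) // 8
}
```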
## Sub-options in UploaderConfig
Adding fields above in CRDs can accommodate any future additions to Uploader configurations by adding new fields to the `UploaderConfigForBackup` or `UploaderConfigForRestore` structures.
### Parallel Files Upload
This section focuses on enabling the configuration of the number of parallel file uploads during backups.
Below are the key steps that should be added to support this new feature.
#### Velero CLI
The Velero CLI will support a `--parallel-files-upload` flag, allowing users to set the `ParallelFilesUpload` value when creating backups.
#### UploaderConfig
Below, the sub-option `ParallelFilesUpload` is added into `UploaderConfig`:
```go
// UploaderConfigForBackup defines the configuration for the uploader when doing backup.
type UploaderConfigForBackup struct {
// ParallelFilesUpload is the number of files parallel uploads to perform when using the uploader.
// +optional
ParallelFilesUpload int `json:"parallelFilesUpload,omitempty"`
}
```
#### Kopia Parallel Upload Policy
Velero Uploader can set upload policies when calling Kopia APIs. In the Kopia codebase, the structure for upload policies is defined as follows:
```go
// UploadPolicy describes the policy to apply when uploading snapshots.
type UploadPolicy struct {
...
MaxParallelFileReads *OptionalInt `json:"maxParallelFileReads,omitempty"`
}
```
Velero can set the `MaxParallelFileReads` parameter for Kopia's upload policy as follows:
```go
curPolicy := getDefaultPolicy()
if parallelUpload > 0 {
curPolicy.UploadPolicy.MaxParallelFileReads = newOptionalInt(parallelUpload)
}
```
#### Restic Parallel Upload Policy
As Restic does not support parallel file upload, the configuration would not take effect, so we should output a warning when the user sets the `ParallelFilesUpload` value by using Restic to do a backup.
```go
if parallelFilesUpload > 0 {
log.Warnf("ParallelFilesUpload is set to %d, but Restic does not support parallel file uploads. Ignoring", parallelFilesUpload)
}
```
Roughly, the process is as follows:
1. Users pass the ParallelFilesUpload parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Backup CR.
2. When users perform file system backups, UploaderConfig is passed to the PodVolumeBackup CR. When users use the Data-mover for backups, it is passed to the DataUpload CR.
3. The configuration is stored in the map[string]string type field of the CR.
4. Each respective controller within the CRs calls the uploader, and the ParallelFilesUpload value from the map in the CRs is passed to the uploader.
5. When the uploader subsequently calls the Kopia API, it can use the ParallelFilesUpload to set the MaxParallelFileReads parameter; if the uploader calls the Restic command, it outputs a warning log because Restic does not support this feature.
### Sparse Option For Kopia & Restic Restore
In many system files, numerous zero bytes or empty blocks persist, occupying physical storage space. Sparse restore employs a more intelligent approach: it handles empty blocks appropriately while still producing the correct file contents. This write-sparse-files mechanism aims to enhance restore efficiency while maintaining restoration accuracy.
Below are the key steps that should be added to support this new feature.
#### Velero CLI
The Velero CLI will support a `--write-sparse-files` flag, allowing users to set the `WriteSparseFiles` value when creating restores with Restic or Kopia uploader.
#### UploaderConfig
Below, the sub-option `WriteSparseFiles` is added into `UploaderConfig`:
```go
// UploaderConfigForRestore defines the configuration for the restore.
type UploaderConfigForRestore struct {
// WriteSparseFiles is a flag to indicate whether to write files sparsely or not.
// +optional
// +nullable
WriteSparseFiles *bool `json:"writeSparseFiles,omitempty"`
}
```
#### Enable Sparse in Restic
For Restic, it can be enabled by passing the `--sparse` flag when creating a restore:
```bash
restic restore --sparse --target <target-dir> $snapshotID
```
#### Enable Sparse in Kopia
For Kopia, this feature can be enabled via the `WriteSparseFiles` field in the [FilesystemOutput](https://pkg.go.dev/github.com/kopia/kopia@v0.13.0/snapshot/restore#FilesystemOutput).
```go
fsOutput := &restore.FilesystemOutput{
WriteSparseFiles: uploaderutil.GetWriteSparseFiles(uploaderCfg),
}
```
Roughly, the process is as follows:
1. Users pass the WriteSparseFiles parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Restore CR.
2. When users perform file system restores, UploaderConfig is passed to the PodVolumeRestore CR. When users use the Data-mover for restores, it is passed to the DataDownload CR.
3. The configuration is stored in the map[string]string type field of the CR.
4. Each respective controller within the CRs calls the uploader, and the WriteSparseFiles value from the map in the CRs is passed to the uploader.
5. When the uploader subsequently calls the Kopia API, it can use the WriteSparseFiles to set the WriteSparseFiles parameter; if the uploader calls the Restic command, it appends the `--sparse` flag to the restore command.
## Alternatives Considered
To enhance extensibility further, the option of storing `UploaderConfig` in a Kubernetes ConfigMap can be explored; this approach would allow the addition and modification of configuration options without the need to modify the CRD.

View File

@@ -397,7 +397,7 @@ Target volume information includes PVC and PV that represents the volume and the
The data mover information and backup repository information are the same as in the DataUpload CRD.
DataDownload CRD defines the same status as DataUpload CRD with nearly the same meanings.
Below is the full spec of DataUpload CRD:
Below is the full spec of DataDownload CRD:
```
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
@@ -626,10 +626,9 @@ Therefore, we have below principles:
We will address the two principles step by step. As the first step, VBDM's parallelism is designed as below:
- We don't create a load-balancing mechanism for the first step, and we don't detect the accessibility of the volume/volume snapshot explicitly. Instead, we create the backupPod/restorePod with the help of Kubernetes: Kubernetes schedules the backupPod/restorePod to the appropriate node, then the data movement controller on that node handles the DataUpload/DataDownload CR there, so the resource is consumed from that node.
- We don't expose a configurable concurrency value in one node; instead, the concurrency value will be set to 1, that is, there is no concurrency in one node.
- We expose a configurable concurrency value per node; for details of how the concurrency number constrains the various backups and restores that share VGDP, check the [node-agent concurrency design][3].
As for resource consumption, it is related to the data scale of the data movement activity and is charged to the node-agent pods, so users should configure enough resources for the node-agent pods.
Meanwhile, Pod Volume Backup/Restore also run in node-agent pods, and we don't restrict the concurrency across these two types. For example, in one node, one Pod Volume Backup and one DataUpload could run at the same time, in which case the resource is shared by the two activities.
## Progress Report
When a DUCR/DDCR is in the InProgress phase, users can check the progress.
@@ -666,6 +665,9 @@ At present, VBDM doesn't support recovery, so it will follow the second rule.
## Kopia For Block Device
To work with block devices, VGDP will be updated. Today, when Kopia attempts to create a snapshot of a block device, it errors because Kopia does not support this file type. Kopia does, however, expose a set of interfaces that can be extended.
**Notice**
The Kopia block mode uploader only supports non-Windows platforms, because the block mode code invokes some system calls that are not present in the Windows platform.
To obtain the information needed to determine the type of volume being used, we will pass the volume mode through the provider interface.
```go
@@ -689,7 +691,8 @@ type Provider interface {
tags map[string]string,
forceFull bool,
parentSnapshot string,
volMode uploader.PersistentVolumeMode,
volMode uploader.PersistentVolumeMode,
uploaderCfg shared.UploaderConfig,
updater uploader.ProgressUpdater) (string, bool, error)
RunRestore(
@@ -703,33 +706,38 @@ type Provider interface {
In this case, we will extend the default Kopia uploader: when a given volume is in block mode and mapped as a device, we use the [StreamingFile](https://pkg.go.dev/github.com/kopia/kopia@v0.13.0/fs#StreamingFile) to stream the device and back it up to the Kopia repository.
```go
func getLocalBlockEntry(kopiaEntry fs.Entry, log logrus.FieldLogger) (fs.Entry, error) {
path := kopiaEntry.LocalFilesystemPath()
fileInfo, err := os.Lstat(path)
func getLocalBlockEntry(sourcePath string) (fs.Entry, error) {
source, err := resolveSymlink(sourcePath)
if err != nil {
return nil, errors.Wrapf(err, "Unable to get the source device information %s", path)
return nil, errors.Wrap(err, "resolveSymlink")
}
fileInfo, err := os.Lstat(source)
if err != nil {
return nil, errors.Wrapf(err, "unable to get the source device information %s", source)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return nil, errors.Errorf("Source path %s is not a block device", path)
return nil, errors.Errorf("source path %s is not a block device", source)
}
device, err := os.Open(path)
device, err := os.Open(source)
if err != nil {
if os.IsPermission(err) || err.Error() == ErrNotPermitted {
return nil, errors.Wrapf(err, "No permission to open the source device %s, make sure that node agent is running in privileged mode", path)
return nil, errors.Wrapf(err, "no permission to open the source device %s, make sure that node agent is running in privileged mode", source)
}
return nil, errors.Wrapf(err, "Unable to open the source device %s", path)
return nil, errors.Wrapf(err, "unable to open the source device %s", source)
}
return virtualfs.StreamingFileFromReader(kopiaEntry.Name(), device), nil
sf := virtualfs.StreamingFileFromReader(source, device)
return virtualfs.NewStaticDirectory(source, []fs.Entry{sf}), nil
}
```
In `pkg/uploader/kopia/snapshot.go`, this is used in the Backup call as follows:
```go
if volMode == PersistentVolumeFilesystem {
if volMode == uploader.PersistentVolumeFilesystem {
// to be consistent with restic when backup empty dir returns one error for upper logic handle
dirs, err := os.ReadDir(source)
if err != nil {
@@ -742,15 +750,17 @@ In the `pkg/uploader/kopia/snapshot.go` this is used in the Backup call like
source = filepath.Clean(source)
...
sourceEntry, err := getLocalFSEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "Unable to get local filesystem entry")
}
var sourceEntry fs.Entry
if volMode == PersistentVolumeBlock {
sourceEntry, err = getLocalBlockEntry(sourceEntry, log)
if volMode == uploader.PersistentVolumeBlock {
sourceEntry, err = getLocalBlockEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "Unable to get local block device entry")
return nil, false, errors.Wrap(err, "unable to get local block device entry")
}
} else {
sourceEntry, err = getLocalFSEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "unable to get local filesystem entry")
}
}
@@ -766,6 +776,8 @@ We only need to extend two functions; the rest will be passed through.
```go
type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
}
var _ restore.Output = &BlockOutput{}
@@ -773,30 +785,15 @@ var _ restore.Output = &BlockOutput{}
const bufferSize = 128 * 1024
func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remoteFile fs.File) error {
targetFileName, err := filepath.EvalSymlinks(o.TargetPath)
if err != nil {
return errors.Wrapf(err, "Unable to evaluate symlinks for %s", targetFileName)
}
fileInfo, err := os.Lstat(targetFileName)
if err != nil {
return errors.Wrapf(err, "Unable to get the target device information for %s", targetFileName)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return errors.Errorf("Target file %s is not a block device", targetFileName)
}
remoteReader, err := remoteFile.Open(ctx)
if err != nil {
return errors.Wrapf(err, "Failed to open remote file %s", remoteFile.Name())
return errors.Wrapf(err, "failed to open remote file %s", remoteFile.Name())
}
defer remoteReader.Close()
targetFile, err := os.Create(targetFileName)
targetFile, err := os.Create(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "Failed to open file %s", targetFileName)
return errors.Wrapf(err, "failed to open file %s", o.targetFileName)
}
defer targetFile.Close()
@@ -807,7 +804,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
bytesToWrite, err := remoteReader.Read(buffer)
if err != nil {
if err != io.EOF {
return errors.Wrapf(err, "Failed to read data from remote file %s", targetFileName)
return errors.Wrapf(err, "failed to read data from remote file %s", o.targetFileName)
}
readData = false
}
@@ -819,7 +816,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
bytesToWrite -= bytesWritten
offset += bytesWritten
} else {
return errors.Wrapf(err, "Failed to write data to file %s", targetFileName)
return errors.Wrapf(err, "failed to write data to file %s", o.targetFileName)
}
}
}
@@ -829,42 +826,43 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
}
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
targetFileName, err := filepath.EvalSymlinks(o.TargetPath)
var err error
o.targetFileName, err = filepath.EvalSymlinks(o.TargetPath)
if err != nil {
return errors.Wrapf(err, "Unable to evaluate symlinks for %s", targetFileName)
return errors.Wrapf(err, "unable to evaluate symlinks for %s", o.targetFileName)
}
fileInfo, err := os.Lstat(targetFileName)
fileInfo, err := os.Lstat(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "Unable to get the target device information for %s", o.TargetPath)
return errors.Wrapf(err, "unable to get the target device information for %s", o.TargetPath)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return errors.Errorf("Target file %s is not a block device", o.TargetPath)
return errors.Errorf("target file %s is not a block device", o.TargetPath)
}
return nil
}
```
Of note, we need to add root access to the node-agent daemonset to access the new mount.
An additional mount is required in the node-agent specification to resolve symlinks to block devices from the /host_pods/POD_ID/volumeDevices/kubernetes.io~csi directory.
```yaml
...
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
...
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
```
...
Privileged mode is required to access the block devices in the /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish directory, as confirmed by testing on EKS and Minikube.
```yaml
SecurityContext: &corev1.SecurityContext{
Privileged: &c.privilegedAgent,
Privileged: &c.privilegedNodeAgent,
},
```
## Plugin Data Movers
@@ -971,5 +969,6 @@ Restore command is kept as is.
[1]: ../unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: ../general-progress-monitoring.md
[1]: ../Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: ../Implemented/general-progress-monitoring.md
[3]: ../node-agent-concurrency.md

182
go.mod
View File

@@ -1,70 +1,79 @@
module github.com/vmware-tanzu/velero
go 1.20
go 1.21
toolchain go1.21.9
require (
cloud.google.com/go/storage v1.30.1
cloud.google.com/go/storage v1.33.0
github.com/Azure/azure-pipeline-go v0.2.3
github.com/Azure/azure-sdk-for-go v67.2.0+incompatible
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.1
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.3.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.1.0
github.com/Azure/azure-storage-blob-go v0.15.0
github.com/Azure/go-autorest/autorest v0.11.27
github.com/Azure/go-autorest/autorest/azure/auth v0.5.8
github.com/Azure/go-autorest/autorest/to v0.3.0
github.com/aws/aws-sdk-go v1.44.253
github.com/aws/aws-sdk-go-v2 v1.21.0
github.com/aws/aws-sdk-go-v2/config v1.18.42
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.87
github.com/aws/aws-sdk-go-v2/service/ec2 v1.123.0
github.com/aws/aws-sdk-go-v2/service/s3 v1.40.0
github.com/bombsimon/logrusr/v3 v3.0.0
github.com/evanphx/json-patch v5.6.0+incompatible
github.com/fatih/color v1.15.0
github.com/gobwas/glob v0.2.3
github.com/golang/protobuf v1.5.3
github.com/google/go-cmp v0.5.9
github.com/google/uuid v1.3.0
github.com/google/go-cmp v0.6.0
github.com/google/uuid v1.3.1
github.com/hashicorp/go-hclog v0.14.1
github.com/hashicorp/go-plugin v1.4.3
github.com/joho/godotenv v1.3.0
github.com/kopia/kopia v0.13.0
github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
github.com/kopia/kopia v0.14.1
github.com/kubernetes-csi/external-snapshotter/client/v7 v7.0.0
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.20.1
github.com/onsi/gomega v1.30.0
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.15.0
github.com/prometheus/client_golang v1.18.0
github.com/robfig/cron v1.1.0
github.com/sirupsen/logrus v1.9.0
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.6.0
github.com/spf13/cobra v1.4.0
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.2
github.com/stretchr/testify v1.8.4
github.com/vmware-tanzu/crash-diagnostics v0.3.7
go.uber.org/zap v1.24.0
golang.org/x/exp v0.0.0-20221028150844-83b7d23a625f
golang.org/x/mod v0.10.0
golang.org/x/net v0.9.0
golang.org/x/oauth2 v0.7.0
golang.org/x/text v0.9.0
google.golang.org/api v0.120.0
google.golang.org/grpc v1.54.0
google.golang.org/protobuf v1.30.0
go.uber.org/zap v1.26.0
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1
golang.org/x/mod v0.13.0
golang.org/x/net v0.19.0
golang.org/x/oauth2 v0.13.0
golang.org/x/text v0.14.0
google.golang.org/api v0.146.0
google.golang.org/grpc v1.58.3
google.golang.org/protobuf v1.33.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.25.6
k8s.io/apiextensions-apiserver v0.24.2
k8s.io/apimachinery v0.25.6
k8s.io/api v0.29.0
k8s.io/apiextensions-apiserver v0.29.0
k8s.io/apimachinery v0.29.0
k8s.io/cli-runtime v0.24.0
k8s.io/client-go v0.25.6
k8s.io/klog/v2 v2.70.1
k8s.io/client-go v0.29.0
k8s.io/klog/v2 v2.110.1
k8s.io/kube-aggregator v0.19.12
k8s.io/metrics v0.25.6
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
sigs.k8s.io/controller-runtime v0.12.2
sigs.k8s.io/yaml v1.3.0
k8s.io/utils v0.0.0-20230726121419-3b25d923346b
sigs.k8s.io/controller-runtime v0.17.2
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
sigs.k8s.io/yaml v1.4.0
)
require (
cloud.google.com/go v0.110.0 // indirect
cloud.google.com/go/compute v1.19.0 // indirect
cloud.google.com/go v0.110.7 // indirect
cloud.google.com/go/compute v1.23.0 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
cloud.google.com/go/iam v0.13.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0 // indirect
cloud.google.com/go/iam v1.1.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.20 // indirect
github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 // indirect
@@ -72,6 +81,22 @@ require (
github.com/Azure/go-autorest/autorest/validation v0.2.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.13 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.13.40 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.41 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.35 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.43 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.1.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.35 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.15.4 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.14.1 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.17.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.22.0 // indirect
github.com/aws/smithy-go v1.14.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
@@ -79,82 +104,91 @@ require (
github.com/dimchansky/utfbom v1.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/fsnotify/fsnotify v1.5.4 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch/v5 v5.8.0 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.2.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang-jwt/jwt/v5 v5.0.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/gnostic v0.6.9 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/s2a-go v0.1.2 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect
github.com/googleapis/gax-go/v2 v2.8.0 // indirect
github.com/google/s2a-go v0.1.7 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.1 // indirect
github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.5 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/klauspost/reedsolomon v1.11.7 // indirect
github.com/klauspost/compress v1.17.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.5 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/klauspost/reedsolomon v1.11.8 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-ieproxy v0.0.1 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.52 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/minio/minio-go/v7 v7.0.63 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/natefinch/atomic v1.0.1 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/oklog/run v1.0.0 // indirect
github.com/pierrec/lz4 v2.6.1+incompatible // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/rogpeppe/go-internal v1.9.0 // indirect
github.com/rs/xid v1.4.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/rs/xid v1.5.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/vladimirvivien/gexe v0.1.1 // indirect
github.com/zeebo/blake3 v0.2.3 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.14.0 // indirect
go.opentelemetry.io/otel/trace v1.14.0 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
go.starlark.net v0.0.0-20201006213952-227f4aabceb5 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/term v0.7.0 // indirect
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 // indirect
golang.org/x/crypto v0.17.0 // indirect
golang.org/x/sync v0.5.0 // indirect
golang.org/x/sys v0.16.0 // indirect
golang.org/x/term v0.15.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
google.golang.org/genproto v0.0.0-20230913181813-007df8e322eb // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230913181813-007df8e322eb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230920204549-e6e6cdab5c13 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/component-base v0.24.2 // indirect
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
k8s.io/component-base v0.29.0 // indirect
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
)
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20231023031817-cf7bbc7f8519

525
go.sum

File diff suppressed because it is too large

View File

@@ -326,6 +326,12 @@ linters:
- unused
- usestdlibvars
- whitespace
- dupword
- errchkjson
- ginkgolinter
- nilerr
- noctx
- nolintlint
fast: false

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=linux/amd64 golang:1.20-bullseye
FROM --platform=linux/amd64 golang:1.21.9-bookworm
ARG GOPROXY
@@ -56,7 +56,7 @@ RUN wget --quiet https://github.com/goreleaser/goreleaser/releases/download/v1.1
chmod +x /usr/bin/goreleaser
# get golangci-lint
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.51.0
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.54.2
# install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

View File

@@ -89,7 +89,7 @@ else
fi
if [[ -z "$BUILDX_PLATFORMS" ]]; then
BUILDX_PLATFORMS="linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le"
BUILDX_PLATFORMS="linux/amd64,linux/arm64"
fi
# Debugging info

View File

@@ -1,60 +1,215 @@
diff --git a/go.mod b/go.mod
index 5f939c481..6f281b45d 100644
index 5f939c481..0b760039b 100644
--- a/go.mod
+++ b/go.mod
@@ -25,12 +25,12 @@ require (
@@ -24,32 +24,32 @@ require (
github.com/restic/chunker v0.4.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
golang.org/x/crypto v0.5.0
- golang.org/x/crypto v0.5.0
- golang.org/x/net v0.5.0
+ golang.org/x/net v0.7.0
golang.org/x/oauth2 v0.4.0
- golang.org/x/oauth2 v0.4.0
+ golang.org/x/crypto v0.17.0
+ golang.org/x/net v0.17.0
+ golang.org/x/oauth2 v0.7.0
golang.org/x/sync v0.1.0
- golang.org/x/sys v0.4.0
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
+ golang.org/x/sys v0.5.0
+ golang.org/x/term v0.5.0
+ golang.org/x/text v0.7.0
google.golang.org/api v0.106.0
- google.golang.org/api v0.106.0
+ golang.org/x/sys v0.15.0
+ golang.org/x/term v0.15.0
+ golang.org/x/text v0.14.0
+ google.golang.org/api v0.114.0
)
require (
- cloud.google.com/go v0.108.0 // indirect
- cloud.google.com/go/compute v1.15.1 // indirect
+ cloud.google.com/go v0.110.0 // indirect
+ cloud.google.com/go/compute v1.19.1 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
- cloud.google.com/go/iam v0.10.0 // indirect
+ cloud.google.com/go/iam v0.13.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/dnaeon/go-vcr v1.2.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/felixge/fgprof v0.9.3 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
- github.com/golang/protobuf v1.5.2 // indirect
+ github.com/golang/protobuf v1.5.3 // indirect
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b // indirect
github.com/google/uuid v1.3.0 // indirect
- github.com/googleapis/enterprise-certificate-proxy v0.2.1 // indirect
- github.com/googleapis/gax-go/v2 v2.7.0 // indirect
+ github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect
+ github.com/googleapis/gax-go/v2 v2.7.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.3 // indirect
@@ -63,9 +63,9 @@ require (
go.opencensus.io v0.24.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/appengine v1.6.7 // indirect
- google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect
- google.golang.org/grpc v1.52.0 // indirect
- google.golang.org/protobuf v1.28.1 // indirect
+ google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
+ google.golang.org/grpc v1.56.3 // indirect
+ google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
diff --git a/go.sum b/go.sum
index 026e1d2fa..da35b7a6c 100644
index 026e1d2fa..c09e5fae1 100644
--- a/go.sum
+++ b/go.sum
@@ -189,8 +189,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
@@ -1,13 +1,13 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.108.0 h1:xntQwnfn8oHGX0crLVinvHM+AhXvi3QHQIEcX/2hiWk=
-cloud.google.com/go v0.108.0/go.mod h1:lNUfQqusBJp0bgAg6qrHgYFYbTB+dOiob1itwnlD33Q=
-cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE=
-cloud.google.com/go/compute v1.15.1/go.mod h1:bjjoF/NtFUrkD/urWfdHaKuOPDR5nWIs63rR+SXhcpA=
+cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys=
+cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY=
+cloud.google.com/go/compute v1.19.1 h1:am86mquDUgjGNWxiGn+5PGLbmgiWXlE/yNWpIpNvuXY=
+cloud.google.com/go/compute v1.19.1/go.mod h1:6ylj3a05WF8leseCdIf77NK0g1ey+nj5IKd5/kvShxE=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
-cloud.google.com/go/iam v0.10.0 h1:fpP/gByFs6US1ma53v7VxhvbJpO2Aapng6wabJ99MuI=
-cloud.google.com/go/iam v0.10.0/go.mod h1:nXAECrMt2qHpF6RZUZseteD6QyanL68reN4OXPw0UWM=
-cloud.google.com/go/longrunning v0.3.0 h1:NjljC+FYPV3uh5/OwWT6pVU+doBqMg2x/rZlE+CamDs=
+cloud.google.com/go/iam v0.13.0 h1:+CmB+K0J/33d0zSQ9SlFWUeCCEn5XJA0ZMZ3pHE9u8k=
+cloud.google.com/go/iam v0.13.0/go.mod h1:ljOg+rcNfzZ5d6f1nAUJ8ZIxOaZUVoS14bKCtaLZ/D0=
+cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM=
cloud.google.com/go/storage v1.28.1 h1:F5QDG5ChchaAVQhINh24U99OWHURqrW8OmQcGKXcbgI=
cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.0 h1:VuHAcMq8pU1IWNT/m5yRaGqbK0BiQKHT8X4DTp9CHdI=
@@ -70,8 +70,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
+github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -82,17 +82,17 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
+github.com/google/martian/v3 v3.3.2 h1:IqNFLAmvJOgVlpdEBiQbDc2EwKW77amAycfTuWKdfvw=
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b h1:8htHrh2bw9c7Idkb7YNac+ZpTqLMjRpI+FWu51ltaQc=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/enterprise-certificate-proxy v0.2.1 h1:RY7tHKZcRlk788d5WSo/e83gOyyy742E8GSs771ySpg=
-github.com/googleapis/enterprise-certificate-proxy v0.2.1/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
-github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1YupQSSQ=
-github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8=
+github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k=
+github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
+github.com/googleapis/gax-go/v2 v2.7.1 h1:gF4c0zjUP2H/s/hEGyLA3I0fA2ZWjzYiONAD6cvPr8A=
+github.com/googleapis/gax-go/v2 v2.7.1/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38/qKbhSAKP6QI=
github.com/hashicorp/golang-lru/v2 v2.0.1 h1:5pv5N1lT1fjLg2VQ5KWc7kmucp2x/kvFOnxuVTqZ6x4=
github.com/hashicorp/golang-lru/v2 v2.0.1/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
@@ -172,8 +172,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
+golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -189,11 +189,11 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
+golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
+golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/oauth2 v0.7.0 h1:qe6s0zUXlPX80/dITx3440hWZ7GwMwgDDyrSGTPJG/g=
+golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -214,17 +214,17 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
+golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
+golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
+golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -237,8 +237,8 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
-google.golang.org/api v0.106.0 h1:ffmW0faWCwKkpbbtvlY/K/8fUl+JKvNS5CVzRoyfCv8=
-google.golang.org/api v0.106.0/go.mod h1:2Ts0XTHNVWxypznxWOYUeI4g3WdP9Pk2Qk58+a/O9MY=
+google.golang.org/api v0.114.0 h1:1xQPji6cO2E2vLiI+C/XiFAnsn1WV3mjaEwGLhi3grE=
+google.golang.org/api v0.114.0/go.mod h1:ifYI2ZsFK6/uGddGfAD5BMxlnkBqCmqHSDUVi45N5Yg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
@@ -246,15 +246,15 @@ google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
-google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f h1:BWUVssLB0HVOSY78gIdvk1dTVYtT1y8SBWtPYuTJ/6w=
-google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.52.0 h1:kd48UiU7EHsV4rnLyOJRuP/Il/UHE7gdDAQ+SZI7nZk=
-google.golang.org/grpc v1.52.0/go.mod h1:pu6fVzoFb+NBYNAvQL08ic+lvB2IojljRYuun5vorUY=
+google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc=
+google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -266,8 +266,8 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
-google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
+google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=

View File

@@ -19,7 +19,7 @@ HACK_DIR=$(dirname "${BASH_SOURCE}")
${HACK_DIR}/update-3generated-crd-code.sh
# ensure no changes to generated CRDs
if [! git diff --exit-code config/crd/v1/crds/crds.go config/crd/v2alpha1/crds/crds.go >/dev/null]; then
if ! git diff --exit-code config/crd/v1/crds/crds.go config/crd/v2alpha1/crds/crds.go &> /dev/null; then
# revert changes to state before running CRD generation to stay consistent
# with code-generator `--verify-only` option which discards generated changes
git checkout config/crd

View File

@@ -71,7 +71,7 @@ func (n *namespacedFileStore) Path(selector *corev1api.SecretKeySelector) (strin
keyFilePath := filepath.Join(n.fsRoot, fmt.Sprintf("%s-%s", selector.Name, selector.Key))
file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE, 0644)
file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
return "", errors.Wrap(err, "unable to open credentials file for writing")
}

View File

@@ -119,6 +119,7 @@ func InvokeDeleteActions(ctx *Context) error {
if !action.Selector.Matches(labels.Set(obj.GetLabels())) {
continue
}
err = action.DeleteItemAction.Execute(&velero.DeleteItemActionExecuteInput{
Item: obj,
Backup: ctx.Backup,

View File

@@ -0,0 +1,148 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hook
import (
"fmt"
"sync"
)
const (
HookSourceAnnotation = "annotation"
HookSourceSpec = "spec"
)
// hookTrackerKey identifies a backup/restore hook
type hookTrackerKey struct {
// podNamespace indicates the namespace of the pod where hooks are executed.
// For hooks specified in the backup/restore spec, this field is the namespace of an applicable pod.
// For hooks specified in pod annotations, this field is the namespace of the pod where hooks are annotated.
podNamespace string
// podName indicates the pod where hooks are executed.
// For hooks specified in the backup/restore spec, this field is an applicable pod name.
// For hooks specified in pod annotations, this field is the pod where hooks are annotated.
podName string
// hookPhase is only for backup hooks; for restore hooks, this field is empty.
hookPhase hookPhase
// hookName is only for hooks specified in the backup/restore spec.
// For hooks specified in pod annotations, this field is empty or "<from-annotation>".
hookName string
// hookSource indicates where hooks come from.
hookSource string
// container indicates the container hooks use.
// For hooks specified in the backup/restore spec, the container might be the same under different hookNames.
container string
}
// hookTrackerVal records the execution status of a specific hook.
// hookTrackerVal is extensible to accommodate additional fields as needs develop.
type hookTrackerVal struct {
// hookFailed indicates if the hook failed to execute.
hookFailed bool
// hookExecuted indicates if the hook has already executed.
hookExecuted bool
}
// HookTracker tracks all hooks' execution status
type HookTracker struct {
lock *sync.RWMutex
tracker map[hookTrackerKey]hookTrackerVal
}
// NewHookTracker creates a hookTracker.
func NewHookTracker() *HookTracker {
return &HookTracker{
lock: &sync.RWMutex{},
tracker: make(map[hookTrackerKey]hookTrackerVal),
}
}
// Add adds a hook to the tracker
// Add must precede the Record for each individual hook.
// In other words, a hook must be added to the tracker before its execution result is recorded.
func (ht *HookTracker) Add(podNamespace, podName, container, source, hookName string, hookPhase hookPhase) {
ht.lock.Lock()
defer ht.lock.Unlock()
key := hookTrackerKey{
podNamespace: podNamespace,
podName: podName,
hookSource: source,
container: container,
hookPhase: hookPhase,
hookName: hookName,
}
if _, ok := ht.tracker[key]; !ok {
ht.tracker[key] = hookTrackerVal{
hookFailed: false,
hookExecuted: false,
}
}
}
// Record records the hook's execution status
// Add must precede the Record for each individual hook.
// In other words, a hook must be added to the tracker before its execution result is recorded.
func (ht *HookTracker) Record(podNamespace, podName, container, source, hookName string, hookPhase hookPhase, hookFailed bool) error {
ht.lock.Lock()
defer ht.lock.Unlock()
key := hookTrackerKey{
podNamespace: podNamespace,
podName: podName,
hookSource: source,
container: container,
hookPhase: hookPhase,
hookName: hookName,
}
var err error
if _, ok := ht.tracker[key]; ok {
ht.tracker[key] = hookTrackerVal{
hookFailed: hookFailed,
hookExecuted: true,
}
} else {
err = fmt.Errorf("hook not exist in hooks tracker, hook key: %v", key)
}
return err
}
// Stat calculates the number of attempted hooks and failed hooks
func (ht *HookTracker) Stat() (hookAttemptedCnt int, hookFailed int) {
ht.lock.RLock()
defer ht.lock.RUnlock()
for _, hookInfo := range ht.tracker {
if hookInfo.hookExecuted {
hookAttemptedCnt++
if hookInfo.hookFailed {
hookFailed++
}
}
}
return
}
// GetTracker gets the tracker inside HookTracker
func (ht *HookTracker) GetTracker() map[hookTrackerKey]hookTrackerVal {
ht.lock.RLock()
defer ht.lock.RUnlock()
return ht.tracker
}

View File

@@ -0,0 +1,93 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hook
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestNewHookTracker(t *testing.T) {
tracker := NewHookTracker()
assert.NotNil(t, tracker)
assert.Empty(t, tracker.tracker)
}
func TestHookTracker_Add(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
key := hookTrackerKey{
podNamespace: "ns1",
podName: "pod1",
container: "container1",
hookPhase: PhasePre,
hookSource: HookSourceAnnotation,
hookName: "h1",
}
_, ok := tracker.tracker[key]
assert.True(t, ok)
}
func TestHookTracker_Record(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
err := tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, true)
key := hookTrackerKey{
podNamespace: "ns1",
podName: "pod1",
container: "container1",
hookPhase: PhasePre,
hookSource: HookSourceAnnotation,
hookName: "h1",
}
info := tracker.tracker[key]
assert.True(t, info.hookFailed)
assert.Nil(t, err)
err = tracker.Record("ns2", "pod2", "container1", HookSourceAnnotation, "h1", PhasePre, true)
assert.NotNil(t, err)
}
func TestHookTracker_Stat(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
tracker.Add("ns2", "pod2", "container1", HookSourceAnnotation, "h2", PhasePre)
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, true)
attempted, failed := tracker.Stat()
assert.Equal(t, 1, attempted)
assert.Equal(t, 1, failed)
}
func TestHookTracker_Get(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
tr := tracker.GetTracker()
assert.NotNil(t, tr)
t.Logf("tracker :%+v", tr)
}

View File

@@ -19,6 +19,7 @@ package hook
import (
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
@@ -37,6 +38,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/restorehelper"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
@@ -61,6 +63,7 @@ const (
podRestoreHookOnErrorAnnotationKey = "post.hook.restore.velero.io/on-error"
podRestoreHookTimeoutAnnotationKey = "post.hook.restore.velero.io/exec-timeout"
podRestoreHookWaitTimeoutAnnotationKey = "post.hook.restore.velero.io/wait-timeout"
podRestoreHookWaitForReadyAnnotationKey = "post.hook.restore.velero.io/wait-for-ready"
podRestoreHookInitContainerImageAnnotationKey = "init.hook.restore.velero.io/container-image"
podRestoreHookInitContainerNameAnnotationKey = "init.hook.restore.velero.io/container-name"
podRestoreHookInitContainerCommandAnnotationKey = "init.hook.restore.velero.io/command"
@@ -79,6 +82,7 @@ type ItemHookHandler interface {
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
hookTracker *HookTracker,
) error
}
@@ -197,6 +201,7 @@ func (h *DefaultItemHookHandler) HandleHooks(
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
hookTracker *HookTracker,
) error {
// We only support hooks on pods right now
if groupResource != kuberesource.Pods {
@@ -218,18 +223,29 @@ func (h *DefaultItemHookHandler) HandleHooks(
hookFromAnnotations = getPodExecHookFromAnnotations(metadata.GetAnnotations(), "", log)
}
if hookFromAnnotations != nil {
hookTracker.Add(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase)
hookLog := log.WithFields(
logrus.Fields{
"hookSource": "annotation",
"hookSource": HookSourceAnnotation,
"hookType": "exec",
"hookPhase": phase,
},
)
if err := h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, "<from-annotation>", hookFromAnnotations); err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hookFromAnnotations.OnError == velerov1api.HookErrorModeFail {
return err
}
hookFailed := false
var errExec error
if errExec = h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, "<from-annotation>", hookFromAnnotations); errExec != nil {
hookLog.WithError(errExec).Error("Error executing hook")
hookFailed = true
}
errTracker := hookTracker.Record(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase, hookFailed)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
if errExec != nil && hookFromAnnotations.OnError == velerov1api.HookErrorModeFail {
return errExec
}
return nil
@@ -237,6 +253,8 @@ func (h *DefaultItemHookHandler) HandleHooks(
labels := labels.Set(metadata.GetLabels())
// Otherwise, check for hooks defined in the backup spec.
// modeFailError records the error from the hook with "Fail" error mode
var modeFailError error
for _, resourceHook := range resourceHooks {
if !resourceHook.Selector.applicableTo(groupResource, namespace, labels) {
continue
@@ -248,21 +266,34 @@ func (h *DefaultItemHookHandler) HandleHooks(
} else {
hooks = resourceHook.Post
}
for _, hook := range hooks {
if groupResource == kuberesource.Pods {
if hook.Exec != nil {
hookLog := log.WithFields(
logrus.Fields{
"hookSource": "backupSpec",
"hookType": "exec",
"hookPhase": phase,
},
)
err := h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, resourceHook.Name, hook.Exec)
if err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hook.Exec.OnError == velerov1api.HookErrorModeFail {
return err
hookTracker.Add(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase)
// The remaining hooks will only be executed if modeFailError is nil.
// Otherwise, execution will stop and only hook collection will occur.
if modeFailError == nil {
hookLog := log.WithFields(
logrus.Fields{
"hookSource": HookSourceSpec,
"hookType": "exec",
"hookPhase": phase,
},
)
hookFailed := false
err := h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, resourceHook.Name, hook.Exec)
if err != nil {
hookLog.WithError(err).Error("Error executing hook")
hookFailed = true
if hook.Exec.OnError == velerov1api.HookErrorModeFail {
modeFailError = err
}
}
errTracker := hookTracker.Record(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase, hookFailed)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
}
}
@@ -270,7 +301,7 @@ func (h *DefaultItemHookHandler) HandleHooks(
}
}
return nil
return modeFailError
}
// NoOpItemHookHandler is the an itemHookHandler for the Finalize controller where hooks don't run
@@ -282,6 +313,7 @@ func (h *NoOpItemHookHandler) HandleHooks(
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
hookTracker *HookTracker,
) error {
return nil
}
@@ -477,12 +509,23 @@ func getPodExecRestoreHookFromAnnotations(annotations map[string]string, log log
}
}
waitForReadyString := annotations[podRestoreHookWaitForReadyAnnotationKey]
waitForReady := boolptr.False()
if waitForReadyString != "" {
var err error
*waitForReady, err = strconv.ParseBool(waitForReadyString)
if err != nil {
log.Warn(errors.Wrapf(err, "Unable to parse wait for ready %s, ignoring", waitForReadyString))
}
}
return &velerov1api.ExecRestoreHook{
Container: container,
Command: parseStringToCommand(commandValue),
OnError: onError,
ExecTimeout: metav1.Duration{Duration: execTimeout},
WaitTimeout: metav1.Duration{Duration: waitTimeout},
Container: container,
Command: parseStringToCommand(commandValue),
OnError: onError,
ExecTimeout: metav1.Duration{Duration: execTimeout},
WaitTimeout: metav1.Duration{Duration: waitTimeout},
WaitForReady: waitForReady,
}
}
@@ -500,6 +543,7 @@ func GroupRestoreExecHooks(
resourceRestoreHooks []ResourceRestoreHook,
pod *corev1api.Pod,
log logrus.FieldLogger,
hookTrack *HookTracker,
) (map[string][]PodExecRestoreHook, error) {
byContainer := map[string][]PodExecRestoreHook{}
@@ -516,10 +560,11 @@ func GroupRestoreExecHooks(
if hookFromAnnotation.Container == "" {
hookFromAnnotation.Container = pod.Spec.Containers[0].Name
}
hookTrack.Add(metadata.GetNamespace(), metadata.GetName(), hookFromAnnotation.Container, HookSourceAnnotation, "<from-annotation>", hookPhase(""))
byContainer[hookFromAnnotation.Container] = []PodExecRestoreHook{
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: *hookFromAnnotation,
},
}
@@ -540,12 +585,17 @@ func GroupRestoreExecHooks(
named := PodExecRestoreHook{
HookName: rrh.Name,
Hook: *rh.Exec,
HookSource: "backupSpec",
HookSource: HookSourceSpec,
}
// default to false if attr WaitForReady not set
if named.Hook.WaitForReady == nil {
named.Hook.WaitForReady = boolptr.False()
}
// default to first container in pod if unset, without mutating resource restore hook
if named.Hook.Container == "" {
named.Hook.Container = pod.Spec.Containers[0].Name
}
hookTrack.Add(metadata.GetNamespace(), metadata.GetName(), named.Hook.Container, HookSourceSpec, rrh.Name, hookPhase(""))
byContainer[named.Hook.Container] = append(byContainer[named.Hook.Container], named)
}
}

View File

@@ -36,6 +36,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/collections"
)
@@ -107,6 +108,7 @@ func TestHandleHooksSkips(t *testing.T) {
},
}
hookTracker := NewHookTracker()
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
podCommandExecutor := &velerotest.MockPodCommandExecutor{}
@@ -117,7 +119,7 @@ func TestHandleHooksSkips(t *testing.T) {
}
groupResource := schema.ParseGroupResource(test.groupResource)
err := h.HandleHooks(velerotest.NewLogger(), groupResource, test.item, test.hooks, PhasePre)
err := h.HandleHooks(velerotest.NewLogger(), groupResource, test.item, test.hooks, PhasePre, hookTracker)
assert.NoError(t, err)
})
}
@@ -484,7 +486,8 @@ func TestHandleHooks(t *testing.T) {
}
groupResource := schema.ParseGroupResource(test.groupResource)
err := h.HandleHooks(velerotest.NewLogger(), groupResource, test.item, test.hooks, test.phase)
hookTracker := NewHookTracker()
err := h.HandleHooks(velerotest.NewLogger(), groupResource, test.item, test.hooks, test.phase, hookTracker)
if test.expectedError != nil {
assert.EqualError(t, err, test.expectedError.Error())
@@ -724,7 +727,8 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookCommandAnnotationKey: "/usr/bin/foo",
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
Command: []string{"/usr/bin/foo"},
WaitForReady: boolptr.False(),
},
},
{
@@ -733,7 +737,8 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookCommandAnnotationKey: `["a","b","c"]`,
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"a", "b", "c"},
Command: []string{"a", "b", "c"},
WaitForReady: boolptr.False(),
},
},
{
@@ -743,8 +748,9 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookOnErrorAnnotationKey: string(velerov1api.HookErrorModeContinue),
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
WaitForReady: boolptr.False(),
},
},
{
@@ -754,8 +760,9 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookOnErrorAnnotationKey: string(velerov1api.HookErrorModeFail),
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
WaitForReady: boolptr.False(),
},
},
{
@@ -766,9 +773,10 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookWaitTimeoutAnnotationKey: "1h",
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
ExecTimeout: metav1.Duration{Duration: 45 * time.Second},
WaitTimeout: metav1.Duration{Duration: time.Hour},
Command: []string{"/usr/bin/foo"},
ExecTimeout: metav1.Duration{Duration: 45 * time.Second},
WaitTimeout: metav1.Duration{Duration: time.Hour},
WaitForReady: boolptr.False(),
},
},
{
@@ -778,8 +786,9 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookContainerAnnotationKey: "my-app",
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
Container: "my-app",
Command: []string{"/usr/bin/foo"},
Container: "my-app",
WaitForReady: boolptr.False(),
},
},
{
@@ -790,9 +799,10 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookTimeoutAnnotationKey: "none",
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
Container: "my-app",
ExecTimeout: metav1.Duration{Duration: 0},
Command: []string{"/usr/bin/foo"},
Container: "my-app",
ExecTimeout: metav1.Duration{Duration: 0},
WaitForReady: boolptr.False(),
},
},
{
@@ -803,9 +813,10 @@ func TestGetPodExecRestoreHookFromAnnotations(t *testing.T) {
podRestoreHookWaitTimeoutAnnotationKey: "none",
},
expected: &velerov1api.ExecRestoreHook{
Command: []string{"/usr/bin/foo"},
Container: "my-app",
ExecTimeout: metav1.Duration{Duration: 0},
Command: []string{"/usr/bin/foo"},
Container: "my-app",
ExecTimeout: metav1.Duration{Duration: 0},
WaitForReady: boolptr.False(),
},
},
}
@@ -842,6 +853,7 @@ func TestGroupRestoreExecHooks(t *testing.T) {
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
podRestoreHookWaitForReadyAnnotationKey, "true",
)).
Containers(&corev1api.Container{
Name: "container1",
@@ -851,13 +863,14 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.True(),
},
},
},
@@ -881,13 +894,14 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
},
},
@@ -921,13 +935,14 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "hook1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
},
},
@@ -960,13 +975,14 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "hook1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
},
},
@@ -1007,13 +1023,14 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
},
},
@@ -1105,11 +1122,12 @@ func TestGroupRestoreExecHooks(t *testing.T) {
RestoreHooks: []velerov1api.RestoreResourceHook{
{
Exec: &velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/aaa"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 4},
WaitTimeout: metav1.Duration{Duration: time.Minute * 4},
Container: "container1",
Command: []string{"/usr/bin/aaa"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 4},
WaitTimeout: metav1.Duration{Duration: time.Minute * 4},
WaitForReady: boolptr.True(),
},
},
},
@@ -1124,57 +1142,63 @@ func TestGroupRestoreExecHooks(t *testing.T) {
"container1": {
{
HookName: "hook1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
},
{
HookName: "hook1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/bar"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 2},
WaitTimeout: metav1.Duration{Duration: time.Minute * 2},
Container: "container1",
Command: []string{"/usr/bin/bar"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 2},
WaitTimeout: metav1.Duration{Duration: time.Minute * 2},
WaitForReady: boolptr.False(),
},
},
{
HookName: "hook2",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/aaa"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 4},
WaitTimeout: metav1.Duration{Duration: time.Minute * 4},
Container: "container1",
Command: []string{"/usr/bin/aaa"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 4},
WaitTimeout: metav1.Duration{Duration: time.Minute * 4},
WaitForReady: boolptr.True(),
},
},
},
"container2": {
{
HookName: "hook1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container2",
Command: []string{"/usr/bin/baz"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 3},
WaitTimeout: metav1.Duration{Duration: time.Second * 3},
Container: "container2",
Command: []string{"/usr/bin/baz"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second * 3},
WaitTimeout: metav1.Duration{Duration: time.Second * 3},
WaitForReady: boolptr.False(),
},
},
},
},
},
}
hookTracker := NewHookTracker()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := GroupRestoreExecHooks(tc.resourceRestoreHooks, tc.pod, velerotest.NewLogger())
actual, err := GroupRestoreExecHooks(tc.resourceRestoreHooks, tc.pod, velerotest.NewLogger(), hookTracker)
assert.Nil(t, err)
assert.Equal(t, tc.expected, actual)
})
@@ -1963,3 +1987,494 @@ func TestValidateContainer(t *testing.T) {
// noCommand string should return the expected error.
assert.Equal(t, expectedError, ValidateContainer([]byte(noCommand)))
}
func TestBackupHookTracker(t *testing.T) {
type podWithHook struct {
item runtime.Unstructured
hooks []ResourceHook
hookErrorsByContainer map[string]error
expectedPodHook *velerov1api.ExecHook
expectedPodHookError error
expectedError error
}
test1 := []struct {
name string
phase hookPhase
groupResource string
pods []podWithHook
hookTracker *HookTracker
expectedHookAttempted int
expectedHookFailed int
}{
{
name: "a pod with spec hooks, no error",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 2,
expectedHookFailed: 0,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name"
}
}`),
hooks: []ResourceHook{
{
Name: "hook1",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"pre-1a"},
},
},
{
Exec: &velerov1api.ExecHook{
Container: "1b",
Command: []string{"pre-1b"},
},
},
},
},
},
},
},
},
{
name: "a pod with spec hooks and same container under different hook name, no error",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 4,
expectedHookFailed: 0,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name"
}
}`),
hooks: []ResourceHook{
{
Name: "hook1",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"pre-1a"},
},
},
{
Exec: &velerov1api.ExecHook{
Container: "1b",
Command: []string{"pre-1b"},
},
},
},
},
{
Name: "hook2",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"2a"},
},
},
{
Exec: &velerov1api.ExecHook{
Container: "2b",
Command: []string{"2b"},
},
},
},
},
},
},
},
},
{
name: "a pod with spec hooks, on error=fail",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 3,
expectedHookFailed: 2,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name"
}
}`),
hooks: []ResourceHook{
{
Name: "hook1",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"1a"},
OnError: velerov1api.HookErrorModeContinue,
},
},
{
Exec: &velerov1api.ExecHook{
Container: "1b",
Command: []string{"1b"},
},
},
},
},
{
Name: "hook2",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "2",
Command: []string{"2"},
OnError: velerov1api.HookErrorModeFail,
},
},
},
},
{
Name: "hook3",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "3",
Command: []string{"3"},
},
},
},
},
},
hookErrorsByContainer: map[string]error{
"1a": errors.New("1a error, but continue"),
"2": errors.New("2 error, fail"),
},
},
},
},
{
name: "a pod with annotation and spec hooks",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 1,
expectedHookFailed: 0,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name",
"annotations": {
"hook.backup.velero.io/container": "c",
"hook.backup.velero.io/command": "/bin/ls"
}
}
}`),
expectedPodHook: &velerov1api.ExecHook{
Container: "c",
Command: []string{"/bin/ls"},
},
hooks: []ResourceHook{
{
Name: "hook1",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"1a"},
OnError: velerov1api.HookErrorModeContinue,
},
},
{
Exec: &velerov1api.ExecHook{
Container: "1b",
Command: []string{"1b"},
},
},
},
},
},
},
},
},
{
name: "a pod with annotation, on error=fail",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 1,
expectedHookFailed: 1,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name",
"annotations": {
"hook.backup.velero.io/container": "c",
"hook.backup.velero.io/command": "/bin/ls",
"hook.backup.velero.io/on-error": "Fail"
}
}
}`),
expectedPodHook: &velerov1api.ExecHook{
Container: "c",
Command: []string{"/bin/ls"},
OnError: velerov1api.HookErrorModeFail,
},
expectedPodHookError: errors.New("pod hook error"),
},
},
},
{
name: "two pods, one with annotation, the other with spec",
phase: PhasePre,
groupResource: "pods",
hookTracker: NewHookTracker(),
expectedHookAttempted: 3,
expectedHookFailed: 1,
pods: []podWithHook{
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name",
"annotations": {
"hook.backup.velero.io/container": "c",
"hook.backup.velero.io/command": "/bin/ls",
"hook.backup.velero.io/on-error": "Fail"
}
}
}`),
expectedPodHook: &velerov1api.ExecHook{
Container: "c",
Command: []string{"/bin/ls"},
OnError: velerov1api.HookErrorModeFail,
},
expectedPodHookError: errors.New("pod hook error"),
},
{
item: velerotest.UnstructuredOrDie(`
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "ns",
"name": "name"
}
}`),
hooks: []ResourceHook{
{
Name: "hook1",
Pre: []velerov1api.BackupResourceHook{
{
Exec: &velerov1api.ExecHook{
Container: "1a",
Command: []string{"pre-1a"},
},
},
{
Exec: &velerov1api.ExecHook{
Container: "1b",
Command: []string{"pre-1b"},
},
},
},
},
},
},
},
},
}
for _, test := range test1 {
t.Run(test.name, func(t *testing.T) {
podCommandExecutor := &velerotest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
h := &DefaultItemHookHandler{
PodCommandExecutor: podCommandExecutor,
}
groupResource := schema.ParseGroupResource(test.groupResource)
hookTracker := test.hookTracker
for _, pod := range test.pods {
if pod.expectedPodHook != nil {
podCommandExecutor.On("ExecutePodCommand", mock.Anything, pod.item.UnstructuredContent(), "ns", "name", "<from-annotation>", pod.expectedPodHook).Return(pod.expectedPodHookError)
} else {
hookLoop:
for _, resourceHook := range pod.hooks {
for _, hook := range resourceHook.Pre {
hookError := pod.hookErrorsByContainer[hook.Exec.Container]
podCommandExecutor.On("ExecutePodCommand", mock.Anything, pod.item.UnstructuredContent(), "ns", "name", resourceHook.Name, hook.Exec).Return(hookError)
if hookError != nil && hook.Exec.OnError == velerov1api.HookErrorModeFail {
break hookLoop
}
}
for _, hook := range resourceHook.Post {
hookError := pod.hookErrorsByContainer[hook.Exec.Container]
podCommandExecutor.On("ExecutePodCommand", mock.Anything, pod.item.UnstructuredContent(), "ns", "name", resourceHook.Name, hook.Exec).Return(hookError)
if hookError != nil && hook.Exec.OnError == velerov1api.HookErrorModeFail {
break hookLoop
}
}
}
}
h.HandleHooks(velerotest.NewLogger(), groupResource, pod.item, pod.hooks, test.phase, hookTracker)
}
actualAttempted, actualFailed := hookTracker.Stat()
assert.Equal(t, test.expectedHookAttempted, actualAttempted)
assert.Equal(t, test.expectedHookFailed, actualFailed)
})
}
}
func TestRestoreHookTrackerAdd(t *testing.T) {
testCases := []struct {
name string
resourceRestoreHooks []ResourceRestoreHook
pod *corev1api.Pod
hookTracker *HookTracker
expectedCnt int
}{
{
name: "neither spec hooks nor annotations hooks are set",
resourceRestoreHooks: nil,
pod: builder.ForPod("default", "my-pod").Result(),
hookTracker: NewHookTracker(),
expectedCnt: 0,
},
{
name: "a hook specified in pod annotation",
resourceRestoreHooks: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
podRestoreHookWaitForReadyAnnotationKey, "true",
)).
Containers(&corev1api.Container{
Name: "container1",
}).
Result(),
hookTracker: NewHookTracker(),
expectedCnt: 1,
},
{
name: "two hooks specified in restore spec",
resourceRestoreHooks: []ResourceRestoreHook{
{
Name: "hook1",
Selector: ResourceHookSelector{},
RestoreHooks: []velerov1api.RestoreResourceHook{
{
Exec: &velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
},
{
Exec: &velerov1api.ExecRestoreHook{
Container: "container2",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
},
},
},
},
pod: builder.ForPod("default", "my-pod").
Containers(&corev1api.Container{
Name: "container1",
}, &corev1api.Container{
Name: "container2",
}).
Result(),
hookTracker: NewHookTracker(),
expectedCnt: 2,
},
{
name: "both spec hooks and annotations hooks are set",
resourceRestoreHooks: []ResourceRestoreHook{
{
Name: "hook1",
Selector: ResourceHookSelector{},
RestoreHooks: []velerov1api.RestoreResourceHook{
{
Exec: &velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo2"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
},
},
},
},
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
podRestoreHookWaitForReadyAnnotationKey, "true",
)).
Containers(&corev1api.Container{
Name: "container1",
}).
Result(),
hookTracker: NewHookTracker(),
expectedCnt: 1,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
_, _ = GroupRestoreExecHooks(tc.resourceRestoreHooks, tc.pod, velerotest.NewLogger(), tc.hookTracker)
tracker := tc.hookTracker.GetTracker()
assert.Equal(t, tc.expectedCnt, len(tracker))
})
}
}


@@ -29,6 +29,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
@@ -38,6 +39,7 @@ type WaitExecHookHandler interface {
log logrus.FieldLogger,
pod *v1.Pod,
byContainer map[string][]PodExecRestoreHook,
hookTrack *HookTracker,
) []error
}
@@ -49,6 +51,11 @@ type DefaultListWatchFactory struct {
PodsGetter cache.Getter
}
type HookErrInfo struct {
Namespace string
Err error
}
func (d *DefaultListWatchFactory) NewListWatch(namespace string, selector fields.Selector) cache.ListerWatcher {
return cache.NewListWatchFromClient(d.PodsGetter, "pods", namespace, selector)
}
@@ -67,6 +74,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
log logrus.FieldLogger,
pod *v1.Pod,
byContainer map[string][]PodExecRestoreHook,
hookTracker *HookTracker,
) []error {
if pod == nil {
return nil
@@ -126,8 +134,8 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
}
for containerName, hooks := range byContainer {
if !isContainerRunning(newPod, containerName) {
podLog.Infof("Container %s is not running: post-restore hooks will not yet be executed", containerName)
if !isContainerUp(newPod, containerName, hooks) {
podLog.Infof("Container %s is not up: post-restore hooks will not yet be executed", containerName)
continue
}
podMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(newPod)
@@ -157,8 +165,14 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
if hook.Hook.WaitTimeout.Duration != 0 && time.Since(waitStart) > hook.Hook.WaitTimeout.Duration {
err := fmt.Errorf("hook %s in container %s expired before executing", hook.HookName, hook.Hook.Container)
hookLog.Error(err)
errors = append(errors, err)
errTracker := hookTracker.Record(newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, hookPhase(""), true)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
cancel()
return
}
@@ -169,13 +183,24 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
OnError: hook.Hook.OnError,
Timeout: hook.Hook.ExecTimeout,
}
if err := e.PodCommandExecutor.ExecutePodCommand(hookLog, podMap, pod.Namespace, pod.Name, hook.HookName, eh); err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
cancel()
return
}
hookFailed := false
var hookErr error
if hookErr = e.PodCommandExecutor.ExecutePodCommand(hookLog, podMap, pod.Namespace, pod.Name, hook.HookName, eh); hookErr != nil {
hookLog.WithError(hookErr).Error("Error executing hook")
hookErr = fmt.Errorf("hook %s in container %s failed to execute, err: %v", hook.HookName, hook.Hook.Container, hookErr)
errors = append(errors, hookErr)
hookFailed = true
}
errTracker := hookTracker.Record(newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, hookPhase(""), hookFailed)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
if hookErr != nil && hook.Hook.OnError == velerov1api.HookErrorModeFail {
cancel()
return
}
}
delete(byContainer, containerName)
@@ -203,10 +228,9 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
podWatcher.Run(ctx.Done())
// There are some cases where this function could return with unexecuted hooks: the pod may
// be deleted, a hook with OnError mode Fail could fail, or it may timeout waiting for
// be deleted, a hook could fail, or it may timeout waiting for
// containers to become ready.
// Each unexecuted hook is logged as an error but only hooks with OnError mode Fail return
// an error from this function.
// Each unexecuted hook is logged as an error and this error will be returned from this function.
for _, hooks := range byContainer {
for _, hook := range hooks {
if hook.executed {
@@ -220,10 +244,14 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
"hookPhase": "post",
},
)
hookLog.Error(err)
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
errTracker := hookTracker.Record(pod.Namespace, pod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, hookPhase(""), true)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
hookLog.Error(err)
errors = append(errors, err)
}
}
@@ -243,14 +271,24 @@ func podHasContainer(pod *v1.Pod, containerName string) bool {
return false
}
func isContainerRunning(pod *v1.Pod, containerName string) bool {
func isContainerUp(pod *v1.Pod, containerName string, hooks []PodExecRestoreHook) bool {
if pod == nil {
return false
}
var waitForReady bool
for _, hook := range hooks {
if boolptr.IsSetToTrue(hook.Hook.WaitForReady) {
waitForReady = true
break
}
}
for _, cs := range pod.Status.ContainerStatuses {
if cs.Name != containerName {
continue
}
if waitForReady {
return cs.Ready
}
return cs.State.Running != nil
}
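
The effect of the rename: when any hook for the container sets WaitForReady, the gate is container readiness (cs.Ready) rather than a bare Running state, so execution keeps waiting on running-but-unready containers. A hedged in-package sketch (`pod` assumed in scope):

// Sketch: with WaitForReady set, a running-but-not-ready container is not "up".
hooks := []PodExecRestoreHook{{Hook: velerov1api.ExecRestoreHook{WaitForReady: boolptr.True()}}}
up := isContainerUp(pod, "container1", hooks) // false until cs.Ready is true
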


@@ -35,6 +35,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
)
type fakeListWatchFactory struct {
@@ -97,7 +98,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -166,7 +167,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -208,10 +209,10 @@ func TestWaitExecHandleHooks(t *testing.T) {
Result(),
},
},
expectedErrors: []error{errors.New("pod hook error")},
expectedErrors: []error{errors.New("hook <from-annotation> in container container1 failed to execute, err: pod hook error")},
},
{
name: "should return no error when hook from annotation fails with on error mode continue",
name: "should return error when hook from annotation fails with on error mode continue",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
@@ -235,7 +236,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -277,7 +278,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
Result(),
},
},
expectedErrors: nil,
expectedErrors: []error{errors.New("hook <from-annotation> in container container1 failed to execute, err: pod hook error")},
},
{
name: "should return no error when hook from annotation executes after 10ms wait for container to start",
@@ -304,7 +305,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -390,7 +391,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -421,7 +422,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
},
},
{
name: "should return no error when spec hook with wait timeout expires with OnError mode Continue",
name: "should return error when spec hook with wait timeout expires with OnError mode Continue",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
@@ -434,12 +435,12 @@ func TestWaitExecHandleHooks(t *testing.T) {
},
}).
Result(),
expectedErrors: nil,
expectedErrors: []error{errors.New("hook my-hook-1 in container container1 in pod default/my-pod not executed: context deadline exceeded")},
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -470,7 +471,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -501,7 +502,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -514,8 +515,8 @@ func TestWaitExecHandleHooks(t *testing.T) {
sharedHooksContextTimeout: time.Millisecond,
},
{
name: "should return no error when shared hooks context is canceled before spec hook with OnError mode Continue executes",
expectedErrors: nil,
name: "should return error when shared hooks context is canceled before spec hook with OnError mode Continue executes",
expectedErrors: []error{errors.New("hook my-hook-1 in container container1 in pod default/my-pod not executed: context deadline exceeded")},
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
@@ -532,7 +533,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -573,7 +574,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
@@ -583,7 +584,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
"container2": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container2",
Command: []string{"/usr/bin/bar"},
@@ -743,7 +744,8 @@ func TestWaitExecHandleHooks(t *testing.T) {
defer ctxCancel()
}
errs := h.HandleHooks(ctx, velerotest.NewLogger(), test.initialPod, test.byContainer)
hookTracker := NewHookTracker()
errs := h.HandleHooks(ctx, velerotest.NewLogger(), test.initialPod, test.byContainer, hookTracker)
// for i, ee := range test.expectedErrors {
require.Len(t, errs, len(test.expectedErrors))
@@ -790,12 +792,13 @@ func TestPodHasContainer(t *testing.T) {
}
}
func TestIsContainerRunning(t *testing.T) {
func TestIsContainerUp(t *testing.T) {
tests := []struct {
name string
pod *v1.Pod
container string
expect bool
hooks []PodExecRestoreHook
}{
{
name: "should return true when running",
@@ -809,6 +812,49 @@ func TestIsContainerRunning(t *testing.T) {
},
}).
Result(),
hooks: []PodExecRestoreHook{},
},
{
name: "should return false when running but not ready",
container: "container1",
expect: false,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
Ready: false,
}).
Result(),
hooks: []PodExecRestoreHook{
{
Hook: velerov1api.ExecRestoreHook{
WaitForReady: boolptr.True(),
},
},
},
},
{
name: "should return true when running and ready",
container: "container1",
expect: true,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
Ready: true,
}).
Result(),
hooks: []PodExecRestoreHook{
{
Hook: velerov1api.ExecRestoreHook{
WaitForReady: boolptr.True(),
},
},
},
},
{
name: "should return false when no state is set",
@@ -820,6 +866,7 @@ func TestIsContainerRunning(t *testing.T) {
State: v1.ContainerState{},
}).
Result(),
hooks: []PodExecRestoreHook{},
},
{
name: "should return false when waiting",
@@ -833,6 +880,7 @@ func TestIsContainerRunning(t *testing.T) {
},
}).
Result(),
hooks: []PodExecRestoreHook{},
},
{
name: "should return true when running and first container is terminated",
@@ -852,11 +900,12 @@ func TestIsContainerRunning(t *testing.T) {
},
}).
Result(),
hooks: []PodExecRestoreHook{},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
actual := isContainerRunning(test.pod, test.container)
actual := isContainerUp(test.pod, test.container, test.hooks)
assert.Equal(t, actual, test.expect)
})
}
@@ -949,3 +998,284 @@ func TestMaxHookWait(t *testing.T) {
})
}
}
func TestRestoreHookTrackerUpdate(t *testing.T) {
type change struct {
// delta to wait since last change applied or pod added
wait time.Duration
updated *v1.Pod
}
type expectedExecution struct {
hook *velerov1api.ExecHook
name string
error error
pod *v1.Pod
}
hookTracker1 := NewHookTracker()
hookTracker1.Add("default", "my-pod", "container1", HookSourceAnnotation, "<from-annotation>", hookPhase(""))
hookTracker2 := NewHookTracker()
hookTracker2.Add("default", "my-pod", "container1", HookSourceSpec, "my-hook-1", hookPhase(""))
hookTracker3 := NewHookTracker()
hookTracker3.Add("default", "my-pod", "container1", HookSourceSpec, "my-hook-1", hookPhase(""))
hookTracker3.Add("default", "my-pod", "container2", HookSourceSpec, "my-hook-2", hookPhase(""))
tests1 := []struct {
name string
initialPod *v1.Pod
groupResource string
byContainer map[string][]PodExecRestoreHook
expectedExecutions []expectedExecution
hookTracker *HookTracker
expectedFailed int
}{
{
name: "a hook executes successfully",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "<from-annotation>",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "<from-annotation>",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{Duration: time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
hookTracker: hookTracker1,
expectedFailed: 0,
},
{
name: "a hook with OnError mode Fail failed to execute",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
WaitTimeout: metav1.Duration{Duration: time.Millisecond},
},
},
},
},
hookTracker: hookTracker2,
expectedFailed: 1,
},
{
name: "a hook with OnError mode Continue failed to execute",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
WaitTimeout: metav1.Duration{Duration: time.Millisecond},
},
},
},
},
hookTracker: hookTracker2,
expectedFailed: 1,
},
{
name: "two hooks with OnError mode Continue failed to execute",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
// initially both are waiting
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
WaitTimeout: metav1.Duration{Duration: time.Millisecond},
},
},
},
"container2": {
{
HookName: "my-hook-2",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container2",
Command: []string{"/usr/bin/bar"},
OnError: velerov1api.HookErrorModeContinue,
WaitTimeout: metav1.Duration{Duration: time.Millisecond},
},
},
},
},
hookTracker: hookTracker3,
expectedFailed: 2,
},
{
name: "a hook was recorded before added to tracker",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: HookSourceSpec,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
WaitTimeout: metav1.Duration{Duration: time.Millisecond},
},
},
},
},
hookTracker: NewHookTracker(),
expectedFailed: 0,
},
}
for _, test := range tests1 {
t.Run(test.name, func(t *testing.T) {
source := fcache.NewFakeControllerSource()
go func() {
// This is the state of the pod that will be seen by the AddFunc handler.
source.Add(test.initialPod)
}()
podCommandExecutor := &velerotest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
h := &DefaultWaitExecHookHandler{
PodCommandExecutor: podCommandExecutor,
ListWatchFactory: &fakeListWatchFactory{source},
}
for _, e := range test.expectedExecutions {
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(e.pod)
assert.Nil(t, err)
podCommandExecutor.On("ExecutePodCommand", mock.Anything, obj, e.pod.Namespace, e.pod.Name, e.name, e.hook).Return(e.error)
}
ctx := context.Background()
_ = h.HandleHooks(ctx, velerotest.NewLogger(), test.initialPod, test.byContainer, test.hookTracker)
_, actualFailed := test.hookTracker.Stat()
assert.Equal(t, test.expectedFailed, actualFailed)
})
}
}


@@ -0,0 +1,45 @@
package resourcemodifiers
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/yaml"
)
type JSONMergePatch struct {
PatchData string `json:"patchData,omitempty"`
}
type JSONMergePatcher struct {
patches []JSONMergePatch
}
func (p *JSONMergePatcher) Patch(u *unstructured.Unstructured, _ logrus.FieldLogger) (*unstructured.Unstructured, error) {
objBytes, err := u.MarshalJSON()
if err != nil {
return nil, fmt.Errorf("error in marshaling object %s", err)
}
for _, patch := range p.patches {
patchBytes, err := yaml.YAMLToJSON([]byte(patch.PatchData))
if err != nil {
return nil, fmt.Errorf("error in converting YAML to JSON %s", err)
}
objBytes, err = jsonpatch.MergePatch(objBytes, patchBytes)
if err != nil {
return nil, fmt.Errorf("error in applying JSON Patch: %s", err.Error())
}
}
updated := &unstructured.Unstructured{}
err = updated.UnmarshalJSON(objBytes)
if err != nil {
return nil, fmt.Errorf("error in unmarshalling modified object %s", err.Error())
}
return updated, nil
}
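
The call above relies on RFC 7386 merge-patch semantics from evanphx/json-patch: object keys merge recursively and a null value deletes a key. A standalone sketch (the document and patch contents are illustrative):

// Sketch of jsonpatch.MergePatch as used above: "a" is deleted via null,
// "c" is added, "b" is left intact.
doc := []byte(`{"metadata":{"labels":{"a":"1","b":"2"}}}`)
patch := []byte(`{"metadata":{"labels":{"a":null,"c":"3"}}}`)
merged, err := jsonpatch.MergePatch(doc, patch)
if err != nil {
    return err
}
// merged: {"metadata":{"labels":{"b":"2","c":"3"}}}
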


@@ -0,0 +1,41 @@
package resourcemodifiers
import (
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)
func TestJsonMergePatchFailure(t *testing.T) {
tests := []struct {
name string
data string
}{
{
name: "patch with bad yaml",
data: "a: b:",
},
{
name: "patch with bad json",
data: `{"a"::1}`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
scheme := runtime.NewScheme()
err := clientgoscheme.AddToScheme(scheme)
assert.NoError(t, err)
pt := &JSONMergePatcher{
patches: []JSONMergePatch{{PatchData: tt.data}},
}
u := &unstructured.Unstructured{}
_, err = pt.Patch(u, logrus.New())
assert.Error(t, err)
})
}
}


@@ -0,0 +1,103 @@
package resourcemodifiers
import (
"errors"
"fmt"
"strconv"
"strings"
jsonpatch "github.com/evanphx/json-patch"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
type JSONPatch struct {
Operation string `json:"operation"`
From string `json:"from,omitempty"`
Path string `json:"path"`
Value string `json:"value,omitempty"`
}
func (p *JSONPatch) ToString() string {
if addQuotes(&p.Value) {
return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": "%s"}`, p.Operation, p.From, p.Path, p.Value)
}
return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": %s}`, p.Operation, p.From, p.Path, p.Value)
}
func addQuotes(value *string) bool {
if *value == "" {
return true
}
// if value is escaped, remove escape and add quotes
// this is useful for scenarios where boolean, null and numbers are required to be set as string.
if strings.HasPrefix(*value, "\"") && strings.HasSuffix(*value, "\"") {
*value = strings.TrimPrefix(*value, "\"")
*value = strings.TrimSuffix(*value, "\"")
return true
}
// if value is null, then don't add quotes
if *value == "null" {
return false
}
// if value is a boolean, then don't add quotes
if strings.ToLower(*value) == "true" || strings.ToLower(*value) == "false" {
return false
}
// if value is a json object or array, then don't add quotes.
if strings.HasPrefix(*value, "{") || strings.HasPrefix(*value, "[") {
return false
}
// if value is a number, then don't add quotes
if _, err := strconv.ParseFloat(*value, 64); err == nil {
return false
}
return true
}
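
Illustrative inputs and outputs for ToString under the quoting rules above (outputs abbreviated; the empty "from" field is emitted as shown in the format string):

// {Operation:"replace", Path:"/spec/storageClassName", Value:"premium"}
//   -> {"op": "replace", "from": "", "path": "/spec/storageClassName", "value": "premium"}
// Value:"3"        -> ..., "value": 3       (number, unquoted)
// Value:"true"     -> ..., "value": true    (boolean, unquoted)
// Value:"\"true\"" -> ..., "value": "true"  (escaped: quotes stripped, kept a string)
// Value:"null"     -> ..., "value": null    (null, unquoted)
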
type JSONPatcher struct {
patches []JSONPatch `yaml:"patches"`
}
func (p *JSONPatcher) Patch(u *unstructured.Unstructured, logger logrus.FieldLogger) (*unstructured.Unstructured, error) {
modifiedObjBytes, err := p.applyPatch(u)
if err != nil {
if errors.Is(err, jsonpatch.ErrTestFailed) {
logger.Infof("Test operation failed for JSON Patch %s", err.Error())
return u.DeepCopy(), nil
}
return nil, fmt.Errorf("error in applying JSON Patch %s", err.Error())
}
updated := &unstructured.Unstructured{}
err = updated.UnmarshalJSON(modifiedObjBytes)
if err != nil {
return nil, fmt.Errorf("error in unmarshalling modified object %s", err.Error())
}
return updated, nil
}
func (p *JSONPatcher) applyPatch(u *unstructured.Unstructured) ([]byte, error) {
patchBytes := p.patchArrayToByteArray()
jsonPatch, err := jsonpatch.DecodePatch(patchBytes)
if err != nil {
return nil, fmt.Errorf("error in decoding json patch %s", err.Error())
}
objBytes, err := u.MarshalJSON()
if err != nil {
return nil, fmt.Errorf("error in marshaling object %s", err.Error())
}
return jsonPatch.Apply(objBytes)
}
func (p *JSONPatcher) patchArrayToByteArray() []byte {
var patches []string
for _, patch := range p.patches {
patches = append(patches, patch.ToString())
}
patchesStr := strings.Join(patches, ",\n\t")
return []byte(fmt.Sprintf(`[%s]`, patchesStr))
}
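
A hedged in-package usage sketch (the patches field is unexported, so this only works inside the package; `u` is an assumed *unstructured.Unstructured): a failing "test" op returns the object unchanged via the ErrTestFailed branch above, while other patch errors fail the call.

// Sketch: guard a replace with a test op; on test failure Patch returns
// u.DeepCopy() unchanged instead of an error.
p := &JSONPatcher{patches: []JSONPatch{
    {Operation: "test", Path: "/spec/storageClassName", Value: "premium"},
    {Operation: "replace", Path: "/spec/storageClassName", Value: "standard"},
}}
updated, err := p.Patch(u, logrus.New())
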


@@ -1,18 +1,34 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcemodifiers
import (
"fmt"
"io"
"regexp"
"strconv"
"strings"
jsonpatch "github.com/evanphx/json-patch"
"github.com/gobwas/glob"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/yaml"
"github.com/vmware-tanzu/velero/pkg/util/collections"
)
@@ -22,27 +38,29 @@ const (
ResourceModifierSupportedVersionV1 = "v1"
)
type JSONPatch struct {
Operation string `yaml:"operation"`
From string `yaml:"from,omitempty"`
Path string `yaml:"path"`
Value string `yaml:"value,omitempty"`
type MatchRule struct {
Path string `json:"path,omitempty"`
Value string `json:"value,omitempty"`
}
type Conditions struct {
Namespaces []string `yaml:"namespaces,omitempty"`
GroupKind string `yaml:"groupKind"`
ResourceNameRegex string `yaml:"resourceNameRegex"`
Namespaces []string `json:"namespaces,omitempty"`
GroupResource string `json:"groupResource"`
ResourceNameRegex string `json:"resourceNameRegex,omitempty"`
LabelSelector *metav1.LabelSelector `json:"labelSelector,omitempty"`
Matches []MatchRule `json:"matches,omitempty"`
}
type ResourceModifierRule struct {
Conditions Conditions `yaml:"conditions"`
Patches []JSONPatch `yaml:"patches"`
Conditions Conditions `json:"conditions"`
Patches []JSONPatch `json:"patches,omitempty"`
MergePatches []JSONMergePatch `json:"mergePatches,omitempty"`
StrategicPatches []StrategicMergePatch `json:"strategicPatches,omitempty"`
}
type ResourceModifiers struct {
Version string `yaml:"version"`
ResourceModifierRules []ResourceModifierRule `yaml:"resourceModifierRules"`
Version string `json:"version"`
ResourceModifierRules []ResourceModifierRule `json:"resourceModifierRules"`
}
func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error) {
@@ -50,7 +68,7 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
return nil, fmt.Errorf("could not parse config from nil configmap")
}
if len(cm.Data) != 1 {
return nil, fmt.Errorf("illegal resource modifiers %s/%s configmap", cm.Name, cm.Namespace)
return nil, fmt.Errorf("illegal resource modifiers %s/%s configmap", cm.Namespace, cm.Name)
}
var yamlData string
@@ -58,7 +76,7 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
yamlData = v
}
resModifiers, err := unmarshalResourceModifiers(&yamlData)
resModifiers, err := unmarshalResourceModifiers([]byte(yamlData))
if err != nil {
return nil, errors.WithStack(err)
}
@@ -66,10 +84,10 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
return resModifiers, nil
}
func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstructured, groupResource string, log logrus.FieldLogger) []error {
func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstructured, groupResource string, scheme *runtime.Scheme, log logrus.FieldLogger) []error {
var errs []error
for _, rule := range p.ResourceModifierRules {
err := rule.Apply(obj, groupResource, log)
err := rule.apply(obj, groupResource, scheme, log)
if err != nil {
errs = append(errs, err)
}
@@ -78,14 +96,25 @@ func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstruc
return errs
}
func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResource string, log logrus.FieldLogger) error {
namespaceInclusion := collections.NewIncludesExcludes().Includes(r.Conditions.Namespaces...)
if !namespaceInclusion.ShouldInclude(obj.GetNamespace()) {
return nil
}
if !strings.EqualFold(groupResource, r.Conditions.GroupKind) {
func (r *ResourceModifierRule) apply(obj *unstructured.Unstructured, groupResource string, scheme *runtime.Scheme, log logrus.FieldLogger) error {
ns := obj.GetNamespace()
if ns != "" {
namespaceInclusion := collections.NewIncludesExcludes().Includes(r.Conditions.Namespaces...)
if !namespaceInclusion.ShouldInclude(ns) {
return nil
}
}
g, err := glob.Compile(r.Conditions.GroupResource, '.')
if err != nil {
log.Errorf("Bad glob pattern of groupResource in condition, groupResource: %s, err: %s", r.Conditions.GroupResource, err)
return err
}
if !g.Match(groupResource) {
return nil
}
if r.Conditions.ResourceNameRegex != "" {
match, err := regexp.MatchString(r.Conditions.ResourceNameRegex, obj.GetName())
if err != nil {
@@ -95,94 +124,93 @@ func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResour
return nil
}
}
patches, err := r.PatchArrayToByteArray()
if err != nil {
return err
}
log.Infof("Applying resource modifier patch on %s/%s", obj.GetNamespace(), obj.GetName())
err = ApplyPatch(patches, obj, log)
if err != nil {
return err
}
return nil
}
// convert all JsonPatch to string array with the format of jsonpatch.Patch and then convert it to byte array
func (r *ResourceModifierRule) PatchArrayToByteArray() ([]byte, error) {
var patches []string
for _, patch := range r.Patches {
patches = append(patches, patch.ToString())
}
patchesStr := strings.Join(patches, ",\n\t")
return []byte(fmt.Sprintf(`[%s]`, patchesStr)), nil
}
func (p *JSONPatch) ToString() string {
if addQuotes(p.Value) {
return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": "%s"}`, p.Operation, p.From, p.Path, p.Value)
}
return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": %s}`, p.Operation, p.From, p.Path, p.Value)
}
func ApplyPatch(patch []byte, obj *unstructured.Unstructured, log logrus.FieldLogger) error {
jsonPatch, err := jsonpatch.DecodePatch(patch)
if err != nil {
return fmt.Errorf("error in decoding json patch %s", err.Error())
}
objBytes, err := obj.MarshalJSON()
if err != nil {
return fmt.Errorf("error in marshaling object %s", err.Error())
}
modifiedObjBytes, err := jsonPatch.Apply(objBytes)
if err != nil {
if errors.Is(err, jsonpatch.ErrTestFailed) {
log.Infof("Test operation failed for JSON Patch %s", err.Error())
if r.Conditions.LabelSelector != nil {
selector, err := metav1.LabelSelectorAsSelector(r.Conditions.LabelSelector)
if err != nil {
return errors.Errorf("error in creating label selector %s", err.Error())
}
if !selector.Matches(labels.Set(obj.GetLabels())) {
return nil
}
return fmt.Errorf("error in applying JSON Patch %s", err.Error())
}
err = obj.UnmarshalJSON(modifiedObjBytes)
match, err := matchConditions(obj, r.Conditions.Matches, log)
if err != nil {
return fmt.Errorf("error in unmarshalling modified object %s", err.Error())
return err
} else if !match {
log.Info("Conditions do not match, skip it")
return nil
}
log.Infof("Applying resource modifier patch on %s/%s", obj.GetNamespace(), obj.GetName())
err = r.applyPatch(obj, scheme, log)
if err != nil {
return err
}
return nil
}
func unmarshalResourceModifiers(yamlData *string) (*ResourceModifiers, error) {
resModifiers := &ResourceModifiers{}
err := decodeStruct(strings.NewReader(*yamlData), resModifiers)
func matchConditions(u *unstructured.Unstructured, rules []MatchRule, _ logrus.FieldLogger) (bool, error) {
if len(rules) == 0 {
return true, nil
}
var fixed []JSONPatch
for _, rule := range rules {
if rule.Path == "" {
return false, fmt.Errorf("path is required for match rule")
}
fixed = append(fixed, JSONPatch{
Operation: "test",
Path: rule.Path,
Value: rule.Value,
})
}
p := &JSONPatcher{patches: fixed}
_, err := p.applyPatch(u)
if err != nil {
return nil, fmt.Errorf("failed to decode yaml data into resource modifiers %v", err)
if errors.Is(err, jsonpatch.ErrTestFailed) {
return false, nil
}
return false, err
}
return true, nil
}
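
Put differently, each MatchRule compiles to a JSON Patch "test" op, and a failed test means "skip this rule" rather than "fail the restore". A hedged in-package sketch (`u` and `logger` assumed in scope):

// Sketch: returns (false, nil) when u's kind differs; (true, nil) on a match.
rules := []MatchRule{{Path: "/kind", Value: "PersistentVolumeClaim"}}
ok, err := matchConditions(u, rules, logger)
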
func unmarshalResourceModifiers(yamlData []byte) (*ResourceModifiers, error) {
resModifiers := &ResourceModifiers{}
err := yaml.UnmarshalStrict(yamlData, resModifiers)
if err != nil {
return nil, fmt.Errorf("failed to decode yaml data into resource modifiers, err: %s", err)
}
return resModifiers, nil
}
// decodeStruct restrict validate the keys in decoded mappings to exist as fields in the struct being decoded into
func decodeStruct(r io.Reader, s interface{}) error {
dec := yaml.NewDecoder(r)
dec.KnownFields(true)
return dec.Decode(s)
type patcher interface {
Patch(u *unstructured.Unstructured, logger logrus.FieldLogger) (*unstructured.Unstructured, error)
}
func addQuotes(value string) bool {
if value == "" {
return true
func (r *ResourceModifierRule) applyPatch(u *unstructured.Unstructured, scheme *runtime.Scheme, logger logrus.FieldLogger) error {
var p patcher
if len(r.Patches) > 0 {
p = &JSONPatcher{patches: r.Patches}
} else if len(r.MergePatches) > 0 {
p = &JSONMergePatcher{patches: r.MergePatches}
} else if len(r.StrategicPatches) > 0 {
p = &StrategicMergePatcher{patches: r.StrategicPatches, scheme: scheme}
} else {
return fmt.Errorf("no patch data found")
}
// if value is null, then don't add quotes
if value == "null" {
return false
updated, err := p.Patch(u, logger)
if err != nil {
return fmt.Errorf("error in applying patch %s", err)
}
// if value is a boolean, then don't add quotes
if _, err := strconv.ParseBool(value); err == nil {
return false
}
// if value is a json object or array, then don't add quotes.
if strings.HasPrefix(value, "{") || strings.HasPrefix(value, "[") {
return false
}
// if value is a number, then don't add quotes
if _, err := strconv.ParseFloat(value, 64); err == nil {
return false
}
return true
u.SetUnstructuredContent(updated.Object)
return nil
}
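
An end-to-end sketch under the API shown in this diff, from ConfigMap to applied rules; the ConfigMap name, data key, YAML contents, and the `obj`/`scheme` variables are illustrative assumptions.

// Sketch: one rule with one JSON patch, loaded the same way Velero loads
// the resource-modifiers ConfigMap above.
cm := &v1.ConfigMap{
    ObjectMeta: metav1.ObjectMeta{Namespace: "velero", Name: "resource-modifiers"},
    Data: map[string]string{
        "rules.yaml": `
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims
    resourceNameRegex: ".*"
  patches:
  - operation: replace
    path: /spec/storageClassName
    value: premium
`,
    },
}
mods, err := GetResourceModifiersFromConfig(cm)
if err != nil {
    return err
}
// groupResource must match the rule's glob; errs collects per-rule failures.
errs := mods.ApplyResourceModifierRules(obj, "persistentvolumeclaims", scheme, logrus.New())
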

File diff suppressed because it is too large


@@ -1,15 +1,44 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcemodifiers
import (
"strings"
"fmt"
"strings"
)
func (r *ResourceModifierRule) Validate() error {
if err := r.Conditions.Validate(); err != nil {
return err
}
count := 0
for _, size := range []int{
len(r.Patches),
len(r.MergePatches),
len(r.StrategicPatches),
} {
if size != 0 {
count++
}
if count >= 2 {
return fmt.Errorf("only one of patches, mergePatches, strategicPatches can be specified")
}
}
for _, patch := range r.Patches {
if err := patch.Validate(); err != nil {
return err
@@ -48,8 +77,8 @@ func (p *JSONPatch) Validate() error {
}
func (c *Conditions) Validate() error {
if c.GroupKind == "" {
return fmt.Errorf("groupkind cannot be empty")
if c.GroupResource == "" {
return fmt.Errorf("groupkResource cannot be empty")
}
return nil
}


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcemodifiers
import (
@@ -21,7 +36,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -44,7 +59,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -75,7 +90,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -92,13 +107,13 @@ func TestResourceModifiers_Validate(t *testing.T) {
wantErr: true,
},
{
name: "Condition has empty GroupKind",
name: "Condition has empty GroupResource",
fields: fields{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "",
GroupResource: "",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -114,6 +129,32 @@ func TestResourceModifiers_Validate(t *testing.T) {
},
wantErr: true,
},
{
name: "More than one patch type in a rule",
fields: fields{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupResource: "*",
},
Patches: []JSONPatch{
{
Operation: "test",
Path: "/spec/storageClassName",
Value: "premium",
},
},
MergePatches: []JSONMergePatch{
{
PatchData: `{"metadata":{"labels":{"a":null}}}`,
},
},
},
},
},
wantErr: true,
},
}
for _, tt := range tests {


@@ -0,0 +1,143 @@
package resourcemodifiers
import (
"fmt"
"net/http"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/mergepatch"
"k8s.io/apimachinery/pkg/util/strategicpatch"
"k8s.io/apimachinery/pkg/util/validation/field"
kubejson "sigs.k8s.io/json"
"sigs.k8s.io/yaml"
)
type StrategicMergePatch struct {
PatchData string `json:"patchData,omitempty"`
}
type StrategicMergePatcher struct {
patches []StrategicMergePatch
scheme *runtime.Scheme
}
func (p *StrategicMergePatcher) Patch(u *unstructured.Unstructured, _ logrus.FieldLogger) (*unstructured.Unstructured, error) {
gvk := u.GetObjectKind().GroupVersionKind()
schemaReferenceObj, err := p.scheme.New(gvk)
if err != nil {
return nil, err
}
origin := u.DeepCopy()
updated := u.DeepCopy()
for _, patch := range p.patches {
patchBytes, err := yaml.YAMLToJSON([]byte(patch.PatchData))
if err != nil {
return nil, fmt.Errorf("error in converting YAML to JSON %s", err)
}
err = strategicPatchObject(origin, patchBytes, updated, schemaReferenceObj)
if err != nil {
return nil, fmt.Errorf("error in applying Strategic Patch %s", err.Error())
}
origin = updated.DeepCopy()
}
return updated, nil
}
// strategicPatchObject applies a strategic merge patch of `patchBytes` to
// `originalObject` and stores the result in `objToUpdate`.
// NOTE: Both `originalObject` and `objToUpdate` are supposed to be versioned.
func strategicPatchObject(
originalObject runtime.Object,
patchBytes []byte,
objToUpdate runtime.Object,
schemaReferenceObj runtime.Object,
) error {
originalObjMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(originalObject)
if err != nil {
return err
}
patchMap := make(map[string]interface{})
var strictErrs []error
strictErrs, err = kubejson.UnmarshalStrict(patchBytes, &patchMap)
if err != nil {
return apierrors.NewBadRequest(err.Error())
}
if err := applyPatchToObject(originalObjMap, patchMap, objToUpdate, schemaReferenceObj, strictErrs); err != nil {
return err
}
return nil
}
// applyPatchToObject applies a strategic merge patch of <patchMap> to
// <originalMap> and stores the result in <objToUpdate>.
// NOTE: <objToUpdate> must be a versioned object.
func applyPatchToObject(
originalMap map[string]interface{},
patchMap map[string]interface{},
objToUpdate runtime.Object,
schemaReferenceObj runtime.Object,
strictErrs []error,
) error {
patchedObjMap, err := strategicpatch.StrategicMergeMapPatch(originalMap, patchMap, schemaReferenceObj)
if err != nil {
return interpretStrategicMergePatchError(err)
}
// Rather than serialize the patched map to JSON, then decode it to an object, we go directly from a map to an object
converter := runtime.DefaultUnstructuredConverter
if err := converter.FromUnstructuredWithValidation(patchedObjMap, objToUpdate, true); err != nil {
strictError, isStrictError := runtime.AsStrictDecodingError(err)
switch {
case !isStrictError:
// disregard any strictErrs, because it's an incomplete
// list of strict errors given that we don't know what fields were
// unknown because StrategicMergeMapPatch failed.
// Non-strict errors trump in this case.
return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), err.Error()),
})
//case validationDirective == metav1.FieldValidationWarn:
// addStrictDecodingWarnings(requestContext, append(strictErrs, strictError.Errors()...))
default:
strictDecodingError := runtime.NewStrictDecodingError(append(strictErrs, strictError.Errors()...))
return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), strictDecodingError.Error()),
})
}
} else if len(strictErrs) > 0 {
switch {
//case validationDirective == metav1.FieldValidationWarn:
// addStrictDecodingWarnings(requestContext, strictErrs)
default:
return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), runtime.NewStrictDecodingError(strictErrs).Error()),
})
}
}
return nil
}
// interpretStrategicMergePatchError interprets the error type and returns an error with appropriate HTTP code.
func interpretStrategicMergePatchError(err error) error {
switch err {
case mergepatch.ErrBadJSONDoc, mergepatch.ErrBadPatchFormatForPrimitiveList, mergepatch.ErrBadPatchFormatForRetainKeys, mergepatch.ErrBadPatchFormatForSetElementOrderList, mergepatch.ErrUnsupportedStrategicMergePatchFormat:
return apierrors.NewBadRequest(err.Error())
case mergepatch.ErrNoListOfLists, mergepatch.ErrPatchContentNotMatchRetainKeys:
return apierrors.NewGenericServerResponse(http.StatusUnprocessableEntity, "", schema.GroupResource{}, "", err.Error(), 0, false)
default:
return err
}
}
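A hedged usage sketch (same package; the Pod object and patch data are illustrative): the patcher needs a scheme that knows the target GVK so it can build the strategic-merge reference object.
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme) // error handling elided in this sketch
p := &StrategicMergePatcher{
	patches: []StrategicMergePatch{{PatchData: `{"metadata":{"labels":{"env":"dev"}}}`}},
	scheme:  scheme,
}
patched, err := p.Patch(podObj, logrus.New()) // podObj: an assumed *unstructured.Unstructured Pod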


@@ -0,0 +1,52 @@
package resourcemodifiers
import (
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)
func TestStrategicMergePatchFailure(t *testing.T) {
tests := []struct {
name string
data string
kind string
}{
{
name: "patch with unknown kind",
data: "{}",
kind: "BadKind",
},
{
name: "patch with bad yaml",
data: "a: b:",
kind: "Pod",
},
{
name: "patch with bad json",
data: `{"a"::1}`,
kind: "Pod",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
scheme := runtime.NewScheme()
err := clientgoscheme.AddToScheme(scheme)
assert.NoError(t, err)
pt := &StrategicMergePatcher{
patches: []StrategicMergePatch{{PatchData: tt.data}},
scheme: scheme,
}
u := &unstructured.Unstructured{}
u.SetGroupVersionKind(schema.GroupVersionKind{Version: "v1", Kind: tt.kind})
_, err = pt.Patch(u, logrus.New())
assert.Error(t, err)
})
}
}


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
@@ -70,6 +85,7 @@ func (p *Policies) buildPolicy(resPolicies *resourcePolicies) error {
volP.conditions = append(volP.conditions, &storageClassCondition{storageClass: con.StorageClass})
volP.conditions = append(volP.conditions, &nfsCondition{nfs: con.NFS})
volP.conditions = append(volP.conditions, &csiCondition{csi: con.CSI})
volP.conditions = append(volP.conditions, &volumeTypeCondition{volumeTypes: con.VolumeTypes})
p.volumePolicies = append(p.volumePolicies, volP)
}
@@ -132,7 +148,7 @@ func GetResourcePoliciesFromConfig(cm *v1.ConfigMap) (*Policies, error) {
return nil, fmt.Errorf("could not parse config from nil configmap")
}
if len(cm.Data) != 1 {
return nil, fmt.Errorf("illegal resource policies %s/%s configmap", cm.Name, cm.Namespace)
return nil, fmt.Errorf("illegal resource policies %s/%s configmap", cm.Namespace, cm.Name)
}
var yamlData string


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
@@ -355,6 +370,51 @@ volumePolicies:
},
skip: false,
},
{
name: "match volume by types",
yamlData: `version: v1
volumePolicies:
- conditions:
capacity: "0,100Gi"
volumeTypes:
- local
- hostPath
action:
type: skip`,
vol: &v1.PersistentVolume{
Spec: v1.PersistentVolumeSpec{
Capacity: v1.ResourceList{
v1.ResourceStorage: resource.MustParse("1Gi"),
},
PersistentVolumeSource: v1.PersistentVolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "/mnt/data"},
},
},
},
skip: true,
},
{
name: "dismatch volume by types",
yamlData: `version: v1
volumePolicies:
- conditions:
capacity: "0,100Gi"
volumeTypes:
- local
action:
type: skip`,
vol: &v1.PersistentVolume{
Spec: v1.PersistentVolumeSpec{
Capacity: v1.ResourceList{
v1.ResourceStorage: resource.MustParse("1Gi"),
},
PersistentVolumeSource: v1.PersistentVolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "/mnt/data"},
},
},
},
skip: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
@@ -32,6 +47,7 @@ type structuredVolume struct {
storageClass string
nfs *nFSVolumeSource
csi *csiVolumeSource
volumeType SupportedVolume
}
func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
@@ -46,6 +62,8 @@ func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
if csi != nil {
s.csi = &csiVolumeSource{Driver: csi.Driver}
}
s.volumeType = getVolumeTypeFromPV(pv)
}
func (s *structuredVolume) parsePodVolume(vol *corev1api.Volume) {
@@ -58,6 +76,8 @@ func (s *structuredVolume) parsePodVolume(vol *corev1api.Volume) {
if csi != nil {
s.csi = &csiVolumeSource{Driver: csi.Driver}
}
s.volumeType = getVolumeTypeFromVolume(vol)
}
type capacityCondition struct {


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
@@ -23,10 +38,11 @@ type nFSVolumeSource struct {
// volumeConditions defines the current format of conditions we parse
type volumeConditions struct {
Capacity string `yaml:"capacity,omitempty"`
StorageClass []string `yaml:"storageClass,omitempty"`
NFS *nFSVolumeSource `yaml:"nfs,omitempty"`
CSI *csiVolumeSource `yaml:"csi,omitempty"`
Capacity string `yaml:"capacity,omitempty"`
StorageClass []string `yaml:"storageClass,omitempty"`
NFS *nFSVolumeSource `yaml:"nfs,omitempty"`
CSI *csiVolumeSource `yaml:"csi,omitempty"`
VolumeTypes []SupportedVolume `yaml:"volumeTypes,omitempty"`
}
func (c *capacityCondition) validate() error {


@@ -1,3 +1,18 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (


@@ -0,0 +1,247 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
corev1api "k8s.io/api/core/v1"
)
type volumeTypeCondition struct {
volumeTypes []SupportedVolume
}
type SupportedVolume string
const (
AWSElasticBlockStore SupportedVolume = "awsElasticBlockStore"
AzureDisk SupportedVolume = "azureDisk"
AzureFile SupportedVolume = "azureFile"
Cinder SupportedVolume = "cinder"
CephFS SupportedVolume = "cephfs"
ConfigMap SupportedVolume = "configMap"
CSI SupportedVolume = "csi"
DownwardAPI SupportedVolume = "downwardAPI"
EmptyDir SupportedVolume = "emptyDir"
Ephemeral SupportedVolume = "ephemeral"
FC SupportedVolume = "fc"
Flocker SupportedVolume = "flocker"
FlexVolume SupportedVolume = "flexVolume"
GitRepo SupportedVolume = "gitRepo"
Glusterfs SupportedVolume = "glusterfs"
GCEPersistentDisk SupportedVolume = "gcePersistentDisk"
HostPath SupportedVolume = "hostPath"
ISCSI SupportedVolume = "iscsi"
Local SupportedVolume = "local"
NFS SupportedVolume = "nfs"
PhotonPersistentDisk SupportedVolume = "photonPersistentDisk"
PortworxVolume SupportedVolume = "portworxVolume"
Projected SupportedVolume = "projected"
Quobyte SupportedVolume = "quobyte"
RBD SupportedVolume = "rbd"
ScaleIO SupportedVolume = "scaleIO"
Secret SupportedVolume = "secret"
StorageOS SupportedVolume = "storageOS"
VsphereVolume SupportedVolume = "vsphereVolume"
)
func (v *volumeTypeCondition) match(s *structuredVolume) bool {
if len(v.volumeTypes) == 0 {
return true
}
for _, vt := range v.volumeTypes {
if vt == s.volumeType {
return true
}
}
return false
}
func (v *volumeTypeCondition) validate() error {
// validate by yamlv3
return nil
}
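As a sketch of the matching semantics (in-package; values illustrative): an empty volumeTypes list matches every volume, otherwise the detected type must appear in the list.
cond := &volumeTypeCondition{volumeTypes: []SupportedVolume{Local, HostPath}}
vol := &structuredVolume{volumeType: HostPath} // normally filled by parsePV or parsePodVolume
_ = cond.match(vol)                            // true: HostPath is in the list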
func getVolumeTypeFromPV(pv *corev1api.PersistentVolume) SupportedVolume {
if pv == nil {
return ""
}
if pv.Spec.AWSElasticBlockStore != nil {
return AWSElasticBlockStore
}
if pv.Spec.AzureDisk != nil {
return AzureDisk
}
if pv.Spec.AzureFile != nil {
return AzureFile
}
if pv.Spec.CephFS != nil {
return CephFS
}
if pv.Spec.Cinder != nil {
return Cinder
}
if pv.Spec.CSI != nil {
return CSI
}
if pv.Spec.FC != nil {
return FC
}
if pv.Spec.Flocker != nil {
return Flocker
}
if pv.Spec.FlexVolume != nil {
return FlexVolume
}
if pv.Spec.GCEPersistentDisk != nil {
return GCEPersistentDisk
}
if pv.Spec.Glusterfs != nil {
return Glusterfs
}
if pv.Spec.HostPath != nil {
return HostPath
}
if pv.Spec.ISCSI != nil {
return ISCSI
}
if pv.Spec.Local != nil {
return Local
}
if pv.Spec.NFS != nil {
return NFS
}
if pv.Spec.PhotonPersistentDisk != nil {
return PhotonPersistentDisk
}
if pv.Spec.PortworxVolume != nil {
return PortworxVolume
}
if pv.Spec.Quobyte != nil {
return Quobyte
}
if pv.Spec.RBD != nil {
return RBD
}
if pv.Spec.ScaleIO != nil {
return ScaleIO
}
if pv.Spec.StorageOS != nil {
return StorageOS
}
if pv.Spec.VsphereVolume != nil {
return VsphereVolume
}
return ""
}
func getVolumeTypeFromVolume(vol *corev1api.Volume) SupportedVolume {
if vol == nil {
return ""
}
if vol.AWSElasticBlockStore != nil {
return AWSElasticBlockStore
}
if vol.AzureDisk != nil {
return AzureDisk
}
if vol.AzureFile != nil {
return AzureFile
}
if vol.CephFS != nil {
return CephFS
}
if vol.Cinder != nil {
return Cinder
}
if vol.CSI != nil {
return CSI
}
if vol.FC != nil {
return FC
}
if vol.Flocker != nil {
return Flocker
}
if vol.FlexVolume != nil {
return FlexVolume
}
if vol.GCEPersistentDisk != nil {
return GCEPersistentDisk
}
if vol.GitRepo != nil {
return GitRepo
}
if vol.Glusterfs != nil {
return Glusterfs
}
if vol.ISCSI != nil {
return ISCSI
}
if vol.NFS != nil {
return NFS
}
if vol.Secret != nil {
return Secret
}
if vol.RBD != nil {
return RBD
}
if vol.DownwardAPI != nil {
return DownwardAPI
}
if vol.ConfigMap != nil {
return ConfigMap
}
if vol.Projected != nil {
return Projected
}
if vol.Ephemeral != nil {
return Ephemeral
}
if vol.FC != nil {
return FC
}
if vol.PhotonPersistentDisk != nil {
return PhotonPersistentDisk
}
if vol.PortworxVolume != nil {
return PortworxVolume
}
if vol.Quobyte != nil {
return Quobyte
}
if vol.ScaleIO != nil {
return ScaleIO
}
if vol.StorageOS != nil {
return StorageOS
}
if vol.VsphereVolume != nil {
return VsphereVolume
}
if vol.HostPath != nil {
return HostPath
}
if vol.EmptyDir != nil {
return EmptyDir
}
return ""
}


@@ -0,0 +1,576 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resourcepolicies
import (
"testing"
corev1api "k8s.io/api/core/v1"
)
func TestGetVolumeTypeFromPV(t *testing.T) {
testCases := []struct {
name string
inputPV *corev1api.PersistentVolume
expected SupportedVolume
}{
{
name: "nil PersistentVolume",
inputPV: nil,
expected: "",
},
{
name: "Test GCEPersistentDisk",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
GCEPersistentDisk: &corev1api.GCEPersistentDiskVolumeSource{},
},
},
},
expected: GCEPersistentDisk,
},
{
name: "Test AWSElasticBlockStore",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
AWSElasticBlockStore: &corev1api.AWSElasticBlockStoreVolumeSource{},
},
},
},
expected: AWSElasticBlockStore,
},
{
name: "Test HostPath",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
HostPath: &corev1api.HostPathVolumeSource{},
},
},
},
expected: HostPath,
},
{
name: "Test Glusterfs",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
Glusterfs: &corev1api.GlusterfsPersistentVolumeSource{},
},
},
},
expected: Glusterfs,
},
{
name: "Test NFS",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
NFS: &corev1api.NFSVolumeSource{},
},
},
},
expected: NFS,
},
{
name: "Test RBD",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
RBD: &corev1api.RBDPersistentVolumeSource{},
},
},
},
expected: RBD,
},
{
name: "Test ISCSI",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
ISCSI: &corev1api.ISCSIPersistentVolumeSource{},
},
},
},
expected: ISCSI,
},
{
name: "Test Cinder",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
Cinder: &corev1api.CinderPersistentVolumeSource{},
},
},
},
expected: Cinder,
},
{
name: "Test CephFS",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
CephFS: &corev1api.CephFSPersistentVolumeSource{},
},
},
},
expected: CephFS,
},
{
name: "Test FC",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
FC: &corev1api.FCVolumeSource{},
},
},
},
expected: FC,
},
{
name: "Test Flocker",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
Flocker: &corev1api.FlockerVolumeSource{},
},
},
},
expected: Flocker,
},
{
name: "Test FlexVolume",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
FlexVolume: &corev1api.FlexPersistentVolumeSource{},
},
},
},
expected: FlexVolume,
},
{
name: "Test AzureFile",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
AzureFile: &corev1api.AzureFilePersistentVolumeSource{},
},
},
},
expected: AzureFile,
},
{
name: "Test VsphereVolume",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
VsphereVolume: &corev1api.VsphereVirtualDiskVolumeSource{},
},
},
},
expected: VsphereVolume,
},
{
name: "Test Quobyte",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
Quobyte: &corev1api.QuobyteVolumeSource{},
},
},
},
expected: Quobyte,
},
{
name: "Test AzureDisk",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
AzureDisk: &corev1api.AzureDiskVolumeSource{},
},
},
},
expected: AzureDisk,
},
{
name: "Test PhotonPersistentDisk",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
PhotonPersistentDisk: &corev1api.PhotonPersistentDiskVolumeSource{},
},
},
},
expected: PhotonPersistentDisk,
},
{
name: "Test PortworxVolume",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
PortworxVolume: &corev1api.PortworxVolumeSource{},
},
},
},
expected: PortworxVolume,
},
{
name: "Test ScaleIO",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
ScaleIO: &corev1api.ScaleIOPersistentVolumeSource{},
},
},
},
expected: ScaleIO,
},
{
name: "Test Local",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
Local: &corev1api.LocalVolumeSource{},
},
},
},
expected: Local,
},
{
name: "Test StorageOS",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
StorageOS: &corev1api.StorageOSPersistentVolumeSource{},
},
},
},
expected: StorageOS,
},
{
name: "Test CSI",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeSource: corev1api.PersistentVolumeSource{
CSI: &corev1api.CSIPersistentVolumeSource{},
},
},
},
expected: CSI,
},
{
name: "Test Unknown Source",
inputPV: &corev1api.PersistentVolume{
Spec: corev1api.PersistentVolumeSpec{},
},
expected: "",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := getVolumeTypeFromPV(tc.inputPV)
if result != tc.expected {
t.Errorf("Expected %s, but got %s", tc.expected, result)
}
})
}
}
func TestGetVolumeTypeFromVolume(t *testing.T) {
testCases := []struct {
name string
inputVol *corev1api.Volume
expected SupportedVolume
}{
{
name: "nil Volume",
inputVol: nil,
expected: "",
},
{
name: "Test Unknown Source",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{},
},
expected: "",
},
{
name: "Test HostPath",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
HostPath: &corev1api.HostPathVolumeSource{},
},
},
expected: HostPath,
},
{
name: "Test EmptyDir",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
EmptyDir: &corev1api.EmptyDirVolumeSource{},
},
},
expected: EmptyDir,
},
{
name: "Test GCEPersistentDisk",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
GCEPersistentDisk: &corev1api.GCEPersistentDiskVolumeSource{},
},
},
expected: GCEPersistentDisk,
},
{
name: "Test AWSElasticBlockStore",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
AWSElasticBlockStore: &corev1api.AWSElasticBlockStoreVolumeSource{},
},
},
expected: AWSElasticBlockStore,
},
{
name: "Test GitRepo",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
GitRepo: &corev1api.GitRepoVolumeSource{},
},
},
expected: GitRepo,
},
{
name: "Test Secret",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Secret: &corev1api.SecretVolumeSource{},
},
},
expected: Secret,
},
{
name: "Test NFS",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
NFS: &corev1api.NFSVolumeSource{},
},
},
expected: NFS,
},
{
name: "Test ISCSI",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
ISCSI: &corev1api.ISCSIVolumeSource{},
},
},
expected: ISCSI,
},
{
name: "Test Glusterfs",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Glusterfs: &corev1api.GlusterfsVolumeSource{},
},
},
expected: Glusterfs,
},
{
name: "Test RBD",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
RBD: &corev1api.RBDVolumeSource{},
},
},
expected: RBD,
},
{
name: "Test FlexVolume",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
FlexVolume: &corev1api.FlexVolumeSource{},
},
},
expected: FlexVolume,
},
{
name: "Test Cinder",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Cinder: &corev1api.CinderVolumeSource{},
},
},
expected: Cinder,
},
{
name: "Test CephFS",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
CephFS: &corev1api.CephFSVolumeSource{},
},
},
expected: CephFS,
},
{
name: "Test Flocker",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Flocker: &corev1api.FlockerVolumeSource{},
},
},
expected: Flocker,
},
{
name: "Test DownwardAPI",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
DownwardAPI: &corev1api.DownwardAPIVolumeSource{},
},
},
expected: DownwardAPI,
},
{
name: "Test FC",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
FC: &corev1api.FCVolumeSource{},
},
},
expected: FC,
},
{
name: "Test AzureFile",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
AzureFile: &corev1api.AzureFileVolumeSource{},
},
},
expected: AzureFile,
},
{
name: "Test ConfigMap",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
ConfigMap: &corev1api.ConfigMapVolumeSource{},
},
},
expected: ConfigMap,
},
{
name: "Test VsphereVolume",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
VsphereVolume: &corev1api.VsphereVirtualDiskVolumeSource{},
},
},
expected: VsphereVolume,
},
{
name: "Test Quobyte",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Quobyte: &corev1api.QuobyteVolumeSource{},
},
},
expected: Quobyte,
},
{
name: "Test AzureDisk",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
AzureDisk: &corev1api.AzureDiskVolumeSource{},
},
},
expected: AzureDisk,
},
{
name: "Test PhotonPersistentDisk",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
PhotonPersistentDisk: &corev1api.PhotonPersistentDiskVolumeSource{},
},
},
expected: PhotonPersistentDisk,
},
{
name: "Test Projected",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Projected: &corev1api.ProjectedVolumeSource{},
},
},
expected: Projected,
},
{
name: "Test PortworxVolume",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
PortworxVolume: &corev1api.PortworxVolumeSource{},
},
},
expected: PortworxVolume,
},
{
name: "Test ScaleIO",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
ScaleIO: &corev1api.ScaleIOVolumeSource{},
},
},
expected: ScaleIO,
},
{
name: "Test StorageOS",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
StorageOS: &corev1api.StorageOSVolumeSource{},
},
},
expected: StorageOS,
},
{
name: "Test CSI",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
CSI: &corev1api.CSIVolumeSource{},
},
},
expected: CSI,
},
{
name: "Test Ephemeral",
inputVol: &corev1api.Volume{
VolumeSource: corev1api.VolumeSource{
Ephemeral: &corev1api.EphemeralVolumeSource{},
},
},
expected: Ephemeral,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := getVolumeTypeFromVolume(tc.inputVol)
if result != tc.expected {
t.Errorf("Expected %s, but got %s", tc.expected, result)
}
})
}
}


@@ -18,6 +18,7 @@ package storage
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
@@ -92,3 +93,18 @@ func ListBackupStorageLocations(ctx context.Context, kbClient client.Client, nam
return locations, nil
}
func GetDefaultBackupStorageLocations(ctx context.Context, kbClient client.Client, namespace string) (*velerov1api.BackupStorageLocationList, error) {
locations := new(velerov1api.BackupStorageLocationList)
defaultLocations := new(velerov1api.BackupStorageLocationList)
	if err := kbClient.List(ctx, locations, &client.ListOptions{Namespace: namespace}); err != nil {
		return defaultLocations, errors.Wrapf(err, "failed to list backup storage locations in namespace %s", namespace)
	}
}
for _, location := range locations.Items {
if location.Spec.Default {
defaultLocations.Items = append(defaultLocations.Items, location)
}
}
return defaultLocations, nil
}
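A hedged usage sketch (kbClient is an assumed controller-runtime client; context and fmt imports assumed):
defaults, err := GetDefaultBackupStorageLocations(context.Background(), kbClient, "velero")
if err != nil {
	// listing failed; defaults is an empty list in that case
}
for _, loc := range defaults.Items {
	fmt.Println(loc.Name) // every returned location has Spec.Default == true
}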


@@ -26,8 +26,8 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/scheme"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util"
)
func TestIsReadyToValidate(t *testing.T) {
@@ -163,7 +163,7 @@ func TestListBackupStorageLocations(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
g := NewWithT(t)
client := fake.NewClientBuilder().WithScheme(scheme.Scheme).WithRuntimeObjects(tt.backupLocations).Build()
client := fake.NewClientBuilder().WithScheme(util.VeleroScheme).WithRuntimeObjects(tt.backupLocations).Build()
if tt.expectError {
_, err := ListBackupStorageLocations(context.Background(), client, "ns-1")
g.Expect(err).NotTo(BeNil())


@@ -1,37 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// TODO(2.0) After converting all controllers to runtime-controller,
// the functions in this file will no longer be needed and should be removed.
package managercontroller
import (
"context"
"sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/vmware-tanzu/velero/pkg/controller"
)
// Runnable will turn a "regular" runnable component (such as a controller)
// into a controller-runtime Runnable
func Runnable(p controller.Interface, numWorkers int) manager.Runnable {
// Pass the provided Context down to the run function.
f := func(ctx context.Context) error {
return p.Run(ctx, numWorkers)
}
return manager.RunnableFunc(f)
}


@@ -0,0 +1,571 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volume
import (
"context"
"strconv"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
"github.com/sirupsen/logrus"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/volume"
)
type VolumeBackupMethod string
const (
NativeSnapshot VolumeBackupMethod = "NativeSnapshot"
PodVolumeBackup VolumeBackupMethod = "PodVolumeBackup"
CSISnapshot VolumeBackupMethod = "CSISnapshot"
)
const (
FieldValueIsUnknown string = "unknown"
)
type VolumeInfo struct {
// The PVC's name.
PVCName string `json:"pvcName,omitempty"`
// The PVC's namespace
PVCNamespace string `json:"pvcNamespace,omitempty"`
// The PV name.
PVName string `json:"pvName,omitempty"`
// The way the volume data is backed up. The valid values include `NativeSnapshot`, `PodVolumeBackup` and `CSISnapshot`.
BackupMethod VolumeBackupMethod `json:"backupMethod,omitempty"`
// Whether the volume's snapshot data is moved to specified storage.
SnapshotDataMoved bool `json:"snapshotDataMoved"`
// Whether the local snapshot is preserved after snapshot is moved.
// The local snapshot may be the result of a CSI snapshot backup (no data movement)
// or of a CSI snapshot data movement that preserves the local snapshot.
PreserveLocalSnapshot bool `json:"preserveLocalSnapshot"`
// Whether the Volume is skipped in this backup.
Skipped bool `json:"skipped"`
// The reason the volume is skipped in the backup.
SkippedReason string `json:"skippedReason,omitempty"`
// Snapshot starts timestamp.
StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`
CSISnapshotInfo *CSISnapshotInfo `json:"csiSnapshotInfo,omitempty"`
SnapshotDataMovementInfo *SnapshotDataMovementInfo `json:"snapshotDataMovementInfo,omitempty"`
NativeSnapshotInfo *NativeSnapshotInfo `json:"nativeSnapshotInfo,omitempty"`
PVBInfo *PodVolumeBackupInfo `json:"pvbInfo,omitempty"`
PVInfo *PVInfo `json:"pvInfo,omitempty"`
}
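For orientation, a hypothetical CSI-snapshot entry might be populated roughly like this (all values illustrative):
info := &VolumeInfo{
	PVCName:      "data-pvc",
	PVCNamespace: "app",
	PVName:       "pvc-0001",
	BackupMethod: CSISnapshot,
	CSISnapshotInfo: &CSISnapshotInfo{
		Driver:  "csi.example.com",
		VSCName: "snapcontent-0001",
	},
}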
// CSISnapshotInfo is used for displaying the CSI snapshot status
type CSISnapshotInfo struct {
// It's the storage provider's snapshot ID for CSI.
SnapshotHandle string `json:"snapshotHandle"`
// The snapshot corresponding volume size.
Size int64 `json:"size"`
// The name of the CSI driver.
Driver string `json:"driver"`
// The name of the VolumeSnapshotContent.
VSCName string `json:"vscName"`
// The Async Operation's ID.
OperationID string `json:"operationID"`
}
// SnapshotDataMovementInfo is used for displaying the snapshot data mover status.
type SnapshotDataMovementInfo struct {
// The data mover used by the backup. The valid values are `velero` and `` (an empty value is treated as `velero`).
DataMover string `json:"dataMover"`
// The type of the uploader that uploads the snapshot data. The valid values are `kopia` and `restic`.
UploaderType string `json:"uploaderType"`
// The name or ID of the snapshot associated object(SAO).
// SAO is used to support local snapshots for the snapshot data mover,
// e.g. it could be a VolumeSnapshot for CSI snapshot data movement.
RetainedSnapshot string `json:"retainedSnapshot"`
// It's the filesystem repository's snapshot ID.
SnapshotHandle string `json:"snapshotHandle"`
// The Async Operation's ID.
OperationID string `json:"operationID"`
}
// NativeSnapshotInfo is used for displaying the Velero native snapshot status.
// A Velero Native Snapshot is a cloud storage snapshot taken by the Velero native
// plugins, e.g. velero-plugin-for-aws, velero-plugin-for-gcp, and
// velero-plugin-for-microsoft-azure.
type NativeSnapshotInfo struct {
// It's the storage provider's snapshot ID for the Velero-native snapshot.
SnapshotHandle string `json:"snapshotHandle"`
// The cloud provider snapshot volume type.
VolumeType string `json:"volumeType"`
// The cloud provider snapshot volume's availability zones.
VolumeAZ string `json:"volumeAZ"`
// The cloud provider snapshot volume's IOPS.
IOPS string `json:"iops"`
}
// PodVolumeBackupInfo is used for displaying the PodVolumeBackup snapshot status.
type PodVolumeBackupInfo struct {
// It's the file-system uploader's snapshot ID for PodVolumeBackup.
SnapshotHandle string `json:"snapshotHandle"`
// The snapshot corresponding volume size.
Size int64 `json:"size"`
// The type of the uploader that uploads the data. The valid values are `kopia` and `restic`.
UploaderType string `json:"uploaderType"`
// The PVC's corresponding volume name used by Pod
// https://github.com/kubernetes/kubernetes/blob/e4b74dd12fa8cb63c174091d5536a10b8ec19d34/pkg/apis/core/types.go#L48
VolumeName string `json:"volumeName"`
// The Pod name mounting this PVC.
PodName string `json:"podName"`
// The Pod namespace
PodNamespace string `json:"podNamespace"`
// The name of the k8s node where the PVB was taken.
NodeName string `json:"nodeName"`
}
// PVInfo is used to store some PV information modified after creation.
// This information is lost if the PV is recreated.
type PVInfo struct {
// ReclaimPolicy of PV. It could be different from the referenced StorageClass.
ReclaimPolicy string `json:"reclaimPolicy"`
// The PV's labels should be kept after recreation.
Labels map[string]string `json:"labels"`
}
// VolumesInformation contains the information needed to generate
// the backup VolumeInfo array.
type VolumesInformation struct {
// A map containing details of the PVs included in the backup, keyed by PV name.
pvMap map[string]pvcPvInfo
volumeInfos []*VolumeInfo
logger logrus.FieldLogger
crClient kbclient.Client
volumeSnapshots []snapshotv1api.VolumeSnapshot
volumeSnapshotContents []snapshotv1api.VolumeSnapshotContent
volumeSnapshotClasses []snapshotv1api.VolumeSnapshotClass
SkippedPVs map[string]string
NativeSnapshots []*volume.Snapshot
PodVolumeBackups []*velerov1api.PodVolumeBackup
BackupOperations []*itemoperation.BackupOperation
BackupName string
}
type pvcPvInfo struct {
PVCName string
PVCNamespace string
PV corev1api.PersistentVolume
}
func (v *VolumesInformation) Init() {
v.pvMap = make(map[string]pvcPvInfo)
v.volumeInfos = make([]*VolumeInfo, 0)
}
func (v *VolumesInformation) InsertPVMap(pv corev1api.PersistentVolume, pvcName, pvcNamespace string) {
if v.pvMap == nil {
v.Init()
}
v.pvMap[pv.Name] = pvcPvInfo{
PVCName: pvcName,
PVCNamespace: pvcNamespace,
PV: pv,
}
}
func (v *VolumesInformation) Result(
csiVolumeSnapshots []snapshotv1api.VolumeSnapshot,
csiVolumeSnapshotContents []snapshotv1api.VolumeSnapshotContent,
csiVolumesnapshotClasses []snapshotv1api.VolumeSnapshotClass,
crClient kbclient.Client,
logger logrus.FieldLogger,
) []*VolumeInfo {
v.logger = logger
v.crClient = crClient
v.volumeSnapshots = csiVolumeSnapshots
v.volumeSnapshotContents = csiVolumeSnapshotContents
v.volumeSnapshotClasses = csiVolumesnapshotClasses
v.generateVolumeInfoForSkippedPV()
v.generateVolumeInfoForVeleroNativeSnapshot()
v.generateVolumeInfoForCSIVolumeSnapshot()
v.generateVolumeInfoFromPVB()
v.generateVolumeInfoFromDataUpload()
return v.volumeInfos
}
// generateVolumeInfoForSkippedPV generates VolumeInfos for skipped PVs.
func (v *VolumesInformation) generateVolumeInfoForSkippedPV() {
tmpVolumeInfos := make([]*VolumeInfo, 0)
for pvName, skippedReason := range v.SkippedPVs {
if pvcPVInfo := v.retrievePvcPvInfo(pvName, "", ""); pvcPVInfo != nil {
volumeInfo := &VolumeInfo{
PVCName: pvcPVInfo.PVCName,
PVCNamespace: pvcPVInfo.PVCNamespace,
PVName: pvName,
SnapshotDataMoved: false,
Skipped: true,
SkippedReason: skippedReason,
PVInfo: &PVInfo{
ReclaimPolicy: string(pvcPVInfo.PV.Spec.PersistentVolumeReclaimPolicy),
Labels: pvcPVInfo.PV.Labels,
},
}
tmpVolumeInfos = append(tmpVolumeInfos, volumeInfo)
} else {
v.logger.Warnf("Cannot find info for PV %s", pvName)
continue
}
}
v.volumeInfos = append(v.volumeInfos, tmpVolumeInfos...)
}
// generateVolumeInfoForVeleroNativeSnapshot generates VolumeInfos for Velero native snapshots
func (v *VolumesInformation) generateVolumeInfoForVeleroNativeSnapshot() {
tmpVolumeInfos := make([]*VolumeInfo, 0)
for _, nativeSnapshot := range v.NativeSnapshots {
var iops int64
if nativeSnapshot.Spec.VolumeIOPS != nil {
iops = *nativeSnapshot.Spec.VolumeIOPS
}
if pvcPVInfo := v.retrievePvcPvInfo(nativeSnapshot.Spec.PersistentVolumeName, "", ""); pvcPVInfo != nil {
volumeInfo := &VolumeInfo{
BackupMethod: NativeSnapshot,
PVCName: pvcPVInfo.PVCName,
PVCNamespace: pvcPVInfo.PVCNamespace,
PVName: pvcPVInfo.PV.Name,
SnapshotDataMoved: false,
Skipped: false,
NativeSnapshotInfo: &NativeSnapshotInfo{
SnapshotHandle: nativeSnapshot.Status.ProviderSnapshotID,
VolumeType: nativeSnapshot.Spec.VolumeType,
VolumeAZ: nativeSnapshot.Spec.VolumeAZ,
IOPS: strconv.FormatInt(iops, 10),
},
PVInfo: &PVInfo{
ReclaimPolicy: string(pvcPVInfo.PV.Spec.PersistentVolumeReclaimPolicy),
Labels: pvcPVInfo.PV.Labels,
},
}
tmpVolumeInfos = append(tmpVolumeInfos, volumeInfo)
} else {
v.logger.Warnf("cannot find info for PV %s", nativeSnapshot.Spec.PersistentVolumeName)
continue
}
}
v.volumeInfos = append(v.volumeInfos, tmpVolumeInfos...)
}
// generateVolumeInfoForCSIVolumeSnapshot generates VolumeInfos for CSI VolumeSnapshots
func (v *VolumesInformation) generateVolumeInfoForCSIVolumeSnapshot() {
tmpVolumeInfos := make([]*VolumeInfo, 0)
for _, volumeSnapshot := range v.volumeSnapshots {
var volumeSnapshotClass *snapshotv1api.VolumeSnapshotClass
var volumeSnapshotContent *snapshotv1api.VolumeSnapshotContent
		// This is protective logic: all of the passed-in VSs should be
		// related to this backup.
if volumeSnapshot.Labels[velerov1api.BackupNameLabel] != v.BackupName {
continue
}
if volumeSnapshot.Spec.VolumeSnapshotClassName == nil {
v.logger.Warnf("Cannot find VolumeSnapshotClass for VolumeSnapshot %s/%s", volumeSnapshot.Namespace, volumeSnapshot.Name)
continue
}
if volumeSnapshot.Status == nil || volumeSnapshot.Status.BoundVolumeSnapshotContentName == nil {
v.logger.Warnf("Cannot fine VolumeSnapshotContent for VolumeSnapshot %s/%s", volumeSnapshot.Namespace, volumeSnapshot.Name)
continue
}
if volumeSnapshot.Spec.Source.PersistentVolumeClaimName == nil {
v.logger.Warnf("VolumeSnapshot %s/%s doesn't have a source PVC", volumeSnapshot.Namespace, volumeSnapshot.Name)
continue
}
for index := range v.volumeSnapshotClasses {
if *volumeSnapshot.Spec.VolumeSnapshotClassName == v.volumeSnapshotClasses[index].Name {
volumeSnapshotClass = &v.volumeSnapshotClasses[index]
}
}
for index := range v.volumeSnapshotContents {
if *volumeSnapshot.Status.BoundVolumeSnapshotContentName == v.volumeSnapshotContents[index].Name {
volumeSnapshotContent = &v.volumeSnapshotContents[index]
}
}
if volumeSnapshotClass == nil || volumeSnapshotContent == nil {
v.logger.Warnf("fail to get VolumeSnapshotContent or VolumeSnapshotClass for VolumeSnapshot: %s/%s",
volumeSnapshot.Namespace, volumeSnapshot.Name)
continue
}
var operation itemoperation.BackupOperation
for _, op := range v.BackupOperations {
if op.Spec.ResourceIdentifier.GroupResource.String() == kuberesource.VolumeSnapshots.String() &&
op.Spec.ResourceIdentifier.Name == volumeSnapshot.Name &&
op.Spec.ResourceIdentifier.Namespace == volumeSnapshot.Namespace {
operation = *op
}
}
var size int64
if volumeSnapshot.Status.RestoreSize != nil {
size = volumeSnapshot.Status.RestoreSize.Value()
}
snapshotHandle := ""
if volumeSnapshotContent.Status.SnapshotHandle != nil {
snapshotHandle = *volumeSnapshotContent.Status.SnapshotHandle
}
if pvcPVInfo := v.retrievePvcPvInfo("", *volumeSnapshot.Spec.Source.PersistentVolumeClaimName, volumeSnapshot.Namespace); pvcPVInfo != nil {
volumeInfo := &VolumeInfo{
BackupMethod: CSISnapshot,
PVCName: pvcPVInfo.PVCName,
PVCNamespace: pvcPVInfo.PVCNamespace,
PVName: pvcPVInfo.PV.Name,
Skipped: false,
SnapshotDataMoved: false,
PreserveLocalSnapshot: true,
StartTimestamp: &(volumeSnapshot.CreationTimestamp),
CSISnapshotInfo: &CSISnapshotInfo{
VSCName: *volumeSnapshot.Status.BoundVolumeSnapshotContentName,
Size: size,
Driver: volumeSnapshotClass.Driver,
SnapshotHandle: snapshotHandle,
OperationID: operation.Spec.OperationID,
},
PVInfo: &PVInfo{
ReclaimPolicy: string(pvcPVInfo.PV.Spec.PersistentVolumeReclaimPolicy),
Labels: pvcPVInfo.PV.Labels,
},
}
tmpVolumeInfos = append(tmpVolumeInfos, volumeInfo)
} else {
v.logger.Warnf("cannot find info for PVC %s/%s", volumeSnapshot.Namespace, volumeSnapshot.Spec.Source.PersistentVolumeClaimName)
continue
}
}
v.volumeInfos = append(v.volumeInfos, tmpVolumeInfos...)
}
// generateVolumeInfoFromPVB generates VolumeInfo entries for PodVolumeBackups.
func (v *VolumesInformation) generateVolumeInfoFromPVB() {
tmpVolumeInfos := make([]*VolumeInfo, 0)
for _, pvb := range v.PodVolumeBackups {
volumeInfo := &VolumeInfo{
BackupMethod: PodVolumeBackup,
SnapshotDataMoved: false,
Skipped: false,
StartTimestamp: pvb.Status.StartTimestamp,
PVBInfo: &PodVolumeBackupInfo{
SnapshotHandle: pvb.Status.SnapshotID,
Size: pvb.Status.Progress.TotalBytes,
UploaderType: pvb.Spec.UploaderType,
VolumeName: pvb.Spec.Volume,
PodName: pvb.Spec.Pod.Name,
PodNamespace: pvb.Spec.Pod.Namespace,
NodeName: pvb.Spec.Node,
},
}
pod := new(corev1api.Pod)
pvcName := ""
err := v.crClient.Get(context.TODO(), kbclient.ObjectKey{Namespace: pvb.Spec.Pod.Namespace, Name: pvb.Spec.Pod.Name}, pod)
if err != nil {
v.logger.WithError(err).Warn("Fail to get pod for PodVolumeBackup: ", pvb.Name)
continue
}
for _, volume := range pod.Spec.Volumes {
if volume.Name == pvb.Spec.Volume && volume.PersistentVolumeClaim != nil {
pvcName = volume.PersistentVolumeClaim.ClaimName
}
}
if pvcName != "" {
if pvcPVInfo := v.retrievePvcPvInfo("", pvcName, pod.Namespace); pvcPVInfo != nil {
volumeInfo.PVCName = pvcPVInfo.PVCName
volumeInfo.PVCNamespace = pvcPVInfo.PVCNamespace
volumeInfo.PVName = pvcPVInfo.PV.Name
volumeInfo.PVInfo = &PVInfo{
ReclaimPolicy: string(pvcPVInfo.PV.Spec.PersistentVolumeReclaimPolicy),
Labels: pvcPVInfo.PV.Labels,
}
} else {
v.logger.Warnf("Cannot find info for PVC %s/%s", pod.Namespace, pvcName)
continue
}
} else {
v.logger.Debug("The PVB %s doesn't have a corresponding PVC", pvb.Name)
}
tmpVolumeInfos = append(tmpVolumeInfos, volumeInfo)
}
v.volumeInfos = append(v.volumeInfos, tmpVolumeInfos...)
}
// generateVolumeInfoFromDataUpload generates VolumeInfo entries for DataUploads.
func (v *VolumesInformation) generateVolumeInfoFromDataUpload() {
tmpVolumeInfos := make([]*VolumeInfo, 0)
vsClassList := new(snapshotv1api.VolumeSnapshotClassList)
if err := v.crClient.List(context.TODO(), vsClassList); err != nil {
v.logger.WithError(err).Errorf("cannot list VolumeSnapshotClass %s", err.Error())
return
}
for _, operation := range v.BackupOperations {
if operation.Spec.ResourceIdentifier.GroupResource.String() == kuberesource.PersistentVolumeClaims.String() {
var duIdentifier velero.ResourceIdentifier
for _, identifier := range operation.Spec.PostOperationItems {
if identifier.GroupResource.String() == "datauploads.velero.io" {
duIdentifier = identifier
}
}
if duIdentifier.Empty() {
v.logger.Warnf("cannot find DataUpload for PVC %s/%s backup async operation",
operation.Spec.ResourceIdentifier.Namespace, operation.Spec.ResourceIdentifier.Name)
continue
}
dataUpload := new(velerov2alpha1.DataUpload)
err := v.crClient.Get(
context.TODO(),
kbclient.ObjectKey{
Namespace: duIdentifier.Namespace,
Name: duIdentifier.Name},
dataUpload,
)
if err != nil {
v.logger.Warnf("fail to get DataUpload for operation %s: %s", operation.Spec.OperationID, err.Error())
continue
}
driverUsedByVSClass := ""
for index := range vsClassList.Items {
if vsClassList.Items[index].Name == dataUpload.Spec.CSISnapshot.SnapshotClass {
driverUsedByVSClass = vsClassList.Items[index].Driver
}
}
if pvcPVInfo := v.retrievePvcPvInfo("", operation.Spec.ResourceIdentifier.Name, operation.Spec.ResourceIdentifier.Namespace); pvcPVInfo != nil {
dataMover := "velero"
if dataUpload.Spec.DataMover != "" {
dataMover = dataUpload.Spec.DataMover
}
volumeInfo := &VolumeInfo{
BackupMethod: CSISnapshot,
PVCName: pvcPVInfo.PVCName,
PVCNamespace: pvcPVInfo.PVCNamespace,
PVName: pvcPVInfo.PV.Name,
SnapshotDataMoved: true,
Skipped: false,
StartTimestamp: operation.Status.Created,
CSISnapshotInfo: &CSISnapshotInfo{
SnapshotHandle: FieldValueIsUnknown,
VSCName: FieldValueIsUnknown,
OperationID: FieldValueIsUnknown,
Driver: driverUsedByVSClass,
},
SnapshotDataMovementInfo: &SnapshotDataMovementInfo{
DataMover: dataMover,
UploaderType: "kopia",
OperationID: operation.Spec.OperationID,
},
PVInfo: &PVInfo{
ReclaimPolicy: string(pvcPVInfo.PV.Spec.PersistentVolumeReclaimPolicy),
Labels: pvcPVInfo.PV.Labels,
},
}
tmpVolumeInfos = append(tmpVolumeInfos, volumeInfo)
} else {
v.logger.Warnf("Cannot find info for PVC %s/%s", operation.Spec.ResourceIdentifier.Namespace, operation.Spec.ResourceIdentifier.Name)
continue
}
}
}
v.volumeInfos = append(v.volumeInfos, tmpVolumeInfos...)
}
// retrievePvcPvInfo gets the PvcPvInfo from the PVMap.
// It supports retrieving the info by PV name, or by PVC name
// and namespace.
func (v *VolumesInformation) retrievePvcPvInfo(pvName, pvcName, pvcNS string) *pvcPvInfo {
if pvName != "" {
if info, ok := v.pvMap[pvName]; ok {
return &info
}
return nil
}
if pvcNS == "" || pvcName == "" {
return nil
}
for _, info := range v.pvMap {
if pvcNS == info.PVCNamespace && pvcName == info.PVCName {
return &info
}
}
return nil
}
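A short sketch of the lookup behavior (in-package; names illustrative):
v := &VolumesInformation{}
v.Init()
v.InsertPVMap(pv, "data-pvc", "app") // pv is an assumed corev1api.PersistentVolume
byPV := v.retrievePvcPvInfo(pv.Name, "", "")        // direct map hit by PV name
byPVC := v.retrievePvcPvInfo("", "data-pvc", "app") // linear scan by PVC name and namespace
_, _ = byPV, byPVC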


@@ -0,0 +1,866 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volume
import (
"context"
"testing"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util/logging"
"github.com/vmware-tanzu/velero/pkg/volume"
)
func TestGenerateVolumeInfoForSkippedPV(t *testing.T) {
tests := []struct {
name string
skippedPVName string
pvMap map[string]pvcPvInfo
expectedVolumeInfos []*VolumeInfo
}{
{
name: "Cannot find info for PV",
skippedPVName: "testPV",
pvMap: map[string]pvcPvInfo{
"velero/testPVC": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Normal Skipped PV info",
skippedPVName: "testPV",
pvMap: map[string]pvcPvInfo{
"velero/testPVC": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
Skipped: true,
SkippedReason: "CSI: skipped for PodVolumeBackup",
PVInfo: &PVInfo{
ReclaimPolicy: "Delete",
Labels: map[string]string{
"a": "b",
},
},
},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
volumesInfo := VolumesInformation{}
volumesInfo.Init()
if tc.skippedPVName != "" {
volumesInfo.SkippedPVs = map[string]string{
tc.skippedPVName: "CSI: skipped for PodVolumeBackup",
}
}
if tc.pvMap != nil {
for k, v := range tc.pvMap {
volumesInfo.pvMap[k] = v
}
}
volumesInfo.logger = logging.DefaultLogger(logrus.DebugLevel, logging.FormatJSON)
volumesInfo.generateVolumeInfoForSkippedPV()
require.Equal(t, tc.expectedVolumeInfos, volumesInfo.volumeInfos)
})
}
}
func TestGenerateVolumeInfoForVeleroNativeSnapshot(t *testing.T) {
tests := []struct {
name string
nativeSnapshot volume.Snapshot
pvMap map[string]pvcPvInfo
expectedVolumeInfos []*VolumeInfo
}{
{
name: "Native snapshot's IOPS pointer is nil",
nativeSnapshot: volume.Snapshot{
Spec: volume.SnapshotSpec{
PersistentVolumeName: "testPV",
VolumeIOPS: nil,
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Cannot find info for the PV",
nativeSnapshot: volume.Snapshot{
Spec: volume.SnapshotSpec{
PersistentVolumeName: "testPV",
VolumeIOPS: int64Ptr(100),
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Cannot find PV info in pvMap",
pvMap: map[string]pvcPvInfo{
"velero/testPVC": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
nativeSnapshot: volume.Snapshot{
Spec: volume.SnapshotSpec{
PersistentVolumeName: "testPV",
VolumeIOPS: int64Ptr(100),
VolumeType: "ssd",
VolumeAZ: "us-central1-a",
},
Status: volume.SnapshotStatus{
ProviderSnapshotID: "pvc-b31e3386-4bbb-4937-95d-7934cd62-b0a1-494b-95d7-0687440e8d0c",
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Normal native snapshot",
pvMap: map[string]pvcPvInfo{
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
nativeSnapshot: volume.Snapshot{
Spec: volume.SnapshotSpec{
PersistentVolumeName: "testPV",
VolumeIOPS: int64Ptr(100),
VolumeType: "ssd",
VolumeAZ: "us-central1-a",
},
Status: volume.SnapshotStatus{
ProviderSnapshotID: "pvc-b31e3386-4bbb-4937-95d-7934cd62-b0a1-494b-95d7-0687440e8d0c",
},
},
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
BackupMethod: NativeSnapshot,
PVInfo: &PVInfo{
ReclaimPolicy: "Delete",
Labels: map[string]string{
"a": "b",
},
},
NativeSnapshotInfo: &NativeSnapshotInfo{
SnapshotHandle: "pvc-b31e3386-4bbb-4937-95d-7934cd62-b0a1-494b-95d7-0687440e8d0c",
VolumeType: "ssd",
VolumeAZ: "us-central1-a",
IOPS: "100",
},
},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
volumesInfo := VolumesInformation{}
volumesInfo.Init()
volumesInfo.NativeSnapshots = append(volumesInfo.NativeSnapshots, &tc.nativeSnapshot)
if tc.pvMap != nil {
for k, v := range tc.pvMap {
volumesInfo.pvMap[k] = v
}
}
volumesInfo.logger = logging.DefaultLogger(logrus.DebugLevel, logging.FormatJSON)
volumesInfo.generateVolumeInfoForVeleroNativeSnapshot()
require.Equal(t, tc.expectedVolumeInfos, volumesInfo.volumeInfos)
})
}
}
func TestGenerateVolumeInfoForCSIVolumeSnapshot(t *testing.T) {
resourceQuantity := resource.MustParse("100Gi")
now := metav1.Now()
tests := []struct {
name string
volumeSnapshot snapshotv1api.VolumeSnapshot
volumeSnapshotContent snapshotv1api.VolumeSnapshotContent
volumeSnapshotClass snapshotv1api.VolumeSnapshotClass
pvMap map[string]pvcPvInfo
operation *itemoperation.BackupOperation
expectedVolumeInfos []*VolumeInfo
}{
{
name: "VS doesn't have VolumeSnapshotClass name",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
},
Spec: snapshotv1api.VolumeSnapshotSpec{},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "VS doesn't have status",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
},
Spec: snapshotv1api.VolumeSnapshotSpec{
VolumeSnapshotClassName: stringPtr("testClass"),
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "VS doesn't have PVC",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
},
Spec: snapshotv1api.VolumeSnapshotSpec{
VolumeSnapshotClassName: stringPtr("testClass"),
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: stringPtr("testContent"),
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Cannot find VSC for VS",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
},
Spec: snapshotv1api.VolumeSnapshotSpec{
VolumeSnapshotClassName: stringPtr("testClass"),
Source: snapshotv1api.VolumeSnapshotSource{
PersistentVolumeClaimName: stringPtr("testPVC"),
},
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: stringPtr("testContent"),
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Cannot find VolumeInfo for PVC",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
},
Spec: snapshotv1api.VolumeSnapshotSpec{
VolumeSnapshotClassName: stringPtr("testClass"),
Source: snapshotv1api.VolumeSnapshotSource{
PersistentVolumeClaimName: stringPtr("testPVC"),
},
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: stringPtr("testContent"),
},
},
volumeSnapshotClass: *builder.ForVolumeSnapshotClass("testClass").Driver("pd.csi.storage.gke.io").Result(),
volumeSnapshotContent: *builder.ForVolumeSnapshotContent("testContent").Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: stringPtr("testSnapshotHandle")}).Result(),
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Normal VolumeSnapshot case",
volumeSnapshot: snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "testVS",
Namespace: "velero",
CreationTimestamp: now,
},
Spec: snapshotv1api.VolumeSnapshotSpec{
VolumeSnapshotClassName: stringPtr("testClass"),
Source: snapshotv1api.VolumeSnapshotSource{
PersistentVolumeClaimName: stringPtr("testPVC"),
},
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: stringPtr("testContent"),
RestoreSize: &resourceQuantity,
},
},
volumeSnapshotClass: *builder.ForVolumeSnapshotClass("testClass").Driver("pd.csi.storage.gke.io").Result(),
volumeSnapshotContent: *builder.ForVolumeSnapshotContent("testContent").Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: stringPtr("testSnapshotHandle")}).Result(),
pvMap: map[string]pvcPvInfo{
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
OperationID: "testID",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "snapshot.storage.k8s.io",
Resource: "volumesnapshots",
},
Namespace: "velero",
Name: "testVS",
},
},
},
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
BackupMethod: CSISnapshot,
StartTimestamp: &now,
PreserveLocalSnapshot: true,
CSISnapshotInfo: &CSISnapshotInfo{
Driver: "pd.csi.storage.gke.io",
SnapshotHandle: "testSnapshotHandle",
Size: 107374182400,
VSCName: "testContent",
OperationID: "testID",
},
PVInfo: &PVInfo{
ReclaimPolicy: "Delete",
Labels: map[string]string{
"a": "b",
},
},
},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
volumesInfo := VolumesInformation{}
volumesInfo.Init()
if tc.pvMap != nil {
for k, v := range tc.pvMap {
volumesInfo.pvMap[k] = v
}
}
if tc.operation != nil {
volumesInfo.BackupOperations = append(volumesInfo.BackupOperations, tc.operation)
}
volumesInfo.volumeSnapshots = []snapshotv1api.VolumeSnapshot{tc.volumeSnapshot}
volumesInfo.volumeSnapshotContents = []snapshotv1api.VolumeSnapshotContent{tc.volumeSnapshotContent}
volumesInfo.volumeSnapshotClasses = []snapshotv1api.VolumeSnapshotClass{tc.volumeSnapshotClass}
volumesInfo.logger = logging.DefaultLogger(logrus.DebugLevel, logging.FormatJSON)
volumesInfo.generateVolumeInfoForCSIVolumeSnapshot()
require.Equal(t, tc.expectedVolumeInfos, volumesInfo.volumeInfos)
})
}
}
func TestGenerateVolumeInfoFromPVB(t *testing.T) {
tests := []struct {
name string
pvb *velerov1api.PodVolumeBackup
pod *corev1api.Pod
pvMap map[string]pvcPvInfo
expectedVolumeInfos []*VolumeInfo
}{
{
name: "cannot find PVB's pod, should fail",
pvb: builder.ForPodVolumeBackup("velero", "testPVB").PodName("testPod").PodNamespace("velero").Result(),
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "PVB doesn't have a related PVC",
pvb: builder.ForPodVolumeBackup("velero", "testPVB").PodName("testPod").PodNamespace("velero").Result(),
pod: builder.ForPod("velero", "testPod").Containers(&corev1api.Container{
Name: "test",
VolumeMounts: []corev1api.VolumeMount{
{
Name: "testVolume",
MountPath: "/data",
},
},
}).Volumes(
&corev1api.Volume{
Name: "",
VolumeSource: corev1api.VolumeSource{
HostPath: &corev1api.HostPathVolumeSource{},
},
},
).Result(),
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "",
PVCNamespace: "",
PVName: "",
BackupMethod: PodVolumeBackup,
PVBInfo: &PodVolumeBackupInfo{
PodName: "testPod",
PodNamespace: "velero",
},
},
},
},
{
name: "Backup doesn't have information for PVC",
pvb: builder.ForPodVolumeBackup("velero", "testPVB").PodName("testPod").PodNamespace("velero").Result(),
pod: builder.ForPod("velero", "testPod").Containers(&corev1api.Container{
Name: "test",
VolumeMounts: []corev1api.VolumeMount{
{
Name: "testVolume",
MountPath: "/data",
},
},
}).Volumes(
&corev1api.Volume{
Name: "",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "testPVC",
},
},
},
).Result(),
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "PVB's volume has a PVC",
pvMap: map[string]pvcPvInfo{
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
pvb: builder.ForPodVolumeBackup("velero", "testPVB").PodName("testPod").PodNamespace("velero").Result(),
pod: builder.ForPod("velero", "testPod").Containers(&corev1api.Container{
Name: "test",
VolumeMounts: []corev1api.VolumeMount{
{
Name: "testVolume",
MountPath: "/data",
},
},
}).Volumes(
&corev1api.Volume{
Name: "",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "testPVC",
},
},
},
).Result(),
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
BackupMethod: PodVolumeBackup,
PVBInfo: &PodVolumeBackupInfo{
PodName: "testPod",
PodNamespace: "velero",
},
PVInfo: &PVInfo{
ReclaimPolicy: string(corev1api.PersistentVolumeReclaimDelete),
Labels: map[string]string{"a": "b"},
},
},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
volumesInfo := VolumesInformation{}
volumesInfo.Init()
volumesInfo.crClient = velerotest.NewFakeControllerRuntimeClient(t)
volumesInfo.PodVolumeBackups = append(volumesInfo.PodVolumeBackups, tc.pvb)
if tc.pvMap != nil {
for k, v := range tc.pvMap {
volumesInfo.pvMap[k] = v
}
}
if tc.pod != nil {
require.NoError(t, volumesInfo.crClient.Create(context.TODO(), tc.pod))
}
volumesInfo.logger = logging.DefaultLogger(logrus.DebugLevel, logging.FormatJSON)
volumesInfo.generateVolumeInfoFromPVB()
require.Equal(t, tc.expectedVolumeInfos, volumesInfo.volumeInfos)
})
}
}
func TestGenerateVolumeInfoFromDataUpload(t *testing.T) {
now := metav1.Now()
tests := []struct {
name string
volumeSnapshotClass *snapshotv1api.VolumeSnapshotClass
dataUpload *velerov2alpha1.DataUpload
operation *itemoperation.BackupOperation
pvMap map[string]pvcPvInfo
expectedVolumeInfos []*VolumeInfo
}{
{
name: "Operation is not for PVC",
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "",
Resource: "configmaps",
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "Operation doesn't have DataUpload PostItemOperation",
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "",
Resource: "persistentvolumeclaims",
},
Namespace: "velero",
Name: "testPVC",
},
PostOperationItems: []velero.ResourceIdentifier{
{
GroupResource: schema.GroupResource{
Group: "",
Resource: "configmaps",
},
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "DataUpload cannot be found for operation",
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
OperationID: "testOperation",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "",
Resource: "persistentvolumeclaims",
},
Namespace: "velero",
Name: "testPVC",
},
PostOperationItems: []velero.ResourceIdentifier{
{
GroupResource: schema.GroupResource{
Group: "velero.io",
Resource: "datauploads",
},
Namespace: "velero",
Name: "testDU",
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{},
},
{
name: "VolumeSnapshotClass cannot be found for operation",
dataUpload: builder.ForDataUpload("velero", "testDU").DataMover("velero").CSISnapshot(&velerov2alpha1.CSISnapshotSpec{
VolumeSnapshot: "testVS",
}).SnapshotID("testSnapshotHandle").Result(),
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
OperationID: "testOperation",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "",
Resource: "persistentvolumeclaims",
},
Namespace: "velero",
Name: "testPVC",
},
PostOperationItems: []velero.ResourceIdentifier{
{
GroupResource: schema.GroupResource{
Group: "velero.io",
Resource: "datauploads",
},
Namespace: "velero",
Name: "testDU",
},
},
},
},
pvMap: map[string]pvcPvInfo{
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
BackupMethod: CSISnapshot,
SnapshotDataMoved: true,
CSISnapshotInfo: &CSISnapshotInfo{
SnapshotHandle: FieldValueIsUnknown,
VSCName: FieldValueIsUnknown,
OperationID: FieldValueIsUnknown,
Size: 0,
},
SnapshotDataMovementInfo: &SnapshotDataMovementInfo{
DataMover: "velero",
UploaderType: "kopia",
OperationID: "testOperation",
},
PVInfo: &PVInfo{
ReclaimPolicy: string(corev1api.PersistentVolumeReclaimDelete),
Labels: map[string]string{"a": "b"},
},
},
},
},
{
name: "Normal DataUpload case",
dataUpload: builder.ForDataUpload("velero", "testDU").DataMover("velero").CSISnapshot(&velerov2alpha1.CSISnapshotSpec{
VolumeSnapshot: "testVS",
SnapshotClass: "testClass",
}).SnapshotID("testSnapshotHandle").Result(),
volumeSnapshotClass: builder.ForVolumeSnapshotClass("testClass").Driver("pd.csi.storage.gke.io").Result(),
operation: &itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
OperationID: "testOperation",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: "",
Resource: "persistentvolumeclaims",
},
Namespace: "velero",
Name: "testPVC",
},
PostOperationItems: []velero.ResourceIdentifier{
{
GroupResource: schema.GroupResource{
Group: "velero.io",
Resource: "datauploads",
},
Namespace: "velero",
Name: "testDU",
},
},
},
Status: itemoperation.OperationStatus{
Created: &now,
},
},
pvMap: map[string]pvcPvInfo{
"testPV": {
PVCName: "testPVC",
PVCNamespace: "velero",
PV: corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "testPV",
Labels: map[string]string{"a": "b"},
},
Spec: corev1api.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: corev1api.PersistentVolumeReclaimDelete,
},
},
},
},
expectedVolumeInfos: []*VolumeInfo{
{
PVCName: "testPVC",
PVCNamespace: "velero",
PVName: "testPV",
BackupMethod: CSISnapshot,
SnapshotDataMoved: true,
StartTimestamp: &now,
CSISnapshotInfo: &CSISnapshotInfo{
VSCName: FieldValueIsUnknown,
SnapshotHandle: FieldValueIsUnknown,
OperationID: FieldValueIsUnknown,
Size: 0,
Driver: "pd.csi.storage.gke.io",
},
SnapshotDataMovementInfo: &SnapshotDataMovementInfo{
DataMover: "velero",
UploaderType: "kopia",
OperationID: "testOperation",
},
PVInfo: &PVInfo{
ReclaimPolicy: string(corev1api.PersistentVolumeReclaimDelete),
Labels: map[string]string{"a": "b"},
},
},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
volumesInfo := VolumesInformation{}
volumesInfo.Init()
if tc.operation != nil {
volumesInfo.BackupOperations = append(volumesInfo.BackupOperations, tc.operation)
}
if tc.pvMap != nil {
for k, v := range tc.pvMap {
volumesInfo.pvMap[k] = v
}
}
volumesInfo.crClient = velerotest.NewFakeControllerRuntimeClient(t)
if tc.dataUpload != nil {
volumesInfo.crClient.Create(context.TODO(), tc.dataUpload)
}
if tc.volumeSnapshotClass != nil {
volumesInfo.crClient.Create(context.TODO(), tc.volumeSnapshotClass)
}
volumesInfo.logger = logging.DefaultLogger(logrus.DebugLevel, logging.FormatJSON)
volumesInfo.generateVolumeInfoFromDataUpload()
require.Equal(t, tc.expectedVolumeInfos, volumesInfo.volumeInfos)
})
}
}
func stringPtr(str string) *string {
return &str
}
func int64Ptr(val int) *int64 {
i := int64(val)
return &i
}

View File

@@ -175,6 +175,18 @@ type BackupSpec struct {
// If DataMover is "" or "velero", the built-in data mover will be used.
// +optional
DataMover string `json:"datamover,omitempty"`
// UploaderConfig specifies the configuration for the uploader.
// +optional
// +nullable
UploaderConfig *UploaderConfigForBackup `json:"uploaderConfig,omitempty"`
}
// UploaderConfigForBackup defines the configuration for the uploader when doing backup.
type UploaderConfigForBackup struct {
// ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
// +optional
ParallelFilesUpload int `json:"parallelFilesUpload,omitempty"`
}
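
To make the new knob concrete, here is a hedged sketch of setting it from Go; the import path is Velero's published v1 API package, and the value 8 is arbitrary:

package main

import (
	"fmt"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

func main() {
	// Request up to 8 parallel file uploads for this backup (illustrative value).
	backup := velerov1api.Backup{
		Spec: velerov1api.BackupSpec{
			UploaderConfig: &velerov1api.UploaderConfigForBackup{
				ParallelFilesUpload: 8,
			},
		},
	}
	fmt.Println(backup.Spec.UploaderConfig.ParallelFilesUpload)
}
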
// BackupHooks contains custom behaviors that should be executed at different phases of the backup.
@@ -261,12 +273,12 @@ type ExecHook struct {
type HookErrorMode string
const (
// HookErrorModeContinue means that an error from a hook is acceptable, and the backup can
// proceed.
// HookErrorModeContinue means that an error from a hook is acceptable and the backup/restore can
// proceed with the rest of hooks' execution. This backup/restore should be in `PartiallyFailed` status.
HookErrorModeContinue HookErrorMode = "Continue"
// HookErrorModeFail means that an error from a hook is problematic, and the backup should be in
// error.
// HookErrorModeFail means that an error from a hook is problematic and Velero should stop executing following hooks.
// This backup/restore should be in `PartiallyFailed` status.
HookErrorModeFail HookErrorMode = "Fail"
)
@@ -434,6 +446,11 @@ type BackupStatus struct {
// BackupItemAction operations for this backup which ended with an error.
// +optional
BackupItemOperationsFailed int `json:"backupItemOperationsFailed,omitempty"`
// HookStatus contains information about the status of the hooks.
// +optional
// +nullable
HookStatus *HookStatus `json:"hookStatus,omitempty"`
}
// BackupProgress stores information about the progress of a Backup's execution.
@@ -451,6 +468,19 @@ type BackupProgress struct {
ItemsBackedUp int `json:"itemsBackedUp,omitempty"`
}
// HookStatus stores information about the status of the hooks.
type HookStatus struct {
// HooksAttempted is the total number of attempted hooks.
// Specifically, it is the sum of the hooks that failed to execute
// and the hooks that executed successfully.
// +optional
HooksAttempted int `json:"hooksAttempted,omitempty"`
// HooksFailed is the total number of hooks which ended with an error
// +optional
HooksFailed int `json:"hooksFailed,omitempty"`
}
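
Because the two counters are documented as exhaustive, the success count is derivable rather than stored. A tiny illustrative snippet using a local stand-in type rather than the API struct:

package main

import "fmt"

// HookStatus mirrors the fields above; the documented invariant is
// HooksAttempted = hooks that succeeded + hooks that failed.
type HookStatus struct {
	HooksAttempted int
	HooksFailed    int
}

func succeeded(s HookStatus) int { return s.HooksAttempted - s.HooksFailed }

func main() {
	fmt.Println(succeeded(HookStatus{HooksAttempted: 5, HooksFailed: 2})) // 3
}
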
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true

View File

@@ -25,7 +25,7 @@ type DownloadRequestSpec struct {
}
// DownloadTargetKind represents what type of file to download.
// +kubebuilder:validation:Enum=BackupLog;BackupContents;BackupVolumeSnapshots;BackupItemOperations;BackupResourceList;BackupResults;RestoreLog;RestoreResults;RestoreResourceList;RestoreItemOperations;CSIBackupVolumeSnapshots;CSIBackupVolumeSnapshotContents
// +kubebuilder:validation:Enum=BackupLog;BackupContents;BackupVolumeSnapshots;BackupItemOperations;BackupResourceList;BackupResults;RestoreLog;RestoreResults;RestoreResourceList;RestoreItemOperations;CSIBackupVolumeSnapshots;CSIBackupVolumeSnapshotContents;BackupVolumeInfos
type DownloadTargetKind string
const (
@@ -41,6 +41,7 @@ const (
DownloadTargetKindRestoreItemOperations DownloadTargetKind = "RestoreItemOperations"
DownloadTargetKindCSIBackupVolumeSnapshots DownloadTargetKind = "CSIBackupVolumeSnapshots"
DownloadTargetKindCSIBackupVolumeSnapshotContents DownloadTargetKind = "CSIBackupVolumeSnapshotContents"
DownloadTargetKindBackupVolumeInfos DownloadTargetKind = "BackupVolumeInfos"
)
// DownloadTarget is the specification for what kind of file to download, and the name of the

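The new target kind plugs into the existing DownloadRequest flow. A hedged sketch of asking for a backup's volume-info file; the namespace and object names are made up:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

func main() {
	// A controller watching DownloadRequests resolves a signed URL for the target.
	dr := velerov1api.DownloadRequest{
		ObjectMeta: metav1.ObjectMeta{Namespace: "velero", Name: "my-backup-volumeinfos"},
		Spec: velerov1api.DownloadRequestSpec{
			Target: velerov1api.DownloadTarget{
				Kind: velerov1api.DownloadTargetKindBackupVolumeInfos,
				Name: "my-backup",
			},
		},
	}
	fmt.Println(dr.Spec.Target.Kind)
}
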
View File

@@ -83,12 +83,20 @@ const (
// AsyncOperationIDLabel is the label key used to identify the async operation ID
AsyncOperationIDLabel = "velero.io/async-operation-id"
// PVCNameLabel is the label key used to identify the the PVC's namespace and name.
// PVCNameLabel is the label key used to identify the PVC's namespace and name.
// The format is <namespace>/<name>.
PVCNamespaceNameLabel = "velero.io/pvc-namespace-name"
// ResourceUsageLabel is the label key to explain the Velero resource usage.
ResourceUsageLabel = "velero.io/resource-usage"
// VolumesToBackupAnnotation is the annotation on a pod whose mounted volumes
// need to be backed up using pod volume backup.
VolumesToBackupAnnotation = "backup.velero.io/backup-volumes"
// VolumesToExcludeAnnotation is the annotation on a pod whose mounted volumes
// should be excluded from pod volume backup.
VolumesToExcludeAnnotation = "backup.velero.io/backup-volumes-excludes"
)
type AsyncOperationIDPrefix string

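The two annotations above are what the pod-volume backupper reads. An illustrative pod that opts one volume in and keeps another out:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "data" is opted in to pod volume backup; "cache" is excluded (names are illustrative).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "app",
			Namespace: "default",
			Annotations: map[string]string{
				"backup.velero.io/backup-volumes":          "data",
				"backup.velero.io/backup-volumes-excludes": "cache",
			},
		},
	}
	fmt.Println(pod.Annotations)
}
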
View File

@@ -51,6 +51,12 @@ type PodVolumeBackupSpec struct {
// volume backup as tags.
// +optional
Tags map[string]string `json:"tags,omitempty"`
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
// PodVolumeBackupPhase represents the lifecycle phase of a PodVolumeBackup.
@@ -114,7 +120,6 @@ type PodVolumeBackupStatus struct {
// +kubebuilder:printcolumn:name="Namespace",type="string",JSONPath=".spec.pod.namespace",description="Namespace of the pod containing the volume to be backed up"
// +kubebuilder:printcolumn:name="Pod",type="string",JSONPath=".spec.pod.name",description="Name of the pod containing the volume to be backed up"
// +kubebuilder:printcolumn:name="Volume",type="string",JSONPath=".spec.volume",description="Name of the volume to be backed up"
// +kubebuilder:printcolumn:name="Repository ID",type="string",JSONPath=".spec.repoIdentifier",description="Backup repository identifier for this backup"
// +kubebuilder:printcolumn:name="Uploader Type",type="string",JSONPath=".spec.uploaderType",description="The type of the uploader to handle data transfer"
// +kubebuilder:printcolumn:name="Storage Location",type="string",JSONPath=".spec.backupStorageLocation",description="Name of the Backup Storage Location where this backup should be stored"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"

View File

@@ -48,6 +48,12 @@ type PodVolumeRestoreSpec struct {
// SourceNamespace is the original namespace for namespace mapping.
SourceNamespace string `json:"sourceNamespace"`
// UploaderSettings are a map of key-value pairs that should be applied to the
// uploader configuration.
// +optional
// +nullable
UploaderSettings map[string]string `json:"uploaderSettings,omitempty"`
}
// PodVolumeRestorePhase represents the lifecycle phase of a PodVolumeRestore.

View File

@@ -61,8 +61,8 @@ func CustomResources() map[string]typeInfo {
}
// CustomResourceKinds returns a list of all custom resource kinds within Velero
func CustomResourceKinds() sets.String {
kinds := sets.NewString()
func CustomResourceKinds() sets.Set[string] {
kinds := sets.New[string]()
resources := CustomResources()
for kind := range resources {

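This hunk is the mechanical migration from the deprecated sets.String to the generic sets.Set[string] in k8s.io/apimachinery; call sites barely change. A quick sketch of the generic API:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

func main() {
	// sets.New[string] replaces sets.NewString; the method set is the same shape.
	kinds := sets.New[string]("Backup", "Restore")
	kinds.Insert("Schedule")
	fmt.Println(kinds.Has("Backup"), kinds.Len()) // true 3
}
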
View File

@@ -123,6 +123,19 @@ type RestoreSpec struct {
// +optional
// +nullable
ResourceModifier *v1.TypedLocalObjectReference `json:"resourceModifier,omitempty"`
// UploaderConfig specifies the configuration for the restore.
// +optional
// +nullable
UploaderConfig *UploaderConfigForRestore `json:"uploaderConfig,omitempty"`
}
// UploaderConfigForRestore defines the configuration for the restore.
type UploaderConfigForRestore struct {
// WriteSparseFiles is a flag to indicate whether to write files sparsely or not.
// +optional
// +nullable
WriteSparseFiles *bool `json:"writeSparseFiles,omitempty"`
}
// RestoreHooks contains custom behaviors that should be executed during or post restore.
@@ -214,6 +227,11 @@ type ExecRestoreHook struct {
// before attempting to run the command.
// +optional
WaitTimeout metav1.Duration `json:"waitTimeout,omitempty"`
// WaitForReady ensures command will be launched when container is Ready instead of Running.
// +optional
// +nullable
WaitForReady *bool `json:"waitForReady,omitempty"`
}
// InitRestoreHook is a hook that adds an init container to a PodSpec to run commands before the
@@ -340,6 +358,11 @@ type RestoreStatus struct {
// RestoreItemAction operations for this restore which ended with an error.
// +optional
RestoreItemOperationsFailed int `json:"restoreItemOperationsFailed,omitempty"`
// HookStatus contains information about the status of the hooks.
// +optional
// +nullable
HookStatus *HookStatus `json:"hookStatus,omitempty"`
}
// RestoreProgress stores information about the restore's execution progress

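A hedged sketch of the restore-side counterpart defined above: enabling sparse-file writes through the new UploaderConfig field. The boolPtr helper is ours, for illustration:

package main

import (
	"fmt"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Write restored files sparsely where the uploader supports it.
	restore := velerov1api.Restore{
		Spec: velerov1api.RestoreSpec{
			UploaderConfig: &velerov1api.UploaderConfigForRestore{
				WriteSparseFiles: boolPtr(true),
			},
		},
	}
	fmt.Println(*restore.Spec.UploaderConfig.WriteSparseFiles)
}
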
View File

@@ -42,6 +42,13 @@ type ScheduleSpec struct {
// Paused specifies whether the schedule is paused or not
// +optional
Paused bool `json:"paused,omitempty"`
// SkipImmediately specifies whether to skip backup if schedule is due immediately from `schedule.status.lastBackup` timestamp when schedule is unpaused or if schedule is new.
// If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time.
// If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time.
// If empty, will follow server configuration (default: false).
// +optional
SkipImmediately *bool `json:"skipImmediately,omitempty"`
}
// SchedulePhase is a string representation of the lifecycle phase
@@ -75,6 +82,11 @@ type ScheduleStatus struct {
// +nullable
LastBackup *metav1.Time `json:"lastBackup,omitempty"`
// LastSkipped is the last time a Schedule was skipped
// +optional
// +nullable
LastSkipped *metav1.Time `json:"lastSkipped,omitempty"`
// ValidationErrors is a slice of all validation errors (if
// applicable)
// +optional

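Illustrative usage of the new SkipImmediately field; the cron expression and the boolPtr helper are ours:

package main

import (
	"fmt"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// On unpause, skip a backup that is already due and wait for the next slot.
	sched := velerov1api.Schedule{
		Spec: velerov1api.ScheduleSpec{
			Schedule:        "0 1 * * *",
			SkipImmediately: boolPtr(true),
		},
	}
	fmt.Println(*sched.Spec.SkipImmediately)
}
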
View File

@@ -381,6 +381,11 @@ func (in *BackupSpec) DeepCopyInto(out *BackupSpec) {
*out = new(bool)
**out = **in
}
if in.UploaderConfig != nil {
in, out := &in.UploaderConfig, &out.UploaderConfig
*out = new(UploaderConfigForBackup)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupSpec.
@@ -418,6 +423,11 @@ func (in *BackupStatus) DeepCopyInto(out *BackupStatus) {
*out = new(BackupProgress)
**out = **in
}
if in.HookStatus != nil {
in, out := &in.HookStatus, &out.HookStatus
*out = new(HookStatus)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupStatus.
@@ -784,6 +794,11 @@ func (in *ExecRestoreHook) DeepCopyInto(out *ExecRestoreHook) {
}
out.ExecTimeout = in.ExecTimeout
out.WaitTimeout = in.WaitTimeout
if in.WaitForReady != nil {
in, out := &in.WaitForReady, &out.WaitForReady
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecRestoreHook.
@@ -796,6 +811,21 @@ func (in *ExecRestoreHook) DeepCopy() *ExecRestoreHook {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HookStatus) DeepCopyInto(out *HookStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HookStatus.
func (in *HookStatus) DeepCopy() *HookStatus {
if in == nil {
return nil
}
out := new(HookStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InitRestoreHook) DeepCopyInto(out *InitRestoreHook) {
*out = *in
@@ -946,6 +976,13 @@ func (in *PodVolumeBackupSpec) DeepCopyInto(out *PodVolumeBackupSpec) {
(*out)[key] = val
}
}
if in.UploaderSettings != nil {
in, out := &in.UploaderSettings, &out.UploaderSettings
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodVolumeBackupSpec.
@@ -987,7 +1024,7 @@ func (in *PodVolumeRestore) DeepCopyInto(out *PodVolumeRestore) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
@@ -1045,6 +1082,13 @@ func (in *PodVolumeRestoreList) DeepCopyObject() runtime.Object {
func (in *PodVolumeRestoreSpec) DeepCopyInto(out *PodVolumeRestoreSpec) {
*out = *in
out.Pod = in.Pod
if in.UploaderSettings != nil {
in, out := &in.UploaderSettings, &out.UploaderSettings
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodVolumeRestoreSpec.
@@ -1322,6 +1366,11 @@ func (in *RestoreSpec) DeepCopyInto(out *RestoreSpec) {
*out = new(corev1.TypedLocalObjectReference)
(*in).DeepCopyInto(*out)
}
if in.UploaderConfig != nil {
in, out := &in.UploaderConfig, &out.UploaderConfig
*out = new(UploaderConfigForRestore)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreSpec.
@@ -1355,6 +1404,11 @@ func (in *RestoreStatus) DeepCopyInto(out *RestoreStatus) {
*out = new(RestoreProgress)
**out = **in
}
if in.HookStatus != nil {
in, out := &in.HookStatus, &out.HookStatus
*out = new(HookStatus)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreStatus.
@@ -1460,6 +1514,11 @@ func (in *ScheduleSpec) DeepCopyInto(out *ScheduleSpec) {
*out = new(bool)
**out = **in
}
if in.SkipImmediately != nil {
in, out := &in.SkipImmediately, &out.SkipImmediately
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleSpec.
@@ -1479,6 +1538,10 @@ func (in *ScheduleStatus) DeepCopyInto(out *ScheduleStatus) {
in, out := &in.LastBackup, &out.LastBackup
*out = (*in).DeepCopy()
}
if in.LastSkipped != nil {
in, out := &in.LastSkipped, &out.LastSkipped
*out = (*in).DeepCopy()
}
if in.ValidationErrors != nil {
in, out := &in.ValidationErrors, &out.ValidationErrors
*out = make([]string, len(*in))
@@ -1614,6 +1677,41 @@ func (in *StorageType) DeepCopy() *StorageType {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UploaderConfigForBackup) DeepCopyInto(out *UploaderConfigForBackup) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UploaderConfigForBackup.
func (in *UploaderConfigForBackup) DeepCopy() *UploaderConfigForBackup {
if in == nil {
return nil
}
out := new(UploaderConfigForBackup)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UploaderConfigForRestore) DeepCopyInto(out *UploaderConfigForRestore) {
*out = *in
if in.WriteSparseFiles != nil {
in, out := &in.WriteSparseFiles, &out.WriteSparseFiles
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UploaderConfigForRestore.
func (in *UploaderConfigForRestore) DeepCopy() *UploaderConfigForRestore {
if in == nil {
return nil
}
out := new(UploaderConfigForRestore)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VolumeSnapshotLocation) DeepCopyInto(out *VolumeSnapshotLocation) {
*out = *in

View File

@@ -56,7 +56,7 @@ type DataDownloadSpec struct {
OperationTimeout metav1.Duration `json:"operationTimeout"`
}
// TargetPVCSpec is the specification for a target PVC.
// TargetVolumeSpec is the specification for a target PVC.
type TargetVolumeSpec struct {
// PVC is the name of the target PVC that is created by Velero restore
PVC string `json:"pvc"`
@@ -131,6 +131,7 @@ type DataDownloadStatus struct {
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Time duration since this DataDownload was created"
// +kubebuilder:printcolumn:name="Node",type="string",JSONPath=".status.node",description="Name of the node where the DataDownload is processed"
// DataDownload acts as the protocol between data mover plugins and data mover controller for the datamover restore operation
type DataDownload struct {
metav1.TypeMeta `json:",inline"`

View File

@@ -51,7 +51,7 @@ type DataUploadSpec struct {
// DataMoverConfig is for data-mover-specific configuration fields.
// +optional
// +nullable
DataMoverConfig *map[string]string `json:"dataMoverConfig,omitempty"`
DataMoverConfig map[string]string `json:"dataMoverConfig,omitempty"`
// Cancel indicates request to cancel the ongoing DataUpload. It can be set
// when the DataUpload is in InProgress phase
@@ -161,6 +161,7 @@ type DataUploadStatus struct {
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Time duration since this DataUpload was created"
// +kubebuilder:printcolumn:name="Node",type="string",JSONPath=".status.node",description="Name of the node where the DataUpload is processed"
// DataUpload acts as the protocol between data mover plugins and data mover controller for the datamover backup operation
type DataUpload struct {
metav1.TypeMeta `json:",inline"`

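The DataMoverConfig change from *map[string]string to map[string]string works because a nil map already expresses absence and is safe to read and range over; the pointer only added indirection (and the deepcopy noise removed further down). A standalone demonstration:

package main

import "fmt"

func main() {
	var cfg map[string]string // nil: models "no config" without a pointer
	fmt.Println(cfg == nil)   // true
	fmt.Println(cfg["mode"])  // "" — reading a nil map is safe
	for k, v := range cfg {   // ranging over a nil map runs zero iterations
		fmt.Println(k, v)
	}
	cfg = map[string]string{"mode": "snapshot"} // writes still need initialization
	fmt.Println(cfg["mode"])
}
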
View File

@@ -52,8 +52,8 @@ func CustomResources() map[string]typeInfo {
}
// CustomResourceKinds returns a list of all custom resource kinds within Velero
func CustomResourceKinds() sets.String {
kinds := sets.NewString()
func CustomResourceKinds() sets.Set[string] {
kinds := sets.New[string]()
resources := CustomResources()
for kind := range resources {

View File

@@ -1,34 +1,17 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
// Code generated by controller-gen. DO NOT EDIT.
package v2alpha1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CSISnapshotSpec) DeepCopyInto(out *CSISnapshotSpec) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSISnapshotSpec.
@@ -48,7 +31,6 @@ func (in *DataDownload) DeepCopyInto(out *DataDownload) {
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownload.
@@ -81,7 +63,6 @@ func (in *DataDownloadList) DeepCopyInto(out *DataDownloadList) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadList.
@@ -114,7 +95,6 @@ func (in *DataDownloadSpec) DeepCopyInto(out *DataDownloadSpec) {
}
}
out.OperationTimeout = in.OperationTimeout
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadSpec.
@@ -139,7 +119,6 @@ func (in *DataDownloadStatus) DeepCopyInto(out *DataDownloadStatus) {
*out = (*in).DeepCopy()
}
out.Progress = in.Progress
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadStatus.
@@ -159,7 +138,6 @@ func (in *DataUpload) DeepCopyInto(out *DataUpload) {
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUpload.
@@ -192,7 +170,6 @@ func (in *DataUploadList) DeepCopyInto(out *DataUploadList) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadList.
@@ -227,7 +204,6 @@ func (in *DataUploadResult) DeepCopyInto(out *DataUploadResult) {
}
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadResult.
@@ -250,17 +226,12 @@ func (in *DataUploadSpec) DeepCopyInto(out *DataUploadSpec) {
}
if in.DataMoverConfig != nil {
in, out := &in.DataMoverConfig, &out.DataMoverConfig
*out = new(map[string]string)
if **in != nil {
in, out := *in, *out
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
out.OperationTimeout = in.OperationTimeout
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadSpec.
@@ -296,7 +267,6 @@ func (in *DataUploadStatus) DeepCopyInto(out *DataUploadStatus) {
*out = (*in).DeepCopy()
}
out.Progress = in.Progress
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadStatus.
@@ -312,7 +282,6 @@ func (in *DataUploadStatus) DeepCopy() *DataUploadStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TargetVolumeSpec) DeepCopyInto(out *TargetVolumeSpec) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TargetVolumeSpec.

View File

@@ -84,7 +84,7 @@ func (e *Extractor) readBackup(tarRdr *tar.Reader) (string, error) {
return "", err
}
target := filepath.Join(dir, header.Name) //nolint:gosec
target := filepath.Join(dir, header.Name) //nolint:gosec // Internal usage. No need to check.
switch header.Typeflag {
case tar.TypeDir:

View File

@@ -302,6 +302,7 @@ func (kb *kubernetesBackupper) BackupWithResolvers(log logrus.FieldLogger,
itemHookHandler: &hook.DefaultItemHookHandler{
PodCommandExecutor: kb.podCommandExecutor,
},
hookTracker: hook.NewHookTracker(),
}
// helper struct to send current progress between the main
@@ -427,11 +428,23 @@ func (kb *kubernetesBackupper) BackupWithResolvers(log logrus.FieldLogger,
updated.Status.Progress.TotalItems = len(backupRequest.BackedUpItems)
updated.Status.Progress.ItemsBackedUp = len(backupRequest.BackedUpItems)
if err := kube.PatchResource(backupRequest.Backup, updated, kb.kbClient); err != nil {
log.WithError(errors.WithStack((err))).Warn("Got error trying to update backup's status.progress")
// update the hooks execution status
if updated.Status.HookStatus == nil {
updated.Status.HookStatus = &velerov1api.HookStatus{}
}
skippedPVSummary, _ := json.Marshal(backupRequest.SkippedPVTracker.Summary())
log.Infof("Summary for skipped PVs: %s", skippedPVSummary)
updated.Status.HookStatus.HooksAttempted, updated.Status.HookStatus.HooksFailed = itemBackupper.hookTracker.Stat()
log.Infof("hookTracker: %+v, hookAttempted: %d, hookFailed: %d", itemBackupper.hookTracker.GetTracker(), updated.Status.HookStatus.HooksAttempted, updated.Status.HookStatus.HooksFailed)
if err := kube.PatchResource(backupRequest.Backup, updated, kb.kbClient); err != nil {
log.WithError(errors.WithStack((err))).Warn("Got error trying to update backup's status.progress and hook status")
}
if skippedPVSummary, err := json.Marshal(backupRequest.SkippedPVTracker.Summary()); err != nil {
log.WithError(errors.WithStack(err)).Warn("Fail to generate skipped PV summary.")
} else {
log.Infof("Summary for skipped PVs: %s", skippedPVSummary)
}
backupRequest.Status.Progress = &velerov1api.BackupProgress{TotalItems: len(backupRequest.BackedUpItems), ItemsBackedUp: len(backupRequest.BackedUpItems)}
log.WithField("progress", "").Infof("Backed up a total of %d items", len(backupRequest.BackedUpItems))
@@ -598,6 +611,7 @@ func (kb *kubernetesBackupper) FinalizeBackup(log logrus.FieldLogger,
discoveryHelper: kb.discoveryHelper,
itemHookHandler: &hook.NoOpItemHookHandler{},
podVolumeSnapshotTracker: newPVCSnapshotTracker(),
hookTracker: hook.NewHookTracker(),
}
updateFiles := make(map[string]FileForArchive)
backedUpGroupResources := map[schema.GroupResource]bool{}

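The hookTracker threaded through this hunk lives in Velero's internal/hook package, so the sketch below uses an illustrative stand-in to show the attempted/failed bookkeeping that Stat() reports into status.HookStatus:

package main

import "fmt"

// hookTracker is an illustrative stand-in: count every hook execution
// and how many of those executions failed.
type hookTracker struct {
	attempted, failed int
}

func (t *hookTracker) record(err error) {
	t.attempted++
	if err != nil {
		t.failed++
	}
}

func (t *hookTracker) stat() (attempted, failed int) { return t.attempted, t.failed }

func main() {
	t := &hookTracker{}
	t.record(nil)                // a hook that succeeded
	t.record(fmt.Errorf("boom")) // a hook that failed
	a, f := t.stat()
	fmt.Printf("hookAttempted: %d, hookFailed: %d\n", a, f)
}
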
View File

@@ -46,6 +46,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/features"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
@@ -71,6 +72,7 @@ func TestBackedUpItemsMatchesTarballContents(t *testing.T) {
Backup: defaultBackup().Result(),
SkippedPVTracker: NewSkipPVTracker(),
}
backupFile := bytes.NewBuffer([]byte{})
apiResources := []*test.APIResource{
@@ -83,8 +85,8 @@ func TestBackedUpItemsMatchesTarballContents(t *testing.T) {
builder.ForDeployment("zoo", "raz").Result(),
),
test.PVs(
builder.ForPersistentVolume("bar").Result(),
builder.ForPersistentVolume("baz").Result(),
builder.ForPersistentVolume("bar").ClaimRef("foo", "pvc1").Result(),
builder.ForPersistentVolume("baz").ClaimRef("bar", "pvc2").Result(),
),
}
for _, resource := range apiResources {
@@ -1366,6 +1368,7 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
"any": "whatever reason",
},
},
includedPVs: map[string]struct{}{},
},
},
apiResources: []*test.APIResource{
@@ -1379,6 +1382,12 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
expectNotSkippedPVs: []string{"pv-1"},
},
}
// Enable CSI feature before running the test, because Velero will check whether
// CSI feature is enabled before executing CSI plugin actions.
features.NewFeatureFlagSet("EnableCSI")
defer func() {
features.NewFeatureFlagSet("")
}()
for _, tc := range tests {
t.Run(tc.name, func(tt *testing.T) {
var (
@@ -2747,7 +2756,7 @@ func TestBackupWithInvalidHooks(t *testing.T) {
builder.ForPod("foo", "bar").Result(),
),
},
want: errors.New("\"nonexistent-operator\" is not a valid pod selector operator"),
want: errors.New("\"nonexistent-operator\" is not a valid label selector operator"),
},
}

View File

@@ -52,6 +52,8 @@ import (
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
"github.com/vmware-tanzu/velero/pkg/podvolume"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
csiutil "github.com/vmware-tanzu/velero/pkg/util/csi"
pdvolumeutil "github.com/vmware-tanzu/velero/pkg/util/podvolume"
"github.com/vmware-tanzu/velero/pkg/volume"
)
@@ -76,6 +78,7 @@ type itemBackupper struct {
itemHookHandler hook.ItemHookHandler
snapshotLocationVolumeSnapshotters map[string]vsv1.VolumeSnapshotter
hookTracker *hook.HookTracker
}
type FileForArchive struct {
@@ -182,7 +185,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
)
log.Debug("Executing pre hooks")
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePre); err != nil {
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePre, ib.hookTracker); err != nil {
return false, itemFiles, err
}
if optedOut, podName := ib.podVolumeSnapshotTracker.OptedoutByPod(namespace, name); optedOut {
@@ -200,7 +203,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
// Get the list of volumes to back up using pod volume backup from the pod's annotations. Remove from this list
// any volumes that use a PVC that we've already backed up (this would be in a read-write-many scenario,
// where it's been backed up from another pod), since we don't need >1 backup per PVC.
includedVolumes, optedOutVolumes := podvolume.GetVolumesByPod(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup))
includedVolumes, optedOutVolumes := pdvolumeutil.GetVolumesByPod(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup))
for _, volume := range includedVolumes {
// track the volumes that are PVCs using the PVC snapshot tracker, so that when we backup PVCs/PVs
// via an item action in the next step, we don't snapshot PVs that will have their data backed up
@@ -232,7 +235,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
// if there was an error running actions, execute post hooks and return
log.Debug("Executing post hooks")
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePost); err != nil {
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePost, ib.hookTracker); err != nil {
backupErrs = append(backupErrs, err)
}
return false, itemFiles, kubeerrs.NewAggregate(backupErrs)
@@ -248,6 +251,10 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
namespace = metadata.GetNamespace()
if groupResource == kuberesource.PersistentVolumes {
if err := ib.addVolumeInfo(obj, log); err != nil {
backupErrs = append(backupErrs, err)
}
if err := ib.takePVSnapshot(obj, log); err != nil {
backupErrs = append(backupErrs, err)
}
@@ -287,7 +294,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
}
log.Debug("Executing post hooks")
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePost); err != nil {
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePost, ib.hookTracker); err != nil {
backupErrs = append(backupErrs, err)
}
@@ -360,6 +367,14 @@ func (ib *itemBackupper) executeActions(
ib.trackSkippedPV(obj, groupResource, "", "skipped due to resource policy ", log)
continue
}
// If the EnableCSI feature is not enabled, but the executing action is from CSI plugin, skip the action.
if csiutil.ShouldSkipAction(actionName) {
log.Infof("Skip action %s for resource %s:%s/%s, because the CSI feature is not enabled. Feature setting is %s.",
actionName, groupResource.String(), metadata.GetNamespace(), metadata.GetName(), features.Serialize())
continue
}
updatedItem, additionalItemIdentifiers, operationID, postOperationItems, err := action.Execute(obj, ib.backupRequest.Backup)
if err != nil {
return nil, itemFiles, errors.Wrapf(err, "error executing custom action (groupResource=%s, namespace=%s, name=%s)", groupResource.String(), namespace, name)
@@ -369,8 +384,8 @@ func (ib *itemBackupper) executeActions(
// snapshot was skipped by CSI plugin
ib.trackSkippedPV(obj, groupResource, csiSnapshotApproach, "skipped b/c it's not a CSI volume", log)
delete(u.GetAnnotations(), skippedNoCSIPVAnnotation)
} else if actionName == csiBIAPluginName || actionName == vsphereBIAPluginName {
// the snapshot has been taken
} else if (actionName == csiBIAPluginName || actionName == vsphereBIAPluginName) && !boolptr.IsSetToFalse(ib.backupRequest.Backup.Spec.SnapshotVolumes) {
// the snapshot has been taken by the BIA plugin
ib.unTrackSkippedPV(obj, groupResource, log)
}
mustInclude := u.GetAnnotations()[mustIncludeAdditionalItemAnnotation] == "true" || finalize
@@ -495,6 +510,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
if boolptr.IsSetToFalse(ib.backupRequest.Spec.SnapshotVolumes) {
log.Info("Backup has volume snapshots disabled; skipping volume snapshot action.")
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "backup has volume snapshots disabled", log)
return nil
}
@@ -651,6 +667,7 @@ func (ib *itemBackupper) getMatchAction(obj runtime.Unstructured, groupResource
}
return ib.backupRequest.ResPolicies.GetMatchAction(pv)
}
return nil, nil
}
@@ -674,6 +691,26 @@ func (ib *itemBackupper) unTrackSkippedPV(obj runtime.Unstructured, groupResourc
}
}
func (ib *itemBackupper) addVolumeInfo(obj runtime.Unstructured, log logrus.FieldLogger) error {
pv := new(corev1api.PersistentVolume)
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pv)
if err != nil {
log.WithError(err).Warnf("Fail to convert PV")
return err
}
pvcName := ""
pvcNamespace := ""
if pv.Spec.ClaimRef != nil {
pvcName = pv.Spec.ClaimRef.Name
pvcNamespace = pv.Spec.ClaimRef.Namespace
}
ib.backupRequest.VolumesInformation.InsertPVMap(*pv, pvcName, pvcNamespace)
return nil
}
// convert the input object to PV/PVC and get the PV name
func getPVName(obj runtime.Unstructured, groupResource schema.GroupResource) (string, error) {
if groupResource == kuberesource.PersistentVolumes {

View File

@@ -19,6 +19,7 @@ package backup
import (
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -237,3 +238,34 @@ func TestRandom(t *testing.T) {
err2 := runtime.DefaultUnstructuredConverter.FromUnstructured(o, pvc)
t.Logf("err1: %v, err2: %v", err1, err2)
}
func TestAddVolumeInfo(t *testing.T) {
tests := []struct {
name string
pv *corev1api.PersistentVolume
}{
{
name: "PV has ClaimRef",
pv: builder.ForPersistentVolume("testPV").ClaimRef("testNS", "testPVC").Result(),
},
{
name: "PV has no ClaimRef",
pv: builder.ForPersistentVolume("testPV").Result(),
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
ib := itemBackupper{}
ib.backupRequest = new(Request)
ib.backupRequest.VolumesInformation.Init()
pvObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pv)
require.NoError(t, err)
logger := logrus.StandardLogger()
err = ib.addVolumeInfo(&unstructured.Unstructured{Object: pvObj}, logger)
require.NoError(t, err)
})
}
}

View File

@@ -196,9 +196,8 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
log.Info("Getting items for resource")
var (
gvr = gv.WithResource(resource.Name)
gr = gvr.GroupResource()
clusterScoped = !resource.Namespaced
gvr = gv.WithResource(resource.Name)
gr = gvr.GroupResource()
)
orders := getOrderedResourcesForType(r.backupRequest.Backup.Spec.OrderedResources, resource.Name)
@@ -272,8 +271,6 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
}
}
namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
// Handle namespace resource here.
// Namespaces are filtered only by the namespace include/exclude filters.
// Label selectors are not checked.
@@ -289,11 +286,14 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
return nil, errors.WithStack(err)
}
items := r.backupNamespaces(unstructuredList, namespacesToList, gr, preferredGVR, log)
items := r.backupNamespaces(unstructuredList, r.backupRequest.NamespaceIncludesExcludes, gr, preferredGVR, log)
return items, nil
}
clusterScoped := !resource.Namespaced
namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
// If we get here, we're backing up something other than namespaces
if clusterScoped {
namespacesToList = []string{""}
@@ -533,31 +533,13 @@ func (r *itemCollector) listItemsForLabel(unstructuredItems []unstructured.Unstr
// backupNamespaces process namespace resource according to namespace filters.
func (r *itemCollector) backupNamespaces(unstructuredList *unstructured.UnstructuredList,
namespacesToList []string, gr schema.GroupResource, preferredGVR schema.GroupVersionResource,
ie *collections.IncludesExcludes, gr schema.GroupResource, preferredGVR schema.GroupVersionResource,
log logrus.FieldLogger) []*kubernetesResource {
var items []*kubernetesResource
for index, unstructured := range unstructuredList.Items {
found := false
if len(namespacesToList) == 0 {
// No namespace found. By far, this condition cannot be triggered. Either way,
// namespacesToList is not empty.
log.Debug("Skip namespace resource, because no item found by namespace filters.")
break
} else if len(namespacesToList) == 1 && namespacesToList[0] == "" {
// All namespaces are included.
log.Debugf("Backup namespace %s due to full cluster backup.", unstructured.GetName())
found = true
} else {
for _, ns := range namespacesToList {
if unstructured.GetName() == ns {
log.Debugf("Backup namespace %s due to namespace filters setting.", unstructured.GetName())
found = true
break
}
}
}
if ie.ShouldInclude(unstructured.GetName()) {
log.Debugf("Backup namespace %s due to namespace filters setting.", unstructured.GetName())
if found {
path, err := r.writeToFile(&unstructuredList.Items[index])
if err != nil {
log.WithError(err).Error("Error writing item to file")

View File
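The refactor above replaces hand-rolled namespace matching with the filter's own membership test. A sketch using Velero's collections helpers; the include/exclude values are illustrative:

package main

import (
	"fmt"

	"github.com/vmware-tanzu/velero/pkg/util/collections"
)

func main() {
	// ShouldInclude answers the same per-name question backupNamespaces now delegates.
	ie := collections.NewIncludesExcludes().Includes("*").Excludes("kube-system")
	fmt.Println(ie.ShouldInclude("default"))     // true
	fmt.Println(ie.ShouldInclude("kube-system")) // false
}
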

@@ -10,6 +10,14 @@ type SkippedPV struct {
Reasons []PVSkipReason `json:"reasons"`
}
func (s *SkippedPV) SerializeSkipReasons() string {
ret := ""
for _, reason := range s.Reasons {
ret = ret + reason.Approach + ": " + reason.Reason + ";"
}
return ret
}
type PVSkipReason struct {
Approach string `json:"approach"`
Reason string `json:"reason"`
@@ -21,6 +29,8 @@ type skipPVTracker struct {
// pvs is a map of name of the pv to the list of reasons why it is skipped.
// The reasons are stored in a map each key of the map is the backup approach, each approach can have one reason
pvs map[string]map[string]string
// includedPVs is a set of pv to be included in the backup, the element in this set should not be in the "pvs" map
includedPVs map[string]struct{}
}
const (
@@ -32,8 +42,9 @@ const (
func NewSkipPVTracker() *skipPVTracker {
return &skipPVTracker{
RWMutex: &sync.RWMutex{},
pvs: make(map[string]map[string]string),
RWMutex: &sync.RWMutex{},
pvs: make(map[string]map[string]string),
includedPVs: make(map[string]struct{}),
}
}
@@ -44,9 +55,12 @@ func (pt *skipPVTracker) Track(name, approach, reason string) {
if name == "" || reason == "" {
return
}
if _, ok := pt.includedPVs[name]; ok {
return
}
skipReasons := pt.pvs[name]
if skipReasons == nil {
skipReasons = make(map[string]string, 0)
skipReasons = make(map[string]string)
pt.pvs[name] = skipReasons
}
if approach == "" {
@@ -56,9 +70,12 @@ func (pt *skipPVTracker) Track(name, approach, reason string) {
}
// Untrack removes the PV with the specified name.
// This func should be called when the PV is taken for snapshot, regardless of native snapshot, CSI snapshot or fs-backup;
// therefore, within one backup, once a PV is untracked it will not be tracked again.
func (pt *skipPVTracker) Untrack(name string) {
pt.Lock()
defer pt.Unlock()
pt.includedPVs[name] = struct{}{}
delete(pt.pvs, name)
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSummary(t *testing.T) {
@@ -41,3 +42,23 @@ func TestSummary(t *testing.T) {
}
assert.Equal(t, expected, tracker.Summary())
}
func TestSerializeSkipReasons(t *testing.T) {
tracker := NewSkipPVTracker()
//tracker.Track("pv5", "", "skipped due to policy")
tracker.Track("pv3", podVolumeApproach, "it's set to opt-out")
tracker.Track("pv3", csiSnapshotApproach, "not applicable for CSI ")
for _, skippedPV := range tracker.Summary() {
require.Equal(t, "csiSnapshot: not applicable for CSI ;podvolume: it's set to opt-out;", skippedPV.SerializeSkipReasons())
}
}
func TestTrackUntrack(t *testing.T) {
// If a PV is untracked explicitly it can't be Tracked again, because the PV is considered backed up already.
tracker := NewSkipPVTracker()
tracker.Track("pv3", podVolumeApproach, "it's set to opt-out")
tracker.Untrack("pv3")
tracker.Track("pv3", csiSnapshotApproach, "not applicable for CSI ")
assert.Equal(t, 0, len(tracker.Summary()))
}

View File

@@ -20,10 +20,9 @@ import (
"fmt"
"sort"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
"github.com/vmware-tanzu/velero/internal/hook"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
internalVolume "github.com/vmware-tanzu/velero/internal/volume"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
@@ -51,12 +50,15 @@ type Request struct {
VolumeSnapshots []*volume.Snapshot
PodVolumeBackups []*velerov1api.PodVolumeBackup
BackedUpItems map[itemKey]struct{}
CSISnapshots []snapshotv1api.VolumeSnapshot
itemOperationsList *[]*itemoperation.BackupOperation
ResPolicies *resourcepolicies.Policies
SkippedPVTracker *skipPVTracker
VolumesInformation internalVolume.VolumesInformation
}
// VolumesInformation contains the information needed to generate
// the backup VolumeInfo array.
// GetItemOperationsList returns ItemOperationsList, initializing it if necessary
func (r *Request) GetItemOperationsList() *[]*itemoperation.BackupOperation {
if r.itemOperationsList == nil {
@@ -85,3 +87,17 @@ func (r *Request) BackupResourceList() map[string][]string {
return resources
}
func (r *Request) FillVolumesInformation() {
skippedPVMap := make(map[string]string)
for _, skippedPV := range r.SkippedPVTracker.Summary() {
skippedPVMap[skippedPV.Name] = skippedPV.SerializeSkipReasons()
}
r.VolumesInformation.SkippedPVs = skippedPVMap
r.VolumesInformation.NativeSnapshots = r.VolumeSnapshots
r.VolumesInformation.PodVolumeBackups = r.PodVolumeBackups
r.VolumesInformation.BackupOperations = *r.GetItemOperationsList()
r.VolumesInformation.BackupName = r.Backup.Name
}

View File

@@ -1,5 +1,5 @@
/*
Copyright 2019 the Velero contributors.
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

Some files were not shown because too many files have changed in this diff.