Compare commits

...

55 Commits

Author SHA1 Message Date
dependabot[bot]
d2555bfc28 Bump github/codeql-action from 3 to 4
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3 to 4.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-15 13:54:08 +08:00
lyndon-li
99a46ed818 Merge pull request #9329 from T4iFooN-IX/9327
Fix typos (#9327)
2025-10-15 10:02:58 +08:00
Daniel Wituschek
93e8379530 Add changelog file
Signed-off-by: Daniel Wituschek <daniel.wituschek@intrexx.com>
2025-10-14 14:58:05 +02:00
Daniel Wituschek
8b3ba78c8c Fix typos
Signed-off-by: Daniel Wituschek <daniel.wituschek@intrexx.com>
2025-10-13 16:00:23 +02:00
Daniel Wituschek
b34f2deff2 Fix typos
Signed-off-by: Daniel Wituschek <daniel.wituschek@intrexx.com>
2025-10-13 15:58:20 +02:00
lyndon-li
e6aab8ca93 add events to diagnose (#9296)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-10-09 14:13:43 -04:00
Xun Jiang/Bruce Jiang
99f12b85ba Merge pull request #9301 from vmware-tanzu/fix_action_isse
Fix the push action invalid variable ref issue.
2025-09-30 16:40:02 +08:00
Xun Jiang/Bruce Jiang
f8938e7fed VerifyJSONConfigs verify every elements in Data. (#9302)
Add error message in the velero install CLI output if VerifyJSONConfigs fail.
Only allow one element in node-agent-configmap's Data.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-29 15:08:05 -04:00
Xun Jiang
cabb04575e Fix the push action invalid variable ref issue.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-27 23:33:37 +08:00
lyndon-li
60dbcbc60d Merge pull request #9295 from sseago/privileged-fs-backup-pods
Privileged fs backup pods
2025-09-26 10:31:59 +08:00
Scott Seago
4ade8cf8a2 Add option for privileged fs-backup pod
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-25 15:38:39 -04:00
Daniel Jiang
826c73131e Merge pull request #9233 from Lyndon-Li/backup-pvc-to-different-node
backupPVC to different node
2025-09-24 22:16:36 +08:00
lyndon-li
21691451e9 Merge branch 'main' into backup-pvc-to-different-node 2025-09-23 11:43:24 +08:00
lyndon-li
50d7b1cff1 Merge pull request #9248 from 0xLeo258/main
Issue #9247: Protect VolumeSnapshot field from race condition
2025-09-23 11:39:57 +08:00
lyndon-li
d545ad49ba Merge branch 'main' into main 2025-09-23 11:10:38 +08:00
lyndon-li
7831bf25b9 Merge pull request #9281 from 0xLeo258/issue9234
Issue #9234: Fix plugin reentry with safe VolumeSnapshotterCache
2025-09-23 11:06:21 +08:00
Xun Jiang/Bruce Jiang
2abe91e08c Merge pull request #9250 from blackpiglet/distinguish_go_version_for_main_and_other_release
Use different go version check logic for main and other branches.
2025-09-22 22:48:21 +08:00
0xLeo258
1ebe357d18 Add built-in mutex for SynchronizedVSList && Update unit tests
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-20 09:13:07 +08:00
0xLeo258
9df17eb02b add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-20 09:13:07 +08:00
0xLeo258
f2a27c3864 fix9247: Protect VolumeSnapshot field
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-20 09:13:07 +08:00
Xun Jiang
4847eeaf62 Use different go version check logic for main and other branches.
The main branch will read the Go version from go.mod's go directive and
keep only the major and minor version, because we want the actions to use
the latest patch version automatically, even when go.mod specifies a version
like 1.24.0.
Release branches can read the Go version from the go.mod file via the
setup-go action's own logic.
Refactor the get-Go-version logic into a reusable workflow.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-19 16:58:18 +08:00
dependabot[bot]
1ec281a64e Bump actions/setup-go from 5 to 6 (#9231)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 12:29:45 -04:00
0xLeo258
25de1bb3b6 add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-18 17:36:07 +08:00
0xLeo258
e21b21c19e fix 9234: Add safe VolumeSnapshotterCache
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-18 17:21:25 +08:00
Xun Jiang/Bruce Jiang
b19cad9d01 Merge pull request #9280 from kaovilai/bitnamiminio
Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
2025-09-18 14:25:41 +08:00
Tiger Kaovilai
9b6c4b1d47 Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
The Bitnami MinIO image bitnami/minio:2021.6.17-debian-10-r7 is no longer
available on Docker Hub, causing E2E tests to fail.

This change implements a solution to build the MinIO image locally from
Bitnami's public Dockerfile and cache it for subsequent runs:
- Fetches the latest commit hash of the Bitnami MinIO Dockerfile
- Uses GitHub Actions cache to store/retrieve built images
- Only rebuilds when the upstream Dockerfile changes
- Maintains compatibility with existing environment variables

Fixes #9279

🤖 Generated with [Claude Code](https://claude.ai/code)

Update .github/workflows/e2e-test-kind.yaml

Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-09-17 19:08:07 -04:00
Wenkai Yin(尹文开)
f50cafa472 Merge pull request #9264 from shubham-pampattiwar/fix-backup-q-accum
Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
2025-09-17 14:07:32 +08:00
Shubham Pampattiwar
a7b2985c83 add changelog file
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-15 16:07:40 -07:00
Shubham Pampattiwar
59289fba76 Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-15 16:01:33 -07:00
Wenkai Yin(尹文开)
925479553a Merge pull request #9256 from shubham-pampattiwar/inhrerit-tolr-jobs
Fix maintenance jobs toleration inheritance from Velero deployment
2025-09-15 14:49:21 +08:00
lyndon-li
47340e67af Merge branch 'main' into backup-pvc-to-different-node 2025-09-12 13:30:34 +08:00
Lyndon-Li
25a7ef0e87 backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-12 13:27:58 +08:00
lyndon-li
799d596d5c Merge pull request #9226 from sseago/iba-perf
Get pod list once per namespace in pvc IBA
2025-09-12 10:55:43 +08:00
Shubham Pampattiwar
5ba00dfb09 Fix maintenance jobs toleration inheritance from Velero deployment
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

fix codespell and add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-11 16:04:26 -07:00
Priyansh Choudhary
f1476defde Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
* Update AzureAD Microsoft Authentication Library to v1.5.0
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Added Changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

---------

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-11 14:07:46 -04:00
Xun Jiang/Bruce Jiang
67ff0dcbe0 Merge pull request #9240 from vmware-tanzu/update_velero_supported_k8s_versions
Add v1.34.0 for v1.17 compatible k8s versions.
2025-09-11 15:20:13 +08:00
lyndon-li
aad9dd9068 Merge branch 'main' into backup-pvc-to-different-node 2025-09-11 14:47:35 +08:00
lyndon-li
b636334079 Merge pull request #9241 from blackpiglet/bump_k8s_lib_to_1.33_for_main
Bump k8s library to v1.33.
2025-09-11 14:20:40 +08:00
lyndon-li
4d44705ed8 Merge branch 'main' into backup-pvc-to-different-node 2025-09-11 13:08:22 +08:00
Lyndon-Li
81c5b6692d backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-11 13:04:24 +08:00
dependabot[bot]
02edbc0c65 Bump actions/stale from 9.1.0 to 10.0.0 (#9232)
Bumps [actions/stale](https://github.com/actions/stale) from 9.1.0 to 10.0.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v9.1.0...v10.0.0)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 16:44:18 -05:00
Xun Jiang
e8208097ba Bump k8s library to v1.33.
Replace deprecated EventExpansion method with WithContext methods.
Modify UTs.
Align the E2E ginkgo CLI version with go.mod

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-10 17:58:38 +08:00
Xun Jiang
4c30499340 Add v1.34.0 for v1.17 compatible k8s versions.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-10 17:28:23 +08:00
Scott Seago
2a9203f1b2 Get pod list once per namespace in pvc IBA
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-09 13:19:06 -04:00
lyndon-li
3be76da952 Merge pull request #8991 from sseago/concurrent-backup-design
Concurrent backup design doc
2025-09-05 11:21:40 +08:00
Scott Seago
7132720a49 Concurrent backup design doc
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-03 12:09:55 -04:00
Xun Jiang/Bruce Jiang
2dbfbc29e8 Merge pull request #9214 from weeix/patch-1
clarify VolumeSnapshotClass error for mismatched driver/provisioner
2025-09-03 15:12:09 +08:00
weeix
80da461458 clarify VolumeSnapshotClass error for mismatched driver/provisioner
Signed-off-by: weeix <weeix@users.noreply.github.com>
2025-09-02 18:31:13 -05:00
Xun Jiang/Bruce Jiang
fdee2700a7 Merge pull request #9219 from blackpiglet/9157_e2e
Add E2E auto case for node-agent-config validation.
2025-09-02 22:37:33 +08:00
Xun Jiang
8e1c4a7dc5 Add E2E cases for node-agent-configmap.
Fix the default BackupRepoConfig setting issue.
Delete PriorityClass in migration case clean stage.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-02 15:03:20 +08:00
lyndon-li
09b5183fce Merge pull request #9173 from clementnuss/feat/backup-pvc-annotations
feat: Permit specifying annotations for the BackupPVC
2025-08-29 16:46:30 +08:00
Clément Nussbaumer
c5b70b4a0d test: fix backuppvc annotations test case
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 10:10:41 +02:00
Clément Nussbaumer
248a840918 feat: Permit specifying annotations for the BackupPVC
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 10:10:41 +02:00
Xun Jiang/Bruce Jiang
04fb20676d Merge pull request #9215 from blackpiglet/9135_e2e
Add E2E test cases for repository maintenance job configuration.
2025-08-29 13:27:55 +08:00
Xun Jiang
996d2a025f Add E2E test cases for repository maintenance job configuration.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-08-28 20:06:15 +08:00
76 changed files with 2527 additions and 250 deletions

View File

@@ -8,16 +8,26 @@ on:
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
needs: get-go-version
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
go-version: ${{ needs.get-go-version.outputs.version }}
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cli-cache
@@ -44,6 +54,26 @@ jobs:
run: |
IMAGE=velero VERSION=pr-test BUILD_OUTPUT_TYPE=docker make container
docker save velero:pr-test-linux-amd64 -o ./velero.tar
# Check and build MinIO image once for all e2e tests
- name: Check Bitnami MinIO Dockerfile version
id: minio-version
run: |
DOCKERFILE_SHA=$(curl -s https://api.github.com/repos/bitnami/containers/commits?path=bitnami/minio/2025/debian-12/Dockerfile\&per_page=1 | jq -r '.[0].sha')
echo "dockerfile_sha=${DOCKERFILE_SHA}" >> $GITHUB_OUTPUT
- name: Cache MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ steps.minio-version.outputs.dockerfile_sha }}
- name: Build MinIO Image from Bitnami Dockerfile
if: steps.minio-cache.outputs.cache-hit != 'true'
run: |
echo "Building MinIO image from Bitnami Dockerfile..."
git clone --depth 1 https://github.com/bitnami/containers.git /tmp/bitnami-containers
cd /tmp/bitnami-containers/bitnami/minio/2025/debian-12
docker build -t bitnami/minio:local .
docker save bitnami/minio:local > ${{ github.workspace }}/minio-image.tar
# Create json of k8s versions to test
# from guide: https://stackoverflow.com/a/65094398/4590470
setup-test-matrix:
@@ -75,6 +105,7 @@ jobs:
needs:
- build
- setup-test-matrix
- get-go-version
runs-on: ubuntu-latest
strategy:
matrix: ${{fromJson(needs.setup-test-matrix.outputs.matrix)}}
@@ -82,13 +113,26 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
go-version: ${{ needs.get-go-version.outputs.version }}
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ needs.build.outputs.minio-dockerfile-sha }}
- name: Load MinIO Image
run: |
echo "Loading MinIO image..."
docker load < ./minio-image.tar
- name: Install MinIO
run:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
run: |
docker run -d --rm -p 9000:9000 -e "MINIO_ROOT_USER=minio" -e "MINIO_ROOT_PASSWORD=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:local
- uses: engineerd/setup-kind@v0.6.2
with:
skipClusterLogsExport: true

.github/workflows/get-go-version.yaml vendored Normal file
View File

@@ -0,0 +1,33 @@
on:
workflow_call:
inputs:
ref:
description: "The target branch's ref"
required: true
type: string
outputs:
version:
description: "The expected Go version"
value: ${{ jobs.extract.outputs.version }}
jobs:
extract:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.pick-version.outputs.version }}
steps:
- name: Check out the code
uses: actions/checkout@v5
- id: pick-version
run: |
if [ "${{ inputs.ref }}" == "main" ]; then
version=$(grep '^go ' go.mod | awk '{print $2}' | cut -d. -f1-2)
else
goDirectiveVersion=$(grep '^go ' go.mod | awk '{print $2}')
toolChainVersion=$(grep '^toolchain ' go.mod | awk '{print $2}')
version=$(printf "%s\n%s\n" "$goDirectiveVersion" "$toolChainVersion" | sort -V | tail -n1)
fi
echo "version=$version"
echo "version=$version" >> $GITHUB_OUTPUT

View File

@@ -31,6 +31,6 @@ jobs:
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v3
uses: github/codeql-action/upload-sarif@v4
with:
sarif_file: 'trivy-results.sarif'

View File

@@ -1,18 +1,26 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run CI
needs: get-go-version
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Make ci
run: make ci
- name: Upload test coverage

View File

@@ -7,16 +7,24 @@ on:
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run Linter Check
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Linter check
uses: golangci/golangci-lint-action@v8
with:

View File

@@ -9,17 +9,24 @@ on:
- '*'
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.ref }}
build:
name: Build
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3

View File

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.1.0
- uses: actions/stale@v10.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."

View File

@@ -42,7 +42,7 @@ The following is a list of the supported Kubernetes versions for each Velero ver
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, and 1.33.1 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |

View File

@@ -0,0 +1 @@
feat: Permit specifying annotations for the BackupPVC

View File

@@ -0,0 +1 @@
Get pod list once per namespace in pvc IBA

View File

@@ -0,0 +1 @@
Fix issue #9229, don't attach backupPVC to the source node

View File

@@ -0,0 +1 @@
Update AzureAD Microsoft Authentication Library to v1.5.0

View File

@@ -0,0 +1 @@
Protect VolumeSnapshot field from race condition during multi-thread backup

View File

@@ -0,0 +1 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment

View File

@@ -0,0 +1 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases

View File

@@ -0,0 +1 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.

View File

@@ -0,0 +1 @@
Add option for privileged fs-backup pod

View File

@@ -0,0 +1 @@
Fix issue #9267, add events to data mover prepare diagnostic

View File

@@ -0,0 +1 @@
VerifyJSONConfigs verify every elements in Data.

View File

@@ -0,0 +1 @@
Fix typos in documentation

View File

@@ -0,0 +1,257 @@
# Concurrent Backup Processing
This enhancement will enable Velero to process multiple backups at the same time. This is largely a usability enhancement rather than a performance enhancement: overall backup throughput may not improve significantly over the current implementation, because individual backup items are already processed in parallel. It is still a significant usability improvement, though; with the current design, a user who submits a small backup immediately after a large backup may have to wait far longer than expected.
## Background
With the current implementation, only one backup may be `InProgress` at a time. A second backup will not start processing until the first backup moves on to `WaitingForPluginOperations` or `Finalizing`. This is a usability concern, especially in clusters where multiple users are initiating backups. With this enhancement, we intend to allow multiple backups to be processed concurrently. This will allow backups to start processing immediately, even if a large backup was just submitted by another user. This enhancement will build on top of the prior parallel item processing feature by creating a dedicated ItemBlock worker pool for each running backup. The pool will be created at the beginning of the backup reconcile, and the input channel will be passed to the Kubernetes backupper just as it is in the current release.
The primary challenge is to make sure that the same workload in multiple backups is not backed up concurrently. If that were to happen, we would risk data corruption, especially around the processing of pod hooks and volume backup. For this first release we will take a conservative, high-level approach to overlap detection: two backups will not run concurrently if there is any overlap in included namespaces. For example, if a backup that includes `ns1` and `ns2` is running, then a second backup for `ns2` and `ns3` will not be started. If a backup which does not filter namespaces is running (either a whole-cluster backup or a non-namespace-limited backup with a label selector), then no other backups will be started, since a backup across all namespaces overlaps with any other backup. Calculating item-level overlap for queued backups is problematic since we don't know which items are included in a backup until backup processing has begun. A future release may add ItemBlock overlap detection, where, at the ItemBlock worker level, the same item will not be processed by two different workers at the same time. This would work together with the namespace-level conflict detection to detect conflicts at a more granular level for resources shared between backups. Eventually, with a more complete understanding of individual workloads (either via ItemBlocks or some higher-level model), the namespace-level overlap detection may be relaxed in future versions.
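To make the namespace-overlap rule concrete, below is a minimal sketch of an overlap check between two backups. The function name and the treatment of an empty (or `*`) `includedNamespaces` list as "all namespaces" are illustrative assumptions, not the actual implementation.
```go
// backupsOverlap reports whether two backups could include the same namespace.
// An empty list or a "*" entry is treated as "all namespaces", so such a
// backup overlaps with every other backup.
func backupsOverlap(a, b []string) bool {
	if len(a) == 0 || len(b) == 0 {
		return true
	}
	seen := map[string]bool{}
	for _, ns := range a {
		if ns == "*" {
			return true
		}
		seen[ns] = true
	}
	for _, ns := range b {
		if ns == "*" || seen[ns] {
			return true
		}
	}
	return false
}
```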
## Goals
- Process multiple backups concurrently
- Detect namespace overlap to avoid conflicts
- For queued backups (not yet runnable due to concurrency limits or overlap), indicate the queue position in status
## Non Goals
- Handling NFS PVs when more than one PV points to the same underlying NFS share
- Handling VGDP cancellation for failed backups on restart
- Mounting a PVC for scenarios in which /tmp is too small for the number of concurrent backups
- Providing a mechanism to identify high priority backups which get preferential treatment in terms of ItemBlock worker availability
- Item-level overlap detection (future feature)
- Providing the ability to disable namespace-level overlap detection once Item-level overlap detection is in place (although this may be supported in a future version).
## High-Level Design
### Backup CRD changes
Two new backup phases will be added: `Queued` and `ReadyToStart`. In the Backup workflow, new backups will be moved to the Queued phase when they are added to the backup queue. When a backup is removed from the queue because it is now able to run, it will be moved to the `ReadyToStart` phase, which will allow the backup controller to start processing it.
In addition, a new Status field, `QueuePosition`, will be added to track the backup's current position in the queue.
### New Controller: `backupQueueReconciler`
A new reconciler will be added, `backupQueueReconciler` which will use the current `backupReconciler` logic for reconciling `New` backups but instead of running the backup, it will move the Backup to the `Queued` phase and set `QueuePosition`.
In addition, this reconciler will periodically reconcile all queued backups (on some configurable time interval) and if there is a runnable backup, remove it from the queue, update `QueuePosition` for any queued backups behind it, and update its phase to `ReadyToStart`.
Queued backups will be reconciled in order based on `QueuePosition`, so the first runnable backup found will be processed. A backup is runnable if both of the following conditions are true:
1) The total number of backups either `InProgress` or `ReadyToStart` is less than the configured number of concurrent backups.
2) The backup has no overlap with any backups currently `InProgress` or `ReadyToStart` or with any `Queued` backups with a higher (i.e. closer to 1) queue position than this backup.
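A rough sketch of how these two conditions might be combined for a single queued backup follows; `isRunnable` and its parameter layout are assumptions for illustration, and it reuses the hypothetical `backupsOverlap` helper sketched earlier.
```go
// isRunnable applies the two conditions above to one queued backup, given the
// backups that are already InProgress/ReadyToStart and the Queued backups
// ahead of it in the queue.
func isRunnable(candidate *velerov1api.Backup, active, queuedAhead []*velerov1api.Backup, concurrentBackups int) bool {
	// Condition 1: respect the configured concurrency limit.
	if len(active) >= concurrentBackups {
		return false
	}
	// Condition 2: no namespace overlap with running backups or with queued
	// backups closer to the front of the queue.
	others := append(append([]*velerov1api.Backup{}, active...), queuedAhead...)
	for _, other := range others {
		if backupsOverlap(candidate.Spec.IncludedNamespaces, other.Spec.IncludedNamespaces) {
			return false
		}
	}
	return true
}
```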
### Updates to Backup controller
The current `backupReconciler` will change its reconciling rules. Instead of watching and reconciling New backups, it will reconcile `ReadyToStart` backups. In addition, it will be configured to run in parallel by setting `MaxConcurrentReconciles` based on the `concurrent-backups` server arg.
The startup (and shutdown) of the ItemBlock worker pool will be moved from reconciler startup to the backup reconcile, which will give each running backup its own dedicated worker pool. The per-backup worker pool will use the existing `--item-block-worker-count` installer/server arg. This means that the maximum number of ItemBlock workers for the entire Velero pod will be the ItemBlock worker count multiplied by concurrentBackups. For example, if concurrentBackups is 5 and itemBlockWorkerCount is 6, then there will be, at most, 30 worker threads active, 6 dedicated to each InProgress backup, and this maximum will only be reached when the maximum number of backups are InProgress. This also means that each InProgress backup will have a dedicated ItemBlock input channel with the same fixed buffer size.
## Detailed Design
### New Install/Server configuration args
A new install/server arg, `concurrent-backups` will be added. This will be an int-valued field specifying the number of backups which may be processed concurrently (with phase `InProgress`). If not specified, the default value of 1 will be used.
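Purely as an illustration of how this option could be wired (assuming `github.com/spf13/pflag`, which Velero's CLI already uses; the struct and function names here are hypothetical):
```go
// Hypothetical sketch of registering the new server argument with pflag.
type serverConfig struct {
	concurrentBackups int
}

func bindConcurrencyFlag(flags *pflag.FlagSet, cfg *serverConfig) {
	flags.IntVar(&cfg.concurrentBackups, "concurrent-backups", 1,
		"Maximum number of backups that may be InProgress at the same time")
}
```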
### Consideration of backup overlap and concurrent backup processing
The primary consideration for running additional backups concurrently is the configured `concurrent-backups` parameter. If the total number of `InProgress` and `ReadyToStart` backups is equal to `concurrent-backups` then any `Queued` backups will remain in the queue.
The second consideration is backup overlap. In order to prevent interaction between running backups (particularly around volume backup and pod hooks), we cannot allow two overlapping backups to run at the same time. For now, we will define overlap broadly -- requiring that two concurrent backups don't include any of the same namespaces. A backup for `ns1` can run concurrently with a backup for `ns2`, but a backup for `[ns1,ns2]` cannot run concurrently with a backup for `ns1`. One consequence of this approach is that a backup which includes all namespaces (even if further filtered by resource or label) cannot run concurrently with *any other backup*.
When determining which queued backup to run next, velero will look for the next queued backup which has no overlap with any InProgress backup or any Queued backup ahead of it. The reason we need to consider queued as well as running backups for overlap detection is as follows.
Consider the following scenario. These are the current, not-yet-completed backups (ordered from oldest to newest):
1. backup1, includedNamespaces: [ns1, ns2], phase: InProgress
2. backup2, includedNamespaces: [ns2, ns3, ns5], phase: Queued, QueuePosition: 1
3. backup3, includedNamespaces: [ns4, ns3], phase: Queued, QueuePosition: 2
4. backup4, includedNamespaces: [ns5, ns6], phase: Queued, QueuePosition: 3
5. backup5, includedNamespaces: [ns8, ns9], phase: Queued, QueuePosition: 4
Assuming `concurrent-backups` is 2, on the next reconcile, Velero will be able to start a second backup if there is one with no overlap. `backup2` cannot run, since `ns2` overlaps between it and the running `backup1`. If we only considered running overlap (and not queued overlap), then `backup3` could run now. It conflicts with the queued `backup2` on `ns3` but it does not conflict with the running backup. However, if it runs now, then when `backup1` completes, `backup2` still can't run (since it now overlaps with the running `backup3` on `ns3`), so `backup4` starts instead. Now when `backup3` completes, `backup2` still can't run (since it now conflicts with `backup4` on `ns5`). This means that even though it was the second backup created, it's the fourth to run -- providing worse time to completion than without parallel backups. If a queued backup has a large number of namespaces (a full-cluster backup for example), it would never run as long as new single-namespace backups keep being added to the queue.
To resolve this problem we consider both running backups as well as backups ahead in the queue when resolving overlap conflicts. In the above scenario, `backup2` can't run yet since it overlaps with the running backup on `ns2`. In addition, `backup3` and `backup4` also can't run yet since they overlap with queued `backup2`. Therefore, `backup5` will run now. Once `backup1` completes, `backup2` will be free to run.
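Putting the above together, the queue scan could look roughly like the following sketch (again using the hypothetical `isRunnable` helper; the slice is assumed to be sorted by `QueuePosition`). In the scenario above it would skip backup2, backup3, and backup4 and return backup5.
```go
// nextRunnable walks the queue in QueuePosition order and returns the first
// backup that may be dequeued, or nil if no queued backup is runnable yet.
func nextRunnable(queued, active []*velerov1api.Backup, concurrentBackups int) *velerov1api.Backup {
	for i, candidate := range queued {
		// Backups earlier in the slice are exactly the "queued ahead" backups
		// that must be considered for overlap.
		if isRunnable(candidate, active, queued[:i], concurrentBackups) {
			return candidate
		}
	}
	return nil
}
```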
### Backup CRD changes
New Backup phases:
```go
const (
	// BackupPhaseQueued means the backup has been added to the
	// queue by the BackupQueueReconciler.
	BackupPhaseQueued BackupPhase = "Queued"

	// BackupPhaseReadyToStart means the backup has been removed from the
	// queue by the BackupQueueReconciler and is ready to start.
	BackupPhaseReadyToStart BackupPhase = "ReadyToStart"
)
```
In addition, a new Status field, `queuePosition`, will be added to track the backup's current position in the queue.
```go
// QueuePosition is the position held by the backup in the queue.
// QueuePosition=1 means this backup is the next to be considered.
// Only relevant when Phase is "Queued"
// +optional
QueuePosition int `json:"queuePosition,omitempty"`
```
### New Controller: `backupQueueReconciler`
A new reconciler, `backupQueueReconciler`, will be added, which will reconcile backups under these conditions:
1) Watching Create/Update for backups in `New` (or empty) phase
2) Watching for Backup phase transitions from `InProgress` to something else, to reconcile all `Queued` backups
3) Watching for Backup phase transitions from `New` (or empty) to `Queued`, to reconcile all `Queued` backups
4) Periodic reconcile of `Queued` backups, to handle backups queued at server startup as well as to make sure we never have a situation where backups remain queued indefinitely because of a race condition or because they were otherwise missed in the reconcile on a prior backup's completion.
The reconciler will be set up as follows -- note that New backups are reconciled on Create/Update, while Queued backups are reconciled when an InProgress backup moves on to another state or when a new backup moves to the Queued state. We also reconcile Queued backups periodically to handle the case of a Velero pod restart with Queued backups, as well as to handle possible edge cases where a queued backup doesn't get moved out of the queue at the point of backup completion or an error occurs during a prior Queued backup reconcile.
```go
func (c *backupQueueReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// only consider Queued backups, order by QueuePosition
	gp := kube.NewGenericEventPredicate(func(object client.Object) bool {
		backup := object.(*velerov1api.Backup)
		return backup.Status.Phase == velerov1api.BackupPhaseQueued
	})
	s := kube.NewPeriodicalEnqueueSource(c.logger.WithField("controller", constant.ControllerBackupQueue), mgr.GetClient(), &velerov1api.BackupList{}, c.frequency, kube.PeriodicalEnqueueSourceOption{
		Predicates: []predicate.Predicate{gp},
		OrderFunc:  queuePositionOrderFunc,
	})
	return ctrl.NewControllerManagedBy(mgr).
		For(&velerov1api.Backup{}, builder.WithPredicates(predicate.Funcs{
			UpdateFunc: func(ue event.UpdateEvent) bool {
				backup := ue.ObjectNew.(*velerov1api.Backup)
				return backup.Status.Phase == "" || backup.Status.Phase == velerov1api.BackupPhaseNew
			},
			CreateFunc: func(ce event.CreateEvent) bool {
				backup := ce.Object.(*velerov1api.Backup)
				return backup.Status.Phase == "" || backup.Status.Phase == velerov1api.BackupPhaseNew
			},
			DeleteFunc: func(de event.DeleteEvent) bool {
				return false
			},
			GenericFunc: func(ge event.GenericEvent) bool {
				return false
			},
		})).
		Watches(
			&velerov1api.Backup{},
			handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
				backupList := velerov1api.BackupList{}
				if err := mgr.GetClient().List(ctx, &backupList); err != nil {
					c.logger.WithError(err).Error("error listing backups")
					return nil
				}
				requests := []reconcile.Request{}
				// filter backup list by Phase=Queued
				// sort backup list by queuePosition
				return requests
			}),
			builder.WithPredicates(predicate.Funcs{
				UpdateFunc: func(ue event.UpdateEvent) bool {
					oldBackup := ue.ObjectOld.(*velerov1api.Backup)
					newBackup := ue.ObjectNew.(*velerov1api.Backup)
					return (oldBackup.Status.Phase == velerov1api.BackupPhaseInProgress &&
						newBackup.Status.Phase != velerov1api.BackupPhaseInProgress) ||
						(oldBackup.Status.Phase != velerov1api.BackupPhaseQueued &&
							newBackup.Status.Phase == velerov1api.BackupPhaseQueued)
				},
				CreateFunc: func(ce event.CreateEvent) bool {
					return false
				},
				DeleteFunc: func(de event.DeleteEvent) bool {
					return false
				},
				GenericFunc: func(ge event.GenericEvent) bool {
					return false
				},
			}),
		).
		WatchesRawSource(s).
		Named(constant.ControllerBackupQueue).
		Complete(c)
}
```
New backups will be queued: Phase will be set to `Queued`, and `QueuePosition` will be set to an int value one higher than the highest current `QueuePosition` value among Queued backups.
Queued backups will be removed from the queue if runnable:
1) If the total number of backups either InProgress or ReadyToStart is greater than or equal to the concurrency limit, then exit without removing from the queue.
2) If the current backup overlaps with any InProgress, ReadyToStart, or Queued backup with `QueuePosition < currentBackup.QueuePosition` then exit without removing from the queue.
3) If we get here, the backup is runnable. To resolve a potential race condition where an InProgress backup completes between reconciling the backup with QueuePosition `n-1` and reconciling the current backup with QueuePosition `n`, we also check to see whether there are any runnable backups in the queue ahead of this one. The only time this will happen is if a backup completes immediately before reconcile starts which either frees up a concurrency slot or removes a namespace conflict. In this case, we don't want to run the current backup since the one ahead of this one in the queue (which was recently passed over before the InProgress backup completed) must run first. In this case, exit without removing from the queue.
4) If we get here, remove the backup from the queue by setting Phase to `ReadyToStart` and `QueuePosition` to zero. Decrement the `QueuePosition` of any other Queued backups with a `QueuePosition` higher than the current backup's queue position prior to dequeuing. At this point, the backup reconciler will start the backup.
The concurrency check in step 1 is essentially `if len(inProgressBackups)+len(pendingStartBackups) >= concurrentBackups`. In pseudocode, the reconcile flow is:
```
switch original.Status.Phase {
case "", velerov1api.BackupPhaseNew:
	// enqueue backup -- set phase=Queued, set queuePosition=maxCurrentQueuePosition+1
// We should only ever get these events when added in order by the periodical enqueue source,
// so as long as the current backup has no conflicts ahead of it or running, we should be good to
// dequeue.
case velerov1api.BackupPhaseQueued:
	// list backups, filter on Queued, ReadyToStart, and InProgress
	// if number of InProgress backups + number of ReadyToStart backups >= concurrency limit, exit
	// generate list of all namespaces included in InProgress, ReadyToStart, and Queued backups with
	// queuePosition < backup.Status.QueuePosition
	// if overlap found, exit
	// check backups ahead of this one in the queue for runnability. If any are runnable, exit
	// dequeue backup: set Phase to ReadyToStart, QueuePosition to 0, and decrement QueuePosition
	// for all Queued backups behind this one in the queue
}
```
The queue controller will run as a single reconciler thread, so we will not need to deal with concurrency issues when moving backups from New to Queued or from Queued to ReadyToStart, and all of the updates to QueuePosition will be from a single thread.
### Updates to Backup controller
The Reconcile logic will be updated to respond to ReadyToStart backups instead of New backups:
```
@@ -234,8 +234,8 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
// InProgress, we still need this check so we can return nil to indicate we've finished processing
// this key (even though it was a no-op).
switch original.Status.Phase {
- case "", velerov1api.BackupPhaseNew:
- // only process new backups
+ case velerov1api.BackupPhaseReadyToStart:
+ // only process ReadyToStart backups
default:
b.logger.WithFields(logrus.Fields{
"backup": kubeutil.NamespaceAndName(original),
```
In addition, it will be configured to run in parallel by setting `MaxConcurrentReconciles` based on the `concurrent-backups` server arg.
```
@@ -149,6 +149,9 @@ func NewBackupReconciler(
func (b *backupReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&velerov1api.Backup{}).
+ WithOptions(controller.Options{
+ MaxConcurrentReconciles: concurrentBackups,
+ }).
Named(constant.ControllerBackup).
Complete(b)
}
```
The controller-runtime core reconciler logic already prevents the same resource from being reconciled by two different reconciler threads, so we don't need to worry about concurrency issues at the controller level.
The workerPool reference will be moved from the backupReconciler to the backupRequest, since this will now be backup-specific, and the initialization code for the worker pool will be moved from the reconciler init into the backup reconcile. This worker pool will be shut down upon exiting the Reconcile method.
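Conceptually, the per-backup pool lifecycle inside `Reconcile` looks like the sketch below (using only the standard `context` and `sync` packages). The `itemBlock` type and `processItemBlock` function are placeholders, not Velero's real item-block worker implementation.
```go
// itemBlock stands in for the real grouped-items type used by the backupper.
type itemBlock struct{}

func processItemBlock(ctx context.Context, ib *itemBlock) {
	// placeholder: back up every item in the block
	_ = ctx
	_ = ib
}

// runBackupWorkers starts a dedicated pool for one backup and blocks until the
// backup closes its input channel, at which point the pool shuts down.
func runBackupWorkers(ctx context.Context, workerCount int, blocks <-chan *itemBlock) {
	var wg sync.WaitGroup
	for i := 0; i < workerCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ib := range blocks {
				processItemBlock(ctx, ib)
			}
		}()
	}
	wg.Wait()
}
```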
### Resilience to restart of velero pod
The new backup phases (`Queued` and `ReadyToStart`) will be resilient to velero pod restarts. If the velero pod crashes or is restarted, only backups in the `InProgress` phase will be failed, so there is no change to current behavior. Queued backups will retain their queue position on restart, and ReadyToStart backups will move to InProgress when reconciled.
### Observability
#### Logging
When a backup is dequeued, an info log message will also include the wait time, calculated as `now - creationTimestamp`. When a backup is passed over due to overlap, an info log message will indicate which namespaces were in conflict.
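For example, the dequeue log line could be emitted roughly as follows (a sketch assuming logrus, which Velero already uses; the message and field names are illustrative):
```go
waitTime := time.Since(backup.CreationTimestamp.Time)
c.logger.WithFields(logrus.Fields{
	"backup":   kubeutil.NamespaceAndName(backup),
	"waitTime": waitTime,
}).Info("Backup dequeued and marked ReadyToStart")
```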
#### Velero CLI
The `velero backup describe` output will include the current queue position for queued backups.

go.mod
View File

@@ -1,6 +1,6 @@
module github.com/vmware-tanzu/velero
go 1.24
go 1.24.0
require (
cloud.google.com/go/storage v1.55.0
@@ -17,7 +17,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/s3 v1.48.0
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7
github.com/bombsimon/logrusr/v3 v3.0.0
github.com/evanphx/json-patch/v5 v5.9.0
github.com/evanphx/json-patch/v5 v5.9.11
github.com/fatih/color v1.18.0
github.com/gobwas/glob v0.2.3
github.com/google/go-cmp v0.7.0
@@ -27,8 +27,8 @@ require (
github.com/joho/godotenv v1.3.0
github.com/kopia/kopia v0.16.0
github.com/kubernetes-csi/external-snapshotter/client/v8 v8.2.0
github.com/onsi/ginkgo/v2 v2.19.0
github.com/onsi/gomega v1.33.1
github.com/onsi/ginkgo/v2 v2.22.0
github.com/onsi/gomega v1.36.1
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.22.0
@@ -49,17 +49,17 @@ require (
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.31.3
k8s.io/apiextensions-apiserver v0.31.3
k8s.io/apimachinery v0.31.3
k8s.io/cli-runtime v0.31.3
k8s.io/client-go v0.31.3
k8s.io/api v0.33.3
k8s.io/apiextensions-apiserver v0.33.3
k8s.io/apimachinery v0.33.3
k8s.io/cli-runtime v0.33.3
k8s.io/client-go v0.33.3
k8s.io/klog/v2 v2.130.1
k8s.io/kube-aggregator v0.31.3
k8s.io/metrics v0.31.3
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
sigs.k8s.io/controller-runtime v0.19.3
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
k8s.io/kube-aggregator v0.33.3
k8s.io/metrics v0.33.3
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738
sigs.k8s.io/controller-runtime v0.21.0
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3
sigs.k8s.io/yaml v1.4.0
)
@@ -72,8 +72,8 @@ require (
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 // indirect
@@ -91,6 +91,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.6 // indirect
github.com/aws/smithy-go v1.19.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect
@@ -101,32 +102,31 @@ require (
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.14.2 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
@@ -144,7 +144,7 @@ require (
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.94 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.4.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
@@ -181,7 +181,7 @@ require (
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
@@ -193,9 +193,9 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777

go.sum

@@ -84,8 +84,8 @@ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
@@ -95,8 +95,8 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5 h1:IEjq88XO4PuBDcvmjQJcQGg+w+UaafSy8G5Kcb5tBhI=
@@ -170,6 +170,8 @@ github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6r
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bombsimon/logrusr/v3 v3.0.0 h1:tcAoLfuAhKP9npBxWzSdpsvKPQt1XV02nSf2lZA82TQ=
github.com/bombsimon/logrusr/v3 v3.0.0/go.mod h1:PksPPgSFEL2I52pla2glgCyyd2OqOHAnFF5E+g8Ixco=
github.com/bufbuild/protocompile v0.4.0 h1:LbFKd2XowZvQ/kajzguUp2DC9UEIQhIq77fZZlaQsNA=
@@ -239,8 +241,8 @@ github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2T
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
@@ -282,8 +284,9 @@ github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
@@ -291,8 +294,8 @@ github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
@@ -318,7 +321,6 @@ github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -350,8 +352,10 @@ github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -389,8 +393,8 @@ github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af h1:kmjWCqn2qkEml422C2Rrd27c3VGxi6a/6HNq8QmHRKM=
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
@@ -413,8 +417,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -455,8 +459,6 @@ github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
@@ -550,8 +552,8 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/spdystream v0.4.0 h1:Vy79D6mHeJJjiPdFEL2yku1kl0chZpJfZcPpb16BRl8=
github.com/moby/spdystream v0.4.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -584,13 +586,13 @@ github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA=
github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To=
github.com/onsi/ginkgo/v2 v2.22.0 h1:Yed107/8DjTr0lKCNt7Dn8yQ6ybuDRQoMGrNFKzMfHg=
github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.33.1 h1:dsYjIxxSR755MDmKVsaFQTE22ChNBcuuTWgkUDSubOk=
github.com/onsi/gomega v1.33.1/go.mod h1:U4R44UsT+9eLIaYRB2a5qajjtQYn0hauxvRm16AVYg0=
github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw=
github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
@@ -804,8 +806,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1206,7 +1208,6 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -1217,47 +1218,50 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.22.2/go.mod h1:y3ydYpLJAaDI+BbSe2xmGcqxiWHmWjkEeIbiwHvnPR8=
k8s.io/api v0.31.3 h1:umzm5o8lFbdN/hIXbrK9oRpOproJO62CV1zqxXrLgk8=
k8s.io/api v0.31.3/go.mod h1:UJrkIp9pnMOI9K2nlL6vwpxRzzEX5sWgn8kGQe92kCE=
k8s.io/apiextensions-apiserver v0.31.3 h1:+GFGj2qFiU7rGCsA5o+p/rul1OQIq6oYpQw4+u+nciE=
k8s.io/apiextensions-apiserver v0.31.3/go.mod h1:2DSpFhUZZJmn/cr/RweH1cEVVbzFw9YBu4T+U3mf1e4=
k8s.io/api v0.33.3 h1:SRd5t//hhkI1buzxb288fy2xvjubstenEKL9K51KBI8=
k8s.io/api v0.33.3/go.mod h1:01Y/iLUjNBM3TAvypct7DIj0M0NIZc+PzAHCIo0CYGE=
k8s.io/apiextensions-apiserver v0.33.3 h1:qmOcAHN6DjfD0v9kxL5udB27SRP6SG/MTopmge3MwEs=
k8s.io/apiextensions-apiserver v0.33.3/go.mod h1:oROuctgo27mUsyp9+Obahos6CWcMISSAPzQ77CAQGz8=
k8s.io/apimachinery v0.22.2/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
k8s.io/apimachinery v0.31.3 h1:6l0WhcYgasZ/wk9ktLq5vLaoXJJr5ts6lkaQzgeYPq4=
k8s.io/apimachinery v0.31.3/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/apimachinery v0.33.3 h1:4ZSrmNa0c/ZpZJhAgRdcsFcZOw1PQU1bALVQ0B3I5LA=
k8s.io/apimachinery v0.33.3/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/cli-runtime v0.22.2/go.mod h1:tkm2YeORFpbgQHEK/igqttvPTRIHFRz5kATlw53zlMI=
k8s.io/cli-runtime v0.31.3 h1:fEQD9Xokir78y7pVK/fCJN090/iYNrLHpFbGU4ul9TI=
k8s.io/cli-runtime v0.31.3/go.mod h1:Q2jkyTpl+f6AtodQvgDI8io3jrfr+Z0LyQBPJJ2Btq8=
k8s.io/cli-runtime v0.33.3 h1:Dgy4vPjNIu8LMJBSvs8W0LcdV0PX/8aGG1DA1W8lklA=
k8s.io/cli-runtime v0.33.3/go.mod h1:yklhLklD4vLS8HNGgC9wGiuHWze4g7x6XQZ+8edsKEo=
k8s.io/client-go v0.22.2/go.mod h1:sAlhrkVDf50ZHx6z4K0S40wISNTarf1r800F+RlCF6U=
k8s.io/client-go v0.31.3 h1:CAlZuM+PH2cm+86LOBemaJI/lQ5linJ6UFxKX/SoG+4=
k8s.io/client-go v0.31.3/go.mod h1:2CgjPUTpv3fE5dNygAr2NcM8nhHzXvxB8KL5gYc3kJs=
k8s.io/client-go v0.33.3 h1:M5AfDnKfYmVJif92ngN532gFqakcGi6RvaOF16efrpA=
k8s.io/client-go v0.33.3/go.mod h1:luqKBQggEf3shbxHY4uVENAxrDISLOarxpTKMiUuujg=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-aggregator v0.31.3 h1:DqHPdTglJHgOfB884AaroyxrML/aL82ASYOh65m7MSk=
k8s.io/kube-aggregator v0.31.3/go.mod h1:Kx59Xjnf0SnY47qf9Or++4y3XCHQ3kR0xk1Di6KFiFU=
k8s.io/kube-aggregator v0.33.3 h1:Pa6hQpKJMX0p0D2wwcxXJgu02++gYcGWXoW1z1ZJDfo=
k8s.io/kube-aggregator v0.33.3/go.mod h1:hwvkUoQ8q6gv0+SgNnlmQ3eUue1zHhJKTHsX7BwxwSE=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
k8s.io/metrics v0.31.3 h1:DkT9I3gFlb2/z+/4BMY7WrQ/PnbukuV4Yli82v/KBCM=
k8s.io/metrics v0.31.3/go.mod h1:2w9gpd8z+13oJmaPR6p3kDyrDqnxSyoKpnOw2qLIdhI=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/metrics v0.33.3 h1:9CcqBz15JZfISqwca33gdHS8I6XfsK1vA8WUdEnG70g=
k8s.io/metrics v0.33.3/go.mod h1:Aw+cdg4AYHw0HvUY+lCyq40FOO84awrqvJRTw0cmXDs=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/controller-runtime v0.19.3 h1:XO2GvC9OPftRst6xWCpTgBZO04S2cbp0Qqkj8bX1sPw=
sigs.k8s.io/controller-runtime v0.19.3/go.mod h1:j4j87DqtsThvwTv5/Tc5NFRyyF/RF0ip4+62tbTSIUM=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/controller-runtime v0.21.0 h1:CYfjpEuicjUecRk+KAeyYh+ouUBn4llGyDYytIGcJS8=
sigs.k8s.io/controller-runtime v0.21.0/go.mod h1:OSg14+F65eWqIu4DceX7k/+QRAbTTvxeQSNSOQpukWM=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/kustomize/api v0.8.11/go.mod h1:a77Ls36JdfCWojpUqR6m60pdGY1AYFix4AH83nJtY1g=
sigs.k8s.io/kustomize/kyaml v0.11.0/go.mod h1:GNMwjim4Ypgp/MueD3zXHLRJEjz7RvtPae0AwlvEMFM=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=


@@ -366,7 +366,7 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
discoveryHelper: kb.discoveryHelper,
podVolumeBackupper: podVolumeBackupper,
podVolumeSnapshotTracker: podvolume.NewTracker(),
volumeSnapshotterGetter: volumeSnapshotterGetter,
volumeSnapshotterCache: NewVolumeSnapshotterCache(volumeSnapshotterGetter),
itemHookHandler: &hook.DefaultItemHookHandler{
PodCommandExecutor: kb.podCommandExecutor,
},


@@ -3269,7 +3269,7 @@ func TestBackupWithSnapshots(t *testing.T) {
err := h.backupper.Backup(h.log, tc.req, backupFile, nil, nil, tc.snapshotterGetter)
require.NoError(t, err)
assert.Equal(t, tc.want, tc.req.VolumeSnapshots)
assert.Equal(t, tc.want, tc.req.VolumeSnapshots.Get())
})
}
}
@@ -4213,7 +4213,7 @@ func TestBackupWithPodVolume(t *testing.T) {
assert.Equal(t, tc.want, req.PodVolumeBackups)
// this assumes that we don't have any test cases where some PVs should be snapshotted using a VolumeSnapshotter
assert.Nil(t, req.VolumeSnapshots)
assert.Nil(t, req.VolumeSnapshots.Get())
})
}
}


@@ -70,13 +70,11 @@ type itemBackupper struct {
discoveryHelper discovery.Helper
podVolumeBackupper podvolume.Backupper
podVolumeSnapshotTracker *podvolume.Tracker
volumeSnapshotterGetter VolumeSnapshotterGetter
kubernetesBackupper *kubernetesBackupper
itemHookHandler hook.ItemHookHandler
snapshotLocationVolumeSnapshotters map[string]vsv1.VolumeSnapshotter
hookTracker *hook.HookTracker
volumeHelperImpl volumehelper.VolumeHelper
volumeSnapshotterCache *VolumeSnapshotterCache
itemHookHandler hook.ItemHookHandler
hookTracker *hook.HookTracker
volumeHelperImpl volumehelper.VolumeHelper
}
type FileForArchive struct {
@@ -502,30 +500,6 @@ func (ib *itemBackupper) executeActions(
return obj, itemFiles, nil
}
// volumeSnapshotter instantiates and initializes a VolumeSnapshotter given a VolumeSnapshotLocation,
// or returns an existing one if one's already been initialized for the location.
func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (vsv1.VolumeSnapshotter, error) {
if bs, ok := ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name]; ok {
return bs, nil
}
bs, err := ib.volumeSnapshotterGetter.GetVolumeSnapshotter(snapshotLocation.Spec.Provider)
if err != nil {
return nil, err
}
if err := bs.Init(snapshotLocation.Spec.Config); err != nil {
return nil, err
}
if ib.snapshotLocationVolumeSnapshotters == nil {
ib.snapshotLocationVolumeSnapshotters = make(map[string]vsv1.VolumeSnapshotter)
}
ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name] = bs
return bs, nil
}
// zoneLabelDeprecated is the label that stores availability-zone info
// on PVs this is deprecated on Kubernetes >= 1.17.0
// zoneLabel is the label that stores availability-zone info
@@ -641,7 +615,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
for _, snapshotLocation := range ib.backupRequest.SnapshotLocations {
log := log.WithField("volumeSnapshotLocation", snapshotLocation.Name)
bs, err := ib.volumeSnapshotter(snapshotLocation)
bs, err := ib.volumeSnapshotterCache.SetNX(snapshotLocation)
if err != nil {
log.WithError(err).Error("Error getting volume snapshotter for volume snapshot location")
continue
@@ -699,7 +673,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
snapshot.Status.Phase = volume.SnapshotPhaseCompleted
snapshot.Status.ProviderSnapshotID = snapshotID
}
ib.backupRequest.VolumeSnapshots = append(ib.backupRequest.VolumeSnapshots, snapshot)
ib.backupRequest.VolumeSnapshots.Add(snapshot)
// nil errors are automatically removed
return kubeerrs.NewAggregate(errs)


@@ -17,6 +17,8 @@ limitations under the License.
package backup
import (
"sync"
"github.com/vmware-tanzu/velero/internal/hook"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/internal/volume"
@@ -32,11 +34,27 @@ type itemKey struct {
name string
}
type SynchronizedVSList struct {
sync.Mutex
VolumeSnapshotList []*volume.Snapshot
}
func (s *SynchronizedVSList) Add(vs *volume.Snapshot) {
s.Lock()
defer s.Unlock()
s.VolumeSnapshotList = append(s.VolumeSnapshotList, vs)
}
func (s *SynchronizedVSList) Get() []*volume.Snapshot {
s.Lock()
defer s.Unlock()
return s.VolumeSnapshotList
}
// Request is a request for a backup, with all references to other objects
// materialized (e.g. backup/snapshot locations, includes/excludes, etc.)
type Request struct {
*velerov1api.Backup
StorageLocation *velerov1api.BackupStorageLocation
SnapshotLocations []*velerov1api.VolumeSnapshotLocation
NamespaceIncludesExcludes *collections.IncludesExcludes
@@ -44,7 +62,7 @@ type Request struct {
ResourceHooks []hook.ResourceHook
ResolvedActions []framework.BackupItemResolvedActionV2
ResolvedItemBlockActions []framework.ItemBlockResolvedAction
VolumeSnapshots []*volume.Snapshot
VolumeSnapshots SynchronizedVSList
PodVolumeBackups []*velerov1api.PodVolumeBackup
BackedUpItems *backedUpItemsMap
itemOperationsList *[]*itemoperation.BackupOperation
@@ -80,7 +98,7 @@ func (r *Request) FillVolumesInformation() {
}
r.VolumesInformation.SkippedPVs = skippedPVMap
r.VolumesInformation.NativeSnapshots = r.VolumeSnapshots
r.VolumesInformation.NativeSnapshots = r.VolumeSnapshots.Get()
r.VolumesInformation.PodVolumeBackups = r.PodVolumeBackups
r.VolumesInformation.BackupOperations = *r.GetItemOperationsList()
r.VolumesInformation.BackupName = r.Backup.Name
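
The SynchronizedVSList type introduced above guards the native-snapshot slice with a mutex so that item backuppers running on concurrent goroutines can append to Request.VolumeSnapshots safely, while readers call Get() once the writers are done. The following is a minimal, self-contained sketch of the same Add/Get locking pattern over plain strings; it illustrates the concurrency discipline only and does not use the project's types.

package main

import (
	"fmt"
	"sync"
)

// syncList mirrors the Add/Get shape of SynchronizedVSList, but over
// strings so the sketch compiles on its own.
type syncList struct {
	sync.Mutex
	items []string
}

func (s *syncList) Add(v string) {
	s.Lock()
	defer s.Unlock()
	s.items = append(s.items, v)
}

func (s *syncList) Get() []string {
	s.Lock()
	defer s.Unlock()
	return s.items
}

func main() {
	var l syncList
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// Concurrent appends are serialized by the embedded mutex.
			l.Add(fmt.Sprintf("snapshot-%d", n))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(l.Get())) // always prints 4
}

As in the original, Get returns the underlying slice rather than a copy, so callers are expected to read it only after concurrent writers have finished, which matches how VolumeSnapshots.Get() is used when the backup reconciler counts snapshots and persists the backup.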


@@ -0,0 +1,42 @@
package backup
import (
"sync"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
)
type VolumeSnapshotterCache struct {
cache map[string]vsv1.VolumeSnapshotter
mutex sync.Mutex
getter VolumeSnapshotterGetter
}
func NewVolumeSnapshotterCache(getter VolumeSnapshotterGetter) *VolumeSnapshotterCache {
return &VolumeSnapshotterCache{
cache: make(map[string]vsv1.VolumeSnapshotter),
getter: getter,
}
}
func (c *VolumeSnapshotterCache) SetNX(location *velerov1api.VolumeSnapshotLocation) (vsv1.VolumeSnapshotter, error) {
c.mutex.Lock()
defer c.mutex.Unlock()
if snapshotter, exists := c.cache[location.Name]; exists {
return snapshotter, nil
}
snapshotter, err := c.getter.GetVolumeSnapshotter(location.Spec.Provider)
if err != nil {
return nil, err
}
if err := snapshotter.Init(location.Spec.Config); err != nil {
return nil, err
}
c.cache[location.Name] = snapshotter
return snapshotter, nil
}


@@ -0,0 +1,52 @@
/*
Copyright 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package builder
import (
corev1api "k8s.io/api/core/v1"
schedulingv1api "k8s.io/api/scheduling/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type PriorityClassBuilder struct {
object *schedulingv1api.PriorityClass
}
func ForPriorityClass(name string) *PriorityClassBuilder {
return &PriorityClassBuilder{
object: &schedulingv1api.PriorityClass{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
},
}
}
func (p *PriorityClassBuilder) Value(value int) *PriorityClassBuilder {
p.object.Value = int32(value)
return p
}
func (p *PriorityClassBuilder) PreemptionPolicy(policy string) *PriorityClassBuilder {
preemptionPolicy := corev1api.PreemptionPolicy(policy)
p.object.PreemptionPolicy = &preemptionPolicy
return p
}
func (p *PriorityClassBuilder) Result() *schedulingv1api.PriorityClass {
return p.object
}
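
A short usage sketch for the new PriorityClass test builder. It assumes the pkg/builder import path used by Velero's other builders; the name, value, and preemption policy below are arbitrary example inputs, not values taken from this change.

package main

import (
	"fmt"

	"github.com/vmware-tanzu/velero/pkg/builder"
)

func main() {
	// Chain the builder methods and materialize the object with Result().
	pc := builder.ForPriorityClass("example-priority").
		Value(1000000).
		PreemptionPolicy("PreemptLowerPriority").
		Result()

	fmt.Printf("%s: value=%d, policy=%s\n", pc.Name, pc.Value, *pc.PreemptionPolicy)
}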


@@ -545,24 +545,22 @@ func (o *Options) Validate(c *cobra.Command, args []string, f client.Factory) er
return fmt.Errorf("fail to create go-client %w", err)
}
// If either Linux or Windows node-agent is installed, and the node-agent-configmap
// is specified, need to validate the ConfigMap.
if (o.UseNodeAgent || o.UseNodeAgentWindows) && len(o.NodeAgentConfigMap) > 0 {
if len(o.NodeAgentConfigMap) > 0 {
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.NodeAgentConfigMap, &velerotypes.NodeAgentConfigs{}); err != nil {
return fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid", o.NodeAgentConfigMap)
return fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid: %w", o.NodeAgentConfigMap, err)
}
}
if len(o.RepoMaintenanceJobConfigMap) > 0 {
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.RepoMaintenanceJobConfigMap, &velerotypes.JobConfigs{}); err != nil {
return fmt.Errorf("--repo-maintenance-job-configmap specified ConfigMap %s is invalid", o.RepoMaintenanceJobConfigMap)
return fmt.Errorf("--repo-maintenance-job-configmap specified ConfigMap %s is invalid: %w", o.RepoMaintenanceJobConfigMap, err)
}
}
if len(o.BackupRepoConfigMap) > 0 {
config := make(map[string]any)
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.BackupRepoConfigMap, &config); err != nil {
return fmt.Errorf("--backup-repository-configmap specified ConfigMap %s is invalid", o.BackupRepoConfigMap)
return fmt.Errorf("--backup-repository-configmap specified ConfigMap %s is invalid: %w", o.BackupRepoConfigMap, err)
}
}
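
The three validation errors above now wrap the underlying verification error with %w instead of discarding it, so the root cause shows up in the CLI output and stays reachable via errors.Is/errors.As. A tiny standalone illustration of the difference (the ConfigMap name and the sentinel error are made up for the example):

package main

import (
	"errors"
	"fmt"
)

var errBadJSON = errors.New("invalid JSON in data")

func main() {
	wrapped := fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid: %w", "node-agent-config", errBadJSON)
	dropped := fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid", "node-agent-config")

	fmt.Println(wrapped)                        // ... is invalid: invalid JSON in data
	fmt.Println(errors.Is(wrapped, errBadJSON)) // true: the cause is still reachable
	fmt.Println(errors.Is(dropped, errBadJSON)) // false: the cause was lost
}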


@@ -308,6 +308,8 @@ func (s *nodeAgentServer) run() {
s.logger.Infof("Using customized backupPVC config %v", backupPVCConfig)
}
privilegedFsBackup := s.dataPathConfigs != nil && s.dataPathConfigs.PrivilegedFsBackup
podResources := corev1api.ResourceRequirements{}
if s.dataPathConfigs != nil && s.dataPathConfigs.PodResources != nil {
if res, err := kube.ParseResourceRequirements(s.dataPathConfigs.PodResources.CPURequest, s.dataPathConfigs.PodResources.MemoryRequest, s.dataPathConfigs.PodResources.CPULimit, s.dataPathConfigs.PodResources.MemoryLimit); err != nil {
@@ -327,12 +329,12 @@ func (s *nodeAgentServer) run() {
}
}
pvbReconciler := controller.NewPodVolumeBackupReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.metrics, s.logger, dataMovePriorityClass)
pvbReconciler := controller.NewPodVolumeBackupReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.metrics, s.logger, dataMovePriorityClass, privilegedFsBackup)
if err := pvbReconciler.SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", constant.ControllerPodVolumeBackup)
}
pvrReconciler := controller.NewPodVolumeRestoreReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.logger, dataMovePriorityClass)
pvrReconciler := controller.NewPodVolumeRestoreReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.logger, dataMovePriorityClass, privilegedFsBackup)
if err := pvrReconciler.SetupWithManager(s.mgr); err != nil {
s.logger.WithError(err).Fatal("Unable to create the pod volume restore controller")
}


@@ -734,8 +734,8 @@ func (b *backupReconciler) runBackup(backup *pkgbackup.Request) error {
// native snapshots phase will either be failed or completed right away
// https://github.com/vmware-tanzu/velero/blob/de3ea52f0cc478e99efa7b9524c7f353514261a4/pkg/backup/item_backupper.go#L632-L639
backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots)
for _, snap := range backup.VolumeSnapshots {
backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots.Get())
for _, snap := range backup.VolumeSnapshots.Get() {
if snap.Status.Phase == volume.SnapshotPhaseCompleted {
backup.Status.VolumeSnapshotsCompleted++
}
@@ -882,7 +882,7 @@ func persistBackup(backup *pkgbackup.Request,
}
// Velero-native volume snapshots (as opposed to CSI ones)
nativeVolumeSnapshots, errs := encode.ToJSONGzip(backup.VolumeSnapshots, "native volumesnapshots list")
nativeVolumeSnapshots, errs := encode.ToJSONGzip(backup.VolumeSnapshots.Get(), "native volumesnapshots list")
if errs != nil {
persistErrs = append(persistErrs, errs...)
}


@@ -1047,7 +1047,7 @@ func TestRecallMaintenance(t *testing.T) {
{
name: "wait completion error",
runtimeScheme: schemeFail,
expectedErr: "error waiting incomplete repo maintenance job for repo repo: error listing maintenance job for repo repo: no kind is registered for the type v1.JobList in scheme \"pkg/runtime/scheme.go:100\"",
expectedErr: "error waiting incomplete repo maintenance job for repo repo: error listing maintenance job for repo repo: no kind is registered for the type v1.JobList in scheme",
},
{
name: "no consolidate result",
@@ -1105,7 +1105,7 @@ func TestRecallMaintenance(t *testing.T) {
err := r.recallMaintenance(t.Context(), backupRepo, velerotest.NewLogger())
if test.expectedErr != "" {
assert.EqualError(t, err, test.expectedErr)
assert.ErrorContains(t, err, test.expectedErr)
} else {
assert.NoError(t, err)


@@ -916,6 +916,13 @@ func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload
return nil, errors.Wrapf(err, "failed to get PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)
}
pv := &corev1api.PersistentVolume{}
if err := r.client.Get(context.Background(), types.NamespacedName{
Name: pvc.Spec.VolumeName,
}, pv); err != nil {
return nil, errors.Wrapf(err, "failed to get source PV %s", pvc.Spec.VolumeName)
}
nodeOS := kube.GetPVCAttachingNodeOS(pvc, r.kubeClient.CoreV1(), r.kubeClient.StorageV1(), log)
if err := kube.HasNodeWithOS(context.Background(), nodeOS, r.kubeClient.CoreV1()); err != nil {
@@ -963,6 +970,8 @@ func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload
return &exposer.CSISnapshotExposeParam{
SnapshotName: du.Spec.CSISnapshot.VolumeSnapshot,
SourceNamespace: du.Spec.SourceNamespace,
SourcePVCName: pvc.Name,
SourcePVName: pv.Name,
StorageClass: du.Spec.CSISnapshot.StorageClass,
HostingPodLabels: hostingPodLabels,
HostingPodAnnotations: hostingPodAnnotation,


@@ -60,7 +60,7 @@ const (
// NewPodVolumeBackupReconciler creates the PodVolumeBackupReconciler instance
func NewPodVolumeBackupReconciler(client client.Client, mgr manager.Manager, kubeClient kubernetes.Interface, dataPathMgr *datapath.Manager,
counter *exposer.VgdpCounter, nodeName string, preparingTimeout time.Duration, resourceTimeout time.Duration, podResources corev1api.ResourceRequirements,
metrics *metrics.ServerMetrics, logger logrus.FieldLogger, dataMovePriorityClass string) *PodVolumeBackupReconciler {
metrics *metrics.ServerMetrics, logger logrus.FieldLogger, dataMovePriorityClass string, privileged bool) *PodVolumeBackupReconciler {
return &PodVolumeBackupReconciler{
client: client,
mgr: mgr,
@@ -77,6 +77,7 @@ func NewPodVolumeBackupReconciler(client client.Client, mgr manager.Manager, kub
exposer: exposer.NewPodVolumeExposer(kubeClient, logger),
cancelledPVB: make(map[string]time.Time),
dataMovePriorityClass: dataMovePriorityClass,
privileged: privileged,
}
}
@@ -97,6 +98,7 @@ type PodVolumeBackupReconciler struct {
resourceTimeout time.Duration
cancelledPVB map[string]time.Time
dataMovePriorityClass string
privileged bool
}
// +kubebuilder:rbac:groups=velero.io,resources=podvolumebackups,verbs=get;list;watch;create;update;patch;delete
@@ -837,6 +839,7 @@ func (r *PodVolumeBackupReconciler) setupExposeParam(pvb *velerov1api.PodVolumeB
Resources: r.podResources,
// Priority class name for the data mover pod, retrieved from node-agent-configmap
PriorityClassName: r.dataMovePriorityClass,
Privileged: r.privileged,
}
}


@@ -151,7 +151,8 @@ func initPVBReconcilerWithError(needError ...error) (*PodVolumeBackupReconciler,
corev1api.ResourceRequirements{},
metrics.NewServerMetrics(),
velerotest.NewLogger(),
"", // dataMovePriorityClass
"", // dataMovePriorityClass
false, // privileged
), nil
}


@@ -56,7 +56,7 @@ import (
func NewPodVolumeRestoreReconciler(client client.Client, mgr manager.Manager, kubeClient kubernetes.Interface, dataPathMgr *datapath.Manager,
counter *exposer.VgdpCounter, nodeName string, preparingTimeout time.Duration, resourceTimeout time.Duration, podResources corev1api.ResourceRequirements,
logger logrus.FieldLogger, dataMovePriorityClass string) *PodVolumeRestoreReconciler {
logger logrus.FieldLogger, dataMovePriorityClass string, privileged bool) *PodVolumeRestoreReconciler {
return &PodVolumeRestoreReconciler{
client: client,
mgr: mgr,
@@ -72,6 +72,7 @@ func NewPodVolumeRestoreReconciler(client client.Client, mgr manager.Manager, ku
exposer: exposer.NewPodVolumeExposer(kubeClient, logger),
cancelledPVR: make(map[string]time.Time),
dataMovePriorityClass: dataMovePriorityClass,
privileged: privileged,
}
}
@@ -90,6 +91,7 @@ type PodVolumeRestoreReconciler struct {
resourceTimeout time.Duration
cancelledPVR map[string]time.Time
dataMovePriorityClass string
privileged bool
}
// +kubebuilder:rbac:groups=velero.io,resources=podvolumerestores,verbs=get;list;watch;create;update;patch;delete
@@ -896,6 +898,7 @@ func (r *PodVolumeRestoreReconciler) setupExposeParam(pvr *velerov1api.PodVolume
Resources: r.podResources,
// Priority class name for the data mover pod, retrieved from node-agent-configmap
PriorityClassName: r.dataMovePriorityClass,
Privileged: r.privileged,
}
}


@@ -617,7 +617,7 @@ func initPodVolumeRestoreReconcilerWithError(objects []runtime.Object, cliObj []
dataPathMgr := datapath.NewManager(1)
return NewPodVolumeRestoreReconciler(fakeClient, nil, fakeKubeClient, dataPathMgr, nil, "test-node", time.Minute*5, time.Minute, corev1api.ResourceRequirements{}, velerotest.NewLogger(), ""), nil
return NewPodVolumeRestoreReconciler(fakeClient, nil, fakeKubeClient, dataPathMgr, nil, "test-node", time.Minute*5, time.Minute, corev1api.ResourceRequirements{}, velerotest.NewLogger(), "", false), nil
}
func TestPodVolumeRestoreReconcile(t *testing.T) {


@@ -229,7 +229,7 @@ func (c *scheduleReconciler) checkIfBackupInNewOrProgress(schedule *velerov1.Sch
}
for _, backup := range backupList.Items {
if backup.Status.Phase == velerov1.BackupPhaseNew || backup.Status.Phase == velerov1.BackupPhaseInProgress {
if backup.Status.Phase == "" || backup.Status.Phase == velerov1.BackupPhaseNew || backup.Status.Phase == velerov1.BackupPhaseInProgress {
log.Debugf("%s/%s still has backups that are in InProgress or New...", schedule.Namespace, schedule.Name)
return true
}


@@ -149,6 +149,13 @@ func TestReconcileOfSchedule(t *testing.T) {
expectedPhase: string(velerov1.SchedulePhaseEnabled),
backup: builder.ForBackup("ns", "name-20220905120000").ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).Phase(velerov1.BackupPhaseNew).Result(),
},
{
name: "schedule already has backup with empty phase (not yet reconciled).",
schedule: newScheduleBuilder(velerov1.SchedulePhaseEnabled).CronSchedule("@every 5m").LastBackupTime("2000-01-01 00:00:00").Result(),
fakeClockTime: "2017-01-01 12:00:00",
expectedPhase: string(velerov1.SchedulePhaseEnabled),
backup: builder.ForBackup("ns", "name-20220905120000").ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).Phase("").Result(),
},
}
for _, test := range tests {
@@ -215,10 +222,10 @@ func TestReconcileOfSchedule(t *testing.T) {
backups := &velerov1.BackupList{}
require.NoError(t, client.List(ctx, backups))
// If backup associated with schedule's status is in New or InProgress,
// If backup associated with schedule's status is in New or InProgress or empty phase,
// new backup shouldn't be submitted.
if test.backup != nil &&
(test.backup.Status.Phase == velerov1.BackupPhaseNew || test.backup.Status.Phase == velerov1.BackupPhaseInProgress) {
(test.backup.Status.Phase == "" || test.backup.Status.Phase == velerov1.BackupPhaseNew || test.backup.Status.Phase == velerov1.BackupPhaseInProgress) {
assert.Len(t, backups.Items, 1)
require.NoError(t, client.Delete(ctx, test.backup))
}
@@ -479,4 +486,19 @@ func TestCheckIfBackupInNewOrProgress(t *testing.T) {
reconciler = NewScheduleReconciler("namespace", logger, client, metrics.NewServerMetrics(), false)
result = reconciler.checkIfBackupInNewOrProgress(testSchedule)
assert.True(t, result)
// Clean backup in InProgress phase.
err = client.Delete(ctx, inProgressBackup)
require.NoError(t, err, "fail to delete backup in InProgress phase in TestCheckIfBackupInNewOrProgress: %v", err)
// Create backup with empty phase (not yet reconciled).
emptyPhaseBackup := builder.ForBackup("ns", "backup-3").
ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).
Phase("").Result()
err = client.Create(ctx, emptyPhaseBackup)
require.NoError(t, err, "fail to create backup with empty phase in TestCheckIfBackupInNewOrProgress: %v", err)
reconciler = NewScheduleReconciler("namespace", logger, client, metrics.NewServerMetrics(), false)
result = reconciler.checkIfBackupInNewOrProgress(testSchedule)
assert.True(t, result)
}


@@ -35,6 +35,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/nodeagent"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/csi"
"github.com/vmware-tanzu/velero/pkg/util/kube"
@@ -48,6 +49,12 @@ type CSISnapshotExposeParam struct {
// SourceNamespace is the original namespace of the volume that the snapshot is taken for
SourceNamespace string
// SourcePVCName is the original name of the PVC that the snapshot is taken for
SourcePVCName string
// SourcePVName is the name of PV for SourcePVC
SourcePVName string
// AccessMode defines the mode to access the snapshot
AccessMode string
@@ -188,6 +195,8 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
backupPVCStorageClass := csiExposeParam.StorageClass
backupPVCReadOnly := false
spcNoRelabeling := false
backupPVCAnnotations := map[string]string{}
intoleratableNodes := []string{}
if value, exists := csiExposeParam.BackupPVCConfig[csiExposeParam.StorageClass]; exists {
if value.StorageClass != "" {
backupPVCStorageClass = value.StorageClass
@@ -201,9 +210,22 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
curLog.WithField("vs name", volumeSnapshot.Name).Warn("Ignoring spcNoRelabling for read-write volume")
}
}
if len(value.Annotations) > 0 {
backupPVCAnnotations = value.Annotations
}
if _, found := backupPVCAnnotations[util.VSphereCNSFastCloneAnno]; found {
if n, err := kube.GetPVAttachedNodes(ctx, csiExposeParam.SourcePVName, e.kubeClient.StorageV1()); err != nil {
curLog.WithField("source PV", csiExposeParam.SourcePVName).WithError(err).Warnf("Failed to get attached node for source PV, ignore %s annotation", util.VSphereCNSFastCloneAnno)
delete(backupPVCAnnotations, util.VSphereCNSFastCloneAnno)
} else {
intoleratableNodes = n
}
}
}
backupPVC, err := e.createBackupPVC(ctx, ownerObject, backupVS.Name, backupPVCStorageClass, csiExposeParam.AccessMode, volumeSize, backupPVCReadOnly)
backupPVC, err := e.createBackupPVC(ctx, ownerObject, backupVS.Name, backupPVCStorageClass, csiExposeParam.AccessMode, volumeSize, backupPVCReadOnly, backupPVCAnnotations)
if err != nil {
return errors.Wrap(err, "error to create backup pvc")
}
@@ -231,6 +253,7 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
spcNoRelabeling,
csiExposeParam.NodeOS,
csiExposeParam.PriorityClassName,
intoleratableNodes,
)
if err != nil {
return errors.Wrap(err, "error to create backup pod")
@@ -358,8 +381,13 @@ func (e *csiSnapshotExposer) DiagnoseExpose(ctx context.Context, ownerObject cor
diag += fmt.Sprintf("error getting backup vs %s, err: %v\n", backupVSName, err)
}
events, err := e.kubeClient.CoreV1().Events(ownerObject.Namespace).List(ctx, metav1.ListOptions{})
if err != nil {
diag += fmt.Sprintf("error listing events, err: %v\n", err)
}
if pod != nil {
diag += kube.DiagnosePod(pod)
diag += kube.DiagnosePod(pod, events)
if pod.Spec.NodeName != "" {
if err := nodeagent.KbClientIsRunningInNode(ctx, ownerObject.Namespace, pod.Spec.NodeName, e.kubeClient); err != nil {
@@ -369,7 +397,7 @@ func (e *csiSnapshotExposer) DiagnoseExpose(ctx context.Context, ownerObject cor
}
if pvc != nil {
diag += kube.DiagnosePVC(pvc)
diag += kube.DiagnosePVC(pvc, events)
if pvc.Spec.VolumeName != "" {
if pv, err := e.kubeClient.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{}); err != nil {
@@ -381,7 +409,7 @@ func (e *csiSnapshotExposer) DiagnoseExpose(ctx context.Context, ownerObject cor
}
if vs != nil {
diag += csi.DiagnoseVS(vs)
diag += csi.DiagnoseVS(vs, events)
if vs.Status != nil && vs.Status.BoundVolumeSnapshotContentName != nil && *vs.Status.BoundVolumeSnapshotContentName != "" {
if vsc, err := e.csiSnapshotClient.VolumeSnapshotContents().Get(ctx, *vs.Status.BoundVolumeSnapshotContentName, metav1.GetOptions{}); err != nil {
@@ -485,7 +513,7 @@ func (e *csiSnapshotExposer) createBackupVSC(ctx context.Context, ownerObject co
return e.csiSnapshotClient.VolumeSnapshotContents().Create(ctx, vsc, metav1.CreateOptions{})
}
func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject corev1api.ObjectReference, backupVS, storageClass, accessMode string, resource resource.Quantity, readOnly bool) (*corev1api.PersistentVolumeClaim, error) {
func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject corev1api.ObjectReference, backupVS, storageClass, accessMode string, resource resource.Quantity, readOnly bool, annotations map[string]string) (*corev1api.PersistentVolumeClaim, error) {
backupPVCName := ownerObject.Name
volumeMode, err := getVolumeModeByAccessMode(accessMode)
@@ -507,8 +535,9 @@ func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject co
pvc := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: ownerObject.Namespace,
Name: backupPVCName,
Namespace: ownerObject.Namespace,
Name: backupPVCName,
Annotations: annotations,
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: ownerObject.APIVersion,
@@ -558,6 +587,7 @@ func (e *csiSnapshotExposer) createBackupPod(
spcNoRelabeling bool,
nodeOS string,
priorityClassName string,
intoleratableNodes []string,
) (*corev1api.Pod, error) {
podName := ownerObject.Name
@@ -658,6 +688,18 @@ func (e *csiSnapshotExposer) createBackupPod(
}
var podAffinity *corev1api.Affinity
if len(intoleratableNodes) > 0 {
if affinity == nil {
affinity = &kube.LoadAffinity{}
}
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: "kubernetes.io/hostname",
Values: intoleratableNodes,
Operator: metav1.LabelSelectorOpNotIn,
})
}
if affinity != nil {
podAffinity = kube.ToSystemAffinity([]*kube.LoadAffinity{affinity})
}
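
The exposer now honors a backupPVC annotation (util.VSphereCNSFastCloneAnno) by looking up which nodes the source PV is attached to, via the kube.GetPVAttachedNodes helper that is not shown in this diff, and keeping the backup pod off those nodes with a NotIn hostname requirement. The sketch below shows how such a lookup and affinity term can be built from upstream client-go and corev1 types alone; it is an assumption about the helper's behavior inferred from the VolumeAttachment fixtures in the test file, not Velero's actual implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	storagev1client "k8s.io/client-go/kubernetes/typed/storage/v1"
)

// attachedNodes lists VolumeAttachments cluster-wide and returns the nodes
// the given PV is currently attached to.
func attachedNodes(ctx context.Context, client storagev1client.StorageV1Interface, pvName string) ([]string, error) {
	vaList, err := client.VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var nodes []string
	for _, va := range vaList.Items {
		if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pvName {
			nodes = append(nodes, va.Spec.NodeName)
		}
	}
	return nodes, nil
}

// notInAffinity builds a required node-affinity term that keeps a pod off the given nodes.
func notInAffinity(nodes []string) *corev1.Affinity {
	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{
					{
						MatchExpressions: []corev1.NodeSelectorRequirement{
							{
								Key:      "kubernetes.io/hostname",
								Operator: corev1.NodeSelectorOpNotIn,
								Values:   nodes,
							},
						},
					},
				},
			},
		},
	}
}

func main() {
	// attachedNodes needs a live or fake clientset; here we only print the affinity
	// requirement the exposer would end up with for two example nodes.
	aff := notInAffinity([]string{"node-1", "node-2"})
	fmt.Printf("%+v\n", aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0])
}

In the exposer itself the requirement is appended to the existing kube.LoadAffinity and converted with kube.ToSystemAffinity, as the diff above shows; the expectedAffinity in the "IntolerateSourceNode, get source nodes" test case confirms the resulting NotIn term.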


@@ -153,6 +153,7 @@ func TestCreateBackupPodWithPriorityClass(t *testing.T) {
false, // spcNoRelabeling
kube.NodeOSLinux,
tc.expectedPriorityClass,
nil,
)
require.NoError(t, err, tc.description)
@@ -237,6 +238,7 @@ func TestCreateBackupPodWithMissingConfigMap(t *testing.T) {
false, // spcNoRelabeling
kube.NodeOSLinux,
"", // empty priority class since config map is missing
nil,
)
// Should succeed even when config map is missing


@@ -39,8 +39,11 @@ import (
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/kube"
storagev1api "k8s.io/api/storage/v1"
)
type reactor struct {
@@ -156,6 +159,31 @@ func TestExpose(t *testing.T) {
},
}
pvName := "pv-1"
volumeAttachement1 := &storagev1api.VolumeAttachment{
ObjectMeta: metav1.ObjectMeta{
Name: "va1",
},
Spec: storagev1api.VolumeAttachmentSpec{
Source: storagev1api.VolumeAttachmentSource{
PersistentVolumeName: &pvName,
},
NodeName: "node-1",
},
}
volumeAttachement2 := &storagev1api.VolumeAttachment{
ObjectMeta: metav1.ObjectMeta{
Name: "va2",
},
Spec: storagev1api.VolumeAttachmentSpec{
Source: storagev1api.VolumeAttachmentSource{
PersistentVolumeName: &pvName,
},
NodeName: "node-2",
},
}
tests := []struct {
name string
snapshotClientObj []runtime.Object
@@ -169,6 +197,7 @@ func TestExpose(t *testing.T) {
expectedReadOnlyPVC bool
expectedBackupPVCStorageClass string
expectedAffinity *corev1api.Affinity
expectedPVCAnnotation map[string]string
}{
{
name: "wait vs ready fail",
@@ -624,6 +653,117 @@ func TestExpose(t *testing.T) {
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: nil,
},
{
name: "IntolerateSourceNode, get source node fail",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
},
kubeReactors: []reactor{
{
verb: "list",
resource: "volumeattachments",
reactorFunc: func(action clientTesting.Action) (handled bool, ret runtime.Object, err error) {
return true, nil, errors.New("fake-create-error")
},
},
},
expectedAffinity: nil,
expectedPVCAnnotation: nil,
},
{
name: "IntolerateSourceNode, get empty source node",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
},
expectedAffinity: nil,
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
{
name: "IntolerateSourceNode, get source nodes",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
volumeAttachement1,
volumeAttachement2,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/hostname",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"node-1", "node-2"},
},
},
},
},
},
},
},
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
}
for _, test := range tests {
@@ -705,6 +845,12 @@ func TestExpose(t *testing.T) {
if test.expectedAffinity != nil {
assert.Equal(t, test.expectedAffinity, backupPod.Spec.Affinity)
}
if test.expectedPVCAnnotation != nil {
assert.Equal(t, test.expectedPVCAnnotation, backupPVC.Annotations)
} else {
assert.Empty(t, backupPVC.Annotations)
}
} else {
assert.EqualError(t, err, test.err)
}
@@ -1001,8 +1147,9 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
backupPVC := corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
Annotations: map[string]string{},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1031,8 +1178,9 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
backupPVCReadOnly := corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
Annotations: map[string]string{},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1114,7 +1262,7 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
APIVersion: tt.ownerBackup.APIVersion,
}
}
got, err := e.createBackupPVC(t.Context(), ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly)
got, err := e.createBackupPVC(t.Context(), ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly, map[string]string{})
if !tt.wantErr(t, err, fmt.Sprintf("createBackupPVC(%v, %v, %v, %v, %v, %v)", ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly)) {
return
}
@@ -1140,6 +1288,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1165,6 +1314,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1193,6 +1343,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pvc-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1211,6 +1362,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pvc-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1256,6 +1408,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-vs-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1271,6 +1424,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-vs-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1288,6 +1442,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-vs-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1485,6 +1640,74 @@ PVC velero/fake-backup, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
VS velero/fake-backup, bind to fake-vsc, readyToUse false, errMessage fake-vs-message
VSC fake-vsc, readyToUse false, errMessage fake-vsc-message, handle
end diagnose CSI exposer`,
},
{
name: "with events",
ownerBackup: backup,
kubeClientObj: []runtime.Object{
&backupPodWithNodeName,
&backupPVCWithVolumeName,
&backupPV,
&nodeAgentPod,
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-1"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Reason: "reason-1",
Message: "message-1",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-2"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-2",
Message: "message-2",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-3"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Reason: "reason-3",
Message: "message-3",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-4"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-vs-uid"},
Reason: "reason-4",
Message: "message-4",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: "other-namespace", Name: "event-5"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-5",
Message: "message-5",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-6"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-6",
Message: "message-6",
},
},
snapshotClientObj: []runtime.Object{
&backupVSWithVSC,
&backupVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-6, message message-6
PVC velero/fake-backup, phase Pending, binding to fake-pv
PVC event reason reason-3, message message-3
PV fake-pv, phase Pending, reason , message fake-pv-message
VS velero/fake-backup, bind to fake-vsc, readyToUse false, errMessage fake-vs-message
VS event reason reason-4, message message-4
VSC fake-vsc, readyToUse false, errMessage fake-vsc-message, handle
end diagnose CSI exposer`,
},
}

View File

@@ -287,8 +287,13 @@ func (e *genericRestoreExposer) DiagnoseExpose(ctx context.Context, ownerObject
diag += fmt.Sprintf("error getting restore pvc %s, err: %v\n", restorePVCName, err)
}
events, err := e.kubeClient.CoreV1().Events(ownerObject.Namespace).List(ctx, metav1.ListOptions{})
if err != nil {
diag += fmt.Sprintf("error listing events, err: %v\n", err)
}
if pod != nil {
diag += kube.DiagnosePod(pod)
diag += kube.DiagnosePod(pod, events)
if pod.Spec.NodeName != "" {
if err := nodeagent.KbClientIsRunningInNode(ctx, ownerObject.Namespace, pod.Spec.NodeName, e.kubeClient); err != nil {
@@ -298,7 +303,7 @@ func (e *genericRestoreExposer) DiagnoseExpose(ctx context.Context, ownerObject
}
if pvc != nil {
diag += kube.DiagnosePVC(pvc)
diag += kube.DiagnosePVC(pvc, events)
if pvc.Spec.VolumeName != "" {
if pv, err := e.kubeClient.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{}); err != nil {

View File

@@ -549,6 +549,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-restore",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: restore.APIVersion,
@@ -574,6 +575,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-restore",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: restore.APIVersion,
@@ -602,6 +604,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-restore",
UID: "fake-pvc-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: restore.APIVersion,
@@ -620,6 +623,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-restore",
UID: "fake-pvc-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: restore.APIVersion,
@@ -758,6 +762,60 @@ Pod velero/fake-restore, phase Pending, node name fake-node
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
end diagnose restore exposer`,
},
{
name: "with events",
ownerRestore: restore,
kubeClientObj: []runtime.Object{
&restorePodWithNodeName,
&restorePVCWithVolumeName,
&restorePV,
&nodeAgentPod,
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-1"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Reason: "reason-1",
Message: "message-1",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-2"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-2",
Message: "message-2",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-3"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Reason: "reason-3",
Message: "message-3",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: "other-namespace", Name: "event-4"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-4",
Message: "message-4",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-5"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-5",
Message: "message-5",
},
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-5, message message-5
PVC velero/fake-restore, phase Pending, binding to fake-pv
PVC event reason reason-3, message message-3
PV fake-pv, phase Pending, reason , message fake-pv-message
end diagnose restore exposer`,
},
}

View File

@@ -73,6 +73,9 @@ type PodVolumeExposeParam struct {
// PriorityClassName is the priority class name for the data mover pod
PriorityClassName string
// Privileged indicates whether to create the pod with a privileged container
Privileged bool
}
// PodVolumeExposer is the interfaces for a pod volume exposer
@@ -153,7 +156,7 @@ func (e *podVolumeExposer) Expose(ctx context.Context, ownerObject corev1api.Obj
curLog.WithField("path", path).Infof("Host path is retrieved for pod %s, volume %s", param.ClientPodName, param.ClientPodVolume)
hostingPod, err := e.createHostingPod(ctx, ownerObject, param.Type, path.ByPath, param.OperationTimeout, param.HostingPodLabels, param.HostingPodAnnotations, param.HostingPodTolerations, pod.Spec.NodeName, param.Resources, nodeOS, param.PriorityClassName)
hostingPod, err := e.createHostingPod(ctx, ownerObject, param.Type, path.ByPath, param.OperationTimeout, param.HostingPodLabels, param.HostingPodAnnotations, param.HostingPodTolerations, pod.Spec.NodeName, param.Resources, nodeOS, param.PriorityClassName, param.Privileged)
if err != nil {
return errors.Wrapf(err, "error to create hosting pod")
}
@@ -248,8 +251,13 @@ func (e *podVolumeExposer) DiagnoseExpose(ctx context.Context, ownerObject corev
diag += fmt.Sprintf("error getting hosting pod %s, err: %v\n", hostingPodName, err)
}
events, err := e.kubeClient.CoreV1().Events(ownerObject.Namespace).List(ctx, metav1.ListOptions{})
if err != nil {
diag += fmt.Sprintf("error listing events, err: %v\n", err)
}
if pod != nil {
diag += kube.DiagnosePod(pod)
diag += kube.DiagnosePod(pod, events)
if pod.Spec.NodeName != "" {
if err := nodeagent.KbClientIsRunningInNode(ctx, ownerObject.Namespace, pod.Spec.NodeName, e.kubeClient); err != nil {
@@ -269,7 +277,7 @@ func (e *podVolumeExposer) CleanUp(ctx context.Context, ownerObject corev1api.Ob
}
func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject corev1api.ObjectReference, exposeType string, hostPath string,
operationTimeout time.Duration, label map[string]string, annotation map[string]string, toleration []corev1api.Toleration, selectedNode string, resources corev1api.ResourceRequirements, nodeOS string, priorityClassName string) (*corev1api.Pod, error) {
operationTimeout time.Duration, label map[string]string, annotation map[string]string, toleration []corev1api.Toleration, selectedNode string, resources corev1api.ResourceRequirements, nodeOS string, priorityClassName string, privileged bool) (*corev1api.Pod, error) {
hostingPodName := ownerObject.Name
containerName := string(ownerObject.UID)
@@ -327,6 +335,7 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
args = append(args, podInfo.logLevelArgs...)
var securityCtx *corev1api.PodSecurityContext
var containerSecurityCtx *corev1api.SecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
if nodeOS == kube.NodeOSWindows {
@@ -359,6 +368,9 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
securityCtx = &corev1api.PodSecurityContext{
RunAsUser: &userID,
}
containerSecurityCtx = &corev1api.SecurityContext{
Privileged: &privileged,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
@@ -394,6 +406,7 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
Env: podInfo.env,
EnvFrom: podInfo.envFrom,
Resources: resources,
SecurityContext: containerSecurityCtx,
},
},
PriorityClassName: priorityClassName,

View File

@@ -190,6 +190,29 @@ func TestPodVolumeExpose(t *testing.T) {
return "/var/lib/kubelet/pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount", nil
},
},
{
name: "succeed with privileged pod",
ownerBackup: backup,
exposeParam: PodVolumeExposeParam{
ClientNamespace: "fake-ns",
ClientPodName: "fake-client-pod",
ClientPodVolume: "fake-client-volume",
Privileged: true,
},
kubeClientObj: []runtime.Object{
podWithNode,
node,
daemonSet,
},
funcGetPodVolumeHostPath: func(context.Context, *corev1api.Pod, string, kubernetes.Interface, filesystem.Interface, logrus.FieldLogger) (datapath.AccessPoint, error) {
return datapath.AccessPoint{
ByPath: "/host_pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount",
}, nil
},
funcExtractPodVolumeHostPath: func(context.Context, string, kubernetes.Interface, string, string) (string, error) {
return "/var/lib/kubelet/pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount", nil
},
},
}
for _, test := range tests {
@@ -443,6 +466,7 @@ func TestPodVolumeDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -468,6 +492,7 @@ func TestPodVolumeDiagnoseExpose(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1.DefaultNamespace,
Name: "fake-backup",
UID: "fake-pod-uid",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -564,6 +589,48 @@ end diagnose pod volume exposer`,
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod condition Initialized, status True, reason , message fake-pod-message
end diagnose pod volume exposer`,
},
{
name: "with events",
ownerBackup: backup,
kubeClientObj: []runtime.Object{
&backupPodWithNodeName,
&nodeAgentPod,
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-1"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Reason: "reason-1",
Message: "message-1",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-2"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-2",
Message: "message-2",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: "other-namespace", Name: "event-3"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-3",
Message: "message-3",
},
&corev1api.Event{
ObjectMeta: metav1.ObjectMeta{Namespace: velerov1.DefaultNamespace, Name: "event-4"},
Type: corev1api.EventTypeWarning,
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Reason: "reason-4",
Message: "message-4",
},
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-4, message message-4
end diagnose pod volume exposer`,
},
}

View File

@@ -39,6 +39,8 @@ import (
type PVCAction struct {
log logrus.FieldLogger
crClient crclient.Client
// nsPVCs caches, per namespace, the running pods that mount each PVC: map[namespace]map[pvcName][]podName
nsPVCs map[string]map[string][]string
}
func NewPVCAction(f client.Factory) plugincommon.HandlerInitializer {
@@ -78,31 +80,18 @@ func (a *PVCAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup
// Adds pods mounting this PVC to ensure that multiple pods mounting the same RWX
// volume get backed up together.
pods := new(corev1api.PodList)
err := a.crClient.List(context.Background(), pods, crclient.InNamespace(pvc.Namespace))
pvcs, err := a.getPVCList(pvc.Namespace)
if err != nil {
return nil, errors.Wrap(err, "failed to list pods")
return nil, err
}
for i := range pods.Items {
for _, volume := range pods.Items[i].Spec.Volumes {
if volume.VolumeSource.PersistentVolumeClaim == nil {
continue
}
if volume.PersistentVolumeClaim.ClaimName == pvc.Name {
if kube.IsPodRunning(&pods.Items[i]) != nil {
a.log.Infof("Related pod %s is not running, not adding to ItemBlock for PVC %s", pods.Items[i].Name, pvc.Name)
} else {
a.log.Infof("Adding related Pod %s to PVC %s", pods.Items[i].Name, pvc.Name)
relatedItems = append(relatedItems, velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: pods.Items[i].Namespace,
Name: pods.Items[i].Name,
})
}
break
}
}
for _, pod := range pvcs[pvc.Name] {
a.log.Infof("Adding related Pod %s to PVC %s", pod, pvc.Name)
relatedItems = append(relatedItems, velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: pvc.Namespace,
Name: pod,
})
}
// Gather groupedPVCs based on VGS label provided in the backup
@@ -117,6 +106,35 @@ func (a *PVCAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup
return relatedItems, nil
}
func (a *PVCAction) getPVCList(ns string) (map[string][]string, error) {
if a.nsPVCs == nil {
a.nsPVCs = make(map[string]map[string][]string)
}
pvcList, ok := a.nsPVCs[ns]
if ok {
return pvcList, nil
}
pvcList = make(map[string][]string)
pods := new(corev1api.PodList)
err := a.crClient.List(context.Background(), pods, crclient.InNamespace(ns))
if err != nil {
return nil, errors.Wrap(err, "failed to list pods")
}
for i := range pods.Items {
if kube.IsPodRunning(&pods.Items[i]) != nil {
a.log.Debugf("Pod %s is not running, not adding to Pod list for PVC IBA plugin", pods.Items[i].Name)
continue
}
for _, volume := range pods.Items[i].Spec.Volumes {
if volume.VolumeSource.PersistentVolumeClaim != nil {
pvcList[volume.VolumeSource.PersistentVolumeClaim.ClaimName] = append(pvcList[volume.VolumeSource.PersistentVolumeClaim.ClaimName], pods.Items[i].Name)
}
}
}
a.nsPVCs[ns] = pvcList
return pvcList, nil
}
func (a *PVCAction) Name() string {
return "PVCItemBlockAction"
}

View File

@@ -143,6 +143,10 @@ func GetConfigs(ctx context.Context, namespace string, kubeClient kubernetes.Int
return nil, errors.Errorf("data is not available in config map %s", configName)
}
if len(cm.Data) > 1 {
return nil, errors.Errorf("more than one keys are found in ConfigMap %s's data. only expect one", configName)
}
jsonString := ""
for _, v := range cm.Data {
jsonString = v

View File

@@ -249,6 +249,7 @@ func TestGetConfigs(t *testing.T) {
cmWithValidData := builder.ForConfigMap("fake-ns", "node-agent-config").Data("fake-key", "{\"loadConcurrency\":{\"globalConfig\": 5}}").Result()
cmWithPriorityClass := builder.ForConfigMap("fake-ns", "node-agent-config").Data("fake-key", "{\"priorityClassName\": \"high-priority\"}").Result()
cmWithPriorityClassAndOther := builder.ForConfigMap("fake-ns", "node-agent-config").Data("fake-key", "{\"priorityClassName\": \"low-priority\", \"loadConcurrency\":{\"globalConfig\": 3}}").Result()
cmWithMultipleKeysInData := builder.ForConfigMap("fake-ns", "node-agent-config").Data("fake-key-1", "{}", "fake-key-2", "{}").Result()
tests := []struct {
name string
@@ -331,6 +332,14 @@ func TestGetConfigs(t *testing.T) {
},
},
},
{
name: "ConfigMap's Data has more than one key",
namespace: "fake-ns",
kubeClientObj: []runtime.Object{
cmWithMultipleKeysInData,
},
expectErr: "more than one keys are found in ConfigMap node-agent-config's data. only expect one",
},
}
for _, test := range tests {

View File

@@ -81,7 +81,7 @@ func TestEnsureRepo(t *testing.T) {
namespace: "fake-ns",
bsl: "fake-bsl",
repositoryType: "fake-repo-type",
err: "error getting backup repository list: no kind is registered for the type v1.BackupRepositoryList in scheme \"pkg/runtime/scheme.go:100\"",
err: "error getting backup repository list: no kind is registered for the type v1.BackupRepositoryList in scheme",
},
{
name: "success on existing repo",
@@ -128,7 +128,7 @@ func TestEnsureRepo(t *testing.T) {
repo, err := ensurer.EnsureRepo(t.Context(), velerov1.DefaultNamespace, test.namespace, test.bsl, test.repositoryType)
if err != nil {
require.EqualError(t, err, test.err)
require.ErrorContains(t, err, test.err)
} else {
require.NoError(t, err)
}
@@ -190,7 +190,7 @@ func TestCreateBackupRepositoryAndWait(t *testing.T) {
namespace: "fake-ns",
bsl: "fake-bsl",
repositoryType: "fake-repo-type",
err: "unable to create backup repository resource: no kind is registered for the type v1.BackupRepository in scheme \"pkg/runtime/scheme.go:100\"",
err: "unable to create backup repository resource: no kind is registered for the type v1.BackupRepository in scheme",
},
{
name: "get repo fail",
@@ -252,7 +252,7 @@ func TestCreateBackupRepositoryAndWait(t *testing.T) {
RepositoryType: test.repositoryType,
})
if err != nil {
require.EqualError(t, err, test.err)
require.ErrorContains(t, err, test.err)
} else {
require.NoError(t, err)
}

View File

@@ -449,6 +449,35 @@ func StartNewJob(
return maintenanceJob.Name, nil
}
// buildTolerationsForMaintenanceJob builds the tolerations for maintenance jobs.
// It includes the required Windows toleration for backward compatibility and filters
// tolerations from the Velero deployment to only include those with keys that are
// in the ThirdPartyTolerations allowlist, following the same pattern as labels and annotations.
func buildTolerationsForMaintenanceJob(deployment *appsv1api.Deployment) []corev1api.Toleration {
// Start with the Windows toleration for backward compatibility
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
result := []corev1api.Toleration{windowsToleration}
// Filter tolerations from the Velero deployment to only include allowed ones
// Only tolerations that exist on the deployment AND have keys in the allowlist are inherited
deploymentTolerations := veleroutil.GetTolerationsFromVeleroServer(deployment)
for _, k := range util.ThirdPartyTolerations {
for _, toleration := range deploymentTolerations {
if toleration.Key == k {
result = append(result, toleration)
break // Only add the first matching toleration for each allowed key
}
}
}
return result
}
func getPriorityClassName(ctx context.Context, cli client.Client, config *velerotypes.JobConfigs, logger logrus.FieldLogger) string {
// Use the priority class name from the global job configuration if available
// Note: Priority class is only read from global config, not per-repository
@@ -593,15 +622,8 @@ func buildJob(
SecurityContext: podSecurityContext,
Volumes: volumes,
ServiceAccountName: serviceAccount,
Tolerations: []corev1api.Toleration{
{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
},
},
ImagePullSecrets: imagePullSecrets,
Tolerations: buildTolerationsForMaintenanceJob(deployment),
ImagePullSecrets: imagePullSecrets,
},
},
},

View File

@@ -698,7 +698,7 @@ func TestWaitAllJobsComplete(t *testing.T) {
{
name: "list job error",
runtimeScheme: schemeFail,
expectedError: "error listing maintenance job for repo fake-repo: no kind is registered for the type v1.JobList in scheme \"pkg/runtime/scheme.go:100\"",
expectedError: "error listing maintenance job for repo fake-repo: no kind is registered for the type v1.JobList in scheme",
},
{
name: "job not exist",
@@ -847,7 +847,7 @@ func TestWaitAllJobsComplete(t *testing.T) {
history, err := WaitAllJobsComplete(test.ctx, fakeClient, repo, 3, velerotest.NewLogger())
if test.expectedError != "" {
require.EqualError(t, err, test.expectedError)
require.ErrorContains(t, err, test.expectedError)
} else {
require.NoError(t, err)
}
@@ -1481,3 +1481,291 @@ func TestBuildJobWithPriorityClassName(t *testing.T) {
})
}
}
func TestBuildTolerationsForMaintenanceJob(t *testing.T) {
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
testCases := []struct {
name string
deploymentTolerations []corev1api.Toleration
expectedTolerations []corev1api.Toleration
}{
{
name: "no tolerations should only include Windows toleration",
deploymentTolerations: nil,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "empty tolerations should only include Windows toleration",
deploymentTolerations: []corev1api.Toleration{},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "non-allowed toleration should not be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "vng-ondemand",
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "allowed toleration should be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
},
},
{
name: "mixed allowed and non-allowed tolerations should only inherit allowed",
deploymentTolerations: []corev1api.Toleration{
{
Key: "vng-ondemand", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
{
Key: "CriticalAddonsOnly", // in allowlist
Operator: "Exists",
Effect: "NoSchedule",
},
{
Key: "custom-key", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "custom-value",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
{
name: "multiple allowed tolerations should all be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a deployment with the specified tolerations
deployment := &appsv1api.Deployment{
Spec: appsv1api.DeploymentSpec{
Template: corev1api.PodTemplateSpec{
Spec: corev1api.PodSpec{
Tolerations: tc.deploymentTolerations,
},
},
},
}
result := buildTolerationsForMaintenanceJob(deployment)
assert.Equal(t, tc.expectedTolerations, result)
})
}
}
func TestBuildJobWithTolerationsInheritance(t *testing.T) {
// Define allowed tolerations that would be set on Velero deployment
allowedTolerations := []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
}
// Mixed tolerations (allowed and non-allowed)
mixedTolerations := []corev1api.Toleration{
{
Key: "vng-ondemand", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
{
Key: "CriticalAddonsOnly", // in allowlist
Operator: "Exists",
Effect: "NoSchedule",
},
}
// Windows toleration that should always be present
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
testCases := []struct {
name string
deploymentTolerations []corev1api.Toleration
expectedTolerations []corev1api.Toleration
}{
{
name: "no tolerations on deployment should only have Windows toleration",
deploymentTolerations: nil,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "allowed tolerations should be inherited along with Windows toleration",
deploymentTolerations: allowedTolerations,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
{
name: "mixed tolerations should only inherit allowed ones",
deploymentTolerations: mixedTolerations,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a new scheme and add necessary API types
localScheme := runtime.NewScheme()
err := velerov1api.AddToScheme(localScheme)
require.NoError(t, err)
err = appsv1api.AddToScheme(localScheme)
require.NoError(t, err)
err = batchv1api.AddToScheme(localScheme)
require.NoError(t, err)
// Create a deployment with the specified tolerations
deployment := &appsv1api.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "velero",
Namespace: "velero",
},
Spec: appsv1api.DeploymentSpec{
Template: corev1api.PodTemplateSpec{
Spec: corev1api.PodSpec{
Containers: []corev1api.Container{
{
Name: "velero",
Image: "velero/velero:latest",
},
},
Tolerations: tc.deploymentTolerations,
},
},
},
}
// Create a backup repository
repo := &velerov1api.BackupRepository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
Namespace: "velero",
},
Spec: velerov1api.BackupRepositorySpec{
VolumeNamespace: "velero",
BackupStorageLocation: "default",
},
}
// Create fake client and add the deployment
client := fake.NewClientBuilder().WithScheme(localScheme).WithObjects(deployment).Build()
// Create minimal job configs and resources
jobConfig := &velerotypes.JobConfigs{}
logLevel := logrus.InfoLevel
logFormat := logging.NewFormatFlag()
logFormat.Set("text")
// Call buildJob
job, err := buildJob(client, t.Context(), repo, "default", jobConfig, logLevel, logFormat, logrus.New())
require.NoError(t, err)
// Verify the tolerations are set correctly
assert.Equal(t, tc.expectedTolerations, job.Spec.Template.Spec.Tolerations)
})
}
}

View File

@@ -56,6 +56,9 @@ type BackupPVC struct {
// SPCNoRelabeling sets Spec.SecurityContext.SELinux.Type to "spc_t" for the pod mounting the backupPVC
// ignored if ReadOnly is false
SPCNoRelabeling bool `json:"spcNoRelabeling,omitempty"`
// Annotations permits setting annotations for the backupPVC
Annotations map[string]string `json:"annotations,omitempty"`
}
type RestorePVC struct {
@@ -81,4 +84,7 @@ type NodeAgentConfigs struct {
// PriorityClassName is the priority class name for data mover pods created by the node agent
PriorityClassName string `json:"priorityClassName,omitempty"`
// PrivilegedFsBackup determines whether to create fs-backup pods as privileged pods
PrivilegedFsBackup bool `json:"privilegedFsBackup,omitempty"`
}

View File

@@ -447,8 +447,13 @@ func GetVolumeSnapshotClassForStorageClass(
return &vsClass, nil
}
return nil, fmt.Errorf(
"failed to get VolumeSnapshotClass for provisioner %s, ensure that the desired VolumeSnapshot class has the %s label or %s annotation",
provisioner, velerov1api.VolumeSnapshotClassSelectorLabel, velerov1api.VolumeSnapshotClassKubernetesAnnotation)
"failed to get VolumeSnapshotClass for provisioner %s: "+
"ensure that the desired VolumeSnapshotClass has the %s label or %s annotation, "+
"and that its driver matches the StorageClass provisioner",
provisioner,
velerov1api.VolumeSnapshotClassSelectorLabel,
velerov1api.VolumeSnapshotClassKubernetesAnnotation,
)
}
// IsVolumeSnapshotClassHasListerSecret returns whether a volumesnapshotclass has a snapshotlister secret
@@ -684,7 +689,7 @@ func WaitUntilVSCHandleIsReady(
return vsc, nil
}
func DiagnoseVS(vs *snapshotv1api.VolumeSnapshot) string {
func DiagnoseVS(vs *snapshotv1api.VolumeSnapshot, events *corev1api.EventList) string {
vscName := ""
readyToUse := false
errMessage := ""
@@ -705,6 +710,14 @@ func DiagnoseVS(vs *snapshotv1api.VolumeSnapshot) string {
diag := fmt.Sprintf("VS %s/%s, bind to %s, readyToUse %v, errMessage %s\n", vs.Namespace, vs.Name, vscName, readyToUse, errMessage)
if events != nil {
for _, e := range events.Items {
if e.InvolvedObject.UID == vs.UID && e.Type == corev1api.EventTypeWarning {
diag += fmt.Sprintf("VS event reason %s, message %s\n", e.Reason, e.Message)
}
}
}
return diag
}

View File

@@ -1699,6 +1699,7 @@ func TestDiagnoseVS(t *testing.T) {
testCases := []struct {
name string
vs *snapshotv1api.VolumeSnapshot
events *corev1api.EventList
expected string
}{
{
@@ -1781,11 +1782,81 @@ func TestDiagnoseVS(t *testing.T) {
},
expected: "VS fake-ns/fake-vs, bind to fake-vsc, readyToUse true, errMessage fake-message\n",
},
{
name: "VS with VSC and empty event",
vs: &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-vs",
Namespace: "fake-ns",
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: &vscName,
ReadyToUse: &readyToUse,
Error: &snapshotv1api.VolumeSnapshotError{},
},
},
events: &corev1api.EventList{},
expected: "VS fake-ns/fake-vs, bind to fake-vsc, readyToUse true, errMessage \n",
},
{
name: "VS with VSC and events",
vs: &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-vs",
Namespace: "fake-ns",
UID: "fake-vs-uid",
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: &vscName,
ReadyToUse: &readyToUse,
Error: &snapshotv1api.VolumeSnapshotError{},
},
},
events: &corev1api.EventList{Items: []corev1api.Event{
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Type: corev1api.EventTypeWarning,
Reason: "reason-1",
Message: "message-1",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-2"},
Type: corev1api.EventTypeWarning,
Reason: "reason-2",
Message: "message-2",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-vs-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-3",
Message: "message-3",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-vs-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-4",
Message: "message-4",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-vs-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-5",
Message: "message-5",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-vs-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-6",
Message: "message-6",
},
}},
expected: "VS fake-ns/fake-vs, bind to fake-vsc, readyToUse true, errMessage \nVS event reason reason-3, message message-3\nVS event reason reason-6, message message-6\n",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
diag := DiagnoseVS(tc.vs)
diag := DiagnoseVS(tc.vs, tc.events)
assert.Equal(t, tc.expected, diag)
})
}

View File

@@ -16,6 +16,7 @@ limitations under the License.
package kube
import (
"context"
"math"
"sync"
"time"
@@ -182,13 +183,13 @@ func (es *eventSink) Create(event *corev1api.Event) (*corev1api.Event, error) {
return event, nil
}
return es.sink.CreateWithEventNamespace(event)
return es.sink.CreateWithEventNamespaceWithContext(context.Background(), event)
}
func (es *eventSink) Update(event *corev1api.Event) (*corev1api.Event, error) {
return es.sink.UpdateWithEventNamespace(event)
return es.sink.UpdateWithEventNamespaceWithContext(context.Background(), event)
}
func (es *eventSink) Patch(event *corev1api.Event, data []byte) (*corev1api.Event, error) {
return es.sink.PatchWithEventNamespace(event, data)
return es.sink.PatchWithEventNamespaceWithContext(context.Background(), event, data)
}

View File

@@ -268,13 +268,21 @@ func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
return nil
}
func DiagnosePod(pod *corev1api.Pod) string {
func DiagnosePod(pod *corev1api.Pod, events *corev1api.EventList) string {
diag := fmt.Sprintf("Pod %s/%s, phase %s, node name %s\n", pod.Namespace, pod.Name, pod.Status.Phase, pod.Spec.NodeName)
for _, condition := range pod.Status.Conditions {
diag += fmt.Sprintf("Pod condition %s, status %s, reason %s, message %s\n", condition.Type, condition.Status, condition.Reason, condition.Message)
}
if events != nil {
for _, e := range events.Items {
if e.InvolvedObject.UID == pod.UID && e.Type == corev1api.EventTypeWarning {
diag += fmt.Sprintf("Pod event reason %s, message %s\n", e.Reason, e.Message)
}
}
}
return diag
}

View File

@@ -896,10 +896,11 @@ func TestDiagnosePod(t *testing.T) {
testCases := []struct {
name string
pod *corev1api.Pod
events *corev1api.EventList
expected string
}{
{
name: "pod with all info",
name: "pod with all info but event",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pod",
@@ -928,11 +929,111 @@ func TestDiagnosePod(t *testing.T) {
},
expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
},
{
name: "pod with all info and empty event list",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pod",
Namespace: "fake-ns",
},
Spec: corev1api.PodSpec{
NodeName: "fake-node",
},
Status: corev1api.PodStatus{
Phase: corev1api.PodPending,
Conditions: []corev1api.PodCondition{
{
Type: corev1api.PodInitialized,
Status: corev1api.ConditionTrue,
Reason: "fake-reason-1",
Message: "fake-message-1",
},
{
Type: corev1api.PodScheduled,
Status: corev1api.ConditionFalse,
Reason: "fake-reason-2",
Message: "fake-message-2",
},
},
},
},
events: &corev1api.EventList{},
expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
},
{
name: "pod with all info and events",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pod",
Namespace: "fake-ns",
UID: "fake-pod-uid",
},
Spec: corev1api.PodSpec{
NodeName: "fake-node",
},
Status: corev1api.PodStatus{
Phase: corev1api.PodPending,
Conditions: []corev1api.PodCondition{
{
Type: corev1api.PodInitialized,
Status: corev1api.ConditionTrue,
Reason: "fake-reason-1",
Message: "fake-message-1",
},
{
Type: corev1api.PodScheduled,
Status: corev1api.ConditionFalse,
Reason: "fake-reason-2",
Message: "fake-message-2",
},
},
},
},
events: &corev1api.EventList{Items: []corev1api.Event{
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Type: corev1api.EventTypeWarning,
Reason: "reason-1",
Message: "message-1",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-2"},
Type: corev1api.EventTypeWarning,
Reason: "reason-2",
Message: "message-2",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-3",
Message: "message-3",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-4",
Message: "message-4",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-5",
Message: "message-5",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pod-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-6",
Message: "message-6",
},
}},
expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\nPod event reason reason-3, message message-3\nPod event reason reason-6, message message-6\n",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
diag := DiagnosePod(tc.pod)
diag := DiagnosePod(tc.pod, tc.events)
assert.Equal(t, tc.expected, diag)
})
}

View File

@@ -463,8 +463,18 @@ func GetPVCForPodVolume(vol *corev1api.Volume, pod *corev1api.Pod, crClient crcl
return pvc, nil
}
func DiagnosePVC(pvc *corev1api.PersistentVolumeClaim) string {
return fmt.Sprintf("PVC %s/%s, phase %s, binding to %s\n", pvc.Namespace, pvc.Name, pvc.Status.Phase, pvc.Spec.VolumeName)
func DiagnosePVC(pvc *corev1api.PersistentVolumeClaim, events *corev1api.EventList) string {
diag := fmt.Sprintf("PVC %s/%s, phase %s, binding to %s\n", pvc.Namespace, pvc.Name, pvc.Status.Phase, pvc.Spec.VolumeName)
if events != nil {
for _, e := range events.Items {
if e.InvolvedObject.UID == pvc.UID && e.Type == corev1api.EventTypeWarning {
diag += fmt.Sprintf("PVC event reason %s, message %s\n", e.Reason, e.Message)
}
}
}
return diag
}
func DiagnosePV(pv *corev1api.PersistentVolume) string {
@@ -554,3 +564,19 @@ func GetPVAttachedNode(ctx context.Context, pv string, storageClient storagev1.S
return "", nil
}
func GetPVAttachedNodes(ctx context.Context, pv string, storageClient storagev1.StorageV1Interface) ([]string, error) {
vaList, err := storageClient.VolumeAttachments().List(ctx, metav1.ListOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error listing volumeattachment")
}
nodes := []string{}
for _, va := range vaList.Items {
if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pv {
nodes = append(nodes, va.Spec.NodeName)
}
}
return nodes, nil
}

View File

@@ -1593,10 +1593,11 @@ func TestDiagnosePVC(t *testing.T) {
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
events *corev1api.EventList
expected string
}{
{
name: "pvc with all info",
name: "pvc with all info but events",
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pvc",
@@ -1611,11 +1612,83 @@ func TestDiagnosePVC(t *testing.T) {
},
expected: "PVC fake-ns/fake-pvc, phase Pending, binding to fake-pv\n",
},
{
name: "pvc with all info and empty events",
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pvc",
Namespace: "fake-ns",
},
Spec: corev1api.PersistentVolumeClaimSpec{
VolumeName: "fake-pv",
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimPending,
},
},
events: &corev1api.EventList{},
expected: "PVC fake-ns/fake-pvc, phase Pending, binding to fake-pv\n",
},
{
name: "pvc with all info and events",
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pvc",
Namespace: "fake-ns",
UID: "fake-pvc-uid",
},
Spec: corev1api.PersistentVolumeClaimSpec{
VolumeName: "fake-pv",
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimPending,
},
},
events: &corev1api.EventList{Items: []corev1api.Event{
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-1"},
Type: corev1api.EventTypeWarning,
Reason: "reason-1",
Message: "message-1",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-uid-2"},
Type: corev1api.EventTypeWarning,
Reason: "reason-2",
Message: "message-2",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-3",
Message: "message-3",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-4",
Message: "message-4",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Type: corev1api.EventTypeNormal,
Reason: "reason-5",
Message: "message-5",
},
{
InvolvedObject: corev1api.ObjectReference{UID: "fake-pvc-uid"},
Type: corev1api.EventTypeWarning,
Reason: "reason-6",
Message: "message-6",
},
}},
expected: "PVC fake-ns/fake-pvc, phase Pending, binding to fake-pv\nPVC event reason reason-3, message message-3\nPVC event reason reason-6, message message-6\n",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
diag := DiagnosePVC(tc.pvc)
diag := DiagnosePVC(tc.pvc, tc.events)
assert.Equal(t, tc.expected, diag)
})
}

View File

@@ -371,15 +371,16 @@ func VerifyJSONConfigs(ctx context.Context, namespace string, crClient client.Cl
return errors.Errorf("data is not available in ConfigMap %s", configName)
}
// Verify all the keys in ConfigMap's data.
jsonString := ""
for _, v := range cm.Data {
jsonString = v
}
configs := configType
err = json.Unmarshal([]byte(jsonString), configs)
if err != nil {
return errors.Wrapf(err, "error to unmarshall data from ConfigMap %s", configName)
configs := configType
err = json.Unmarshal([]byte(jsonString), configs)
if err != nil {
return errors.Wrapf(err, "error to unmarshall data from ConfigMap %s", configName)
}
}
return nil

View File

@@ -28,3 +28,7 @@ var ThirdPartyTolerations = []string{
"kubernetes.azure.com/scalesetpriority",
"CriticalAddonsOnly",
}
const (
VSphereCNSFastCloneAnno = "csi.vsphere.volume/fast-provisioning"
)

View File

@@ -23,6 +23,8 @@ By default, `velero install` does not install Velero's [File System Backup][3].
If you've already run `velero install` without the `--use-node-agent` flag, you can run the same command again, including the `--use-node-agent` flag, to add the file system backup to your existing install.
Note that for some use cases (including installation on OpenShift clusters), the fs-backup pods must run in a privileged security context. This is configured by setting `privilegedFsBackup` to `true` in the node-agent configmap (see below).
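For reference, a minimal sketch of the node-agent configmap data that turns this on, assuming the configmap is named `node-agent-config` and that only this option is set (other node-agent options can be combined in the same JSON document):
```json
{
    "privilegedFsBackup": true
}
```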
## CSI Snapshot Data Movement
Velero node-agent is required by [CSI Snapshot Data Movement][12] when Velero built-in data mover is used. By default, `velero install` does not install Velero's node-agent. To enable it, specify the `--use-node-agent` flag.
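For example, a sketch of an install command with the node-agent enabled (provider, bucket, and credential flags are omitted here and must be supplied for a real install):
```
velero install --use-node-agent
```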

View File

@@ -37,6 +37,9 @@ default the source PVC's storage class will be used.
The users can specify the ConfigMap name during velero installation by CLI:
`velero install --node-agent-configmap=<ConfigMap-Name>`
- `annotations`: permits setting annotations on the backupPVC itself. This is typically useful for CSI providers that
cannot mount a VolumeSnapshot without a custom annotation.
A sample of `backupPVC` config as part of the ConfigMap would look like:
```json
{
@@ -49,8 +52,11 @@ A sample of `backupPVC` config as part of the ConfigMap would look like:
"storageClass": "backupPVC-storage-class"
},
"storage-class-3": {
"readOnly": true
}
"readOnly": true,
"annotations": {
"some-csi.provider.io/readOnlyClone": true
}
},
"storage-class-4": {
"readOnly": true,
"spcNoRelabeling": true

View File

@@ -15,7 +15,7 @@ Note: If less resources are assigned to data mover pods, the data movement activ
Refer to [Performance Guidance][3] for a guidance of performance vs. resource usage, and it is highly recommended that you perform your own testing to find the best resource limits for your data.
Velero introduces a new section in the node-agent configMap, called ```podResources```, through which you can set customized resources configurations for data mover pods.
If it is not there, a configMap should be created manually. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only. The name of the configMap should be specified in the node-agent server parameter ```--node-agent-config```.
If it is not there, a configMap should be created manually. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only. The name of the configMap should be specified in the node-agent server parameter ```--node-agent-configmap```.
Node-agent server checks these configurations at startup time. Therefore, you could edit this configMap any time, but in order to make the changes effective, node-agent server needs to be restarted.
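One way to restart it, assuming the default daemonset name and namespace used throughout this page:
```
kubectl rollout restart ds node-agent -n velero
```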
### Sample
@@ -39,19 +39,19 @@ To create the configMap, save something like the above sample to a json file and
kubectl create cm node-agent-config -n velero --from-file=<json file name>
```
To provide the configMap to node-agent, edit the node-agent daemonset and add the ```- --node-agent-config``` argument to the spec:
To provide the configMap to node-agent, edit the node-agent daemonset and add the ```- --node-agent-configmap``` argument to the spec:
1. Open the node-agent daemonset spec
```
kubectl edit ds node-agent -n velero
```
2. Add ```- --node-agent-config``` to ```spec.template.spec.containers```
2. Add ```- --node-agent-configmap``` to ```spec.template.spec.containers```
```
spec:
template:
spec:
containers:
- args:
- --node-agent-config=<configMap name>
- --node-agent-configmap=<configMap name>
```
### Priority Class
@@ -126,4 +126,4 @@ kubectl create cm node-agent-config -n velero --from-file=node-agent-config.json
[1]: csi-snapshot-data-movement.md
[2]: file-system-backup.md
[3]: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/
[4]: performance-guidance.md
[4]: performance-guidance.md

View File

@@ -27,22 +27,22 @@ To create the configMap, save something like the above sample to a json file and
kubectl create cm node-agent-config -n velero --from-file=<json file name>
```
To provide the configMap to node-agent, edit the node-agent daemonset and add the ```- --node-agent-config``` argument to the spec:
To provide the configMap to node-agent, edit the node-agent daemonset and add the ```- --node-agent-configmap``` argument to the spec:
1. Open the node-agent daemonset spec
```
kubectl edit ds node-agent -n velero
```
2. Add ```- --node-agent-config``` to ```spec.template.spec.containers```
2. Add ```- --node-agent-configmap``` to ```spec.template.spec.containers```
```
spec:
template:
spec:
containers:
- args:
- --node-agent-config=<configMap name>
- --node-agent-configmap=<configMap name>
```
[1]: csi-snapshot-data-movement.md
[2]: file-system-backup.md
[3]: node-agent-concurrency.md
[4]: data-movement-node-selection.md
[4]: data-movement-node-selection.md

View File

@@ -184,7 +184,7 @@ ginkgo: ${GOBIN}/ginkgo
# This target does not run if ginkgo is already in $GOBIN
${GOBIN}/ginkgo:
GOBIN=${GOBIN} go install github.com/onsi/ginkgo/v2/ginkgo@v2.19.0
GOBIN=${GOBIN} go install github.com/onsi/ginkgo/v2/ginkgo@v2.22.0
.PHONY: run-e2e
run-e2e: ginkgo

View File

@@ -39,10 +39,12 @@ import (
. "github.com/vmware-tanzu/velero/test/e2e/basic/resources-check"
. "github.com/vmware-tanzu/velero/test/e2e/bsl-mgmt"
. "github.com/vmware-tanzu/velero/test/e2e/migration"
. "github.com/vmware-tanzu/velero/test/e2e/nodeagentconfig"
. "github.com/vmware-tanzu/velero/test/e2e/parallelfilesdownload"
. "github.com/vmware-tanzu/velero/test/e2e/parallelfilesupload"
. "github.com/vmware-tanzu/velero/test/e2e/privilegesmgmt"
. "github.com/vmware-tanzu/velero/test/e2e/pv-backup"
. "github.com/vmware-tanzu/velero/test/e2e/repomaintenance"
. "github.com/vmware-tanzu/velero/test/e2e/resource-filtering"
. "github.com/vmware-tanzu/velero/test/e2e/resourcemodifiers"
. "github.com/vmware-tanzu/velero/test/e2e/resourcepolicies"
@@ -660,6 +662,24 @@ var _ = Describe(
ParallelFilesDownloadTest,
)
var _ = Describe(
"Test Repository Maintenance Job Configuration's global part",
Label("RepoMaintenance", "LongTime"),
GlobalRepoMaintenanceTest,
)
var _ = Describe(
"Test Repository Maintenance Job Configuration's specific part",
Label("RepoMaintenance", "LongTime"),
SpecificRepoMaintenanceTest,
)
var _ = Describe(
"Test node agent config's LoadAffinity part",
Label("NodeAgentConfig", "LoadAffinity"),
LoadAffinities,
)
func GetKubeConfigContext() error {
var err error
var tcDefault, tcStandby k8s.TestClient
@@ -740,6 +760,12 @@ var _ = BeforeSuite(func() {
).To(Succeed())
}
By("Install PriorityClasses for E2E.")
Expect(veleroutil.CreatePriorityClasses(
context.Background(),
test.VeleroCfg.ClientToInstallVelero.Kubebuilder,
)).To(Succeed())
if test.InstallVelero {
By("Install test resources before testing")
Expect(
@@ -764,6 +790,8 @@ var _ = AfterSuite(func() {
test.StorageClassName,
),
).To(Succeed())
By("Delete PriorityClasses created by E2E")
Expect(
k8s.DeleteStorageClass(
ctx,
@@ -783,6 +811,11 @@ var _ = AfterSuite(func() {
).To(Succeed())
}
Expect(veleroutil.DeletePriorityClasses(
ctx,
test.VeleroCfg.ClientToInstallVelero.Kubebuilder,
)).To(Succeed())
// If the Velero is installed during test, and the FailFast is not enabled,
// uninstall Velero. If not, either Velero is not installed, or kept it for debug on failure.
if test.InstallVelero && (testSuitePassed || !test.VeleroCfg.FailFast) {

View File

@@ -342,6 +342,12 @@ func (m *migrationE2E) Restore() error {
Expect(veleroutil.InstallStorageClasses(
m.VeleroCfg.StandbyClusterCloudProvider)).To(Succeed())
By("Install PriorityClass for E2E.")
Expect(veleroutil.CreatePriorityClasses(
context.Background(),
test.VeleroCfg.StandbyClient.Kubebuilder,
)).To(Succeed())
if strings.EqualFold(m.VeleroCfg.Features, test.FeatureCSI) &&
m.VeleroCfg.UseVolumeSnapshots {
By("Install VolumeSnapshotClass for E2E.")
@@ -447,6 +453,7 @@ func (m *migrationE2E) Clean() error {
Expect(k8sutil.KubectlConfigUseContext(
m.Ctx, m.VeleroCfg.StandbyClusterContext)).To(Succeed())
m.VeleroCfg.ClientToInstallVelero = m.VeleroCfg.StandbyClient
m.VeleroCfg.ClusterToInstallVelero = m.VeleroCfg.StandbyClusterName
@@ -459,7 +466,6 @@ func (m *migrationE2E) Clean() error {
fmt.Println("Fail to delete StorageClass1: ", err)
return
}
if err := k8sutil.DeleteStorageClass(
m.Ctx,
*m.VeleroCfg.ClientToInstallVelero,
@@ -469,6 +475,12 @@ func (m *migrationE2E) Clean() error {
return
}
By("Delete PriorityClasses created by E2E")
Expect(veleroutil.DeletePriorityClasses(
m.Ctx,
m.VeleroCfg.ClientToInstallVelero.Kubebuilder,
)).To(Succeed())
if strings.EqualFold(m.VeleroCfg.Features, test.FeatureCSI) &&
m.VeleroCfg.UseVolumeSnapshots {
By("Delete VolumeSnapshotClass created by E2E")

View File

@@ -0,0 +1,347 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package nodeagentconfig
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
. "github.com/onsi/gomega"
"github.com/pkg/errors"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
"github.com/vmware-tanzu/velero/pkg/builder"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util/kube"
velerokubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
k8sutil "github.com/vmware-tanzu/velero/test/util/k8s"
veleroutil "github.com/vmware-tanzu/velero/test/util/velero"
)
type NodeAgentConfigTestCase struct {
TestCase
nodeAgentConfigs velerotypes.NodeAgentConfigs
nodeAgentConfigMapName string
}
var LoadAffinities func() = TestFunc(&NodeAgentConfigTestCase{
nodeAgentConfigs: velerotypes.NodeAgentConfigs{
LoadAffinity: []*kube.LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"beta.kubernetes.io/arch": "amd64",
},
},
StorageClass: test.StorageClassName,
},
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"kubernetes.io/arch": "amd64",
},
},
StorageClass: test.StorageClassName2,
},
},
BackupPVCConfig: map[string]velerotypes.BackupPVC{
test.StorageClassName: {
StorageClass: test.StorageClassName2,
},
},
RestorePVCConfig: &velerotypes.RestorePVC{
IgnoreDelayBinding: true,
},
PriorityClassName: test.PriorityClassNameForDataMover,
},
nodeAgentConfigMapName: "node-agent-config",
})
func (n *NodeAgentConfigTestCase) Init() error {
// Generate a random number as UUIDgen and set the default timeout duration.
n.TestCase.Init()
// generate variable names based on CaseBaseName + UUIDgen
n.CaseBaseName = "node-agent-config-" + n.UUIDgen
n.BackupName = "backup-" + n.CaseBaseName
n.RestoreName = "restore-" + n.CaseBaseName
// generate namespaces by NamespacesTotal
n.NamespacesTotal = 1
n.NSIncluded = &[]string{}
for nsNum := 0; nsNum < n.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", n.CaseBaseName, nsNum)
*n.NSIncluded = append(*n.NSIncluded, createNSName)
}
// assign values to the inner variable for specific case
n.VeleroCfg.UseNodeAgent = true
n.VeleroCfg.UseNodeAgentWindows = true
// Need to verify the data mover pod content, so don't wait until backup completion.
n.BackupArgs = []string{
"create", "--namespace", n.VeleroCfg.VeleroNamespace, "backup", n.BackupName,
"--include-namespaces", strings.Join(*n.NSIncluded, ","),
"--snapshot-volumes=true", "--snapshot-move-data",
}
// Need to verify the data mover pod content, so don't wait until restore completion.
n.RestoreArgs = []string{
"create", "--namespace", n.VeleroCfg.VeleroNamespace, "restore", n.RestoreName,
"--from-backup", n.BackupName,
}
// Message output by ginkgo
n.TestMsg = &TestMSG{
Desc: "Validate Node Agent ConfigMap configuration",
FailedMSG: "Failed to apply and / or validate configuration in VGDP pod.",
Text: "Should be able to apply and validate configuration in VGDP pod.",
}
return nil
}
func (n *NodeAgentConfigTestCase) InstallVelero() error {
// Because this test needs a customized node-agent ConfigMap,
// Velero must be uninstalled and reinstalled.
fmt.Println("Start to uninstall Velero")
if err := veleroutil.VeleroUninstall(n.Ctx, n.VeleroCfg); err != nil {
fmt.Printf("Fail to uninstall Velero: %s\n", err.Error())
return err
}
result, err := json.Marshal(n.nodeAgentConfigs)
if err != nil {
return err
}
nodeAgentConfigMap := builder.ForConfigMap(n.VeleroCfg.VeleroNamespace, n.nodeAgentConfigMapName).
Data("node-agent-config", string(result)).Result()
n.VeleroCfg.NodeAgentConfigMap = n.nodeAgentConfigMapName
return veleroutil.PrepareVelero(
n.Ctx,
n.CaseBaseName,
n.VeleroCfg,
nodeAgentConfigMap,
)
}
func (n *NodeAgentConfigTestCase) CreateResources() error {
for _, ns := range *n.NSIncluded {
if err := k8sutil.CreateNamespace(n.Ctx, n.Client, ns); err != nil {
fmt.Printf("Fail to create ns %s: %s\n", ns, err.Error())
return err
}
pvc, err := k8sutil.CreatePVC(n.Client, ns, "volume-1", test.StorageClassName, nil)
if err != nil {
fmt.Printf("Fail to create PVC %s: %s\n", "volume-1", err.Error())
return err
}
vols := k8sutil.CreateVolumes(pvc.Name, []string{"volume-1"})
deployment := k8sutil.NewDeployment(
n.CaseBaseName,
(*n.NSIncluded)[0],
1,
map[string]string{"app": "test"},
n.VeleroCfg.ImageRegistryProxy,
n.VeleroCfg.WorkerOS,
).WithVolume(vols).Result()
deployment, err = k8sutil.CreateDeployment(n.Client.ClientGo, ns, deployment)
if err != nil {
fmt.Printf("Fail to create deployment %s: %s \n", deployment.Name, err.Error())
return errors.Wrap(err, fmt.Sprintf("failed to create deployment: %s", err.Error()))
}
if err := k8sutil.WaitForReadyDeployment(n.Client.ClientGo, deployment.Namespace, deployment.Name); err != nil {
fmt.Printf("Fail to create deployment %s: %s\n", n.CaseBaseName, err.Error())
return err
}
}
return nil
}
func (n *NodeAgentConfigTestCase) Backup() error {
if err := veleroutil.VeleroCmdExec(n.Ctx, n.VeleroCfg.VeleroCLI, n.BackupArgs); err != nil {
return err
}
backupPodList := new(corev1api.PodList)
wait.PollUntilContextTimeout(n.Ctx, 5*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
duList := new(velerov2alpha1api.DataUploadList)
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(
n.Ctx,
duList,
&client.ListOptions{Namespace: n.VeleroCfg.VeleroNamespace},
); err != nil {
fmt.Printf("Fail to list DataUpload: %s\n", err.Error())
return false, fmt.Errorf("Fail to list DataUpload: %w", err)
}
if len(duList.Items) == 0 {
fmt.Println("No DataUpload found yet. Continue polling.")
return false, nil
}
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(
n.Ctx,
backupPodList,
&client.ListOptions{
LabelSelector: labels.SelectorFromSet(map[string]string{
velerov1api.DataUploadLabel: duList.Items[0].Name,
}),
}); err != nil {
fmt.Printf("Fail to list backupPod %s\n", err.Error())
return false, errors.Wrap(err, "failed to list backup pods")
}
if len(backupPodList.Items) == 0 {
fmt.Println("No backupPod found yet. Continue polling.")
return false, nil
}
return true, nil
})
fmt.Println("Start to verify backupPod content.")
Expect(backupPodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In backup, only the second element of LoadAffinity array should be used.
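// BackupPVCConfig above maps test.StorageClassName to test.StorageClassName2 for the backup PVC,
// so the affinity entry scoped to StorageClassName2 (index 1) is the one expected on the backup pod.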
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[1:])
Expect(backupPodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
fmt.Println("backupPod content verification completed successfully.")
wait.PollUntilContextTimeout(n.Ctx, 5*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
backup := new(velerov1api.Backup)
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.Get(
n.Ctx,
client.ObjectKey{Namespace: n.VeleroCfg.VeleroNamespace, Name: n.BackupName},
backup,
); err != nil {
return false, err
}
if backup.Status.Phase != velerov1api.BackupPhaseCompleted &&
backup.Status.Phase != velerov1api.BackupPhaseFailed &&
backup.Status.Phase != velerov1api.BackupPhasePartiallyFailed {
fmt.Printf("backup status is %s. Continue polling until backup reach to a final state.\n", backup.Status.Phase)
return false, nil
}
return true, nil
})
return nil
}
func (n *NodeAgentConfigTestCase) Restore() error {
if err := veleroutil.VeleroCmdExec(n.Ctx, n.VeleroCfg.VeleroCLI, n.RestoreArgs); err != nil {
return err
}
restorePodList := new(corev1api.PodList)
wait.PollUntilContextTimeout(n.Ctx, 5*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
ddList := new(velerov2alpha1api.DataDownloadList)
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(
n.Ctx,
ddList,
&client.ListOptions{Namespace: n.VeleroCfg.VeleroNamespace},
); err != nil {
fmt.Printf("Fail to list DataDownload: %s\n", err.Error())
return false, fmt.Errorf("Fail to list DataDownload: %w", err)
}
if len(ddList.Items) == 0 {
fmt.Println("No DataDownload found yet. Continue polling.")
return false, nil
}
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(
n.Ctx,
restorePodList,
&client.ListOptions{
LabelSelector: labels.SelectorFromSet(map[string]string{
velerov1api.DataDownloadLabel: ddList.Items[0].Name,
}),
}); err != nil {
fmt.Printf("Fail to list restorePod %s\n", err.Error())
return false, errors.Wrap(err, "failed to list restore pods")
}
if len(restorePodList.Items) == 0 {
fmt.Println("No restorePod found yet. Continue polling.")
return false, nil
}
return true, nil
})
fmt.Println("Start to verify restorePod content.")
Expect(restorePodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In restore, only the first element of LoadAffinity array should be used.
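// The restored PVC keeps the original test.StorageClassName, so the affinity entry scoped
// to StorageClassName (index 0) is the one expected on the restore pod.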
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[:1])
Expect(restorePodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
fmt.Println("restorePod content verification completed successfully.")
wait.PollUntilContextTimeout(n.Ctx, 5*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
restore := new(velerov1api.Restore)
if err := n.VeleroCfg.ClientToInstallVelero.Kubebuilder.Get(
n.Ctx,
client.ObjectKey{Namespace: n.VeleroCfg.VeleroNamespace, Name: n.RestoreName},
restore,
); err != nil {
return false, err
}
if restore.Status.Phase != velerov1api.RestorePhaseCompleted &&
restore.Status.Phase != velerov1api.RestorePhaseFailed &&
restore.Status.Phase != velerov1api.RestorePhasePartiallyFailed {
fmt.Printf("restore status is %s. Continue polling until restore reach to a final state.\n", restore.Status.Phase)
return false, nil
}
return true, nil
})
return nil
}

View File
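For reference, the ConfigMap that NodeAgentConfigTestCase.InstallVelero hands to PrepareVelero above is just the JSON encoding of its NodeAgentConfigs, stored under the "node-agent-config" data key. Below is a minimal standalone sketch for inspecting that payload; it reuses the repo's own types, so the JSON field names come from their json tags rather than being assumed here, and the concrete values (storage class, priority class) are illustrative only.

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    velerotypes "github.com/vmware-tanzu/velero/pkg/types"
    "github.com/vmware-tanzu/velero/pkg/util/kube"
)

func main() {
    // Mirror a subset of the NodeAgentConfigs used by NodeAgentConfigTestCase.
    cfg := velerotypes.NodeAgentConfigs{
        LoadAffinity: []*kube.LoadAffinity{
            {
                NodeSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"kubernetes.io/arch": "amd64"},
                },
                StorageClass: "example-storage-class", // illustrative value
            },
        },
        RestorePVCConfig:  &velerotypes.RestorePVC{IgnoreDelayBinding: true},
        PriorityClassName: "data-mover", // mirrors test.PriorityClassNameForDataMover
    }

    payload, err := json.Marshal(cfg)
    if err != nil {
        panic(err)
    }
    // This string is what the test stores in the ConfigMap via builder.ForConfigMap(...).Data(...).
    fmt.Println(string(payload))
}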

@@ -0,0 +1,247 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package repomaintenance
import (
"encoding/json"
"fmt"
"strings"
"time"
. "github.com/onsi/gomega"
"github.com/pkg/errors"
batchv1api "k8s.io/api/batch/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util/kube"
velerokubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
k8sutil "github.com/vmware-tanzu/velero/test/util/k8s"
veleroutil "github.com/vmware-tanzu/velero/test/util/velero"
)
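// RepoMaintenanceTestCase verifies that the settings in a repository maintenance job ConfigMap
// (kept job history, pod resources and priority class) are applied to the maintenance jobs,
// covering both the "global" key and a repository-specific key. See the sketch after this file
// for the JSON payload the ConfigMap carries.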
type RepoMaintenanceTestCase struct {
TestCase
repoMaintenanceConfigMapName string
repoMaintenanceConfigKey string
jobConfigs velerotypes.JobConfigs
}
var keepJobNum = 1
var GlobalRepoMaintenanceTest func() = TestFunc(&RepoMaintenanceTestCase{
repoMaintenanceConfigKey: "global",
repoMaintenanceConfigMapName: "global",
jobConfigs: velerotypes.JobConfigs{
KeepLatestMaintenanceJobs: &keepJobNum,
PodResources: &velerokubeutil.PodResources{
CPURequest: "100m",
MemoryRequest: "100Mi",
CPULimit: "200m",
MemoryLimit: "200Mi",
},
PriorityClassName: test.PriorityClassNameForRepoMaintenance,
},
})
var SpecificRepoMaintenanceTest func() = TestFunc(&RepoMaintenanceTestCase{
repoMaintenanceConfigKey: "",
repoMaintenanceConfigMapName: "specific",
jobConfigs: velerotypes.JobConfigs{
KeepLatestMaintenanceJobs: &keepJobNum,
PodResources: &velerokubeutil.PodResources{
CPURequest: "100m",
MemoryRequest: "100Mi",
CPULimit: "200m",
MemoryLimit: "200Mi",
},
PriorityClassName: test.PriorityClassNameForRepoMaintenance,
},
})
func (r *RepoMaintenanceTestCase) Init() error {
// Generate a random number as UUIDgen and set the default timeout duration.
r.TestCase.Init()
// generate variable names based on CaseBaseName + UUIDgen
r.CaseBaseName = "repo-maintenance-" + r.UUIDgen
r.BackupName = "backup-" + r.CaseBaseName
r.RestoreName = "restore-" + r.CaseBaseName
// generate namespaces by NamespacesTotal
r.NamespacesTotal = 1
r.NSIncluded = &[]string{}
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
*r.NSIncluded = append(*r.NSIncluded, createNSName)
}
// If repoMaintenanceConfigKey is not set, this case tests the repository-specific configuration,
// so assemble the BackupRepository name as the key. The format is "volumeNamespace-bslName-uploaderName".
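// For example, with namespace "repo-maintenance-12345678-0", the default BSL and the kopia
// uploader, the key becomes "repo-maintenance-12345678-0-default-kopia".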
if r.repoMaintenanceConfigKey == "" {
r.repoMaintenanceConfigKey = (*r.NSIncluded)[0] + "-" + "default" + "-" + test.UploaderTypeKopia
}
// assign values to the inner variable for specific case
r.VeleroCfg.UseNodeAgent = true
r.VeleroCfg.UseNodeAgentWindows = true
r.BackupArgs = []string{
"create", "--namespace", r.VeleroCfg.VeleroNamespace, "backup", r.BackupName,
"--include-namespaces", strings.Join(*r.NSIncluded, ","),
"--snapshot-volumes=true", "--snapshot-move-data", "--wait",
}
// Message output by ginkgo
r.TestMsg = &TestMSG{
Desc: "Validate Repository Maintenance Job configuration",
FailedMSG: "Failed to apply and / or validate configuration in repository maintenance jobs.",
Text: "Should be able to apply and validate configuration in repository maintenance jobs.",
}
return nil
}
func (r *RepoMaintenanceTestCase) InstallVelero() error {
// Because this test needs a customized repository maintenance ConfigMap,
// Velero must be uninstalled and reinstalled.
fmt.Println("Start to uninstall Velero")
if err := veleroutil.VeleroUninstall(r.Ctx, r.VeleroCfg); err != nil {
fmt.Printf("Fail to uninstall Velero: %s\n", err.Error())
return err
}
result, err := json.Marshal(r.jobConfigs)
if err != nil {
return err
}
repoMaintenanceConfig := builder.ForConfigMap(r.VeleroCfg.VeleroNamespace, r.repoMaintenanceConfigMapName).
Data(r.repoMaintenanceConfigKey, string(result)).Result()
r.VeleroCfg.RepoMaintenanceJobConfigMap = r.repoMaintenanceConfigMapName
return veleroutil.PrepareVelero(
r.Ctx,
r.CaseBaseName,
r.VeleroCfg,
repoMaintenanceConfig,
)
}
func (r *RepoMaintenanceTestCase) CreateResources() error {
for _, ns := range *r.NSIncluded {
if err := k8sutil.CreateNamespace(r.Ctx, r.Client, ns); err != nil {
fmt.Printf("Fail to create ns %s: %s\n", ns, err.Error())
return err
}
pvc, err := k8sutil.CreatePVC(r.Client, ns, "volume-1", test.StorageClassName, nil)
if err != nil {
fmt.Printf("Fail to create PVC %s: %s\n", "volume-1", err.Error())
return err
}
vols := k8sutil.CreateVolumes(pvc.Name, []string{"volume-1"})
deployment := k8sutil.NewDeployment(
r.CaseBaseName,
(*r.NSIncluded)[0],
1,
map[string]string{"app": "test"},
r.VeleroCfg.ImageRegistryProxy,
r.VeleroCfg.WorkerOS,
).WithVolume(vols).Result()
deployment, err = k8sutil.CreateDeployment(r.Client.ClientGo, ns, deployment)
if err != nil {
fmt.Printf("Fail to create deployment %s: %s \n", deployment.Name, err.Error())
return errors.Wrap(err, fmt.Sprintf("failed to create deployment: %s", err.Error()))
}
if err := k8sutil.WaitForReadyDeployment(r.Client.ClientGo, deployment.Namespace, deployment.Name); err != nil {
fmt.Printf("Fail to create deployment %s: %s\n", r.CaseBaseName, err.Error())
return err
}
}
return nil
}
func (r *RepoMaintenanceTestCase) Verify() error {
// Reduce the MaintenanceFrequency to 1 minute.
backupRepositoryList := new(velerov1api.BackupRepositoryList)
if err := r.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(
r.Ctx,
backupRepositoryList,
&client.ListOptions{
Namespace: r.VeleroCfg.Namespace,
LabelSelector: labels.SelectorFromSet(map[string]string{velerov1api.VolumeNamespaceLabel: (*r.NSIncluded)[0]}),
},
); err != nil {
return err
}
if len(backupRepositoryList.Items) <= 0 {
return fmt.Errorf("fail list BackupRepository. no item is returned")
}
backupRepository := backupRepositoryList.Items[0]
updated := backupRepository.DeepCopy()
updated.Spec.MaintenanceFrequency = metav1.Duration{Duration: time.Minute}
if err := r.VeleroCfg.ClientToInstallVelero.Kubebuilder.Patch(r.Ctx, updated, client.MergeFrom(&backupRepository)); err != nil {
fmt.Printf("failed to patch BackupRepository %q: %s", backupRepository.GetName(), err.Error())
return err
}
// The minimal time unit of Repository Maintenance is 5 minutes.
// Wait for more than one cycle to make sure the result is valid.
time.Sleep(6 * time.Minute)
jobList := new(batchv1api.JobList)
if err := r.VeleroCfg.ClientToInstallVelero.Kubebuilder.List(r.Ctx, jobList, &client.ListOptions{
Namespace: r.VeleroCfg.Namespace,
LabelSelector: labels.SelectorFromSet(map[string]string{"velero.io/repo-name": backupRepository.Name}),
}); err != nil {
return err
}
resources, err := kube.ParseResourceRequirements(
r.jobConfigs.PodResources.CPURequest,
r.jobConfigs.PodResources.MemoryRequest,
r.jobConfigs.PodResources.CPULimit,
r.jobConfigs.PodResources.MemoryLimit,
)
if err != nil {
return errors.Wrap(err, "failed to parse resource requirements for maintenance job")
}
Expect(jobList.Items).To(HaveLen(*r.jobConfigs.KeepLatestMaintenanceJobs))
Expect(jobList.Items[0].Spec.Template.Spec.Containers[0].Resources).To(Equal(resources))
Expect(jobList.Items[0].Spec.Template.Spec.PriorityClassName).To(Equal(r.jobConfigs.PriorityClassName))
return nil
}

View File
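Similarly, for the repository maintenance cases the installer receives a ConfigMap whose data key is either "global" or the repository-specific name assembled in Init above, and whose value is the JSON encoding of the JobConfigs. A minimal standalone sketch of that payload follows; it reuses the repo's types, and the literal priority class string simply mirrors test.PriorityClassNameForRepoMaintenance for illustration.

package main

import (
    "encoding/json"
    "fmt"

    velerotypes "github.com/vmware-tanzu/velero/pkg/types"
    velerokubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
)

func main() {
    keep := 1
    cfg := velerotypes.JobConfigs{
        KeepLatestMaintenanceJobs: &keep,
        PodResources: &velerokubeutil.PodResources{
            CPURequest:    "100m",
            MemoryRequest: "100Mi",
            CPULimit:      "200m",
            MemoryLimit:   "200Mi",
        },
        PriorityClassName: "repo-maintenance", // mirrors test.PriorityClassNameForRepoMaintenance
    }

    payload, err := json.Marshal(cfg)
    if err != nil {
        panic(err)
    }
    // This string is what the test stores under the "global" (or repo-specific) ConfigMap key.
    fmt.Println(string(payload))
}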

@@ -41,6 +41,7 @@ depends on your test patterns.
*/
type VeleroBackupRestoreTest interface {
Init() error
InstallVelero() error
CreateResources() error
Backup() error
Destroy() error
@@ -109,6 +110,10 @@ func (t *TestCase) GenerateUUID() string {
return fmt.Sprintf("%08d", rand.IntN(100000000))
}
func (t *TestCase) InstallVelero() error {
return PrepareVelero(context.Background(), t.GetTestCase().CaseBaseName, t.GetTestCase().VeleroCfg)
}
func (t *TestCase) CreateResources() error {
return nil
}
@@ -221,7 +226,8 @@ func RunTestCase(test VeleroBackupRestoreTest) error {
fmt.Printf("Running test case %s %s\n", test.GetTestMsg().Desc, time.Now().Format("2006-01-02 15:04:05"))
if InstallVelero {
Expect(PrepareVelero(context.Background(), test.GetTestCase().CaseBaseName, test.GetTestCase().VeleroCfg)).To(Succeed())
fmt.Printf("Install Velero for test case %s: %s", test.GetTestCase().CaseBaseName, time.Now().Format("2006-01-02 15:04:05"))
Expect(test.InstallVelero()).To(Succeed())
}
defer test.Clean()

View File

@@ -57,6 +57,11 @@ const (
BackupRepositoryConfigName = "backup-repository-config"
)
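// Names of the PriorityClasses created by the E2E suite and referenced by the node-agent
// and repository maintenance configuration test cases.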
const (
PriorityClassNameForDataMover = "data-mover"
PriorityClassNameForRepoMaintenance = "repo-maintenance"
)
var PublicCloudProviders = []string{AWS, Azure, GCP, Vsphere}
var LocalCloudProviders = []string{Kind, VanillaZFS}
var CloudProviders = append(PublicCloudProviders, LocalCloudProviders...)

View File

@@ -36,6 +36,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/wait"
clientset "k8s.io/client-go/kubernetes"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/cmd/cli/install"
@@ -56,7 +57,17 @@ type installOptions struct {
WorkerOS string
}
func VeleroInstall(ctx context.Context, veleroCfg *test.VeleroConfig, isStandbyCluster bool) error {
/*
VeleroInstall installs Velero for the E2E tests.
params:
ctx: The context
veleroCfg: Velero E2E case configuration
isStandbyCluster: Whether Velero is installed on the standby cluster
objects: Additional objects to create in the namespace where Velero is installed, e.g. ConfigMaps
*/
func VeleroInstall(ctx context.Context, veleroCfg *test.VeleroConfig, isStandbyCluster bool, objects ...client.Object) error {
fmt.Printf("Velero install %s\n", time.Now().Format("2006-01-02 15:04:05"))
// veleroCfg struct including a set of BSL params and a set of additional BSL params,
@@ -152,6 +163,15 @@ func VeleroInstall(ctx context.Context, veleroCfg *test.VeleroConfig, isStandbyC
veleroCfg.VeleroNamespace,
)
}
veleroCfg.BackupRepoConfigMap = test.BackupRepositoryConfigName
// Create the passed-in objects in the namespace where Velero is installed
for _, obj := range objects {
if err := veleroCfg.ClientToInstallVelero.Kubebuilder.Create(ctx, obj); err != nil {
fmt.Printf("fail to create object %s in namespace %s: %s\n", obj.GetName(), obj.GetNamespace(), err.Error())
return fmt.Errorf("fail to create object %s in namespace %s: %w", obj.GetName(), obj.GetNamespace(), err)
}
}
// For AWS IRSA credential test, AWS IAM service account is required, so if ServiceAccountName and EKSPolicyARN
// are both provided, we assume IRSA test is running, otherwise skip this IAM service account creation part.
@@ -635,7 +655,7 @@ func patchResources(resources *unstructured.UnstructuredList, namespace string,
APIVersion: corev1api.SchemeGroupVersion.String(),
},
ObjectMeta: metav1.ObjectMeta{
Name: "restic-restore-action-config",
Name: "fs-restore-action-config",
Namespace: namespace,
Labels: map[string]string{
"velero.io/plugin-config": "",
@@ -652,7 +672,7 @@ func patchResources(resources *unstructured.UnstructuredList, namespace string,
return errors.Wrapf(err, "failed to convert restore action config to unstructure")
}
resources.Items = append(resources.Items, un)
fmt.Printf("the restic restore helper image is set by the configmap %q \n", "restic-restore-action-config")
fmt.Printf("the restic restore helper image is set by the configmap %q \n", "fs-restore-action-config")
}
return nil
@@ -790,7 +810,7 @@ func CheckBSL(ctx context.Context, ns string, bslName string) error {
return err
}
func PrepareVelero(ctx context.Context, caseName string, veleroCfg test.VeleroConfig) error {
func PrepareVelero(ctx context.Context, caseName string, veleroCfg test.VeleroConfig, objects ...client.Object) error {
ready, err := IsVeleroReady(context.Background(), &veleroCfg)
if err != nil {
fmt.Printf("error in checking velero status with %v", err)
@@ -804,7 +824,7 @@ func PrepareVelero(ctx context.Context, caseName string, veleroCfg test.VeleroCo
return nil
}
fmt.Printf("need to install velero for case %s \n", caseName)
return VeleroInstall(context.Background(), &veleroCfg, false)
return VeleroInstall(context.Background(), &veleroCfg, false, objects...)
}
func VeleroUninstall(ctx context.Context, veleroCfg test.VeleroConfig) error {

View File

@@ -38,6 +38,8 @@ import (
"github.com/pkg/errors"
"golang.org/x/mod/semver"
schedulingv1api "k8s.io/api/scheduling/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
ver "k8s.io/apimachinery/pkg/util/version"
"k8s.io/apimachinery/pkg/util/wait"
@@ -45,9 +47,11 @@ import (
"github.com/vmware-tanzu/velero/internal/volume"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
cliinstall "github.com/vmware-tanzu/velero/pkg/cmd/cli/install"
"github.com/vmware-tanzu/velero/pkg/cmd/util/flag"
veleroexec "github.com/vmware-tanzu/velero/pkg/util/exec"
"github.com/vmware-tanzu/velero/test"
. "github.com/vmware-tanzu/velero/test"
common "github.com/vmware-tanzu/velero/test/util/common"
. "github.com/vmware-tanzu/velero/test/util/k8s"
@@ -274,6 +278,9 @@ func getProviderVeleroInstallOptions(veleroCfg *VeleroConfig,
io.ItemBlockWorkerCount = veleroCfg.ItemBlockWorkerCount
io.ServerPriorityClassName = veleroCfg.ServerPriorityClassName
io.NodeAgentPriorityClassName = veleroCfg.NodeAgentPriorityClassName
io.RepoMaintenanceJobConfigMap = veleroCfg.RepoMaintenanceJobConfigMap
io.BackupRepoConfigMap = veleroCfg.BackupRepoConfigMap
io.NodeAgentConfigMap = veleroCfg.NodeAgentConfigMap
return io, nil
}
@@ -1812,3 +1819,43 @@ func KubectlGetAllDeleteBackupRequest(ctx context.Context, backupName, veleroNam
return common.GetListByCmdPipes(ctx, cmds)
}
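// CreatePriorityClasses creates the PriorityClasses consumed by the data mover and
// repository maintenance test cases. client.Create fails if a class already exists,
// so callers are expected to pair this with DeletePriorityClasses during cleanup.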
func CreatePriorityClasses(ctx context.Context, client kbclient.Client) error {
dataMoverPriorityClass := builder.ForPriorityClass(test.PriorityClassNameForDataMover).
Value(90000).PreemptionPolicy("Never").Result()
if err := client.Create(ctx, dataMoverPriorityClass); err != nil {
fmt.Printf("Fail to create PriorityClass %s: %s\n", test.PriorityClassNameForDataMover, err.Error())
return fmt.Errorf("fail to create PriorityClass %s: %w", test.PriorityClassNameForDataMover, err)
}
repoMaintenancePriorityClass := builder.ForPriorityClass(test.PriorityClassNameForRepoMaintenance).
Value(80000).PreemptionPolicy("Never").Result()
if err := client.Create(ctx, repoMaintenancePriorityClass); err != nil {
fmt.Printf("Fail to create PriorityClass %s: %s\n", test.PriorityClassNameForRepoMaintenance, err.Error())
return fmt.Errorf("fail to create PriorityClass %s: %w", test.PriorityClassNameForRepoMaintenance, err)
}
return nil
}
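// DeletePriorityClasses removes the PriorityClasses created by CreatePriorityClasses.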
func DeletePriorityClasses(ctx context.Context, client kbclient.Client) error {
priorityClassDataMover := &schedulingv1api.PriorityClass{
ObjectMeta: metav1.ObjectMeta{
Name: test.PriorityClassNameForDataMover,
},
}
if err := client.Delete(ctx, priorityClassDataMover); err != nil {
return err
}
priorityClassRepoMaintenance := &schedulingv1api.PriorityClass{
ObjectMeta: metav1.ObjectMeta{
Name: test.PriorityClassNameForRepoMaintenance,
},
}
if err := client.Delete(ctx, priorityClassRepoMaintenance); err != nil {
return err
}
return nil
}