Compare commits

...

66 Commits

Author SHA1 Message Date
Wenkai Yin(尹文开)
bb9a94bebe Merge pull request #9620 from vmware-tanzu/jxun/main/fix_NodeAgentConfig_e2e_error
[main][cherry-pick] Compare affinity by string instead of exactly same compare.
2026-03-19 16:45:23 +08:00
Xun Jiang/Bruce Jiang
74401b20b0 Merge pull request #9630 from vmware-tanzu/dependabot/go_modules/google.golang.org/grpc-1.79.3
Bump google.golang.org/grpc from 1.77.0 to 1.79.3
2026-03-19 14:02:57 +08:00
dependabot[bot]
417d3d2562 Bump google.golang.org/grpc from 1.77.0 to 1.79.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.77.0 to 1.79.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.77.0...v1.79.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.79.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-19 05:05:16 +00:00
Xun Jiang/Bruce Jiang
48e66b1790 Merge pull request #9564 from hollycai05/add-e2e-tests-for-PR9255
Add e2e test case for PR 9255
2026-03-17 16:18:19 +08:00
Xun Jiang
29a9f80f10 Compare affinity by string instead of exactly same compare.
Since 1.18.1, Velero adds default affinity settings to the backup/restore pod,
so we can't compare the whole affinity directly;
instead we verify that the expected affinity is contained in the pod's affinity.
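The containment idea can be sketched as below. This is a minimal illustration, not the actual Velero code: NodeSelectorTerm is a simplified stand-in for the Kubernetes API type, and containsTerms is a hypothetical helper that compares terms by their string (JSON) form instead of comparing the whole affinity for exact equality.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeSelectorTerm is a simplified stand-in for the Kubernetes API type.
type NodeSelectorTerm struct {
	Key      string   `json:"key"`
	Operator string   `json:"operator"`
	Values   []string `json:"values"`
}

// containsTerms reports whether every expected term appears among the actual
// terms, comparing each term by its JSON string form. This tolerates extra
// default terms that Velero adds to the pod.
func containsTerms(actual, expected []NodeSelectorTerm) bool {
	seen := make(map[string]bool, len(actual))
	for _, t := range actual {
		b, _ := json.Marshal(t)
		seen[string(b)] = true
	}
	for _, t := range expected {
		b, _ := json.Marshal(t)
		if !seen[string(b)] {
			return false
		}
	}
	return true
}

func main() {
	actual := []NodeSelectorTerm{
		// default term added by Velero since 1.18.1
		{Key: "kubernetes.io/os", Operator: "In", Values: []string{"linux"}},
		{Key: "topology.kubernetes.io/zone", Operator: "In", Values: []string{"us-east-1a"}},
	}
	expected := []NodeSelectorTerm{
		{Key: "topology.kubernetes.io/zone", Operator: "In", Values: []string{"us-east-1a"}},
	}
	fmt.Println(containsTerms(actual, expected)) // true
}
```

An exact-equality compare of the two slices above would fail because of the extra default term; the containment check passes.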

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-16 10:49:50 +08:00
Xun Jiang/Bruce Jiang
66ac235e1f Merge pull request #9595 from vmware-tanzu/xj014661/main/disable_search_in_site
Disable Algolia docs search
2026-03-11 11:23:22 +08:00
Shubham Pampattiwar
afe7df17d4 Add itemOperationTimeout to Schedule API type docs (#9599)
The itemOperationTimeout field was missing from the Schedule API type
documentation even though it is supported in the Schedule CRD template.
This led users to believe the field was not available per-schedule.

Fixes #9598

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-03-10 16:12:47 -04:00
Shubham Pampattiwar
a31f4abcb3 Fix DBR stuck when CSI snapshot no longer exists in cloud provider (#9581)
* Fix DBR stuck when CSI snapshot no longer exists in cloud provider

During backup deletion, VolumeSnapshotContentDeleteItemAction creates a
new VSC with the snapshot handle from the backup and polls for readiness.
If the underlying snapshot no longer exists (e.g., deleted externally),
the CSI driver reports Status.Error but checkVSCReadiness() only checks
ReadyToUse, causing it to poll for the full 10-minute timeout instead of
failing fast. Additionally, the newly created VSC is never cleaned up on
failure, leaving orphaned resources in the cluster.

This commit:
- Adds Status.Error detection in checkVSCReadiness() to fail immediately
  on permanent CSI driver errors (e.g., InvalidSnapshot.NotFound)
- Cleans up the dangling VSC when readiness polling fails
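A minimal sketch of the fixed readiness check, using simplified stand-in types rather than the real snapshot.storage.k8s.io API structs that checkVSCReadiness() operates on:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the external-snapshotter VolumeSnapshotContent
// status; field names mirror the upstream API but these are not the real types.
type VSCError struct {
	Message string
}

type VSCStatus struct {
	ReadyToUse bool
	Error      *VSCError
}

type VolumeSnapshotContent struct {
	Status *VSCStatus
}

// checkVSCReadiness mirrors the fix: fail immediately when the CSI driver has
// reported a permanent error, instead of polling on ReadyToUse until the
// 10-minute timeout expires.
func checkVSCReadiness(vsc *VolumeSnapshotContent) (bool, error) {
	if vsc.Status == nil {
		return false, nil // status not populated yet; keep polling
	}
	if vsc.Status.Error != nil {
		// e.g. InvalidSnapshot.NotFound when the snapshot was deleted externally
		return false, errors.New("vsc reported error: " + vsc.Status.Error.Message)
	}
	return vsc.Status.ReadyToUse, nil
}

func main() {
	gone := &VolumeSnapshotContent{Status: &VSCStatus{Error: &VSCError{Message: "InvalidSnapshot.NotFound"}}}
	if _, err := checkVSCReadiness(gone); err != nil {
		fmt.Println("fail fast:", err)
	}
}
```

On that error path the caller can then delete the dangling VSC it created, rather than leaving it orphaned.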

Fixes #9579

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Add changelog for PR #9581

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix typo in pod_volume_test.go: colume -> volume

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-03-10 13:40:09 -04:00
Xun Jiang/Bruce Jiang
2145c57642 Merge pull request #9562 from hollycai05/add-e2e-test-for-PR9366
Add e2e test case for PR 9366
2026-03-10 17:28:23 +08:00
Xun Jiang
a9b3cfa062 Disable Algolia docs search.
Revert PR 6105.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-10 16:10:44 +08:00
Wenkai Yin(尹文开)
bca6afada7 Merge pull request #9590 from Lyndon-Li/set-latest-do-to-1.18
Issue 9586: set latest doc to 1.18
2026-03-09 17:27:23 +08:00
Lyndon-Li
d1cc303553 issue 9586: set latest doc to 1.18
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-09 15:41:13 +08:00
Xun Jiang/Bruce Jiang
befa61cee1 Merge pull request #9570 from H-M-Quang-Ngo/add-schedule-interval-metric
Add schedule_expected_interval_seconds metric
2026-03-09 15:28:59 +08:00
lyndon-li
245525c26b Merge pull request #9547 from blackpiglet/1.18_add_bia_skip_resource_logic
Add BIA skip resource logic
2026-03-06 12:28:05 +08:00
Xun Jiang/Bruce Jiang
55737b9cf1 Merge pull request #9574 from blackpiglet/xj014661/main/ephemeral_storage_config
Add ephemeral storage limit and request support for data mover and maintenance job
2026-03-05 22:43:16 +08:00
Xun Jiang
ffea850522 Add ephemeral storage limit and request support for data mover and maintenance job.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-05 14:22:53 +08:00
dongqingcc
d315bca32b add namespace wildcard test case for restore
Signed-off-by: dongqingcc <dongqingcc@vmware.com>
2026-03-05 13:46:21 +08:00
Quang
b3aff97684 Merge branch 'main' into add-schedule-interval-metric 2026-03-05 09:15:52 +11:00
testsabirweb
23a3c242fa Add test coverage and fix validation for MRAP ARN bucket names (#9554)
* Issue #9544: Add test coverage and fix validation for MRAP ARN bucket names

S3 Multi-Region Access Point (MRAP) ARNs have the format:
  arn:aws:s3::{account-id}:accesspoint/{mrap-alias}.mrap

These ARNs contain a '/' as part of the ARN path, which caused Velero's
BSL bucket validation to reject them with an error asking the user to
put the value in the Prefix field instead.

Fix the bucket name validation in objectBackupStoreGetter.Get() to
exempt ARNs (identified by the "arn:" prefix) from the slash check,
since slashes are a valid and required part of ARN syntax.

Add unit tests in object_store_mrap_test.go covering:
- A plain MRAP ARN as bucket name succeeds
- A MRAP ARN with a trailing slash is trimmed and accepted
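The exemption can be illustrated with a hedged sketch; validateBucket below is a hypothetical stand-in for the check inside objectBackupStoreGetter.Get(), not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// validateBucket sketches the fix: S3 ARNs (e.g. a Multi-Region Access Point
// ARN like arn:aws:s3::123456789012:accesspoint/myalias.mrap) legitimately
// contain '/', so they are exempt from the no-slash check that otherwise
// tells users to move path segments into the Prefix field.
func validateBucket(bucket string) error {
	// a trailing slash is trimmed and accepted
	bucket = strings.TrimSuffix(bucket, "/")
	if strings.HasPrefix(bucket, "arn:aws:s3:") {
		return nil // S3 ARN: slashes are valid ARN syntax
	}
	if strings.Contains(bucket, "/") {
		return fmt.Errorf("bucket %q contains '/': put the path in the Prefix field instead", bucket)
	}
	return nil
}

func main() {
	fmt.Println(validateBucket("arn:aws:s3::123456789012:accesspoint/myalias.mrap")) // <nil>
	fmt.Println(validateBucket("my-bucket/backups"))                                 // error
}
```

Note the narrow "arn:aws:s3:" prefix per the review feedback: ARNs from other AWS services remain subject to the slash check.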

Signed-off-by: Sabir Ali <testsabirweb@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>

* Address review comments: fix changelog filename and import grouping

Signed-off-by: Sabir Ali <testsabirweb@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>

* Restrict MRAP ARN bucket validation to arn:aws:s3: prefix

Per review, use HasPrefix(bucket, "arn:aws:s3:") instead of
HasPrefix(bucket, "arn:") so only S3 ARNs (e.g. MRAP) are exempt
from the slash check, not any ARN from other AWS services.

Signed-off-by: Sabir Ali <sabir.ali@spectrocloud.com>
Co-authored-by: Cursor <cursoragent@cursor.com>

* Move MRAP bucket tests into TestNewObjectBackupStoreGetter

Consolidate MRAP ARN test cases into the existing table in
object_store_test.go and remove object_store_mrap_test.go.

Signed-off-by: Sabir Ali <sabir.ali@spectrocloud.com>
Co-authored-by: Cursor <cursoragent@cursor.com>

---------

Signed-off-by: Sabir Ali <testsabirweb@gmail.com>
Signed-off-by: Sabir Ali <sabir.ali@spectrocloud.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-03-04 15:11:01 +00:00
Xun Jiang/Bruce Jiang
b7bc16f190 Merge pull request #9569 from vmware-tanzu/dependabot/go_modules/go.opentelemetry.io/otel/sdk-1.40.0
Bump go.opentelemetry.io/otel/sdk from 1.38.0 to 1.40.0
2026-03-04 23:00:11 +08:00
dongqingcc
bbec46f6ee Add e2e test case for PR 9366: Use hookIndex for recording multiple restore exec hooks.
Signed-off-by: dongqingcc <dongqingcc@vmware.com>
2026-03-03 17:53:11 +08:00
Quang
475050108b Merge branch 'main' into add-schedule-interval-metric 2026-03-03 01:00:32 +11:00
lyndon-li
b5f7cd92c7 Merge pull request #9571 from Lyndon-Li/fix-compile-error-for-windows
Fix compile error for Windows
2026-03-02 16:43:59 +08:00
Lyndon-Li
ab31b811ee fix compile error for Windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-02 15:11:54 +08:00
dependabot[bot]
19360622e7 Bump go.opentelemetry.io/otel/sdk from 1.38.0 to 1.40.0
Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.38.0 to 1.40.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.38.0...v1.40.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-version: 1.40.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-02 06:50:57 +00:00
lyndon-li
932d27541c Merge pull request #9561 from Lyndon-Li/uploader-flush-buffer
Issue 9460: Uploader flush buffer
2026-03-02 14:49:51 +08:00
Quang
b0642b3078 Merge branch 'main' into add-schedule-interval-metric 2026-03-02 15:23:53 +11:00
Lyndon-Li
9cada8fc11 issue 9460: flush buffer when uploader completes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-02 11:43:44 +08:00
Wenkai Yin(尹文开)
25d5fa1b88 Merge pull request #9560 from Lyndon-Li/selected-node-to-node-selector
Issue 9475: Selected node to node selector
2026-03-02 11:26:26 +08:00
Quang Ngo
1c08af8461 Add changelog for #9570
Signed-off-by: Quang Ngo <quang.ngo@canonical.com>
2026-03-02 10:49:14 +11:00
Quang Ngo
6c3d81a146 Add schedule_expected_interval_seconds metric
Add a new Prometheus gauge metric that exposes the expected interval
between consecutive scheduled backups. This enables dynamic,
per-schedule alerting thresholds.

Signed-off-by: Quang Ngo <quang.ngo@canonical.com>
2026-03-02 10:20:09 +11:00
Xun Jiang/Bruce Jiang
8f32696449 Merge branch 'main' into 1.18_add_bia_skip_resource_logic 2026-02-27 11:38:27 +08:00
Xun Jiang
3f15e9219f Remove the skipped item from the resource list when it's skipped by BIA.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-27 11:37:34 +08:00
dongqingcc
62aa70219b Add e2e test case for PR 9255
Signed-off-by: dongqingcc <dongqingcc@vmware.com>
2026-02-26 17:10:28 +08:00
Lyndon-Li
544b184d6c Merge branch 'main' into uploader-flush-buffer 2026-02-26 13:38:44 +08:00
Lyndon-Li
250c4db158 node-selector for selected node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-26 13:34:43 +08:00
Lyndon-Li
f0d81c56e2 Merge branch 'main' into selected-node-to-node-selector 2026-02-26 13:30:47 +08:00
lyndon-li
8b5559274d Merge pull request #9533 from Lyndon-Li/support-customized-host-os
Issue 9496: support customized host os
2026-02-26 12:00:02 +08:00
Lyndon-Li
7235180de4 Merge branch 'main' into support-customized-host-os 2026-02-24 15:40:56 +08:00
Tiger Kaovilai
ba5e7681ff rename malformed changelog file name (#9552)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2026-02-19 14:28:15 -05:00
lyndon-li
fc0a16d734 Merge pull request #9548 from Lyndon-Li/doc-for-1.18-2
Update doc link for 1.18
2026-02-13 18:02:40 +08:00
Xun Jiang
bcdee1b116 If a BIA returns an updated object carrying the SkipFromBackupAnnotation, treat the resource as skipped from the backup.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-13 17:42:46 +08:00
Lyndon-Li
2a696a4431 update doc link for 1.18
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-13 17:34:36 +08:00
Xun Jiang/Bruce Jiang
991bf1b000 Merge pull request #9545 from Lyndon-Li/add-upgrade-to-1.18-doc
Add upgrade-to-1.18 doc
2026-02-13 16:32:47 +08:00
Lyndon-Li
4d47471932 add upgrade-to-1.18 doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-13 16:20:53 +08:00
lyndon-li
0bf968d24d Merge pull request #9532 from Lyndon-Li/issue-fix-9343
Issue 9343: include PV topology to data mover pod affinities
2026-02-13 13:14:34 +08:00
Lyndon-Li
05c9a8d8f8 issue 9343: include PV topology to data mover pod affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-13 11:22:32 +08:00
Xun Jiang/Bruce Jiang
bc957a22b7 Merge pull request #9542 from blackpiglet/xj014661/main/cherry_pick_e2e_fixes
[main] cherry pick e2e fixes
2026-02-13 10:24:03 +08:00
Xun Jiang
7e3d66adc7 Fix test case issue and add UT.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-12 13:22:18 +08:00
Xun Jiang
710ebb9d92 Update the migration and upgrade test cases.
Modify Dockerfile to fix GitHub CI action error.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-12 13:20:34 +08:00
Joseph Antony Vaikath
1315399f35 Support all glob wildcard characters in namespace validation (#9502)
* Support all glob wildcard characters in namespace validation

Expand namespace validation to allow all valid glob pattern characters
(*, ?, {}, [], ,) by replacing them with valid characters during RFC 1123
validation. The actual glob pattern validation is handled separately by
the wildcard package.

Also add validation to reject unsupported characters (|, (), !) that are
not valid in glob patterns, and update terminology from "regex" to "glob"
for clarity since this implementation uses glob patterns, not regex.

Changes:
- Replace all glob wildcard characters in validateNamespaceName
- Add test coverage for valid glob patterns in includes/excludes
- Add test coverage for unsupported characters
- Reject exclamation mark (!) in wildcard patterns
- Clarify comments and error messages about glob vs regex
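A rough sketch of the substitution approach. validateNamespaceName here is a hypothetical simplification of the real function (which, per the later commits, also validates bracket syntax such as unclosed or empty brackets):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// RFC 1123 label: lowercase alphanumerics and '-', starting and ending with
// an alphanumeric.
var rfc1123 = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateNamespaceName rejects characters that are invalid even in glob
// patterns, then substitutes the supported wildcard characters (*, ?, [, ])
// with a harmless letter so the remainder can be checked against the
// RFC 1123 label rules. Actual glob matching is done elsewhere by the
// wildcard package.
func validateNamespaceName(ns string) error {
	if strings.ContainsAny(ns, "|()!{},") {
		return fmt.Errorf("namespace %q contains characters not supported in glob patterns", ns)
	}
	replacer := strings.NewReplacer("*", "x", "?", "x", "[", "x", "]", "x")
	if !rfc1123.MatchString(replacer.Replace(ns)) {
		return fmt.Errorf("namespace %q is not a valid RFC 1123 label or glob pattern", ns)
	}
	return nil
}

func main() {
	fmt.Println(validateNamespaceName("app-*"))     // <nil>
	fmt.Println(validateNamespaceName("app-{a,b}")) // error: brace expansion unsupported
}
```

This matches the final scope of the PR: *, ?, and [abc] classes are accepted, while brace expansion and other regex-like characters are explicitly rejected.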

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Changelog

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add documentation: glob patterns are now accepted

Signed-off-by: Joseph <jvaikath@redhat.com>

* Error message fix

Signed-off-by: Joseph <jvaikath@redhat.com>

* Remove negation glob char test

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add bracket pattern validation for namespace glob patterns

Extends wildcard validation to support square bracket patterns [] used in glob character classes. Validates bracket syntax including empty brackets, unclosed brackets, and unmatched brackets. Extracts ValidateNamespaceName as a public function to enable reuse in namespace validation logic.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Reduce scope to *, ?, [ and ]

Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix tests

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add namespace glob patterns documentation page

Adds dedicated documentation explaining supported glob patterns
for namespace include/exclude filtering to help users understand
the wildcard syntax.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix build-image Dockerfile envtest download

Replace inaccessible go.kubebuilder.io URL with setup-envtest and update envtest version to 1.33.0 to match Kubernetes v0.33.3 dependencies.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* kubebuilder binaries mv

Signed-off-by: Joseph <jvaikath@redhat.com>

* Reject brace patterns and update documentation

Add {, }, and , to unsupported characters list to explicitly reject
brace expansion patterns. Remove { from wildcard detection since these
patterns are not supported in the 1.18 release.

Update all documentation to show supported patterns inline (*, ?, [abc])
with clickable links to the detailed namespace-glob-patterns page.
Simplify YAML comments by removing non-clickable URLs.

Update tests to expect errors when brace patterns are used.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Document brace expansion as unsupported

Add {} and , to the unsupported patterns section to clarify that
brace expansion patterns like {a,b,c} are not supported.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Update tests to expect brace pattern rejection

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

---------

Signed-off-by: Joseph <jvaikath@redhat.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-11 12:43:55 -05:00
lyndon-li
7af688fbf5 Merge pull request #9508 from kaovilai/9507
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)
2026-02-10 17:53:46 +08:00
Lyndon-Li
41fa774844 support custom os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-10 13:35:07 +08:00
Lyndon-Li
5121417457 Merge branch 'main' into support-customized-host-os 2026-02-09 18:36:55 +08:00
Lyndon-Li
ece04e6e39 Merge branch 'main' into issue-fix-9343 2026-02-09 18:34:14 +08:00
Tiger Kaovilai
71ddeefcd6 Fix VolumePolicy PVC phase condition filter for unbound PVCs
Use typed error approach: Make GetPVForPVC return ErrPVNotFoundForPVC
when PV is not expected to be found (unbound PVC), then use errors.Is
to check for this error type. When a matching policy exists (e.g.,
pvcPhase: [Pending, Lost] with action: skip), apply the action without
error. When no policy matches, return the original error to preserve
default behavior.

Changes:
- Add ErrPVNotFoundForPVC sentinel error to pvc_pv.go
- Update ShouldPerformSnapshot to handle unbound PVCs with policies
- Update ShouldPerformFSBackup to handle unbound PVCs with policies
- Update item_backupper.go to handle Lost PVCs in tracking functions
- Remove checkPVCOnlySkip helper (no longer needed)
- Update tests to reflect new behavior

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-09 01:03:45 -05:00
Xun Jiang/Bruce Jiang
e159992f48 Merge pull request #9529 from Lyndon-Li/move-implemented-design-for-1.18
Move implemented design for 1.18
2026-02-09 10:32:08 +08:00
Lyndon-Li
48b14194df move implemented design for 1.18
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-06 18:46:41 +08:00
Lyndon-Li
18c32ed29c support customized host os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-27 15:23:25 +08:00
Lyndon-Li
598c8c528b support customized host os - use affinity for host os selection
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-27 14:49:55 +08:00
Lyndon-Li
8f9beb04f0 support customized host os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-27 14:37:38 +08:00
Lyndon-Li
bb518e6d89 replace nodeName with node selector
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-26 13:58:29 +08:00
Lyndon-Li
89c5182c3c flush volume after restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-26 13:17:44 +08:00
Lyndon-Li
d17435542e Merge branch 'main' into uploader-flush-buffer 2026-01-26 11:15:14 +08:00
Lyndon-Li
e3b501d0d9 issue 9343: include PV topology to data mover pod affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-01-23 15:45:43 +08:00
Lyndon-Li
060b3364f2 uploader flush buffer for restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-12-29 18:19:23 +08:00
98 changed files with 3225 additions and 745 deletions

View File

@@ -16,7 +16,7 @@ https://velero.io/docs/v1.18/upgrade-to-1.18/
#### Concurrent backup
In v1.18, Velero can process multiple backups concurrently. This is a significant usability improvement, especially in multi-tenant or multi-user setups: backups submitted by different users can run simultaneously without interfering with each other.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/concurrent-backup-processing.md for more details.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/concurrent-backup-processing.md for more details.
#### Cache volume for data movers
In v1.18, Velero allows users to configure cache volumes for data mover pods during restore for CSI snapshot data movement and fs-backup. This brings the following benefits:
@@ -24,7 +24,7 @@ In v1.18, Velero allows users to configure cache volumes for data mover pods dur
- Solves the problem of multiple data mover pods failing to run concurrently on one node when the node's ephemeral disk is limited
- Together with the backup repository's cache limit configuration, an appropriately sized cache volume helps improve restore throughput
Check design https://github.com/vmware-tanzu/velero/blob/main/design/backup-repo-cache-volume.md for more details.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/backup-repo-cache-volume.md for more details.
#### Incremental size for data movers
In v1.18, Velero allows users to observe the incremental size of data mover backups for CSI snapshot data movement and fs-backup, so they can see the data reduction achieved by incremental backups.

@@ -0,0 +1 @@
Support all glob wildcard characters in namespace validation

@@ -0,0 +1 @@
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)

@@ -0,0 +1 @@
Fix issue #9343, include PV topology to data mover pod affinities

@@ -0,0 +1 @@
Fix issue #9496, support customized host os

@@ -0,0 +1 @@
If a BIA returns an updatedObj with the SkipFromBackupAnnotation, treat it as skipping the resource from the backup.

@@ -0,0 +1 @@
Issue #9544: Add test coverage for S3 bucket name in MRAP ARN notation and fix bucket validation to accept ARN format

@@ -0,0 +1 @@
Fix issue #9475, use node-selector instead of nodeName for generic restore

@@ -0,0 +1 @@
Fix issue #9460, flush buffer before data mover completes

@@ -0,0 +1 @@
Add schedule_expected_interval_seconds metric for dynamic backup alerting thresholds (#9559)

@@ -0,0 +1 @@
Add ephemeral storage limit and request support for data mover and maintenance job

@@ -0,0 +1 @@
Fix DBR stuck when CSI snapshot no longer exists in cloud provider

go.mod (42 changed lines)
@@ -42,10 +42,11 @@ require (
github.com/vmware-tanzu/crash-diagnostics v0.3.7
go.uber.org/zap v1.27.1
golang.org/x/mod v0.30.0
golang.org/x/oauth2 v0.33.0
golang.org/x/text v0.31.0
golang.org/x/oauth2 v0.34.0
golang.org/x/sys v0.40.0
golang.org/x/text v0.32.0
google.golang.org/api v0.256.0
google.golang.org/grpc v1.77.0
google.golang.org/grpc v1.79.3
google.golang.org/protobuf v1.36.10
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.33.3
@@ -63,7 +64,7 @@ require (
)
require (
cel.dev/expr v0.24.0 // indirect
cel.dev/expr v0.25.1 // indirect
cloud.google.com/go v0.121.6 // indirect
cloud.google.com/go/auth v0.17.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
@@ -93,13 +94,13 @@ require (
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/edsrzf/mmap-go v1.2.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.36.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.3.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
@@ -168,29 +169,28 @@ require (
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/blake3 v0.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.39.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.opentelemetry.io/otel v1.40.0 // indirect
go.opentelemetry.io/otel/metric v1.40.0 // indirect
go.opentelemetry.io/otel/sdk v1.40.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.40.0 // indirect
go.opentelemetry.io/otel/trace v1.40.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/term v0.38.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.38.0 // indirect
golang.org/x/tools v0.39.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect

go.sum (88 changed lines)
@@ -1,7 +1,7 @@
al.essio.dev/pkg/shellescape v1.5.1 h1:86HrALUujYS/h+GtqoB26SBEdkWfmMI6FubjXlsXyho=
al.essio.dev/pkg/shellescape v1.5.1/go.mod h1:6sIqp7X2P6mThCQ7twERpZTuigpr6KbZWtls1U8I890=
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=
cel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -189,8 +189,8 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y+bSQAYZnetRJ70VMVKm5CKI0=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 h1:6xNmx7iTtyBRev0+D/Tv1FZd4SCg8axKApyNyRsAt/w=
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -227,15 +227,15 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329 h1:K+fnvUM0VZ7ZFJf0n4L/BRlnsb9pL/GuDG6FqaH+PwM=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329/go.mod h1:Alz8LEClvR7xKsrq3qzoc4N0guvVNSS8KmSChGYr9hs=
github.com/envoyproxy/go-control-plane/envoy v1.35.0 h1:ixjkELDE+ru6idPxcHLj8LBVc2bFP7iBytj353BoHUo=
github.com/envoyproxy/go-control-plane/envoy v1.35.0/go.mod h1:09qwbGVuSWWAyN5t/b3iyVfz5+z8QWGrzkoqm/8SbEs=
github.com/envoyproxy/go-control-plane v0.14.0 h1:hbG2kr4RuFj222B6+7T83thSPqLjwBIfQawTkC++2HA=
github.com/envoyproxy/go-control-plane v0.14.0/go.mod h1:NcS5X47pLl/hfqxU70yPwL9ZMkUlwlKxtAohpi2wBEU=
github.com/envoyproxy/go-control-plane/envoy v1.36.0 h1:yg/JjO5E7ubRyKX3m07GF3reDNEnfOboJ0QySbH736g=
github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/envoyproxy/protoc-gen-validate v1.3.0 h1:TvGH1wof4H33rezVKWSpqKz5NXWg5VPuZ0uONDT6eb4=
github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@@ -742,24 +742,24 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 h1:ZoYbqX7OaA/TAikspPl3ozPI6iY6LiIY9I8cUfm+pJs=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0/go.mod h1:SU+iU7nu5ud4oCb3LQOhIZ3nRLj6FNVrKgtflbaf2ts=
go.opentelemetry.io/contrib/detectors/gcp v1.39.0 h1:kWRNZMsfBHZ+uHjiH4y7Etn2FK26LAGkNFw7RHv1DhE=
go.opentelemetry.io/contrib/detectors/gcp v1.39.0/go.mod h1:t/OGqzHBa5v6RHZwrDBJ2OirWc+4q/w2fTbLZwAKjTk=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.starlark.net v0.0.0-20201006213952-227f4aabceb5/go.mod h1:f0znQkUKRrkk36XxWbGjMqQM8wGv/xHBVE2qc3B5oFU=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
@@ -790,8 +790,8 @@ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -876,8 +876,8 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -891,8 +891,8 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -904,8 +904,8 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -969,14 +969,14 @@ golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -986,8 +986,8 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1047,8 +1047,8 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1134,10 +1134,10 @@ google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaE
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 h1:mepRgnBZa07I4TRuomDE4sTIYieg/osKmzIf4USdWS4=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8/go.mod h1:fDMmzKV90WSg1NbozdqrE64fkuTv6mlq2zxo9ad+3yo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -1159,8 +1159,8 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=

@@ -21,9 +21,11 @@ ENV GO111MODULE=on
ENV GOPROXY=${GOPROXY}
# kubebuilder test bundle is separated from kubebuilder. Need to setup it for CI test.
RUN curl -sSLo envtest-bins.tar.gz https://go.kubebuilder.io/test-tools/1.22.1/linux/$(go env GOARCH) && \
mkdir /usr/local/kubebuilder && \
tar -C /usr/local/kubebuilder --strip-components=1 -zvxf envtest-bins.tar.gz
# Using setup-envtest to download envtest binaries
RUN go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest && \
mkdir -p /usr/local/kubebuilder/bin && \
ENVTEST_ASSETS_DIR=$(setup-envtest use 1.33.0 --bin-dir /usr/local/kubebuilder/bin -p path) && \
cp -r ${ENVTEST_ASSETS_DIR}/* /usr/local/kubebuilder/bin/
RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.2.0/kubebuilder_linux_$(go env GOARCH) && \
mv kubebuilder_linux_$(go env GOARCH) /usr/local/kubebuilder/bin/kubebuilder && \

@@ -137,6 +137,10 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
return checkVSCReadiness(ctx, &snapCont, p.crClient)
},
); err != nil {
// Clean up the VSC we created since it can't become ready
if deleteErr := p.crClient.Delete(context.TODO(), &snapCont); deleteErr != nil && !apierrors.IsNotFound(deleteErr) {
p.log.WithError(deleteErr).Errorf("Failed to clean up VolumeSnapshotContent %s", snapCont.Name)
}
return errors.Wrapf(err, "failed to wait for VolumeSnapshotContent %s to become ready", snapCont.Name)
}
@@ -167,6 +171,13 @@ var checkVSCReadiness = func(
return true, nil
}
// Fail fast on permanent CSI driver errors (e.g., InvalidSnapshot.NotFound)
if tmpVSC.Status != nil && tmpVSC.Status.Error != nil && tmpVSC.Status.Error.Message != nil {
return false, errors.Errorf(
"VolumeSnapshotContent %s has error: %s", vsc.Name, *tmpVSC.Status.Error.Message,
)
}
return false, nil
}

@@ -94,6 +94,19 @@ func TestVSCExecute(t *testing.T) {
return false, errors.Errorf("test error case")
},
},
{
name: "Error case with CSI error, dangling VSC should be cleaned up",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: true,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return false, errors.Errorf("VolumeSnapshotContent %s has error: InvalidSnapshot.NotFound", vsc.Name)
},
},
}
for _, test := range tests {
@@ -190,6 +203,24 @@ func TestCheckVSCReadiness(t *testing.T) {
expectErr: false,
ready: false,
},
{
name: "VSC has error from CSI driver",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
Status: &snapshotv1api.VolumeSnapshotContentStatus{
ReadyToUse: boolPtr(false),
Error: &snapshotv1api.VolumeSnapshotError{
Message: stringPtr("InvalidSnapshot.NotFound: The snapshot 'snap-0abc123' does not exist."),
},
},
},
createVSC: true,
expectErr: true,
ready: false,
},
}
for _, test := range tests {
@@ -207,3 +238,11 @@ func TestCheckVSCReadiness(t *testing.T) {
})
}
}
func boolPtr(b bool) *bool {
return &b
}
func stringPtr(s string) *string {
return &s
}
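An aside on the `boolPtr`/`stringPtr` helpers added above: since Go 1.18, per-type pointer helpers can be collapsed into one generic function. A sketch (the `ptr` name is illustrative; Kubernetes code often uses `k8s.io/utils/ptr.To` for the same purpose):

```go
package main

import "fmt"

// ptr returns a pointer to any value, replacing per-type helpers such as
// boolPtr and stringPtr.
func ptr[T any](v T) *T { return &v }

func main() {
	ready := ptr(false)
	msg := ptr("InvalidSnapshot.NotFound")
	fmt.Println(*ready, *msg) // false InvalidSnapshot.NotFound
}
```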

@@ -134,6 +134,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
pv := new(corev1api.PersistentVolume)
var err error
var pvNotFoundErr error
if groupResource == kuberesource.PersistentVolumeClaims {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
v.logger.WithError(err).Error("fail to convert unstructured into PVC")
@@ -142,8 +143,10 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
pv, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
// Any error means PV not available - save to return later if no policy matches
v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
pvNotFoundErr = err
pv = nil
}
}
@@ -158,7 +161,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
action, err := v.volumePolicy.GetMatchAction(vfd)
if err != nil {
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for %+v", vfd)
return false, err
}
@@ -167,15 +170,21 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
// If there is no match action, go on to the next check.
if action != nil {
if action.Type == resourcepolicies.Snapshot {
v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
v.logger.Infof("performing snapshot action for %+v", vfd)
return true, nil
} else {
v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
v.logger.Infof("Skip snapshot action for %+v as the action type is %s", vfd, action.Type)
return false, nil
}
}
}
// If resource is PVC, and PV is nil (e.g., Pending/Lost PVC with no matching policy), return the original error
if groupResource == kuberesource.PersistentVolumeClaims && pv == nil && pvNotFoundErr != nil {
v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, pvNotFoundErr
}
// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
@@ -209,7 +218,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
return true, nil
}
v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
return false, nil
}
@@ -219,6 +228,7 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
return false, nil
}
var pvNotFoundErr error
if v.volumePolicy != nil {
var resource any
var err error
@@ -230,10 +240,13 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
return false, err
}
resource, err = kubeutil.GetPVForPVC(pvc, v.client)
pvResource, err := kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
// Any error means PV not available - save to return later if no policy matches
v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
pvNotFoundErr = err
} else {
resource = pvResource
}
}
@@ -260,6 +273,12 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
return false, nil
}
}
// If no policy matched and PV was not found, return the original error
if pvNotFoundErr != nil {
v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, pvNotFoundErr
}
}
if v.shouldPerformFSBackupLegacy(volume, pod) {

View File
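The diff above defers the `GetPVForPVC` error instead of returning it immediately, so a PVC-only policy (e.g. a `pvcPhase` match on Pending/Lost) can still skip the volume; the error surfaces only when no policy matched. A minimal, self-contained sketch of that control flow, using hypothetical stand-in types rather than the real Velero ones:

```go
package main

import (
	"errors"
	"fmt"
)

// action is a hypothetical stand-in for a matched volume policy action.
type action struct{ match bool }

var errPVNotFound = errors.New("PV not found for PVC")

// shouldBackup defers the "PV not found" error: a PVC-only policy can still
// decide the outcome, and the saved error is returned only when nothing matched.
func shouldBackup(pvAvailable bool, policy *action) (bool, error) {
	var pvNotFoundErr error
	if !pvAvailable {
		// Any error means the PV is not available - save it instead of returning early.
		pvNotFoundErr = errPVNotFound
	}
	if policy != nil && policy.match {
		// A policy matched on PVC-only conditions: skip without erroring.
		return false, nil
	}
	// No policy matched and the PV was never found: return the original error.
	if pvNotFoundErr != nil {
		return false, pvNotFoundErr
	}
	return true, nil
}

func main() {
	ok, err := shouldBackup(false, &action{match: true})
	fmt.Println(ok, err) // skipped without an error
}
```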

@@ -286,7 +286,7 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
expectedErr: false,
},
{
name: "PVC not having PV, return false and error when no matching policy",
inputObj: builder.ForPersistentVolumeClaim("default", "example-pvc").StorageClass("gp2-csi").Result(),
groupResource: kuberesource.PersistentVolumeClaims,
resourcePolicies: &resourcepolicies.ResourcePolicies{
@@ -1234,3 +1234,312 @@ func TestNewVolumeHelperImplWithCache_UsesCache(t *testing.T) {
require.NoError(t, err)
require.False(t, shouldSnapshot, "Expected snapshot to be skipped due to fs-backup selection via cache")
}
// TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC(t *testing.T) {
testCases := []struct {
name string
inputPVC *corev1api.PersistentVolumeClaim
resourcePolicies *resourcepolicies.ResourcePolicies
shouldSnapshot bool
expectedErr bool
}{
{
name: "Pending PVC with phase-based skip policy should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Pending PVC without matching skip policy should error (no PV)",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: true,
},
{
name: "Lost PVC with phase-based skip policy should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Lost PVC with policy for Pending and Lost should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t)
var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}
vh := NewVolumeHelperImpl(
p,
ptr.To(true),
logrus.StandardLogger(),
fakeClient,
false,
false,
)
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputPVC)
require.NoError(t, err)
actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, kuberesource.PersistentVolumeClaims)
if tc.expectedErr {
require.Error(t, actualError, "Want error; Got nil error")
return
}
require.NoError(t, actualError)
require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
})
}
}
// TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC(t *testing.T) {
testCases := []struct {
name string
pod *corev1api.Pod
pvc *corev1api.PersistentVolumeClaim
resourcePolicies *resourcepolicies.ResourcePolicies
shouldFSBackup bool
expectedErr bool
}{
{
name: "Pending PVC with phase-based skip policy should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-pending",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-pending",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
{
name: "Pending PVC without matching skip policy should error (no PV)",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-pending",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-pending-no-policy",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: true,
},
{
name: "Lost PVC with phase-based skip policy should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-lost",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-lost",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
{
name: "Lost PVC with policy for Pending and Lost should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-lost",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-lost",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.pvc)
require.NoError(t, fakeClient.Create(t.Context(), tc.pod))
var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}
vh := NewVolumeHelperImpl(
p,
ptr.To(true),
logrus.StandardLogger(),
fakeClient,
false,
false,
)
actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
if tc.expectedErr {
require.Error(t, actualError, "Want error; Got nil error")
return
}
require.NoError(t, actualError)
require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
})
}
}

View File

@@ -102,6 +102,15 @@ const (
// even if the resource contains a matching selector label.
ExcludeFromBackupLabel = "velero.io/exclude-from-backup"
// SkipFromBackupAnnotation is the annotation used by internal BackupItemActions
// to indicate that a resource should be skipped from backup,
// even if it doesn't have the ExcludeFromBackupLabel.
// This is used in cases where we want to skip backup of a resource based on some logic in a plugin.
//
// Note: SkipFromBackupAnnotation takes precedence over MustIncludeAdditionalItemAnnotation.
// If both are set, the resource is still skipped.
SkipFromBackupAnnotation = "velero.io/skip-from-backup"
// defaultVGSLabelKey is the default label key used to group PVCs under a VolumeGroupSnapshot
DefaultVGSLabelKey = "velero.io/volume-group"

View File
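A BackupItemAction plugin would opt a resource out of the backup by setting this annotation on the item it returns. A minimal sketch, with a hypothetical `item` type standing in for the unstructured object:

```go
package main

import "fmt"

// SkipFromBackupAnnotation mirrors the constant added above.
const SkipFromBackupAnnotation = "velero.io/skip-from-backup"

// item is a hypothetical stand-in for an unstructured Kubernetes object.
type item struct{ annotations map[string]string }

// markSkipped is what a plugin would do to tell the backupper to drop the
// item, even without the exclude-from-backup label.
func markSkipped(it *item) {
	if it.annotations == nil {
		it.annotations = map[string]string{}
	}
	it.annotations[SkipFromBackupAnnotation] = "true"
}

// isSkipped checks for the annotation the way the backupper does: presence
// of the key is enough, the value is not inspected.
func isSkipped(it *item) bool {
	_, ok := it.annotations[SkipFromBackupAnnotation]
	return ok
}

func main() {
	it := &item{}
	markSkipped(it)
	fmt.Println(isSkipped(it)) // true
}
```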

@@ -98,6 +98,14 @@ func (m *backedUpItemsMap) AddItem(key itemKey) {
m.totalItems[key] = struct{}{}
}
func (m *backedUpItemsMap) DeleteItem(key itemKey) {
m.Lock()
defer m.Unlock()
delete(m.backedUpItems, key)
delete(m.totalItems, key)
}
func (m *backedUpItemsMap) AddItemToTotal(key itemKey) {
m.Lock()
defer m.Unlock()

View File
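The new `DeleteItem` removes the key from both maps under the shared mutex, so the counts stay consistent with `AddItem` under concurrent callers. A self-contained sketch of the same structure (field and method names mirror the diff; the key type is simplified):

```go
package main

import (
	"fmt"
	"sync"
)

// itemKey is a simplified stand-in for Velero's backup item key.
type itemKey struct{ resource, namespace, name string }

// backedUpItemsMap guards both maps with one mutex so AddItem/DeleteItem
// always update them together.
type backedUpItemsMap struct {
	sync.Mutex
	backedUpItems map[itemKey]struct{}
	totalItems    map[itemKey]struct{}
}

func newMap() *backedUpItemsMap {
	return &backedUpItemsMap{
		backedUpItems: map[itemKey]struct{}{},
		totalItems:    map[itemKey]struct{}{},
	}
}

func (m *backedUpItemsMap) AddItem(key itemKey) {
	m.Lock()
	defer m.Unlock()
	m.backedUpItems[key] = struct{}{}
	m.totalItems[key] = struct{}{}
}

// DeleteItem removes the key from both maps, e.g. when a plugin action
// skips an item after it was already counted.
func (m *backedUpItemsMap) DeleteItem(key itemKey) {
	m.Lock()
	defer m.Unlock()
	delete(m.backedUpItems, key)
	delete(m.totalItems, key)
}

func main() {
	m := newMap()
	k := itemKey{"persistentvolumeclaims", "ns", "pvc-pending"}
	m.AddItem(k)
	m.DeleteItem(k)
	fmt.Println(len(m.backedUpItems), len(m.totalItems)) // 0 0
}
```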

@@ -244,6 +244,14 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
return false, itemFiles, kubeerrs.NewAggregate(backupErrs)
}
// If err is nil and updatedObj is nil, the item was skipped by a plugin action;
// return here to avoid backing up the item and a potential nil pointer dereference in the code below.
if updatedObj == nil {
log.Infof("Removing item from the backup's backedUpItems and totalItems lists because it was skipped by a plugin action.")
ib.backupRequest.BackedUpItems.DeleteItem(key)
return false, itemFiles, nil
}
itemFiles = append(itemFiles, additionalItemFiles...)
obj = updatedObj
if metadata, err = meta.Accessor(obj); err != nil {
@@ -398,6 +406,13 @@ func (ib *itemBackupper) executeActions(
}
u := &unstructured.Unstructured{Object: updatedItem.UnstructuredContent()}
if _, ok := u.GetAnnotations()[velerov1api.SkipFromBackupAnnotation]; ok {
log.Infof("Resource (groupResource=%s, namespace=%s, name=%s) is skipped from backup by action %s.",
groupResource.String(), namespace, name, actionName)
return nil, itemFiles, nil
}
if actionName == csiBIAPluginName {
if additionalItemIdentifiers == nil && u.GetAnnotations()[velerov1api.SkippedNoCSIPVAnnotation] == "true" {
// snapshot was skipped by CSI plugin
@@ -687,15 +702,14 @@ func (ib *itemBackupper) getMatchAction(obj runtime.Unstructured, groupResource
return nil, errors.WithStack(err)
}
var pv *corev1api.PersistentVolume
if pvName := pvc.Spec.VolumeName; pvName != "" {
pv = &corev1api.PersistentVolume{}
if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
return nil, errors.WithStack(err)
}
}
// pv stays nil for unbound PVCs; policy matching then falls back to PVC-only conditions
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
return ib.backupRequest.ResPolicies.GetMatchAction(vfd)
}
@@ -709,7 +723,10 @@ func (ib *itemBackupper) trackSkippedPV(obj runtime.Unstructured, groupResource
if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
ib.backupRequest.SkippedPVTracker.Track(name, approach, reason)
} else if err != nil {
// Log at info level for tracking purposes. This is not an error because
// it's expected for some resources (e.g., PVCs in Pending or Lost phase)
// to not have a PV name. This occurs when volume policy skips unbound PVCs.
log.WithError(err).Infof("unable to get PV name, skip tracking.")
}
}
@@ -719,6 +736,17 @@ func (ib *itemBackupper) unTrackSkippedPV(obj runtime.Unstructured, groupResourc
if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
ib.backupRequest.SkippedPVTracker.Untrack(name)
} else if err != nil {
// For PVCs in Pending or Lost phase, it's expected that there's no PV name.
// Log at debug level instead of warning to reduce noise.
if groupResource == kuberesource.PersistentVolumeClaims {
pvc := new(corev1api.PersistentVolumeClaim)
if convErr := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pvc); convErr == nil {
if pvc.Status.Phase == corev1api.ClaimPending || pvc.Status.Phase == corev1api.ClaimLost {
log.WithError(err).Debugf("unable to get PV name for %s PVC, skip untracking.", pvc.Status.Phase)
return
}
}
}
log.WithError(err).Warnf("unable to get PV name, skip untracking.")
}
}

View File

@@ -17,12 +17,15 @@ limitations under the License.
package backup
import (
"bytes"
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime/schema"
ctrlfake "sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/stretchr/testify/assert"
@@ -269,3 +272,225 @@ func TestAddVolumeInfo(t *testing.T) {
})
}
}
func TestGetMatchAction_PendingLostPVC(t *testing.T) {
scheme := runtime.NewScheme()
require.NoError(t, corev1api.AddToScheme(scheme))
// Create resource policies that skip Pending/Lost PVCs
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
require.NoError(t, err)
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
pv *corev1api.PersistentVolume
expectedAction *resourcepolicies.Action
expectError bool
}{
{
name: "Pending PVC with no VolumeName should match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Lost PVC with no VolumeName should match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimLost).
Result(),
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Bound PVC with VolumeName and matching PV should not match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
StorageClass("test-sc").
VolumeName("test-pv").
Phase(corev1api.ClaimBound).
Result(),
pv: builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
expectedAction: nil,
expectError: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Build fake client with PV if present
clientBuilder := ctrlfake.NewClientBuilder().WithScheme(scheme)
if tc.pv != nil {
clientBuilder = clientBuilder.WithObjects(tc.pv)
}
fakeClient := clientBuilder.Build()
ib := &itemBackupper{
kbClient: fakeClient,
backupRequest: &Request{
ResPolicies: policies,
},
}
// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}
action, err := ib.getMatchAction(obj, kuberesource.PersistentVolumeClaims, csiBIAPluginName)
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
}
if tc.expectedAction == nil {
assert.Nil(t, action)
} else {
require.NotNil(t, action)
assert.Equal(t, tc.expectedAction.Type, action.Type)
}
})
}
}
func TestTrackSkippedPV_PendingLostPVC(t *testing.T) {
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
}{
{
name: "Pending PVC should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
Phase(corev1api.ClaimPending).
Result(),
},
{
name: "Lost PVC should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
Phase(corev1api.ClaimLost).
Result(),
},
{
name: "Bound PVC without VolumeName should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
Phase(corev1api.ClaimBound).
Result(),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ib := &itemBackupper{
backupRequest: &Request{
SkippedPVTracker: NewSkipPVTracker(),
},
}
// Set up log capture
logOutput := &bytes.Buffer{}
logger := logrus.New()
logger.SetOutput(logOutput)
logger.SetLevel(logrus.DebugLevel)
// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, "", "test reason", logger)
logStr := logOutput.String()
assert.Contains(t, logStr, "level=info")
assert.Contains(t, logStr, "unable to get PV name, skip tracking.")
})
}
}
func TestUnTrackSkippedPV_PendingLostPVC(t *testing.T) {
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
expectWarningLog bool
expectDebugMessage string
}{
{
name: "Pending PVC should log at debug level, not warning",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
Phase(corev1api.ClaimPending).
Result(),
expectWarningLog: false,
expectDebugMessage: "unable to get PV name for Pending PVC, skip untracking.",
},
{
name: "Lost PVC should log at debug level, not warning",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
Phase(corev1api.ClaimLost).
Result(),
expectWarningLog: false,
expectDebugMessage: "unable to get PV name for Lost PVC, skip untracking.",
},
{
name: "Bound PVC without VolumeName should log warning",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
Phase(corev1api.ClaimBound).
Result(),
expectWarningLog: true,
expectDebugMessage: "",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ib := &itemBackupper{
backupRequest: &Request{
SkippedPVTracker: NewSkipPVTracker(),
},
}
// Set up log capture
logOutput := &bytes.Buffer{}
logger := logrus.New()
logger.SetOutput(logOutput)
logger.SetLevel(logrus.DebugLevel)
// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}
ib.unTrackSkippedPV(obj, kuberesource.PersistentVolumeClaims, logger)
logStr := logOutput.String()
if tc.expectWarningLog {
assert.Contains(t, logStr, "level=warning")
assert.Contains(t, logStr, "unable to get PV name, skip untracking.")
} else {
assert.NotContains(t, logStr, "level=warning")
if tc.expectDebugMessage != "" {
assert.Contains(t, logStr, "level=debug")
assert.Contains(t, logStr, tc.expectDebugMessage)
}
}
})
}
}

View File

@@ -275,11 +275,21 @@ func (o *Options) AsVeleroOptions() (*install.VeleroOptions, error) {
return nil, err
}
}
veleroPodResources, err := kubeutil.ParseCPUAndMemoryResources(
o.VeleroPodCPURequest,
o.VeleroPodMemRequest,
o.VeleroPodCPULimit,
o.VeleroPodMemLimit,
)
if err != nil {
return nil, err
}
nodeAgentPodResources, err := kubeutil.ParseCPUAndMemoryResources(
o.NodeAgentPodCPURequest,
o.NodeAgentPodMemRequest,
o.NodeAgentPodCPULimit,
o.NodeAgentPodMemLimit,
)
if err != nil {
return nil, err
}

View File

@@ -323,7 +323,25 @@ func (s *nodeAgentServer) run() {
podResources := corev1api.ResourceRequirements{}
if s.dataPathConfigs != nil && s.dataPathConfigs.PodResources != nil {
// To keep PodResources ConfigMaps without an ephemeral storage request/limit backward compatible,
// avoid passing empty values, because an empty string causes a parsing error.
ephemeralStorageRequest := constant.DefaultEphemeralStorageRequest
if s.dataPathConfigs.PodResources.EphemeralStorageRequest != "" {
ephemeralStorageRequest = s.dataPathConfigs.PodResources.EphemeralStorageRequest
}
ephemeralStorageLimit := constant.DefaultEphemeralStorageLimit
if s.dataPathConfigs.PodResources.EphemeralStorageLimit != "" {
ephemeralStorageLimit = s.dataPathConfigs.PodResources.EphemeralStorageLimit
}
if res, err := kube.ParseResourceRequirements(
s.dataPathConfigs.PodResources.CPURequest,
s.dataPathConfigs.PodResources.MemoryRequest,
ephemeralStorageRequest,
s.dataPathConfigs.PodResources.CPULimit,
s.dataPathConfigs.PodResources.MemoryLimit,
ephemeralStorageLimit,
); err != nil {
s.logger.WithError(err).Warn("Pod resource requirements are invalid, ignore")
} else {
podResources = res

View File
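The backward-compatibility defaulting above can be isolated into a small helper: an empty quantity string would fail parsing, so it is replaced by the default before the parse. A sketch under that assumption (the constant values mirror this PR):

```go
package main

import "fmt"

// Defaults mirroring the constants added in this PR.
const (
	DefaultEphemeralStorageRequest = "0"
	DefaultEphemeralStorageLimit   = "0"
)

// withDefault keeps older PodResources ConfigMaps (which have no ephemeral
// storage fields) working by substituting the default for an empty string.
func withDefault(value, def string) string {
	if value == "" {
		return def
	}
	return value
}

func main() {
	// Old ConfigMap: field absent, so the default is used.
	fmt.Println(withDefault("", DefaultEphemeralStorageRequest)) // prints 0
	// New ConfigMap: the configured value wins.
	fmt.Println(withDefault("500Mi", DefaultEphemeralStorageLimit)) // prints 500Mi
}
```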

@@ -23,4 +23,7 @@ const (
PluginCSIPVCRestoreRIA = "velero.io/csi-pvc-restorer"
PluginCsiVolumeSnapshotRestoreRIA = "velero.io/csi-volumesnapshot-restorer"
DefaultEphemeralStorageRequest = "0"
DefaultEphemeralStorageLimit = "0"
)

View File

@@ -129,6 +129,13 @@ func (c *scheduleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
} else {
schedule.Status.Phase = velerov1.SchedulePhaseEnabled
schedule.Status.ValidationErrors = nil
// Compute expected interval between consecutive scheduled backup runs.
// Only meaningful when the cron expression is valid.
now := c.clock.Now()
nextRun := cronSchedule.Next(now)
nextNextRun := cronSchedule.Next(nextRun)
c.metrics.SetScheduleExpectedIntervalSeconds(schedule.Name, nextNextRun.Sub(nextRun).Seconds())
}
scheduleNeedsPatch := false

View File

@@ -124,6 +124,15 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
"owner": ownerObject.Name,
})
volumeTopology, err := kube.GetVolumeTopology(ctx, e.kubeClient.CoreV1(), e.kubeClient.StorageV1(), csiExposeParam.SourcePVName, csiExposeParam.StorageClass)
if err != nil {
return errors.Wrapf(err, "error getting volume topology for PV %s, storage class %s", csiExposeParam.SourcePVName, csiExposeParam.StorageClass)
}
if volumeTopology != nil {
curLog.Infof("Using volume topology %v", volumeTopology)
}
curLog.Info("Exposing CSI snapshot")
volumeSnapshot, err := csi.WaitVolumeSnapshotReady(ctx, e.csiSnapshotClient, csiExposeParam.SnapshotName, csiExposeParam.SourceNamespace, csiExposeParam.ExposeTimeout, curLog)
@@ -254,6 +263,7 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
csiExposeParam.NodeOS,
csiExposeParam.PriorityClassName,
intoleratableNodes,
volumeTopology,
)
if err != nil {
return errors.Wrap(err, "error to create backup pod")
@@ -320,7 +330,8 @@ func (e *csiSnapshotExposer) GetExposed(ctx context.Context, ownerObject corev1a
curLog.WithField("pod", pod.Name).Infof("Backup volume is found in pod at index %v", i)
var nodeOS *string
if pod.Spec.OS != nil {
os := string(pod.Spec.OS.Name)
nodeOS = &os
}
@@ -588,6 +599,7 @@ func (e *csiSnapshotExposer) createBackupPod(
nodeOS string,
priorityClassName string,
intoleratableNodes []string,
volumeTopology *corev1api.NodeSelector,
) (*corev1api.Pod, error) {
podName := ownerObject.Name
@@ -643,6 +655,10 @@ func (e *csiSnapshotExposer) createBackupPod(
args = append(args, podInfo.logFormatArgs...)
args = append(args, podInfo.logLevelArgs...)
if affinity == nil {
affinity = &kube.LoadAffinity{}
}
var securityCtx *corev1api.PodSecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
@@ -654,9 +670,14 @@ func (e *csiSnapshotExposer) createBackupPod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -683,11 +704,15 @@ func (e *csiSnapshotExposer) createBackupPod(
}
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
var podAffinity *corev1api.Affinity
if len(intoleratableNodes) > 0 {
@@ -700,9 +725,7 @@ func (e *csiSnapshotExposer) createBackupPod(
})
}
podAffinity := kube.ToSystemAffinity(affinity, volumeTopology)
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{

View File
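The exposer now pins the backup pod's node OS by appending a match expression to the load affinity: `In` windows for Windows pods, `NotIn` windows otherwise, so the pod can only land on a node with a matching OS. A simplified sketch with stand-in types instead of the Kubernetes API structs:

```go
package main

import "fmt"

// requirement is a simplified stand-in for a label-selector requirement.
type requirement struct {
	Key      string
	Operator string
	Values   []string
}

// loadAffinity is a simplified stand-in for the exposer's affinity type.
type loadAffinity struct{ matchExpressions []requirement }

// pinNodeOS appends the node-OS constraint the exposer adds; a nil affinity
// is replaced with an empty one first, as in the diff.
func pinNodeOS(a *loadAffinity, nodeOS string) *loadAffinity {
	if a == nil {
		a = &loadAffinity{}
	}
	op := "NotIn" // default: keep the pod off Windows nodes
	if nodeOS == "windows" {
		op = "In" // Windows pod: require a Windows node
	}
	a.matchExpressions = append(a.matchExpressions, requirement{
		Key:      "kubernetes.io/os",
		Operator: op,
		Values:   []string{"windows"},
	})
	return a
}

func main() {
	a := pinNodeOS(nil, "linux")
	fmt.Println(a.matchExpressions[0].Operator) // NotIn
}
```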

@@ -154,6 +154,7 @@ func TestCreateBackupPodWithPriorityClass(t *testing.T) {
kube.NodeOSLinux,
tc.expectedPriorityClass,
nil,
nil,
)
require.NoError(t, err, tc.description)
@@ -239,6 +240,7 @@ func TestCreateBackupPodWithMissingConfigMap(t *testing.T) {
kube.NodeOSLinux,
"", // empty priority class since config map is missing
nil,
nil,
)
// Should succeed even when config map is missing

View File

@@ -68,6 +68,12 @@ func TestExpose(t *testing.T) {
var restoreSize int64 = 123456
scObj := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-sc",
},
}
snapshotClass := "fake-snapshot-class"
vsObject := &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
@@ -199,6 +205,18 @@ func TestExpose(t *testing.T) {
expectedAffinity *corev1api.Affinity
expectedPVCAnnotation map[string]string
}{
{
name: "get volume topology fail",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
err: "error getting volume topology for PV fake-pv, storage class fake-sc: error getting storage class fake-sc: storageclasses.storage.k8s.io \"fake-sc\" not found",
},
{
name: "wait vs ready fail",
ownerBackup: backup,
@@ -206,6 +224,11 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error wait volume snapshot ready: error to get VolumeSnapshot /fake-vs: volumesnapshots.snapshot.storage.k8s.io \"fake-vs\" not found",
},
@@ -217,10 +240,15 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to get volume snapshot content: error getting volume snapshot content from API: volumesnapshotcontents.snapshot.storage.k8s.io \"fake-vsc\" not found",
},
{
@@ -231,6 +259,8 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -245,6 +275,9 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to delete volume snapshot: error to delete volume snapshot: fake-delete-error",
},
{
@@ -255,6 +288,8 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -269,6 +304,9 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to delete volume snapshot content: error to delete volume snapshot content: fake-delete-error",
},
{
@@ -279,6 +317,8 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -293,6 +333,9 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup volume snapshot: fake-create-error",
},
{
@@ -303,6 +346,8 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -317,6 +362,9 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup volume snapshot content: fake-create-error",
},
{
@@ -326,11 +374,16 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
AccessMode: "fake-mode",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup pvc: unsupported access mode fake-mode",
},
{
@@ -342,6 +395,8 @@ func TestExpose(t *testing.T) {
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
AccessMode: AccessModeFileSystem,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -356,6 +411,9 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup pvc: error to create pvc: fake-create-error",
},
{
@@ -367,6 +425,8 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -374,6 +434,7 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
kubeReactors: []reactor{
{
@@ -395,6 +456,8 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -402,6 +465,24 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
@@ -413,6 +494,8 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -420,6 +503,24 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
@@ -432,6 +533,8 @@ func TestExpose(t *testing.T) {
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
VolumeSize: *resource.NewQuantity(567890, ""),
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObjectWithoutRestoreSize,
@@ -439,8 +542,26 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedVolumeSize: resource.NewQuantity(567890, ""),
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts read only backupPVC",
@@ -449,6 +570,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -465,8 +587,26 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedReadOnlyPVC: true,
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts read only backupPVC and storageClass specified in backupPVC config",
@@ -475,6 +615,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -491,9 +632,27 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedReadOnlyPVC: true,
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts backupPVC with storageClass specified in backupPVC config",
@@ -502,6 +661,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -517,8 +677,26 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "Affinity per StorageClass",
@@ -527,6 +705,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -551,6 +730,7 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
@@ -563,6 +743,11 @@ func TestExpose(t *testing.T) {
Operator: corev1api.NodeSelectorOpIn,
Values: []string{"Linux"},
},
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
@@ -577,6 +762,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -606,6 +792,7 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
@@ -619,6 +806,11 @@ func TestExpose(t *testing.T) {
Operator: corev1api.NodeSelectorOpIn,
Values: []string{"amd64"},
},
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
@@ -633,6 +825,7 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -649,9 +842,26 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: nil,
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "IntolerateSourceNode, get source node fail",
@@ -677,6 +887,7 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
kubeReactors: []reactor{
{
@@ -687,7 +898,23 @@ func TestExpose(t *testing.T) {
},
},
},
expectedAffinity: nil,
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
expectedPVCAnnotation: nil,
},
{
@@ -714,8 +941,25 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
expectedAffinity: nil,
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
{
@@ -744,6 +988,7 @@ func TestExpose(t *testing.T) {
daemonSet,
volumeAttachement1,
volumeAttachement2,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
@@ -751,6 +996,11 @@ func TestExpose(t *testing.T) {
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
{
Key: "kubernetes.io/hostname",
Operator: corev1api.NodeSelectorOpNotIn,
@@ -844,6 +1094,8 @@ func TestExpose(t *testing.T) {
if test.expectedAffinity != nil {
assert.Equal(t, test.expectedAffinity, backupPod.Spec.Affinity)
} else {
assert.Nil(t, backupPod.Spec.Affinity)
}
if test.expectedPVCAnnotation != nil {

View File

@@ -493,13 +493,15 @@ func (e *genericRestoreExposer) createRestorePod(
containerName := string(ownerObject.UID)
volumeName := string(ownerObject.UID)
var podAffinity *corev1api.Affinity
if selectedNode == "" {
e.log.Infof("No selected node for restore pod. Try to get affinity from the node-agent config.")
nodeSelector := map[string]string{}
if selectedNode != "" {
affinity = nil
nodeSelector["kubernetes.io/hostname"] = selectedNode
e.log.Infof("Selected node for restore pod. Ignore affinity from the node-agent config.")
}
if affinity != nil {
podAffinity = kube.ToSystemAffinity([]*kube.LoadAffinity{affinity})
}
if affinity == nil {
affinity = &kube.LoadAffinity{}
}
podInfo, err := getInheritedPodInfo(ctx, e.kubeClient, ownerObject.Namespace, nodeOS)
@@ -566,7 +568,6 @@ func (e *genericRestoreExposer) createRestorePod(
args = append(args, podInfo.logLevelArgs...)
var securityCtx *corev1api.PodSecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
if nodeOS == kube.NodeOSWindows {
userID := "ContainerAdministrator"
@@ -576,9 +577,14 @@ func (e *genericRestoreExposer) createRestorePod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -599,10 +605,17 @@ func (e *genericRestoreExposer) createRestorePod(
RunAsUser: &userID,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
podAffinity := kube.ToSystemAffinity(affinity, nil)
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: restorePodName,
@@ -656,7 +669,6 @@ func (e *genericRestoreExposer) createRestorePod(
ServiceAccountName: podInfo.serviceAccount,
TerminationGracePeriodSeconds: &gracePeriod,
Volumes: volumes,
NodeName: selectedNode,
RestartPolicy: corev1api.RestartPolicyNever,
SecurityContext: securityCtx,
Tolerations: toleration,
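The createRestorePod change above gives an explicitly selected node precedence over node-agent affinity: a selected node becomes a `kubernetes.io/hostname` node selector and any configured affinity is dropped. A stdlib-only sketch of that precedence, with a stand-in `Affinity` type (not Velero's actual API):

```go
package main

import "fmt"

// Affinity is a local stand-in for kube.LoadAffinity.
type Affinity struct{ Terms []string }

// placement returns the node selector and affinity for the restore
// pod: pinning to selectedNode wins over configured affinity.
func placement(selectedNode string, configured *Affinity) (map[string]string, *Affinity) {
	sel := map[string]string{}
	if selectedNode != "" {
		// Selected node for restore pod; ignore affinity from the
		// node-agent config.
		sel["kubernetes.io/hostname"] = selectedNode
		return sel, nil
	}
	return sel, configured
}

func main() {
	sel, aff := placement("node-1", &Affinity{Terms: []string{"zone=a"}})
	fmt.Println(sel["kubernetes.io/hostname"], aff == nil) // node-1 true
}
```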

View File

@@ -434,6 +434,8 @@ func (e *podVolumeExposer) createHostingPod(
args = append(args, podInfo.logFormatArgs...)
args = append(args, podInfo.logLevelArgs...)
affinity := &kube.LoadAffinity{}
var securityCtx *corev1api.PodSecurityContext
var containerSecurityCtx *corev1api.SecurityContext
nodeSelector := map[string]string{}
@@ -446,9 +448,14 @@ func (e *podVolumeExposer) createHostingPod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -472,10 +479,17 @@ func (e *podVolumeExposer) createHostingPod(
Privileged: &privileged,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
podAffinity := kube.ToSystemAffinity(affinity, nil)
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: hostingPodName,
@@ -495,6 +509,7 @@ func (e *podVolumeExposer) createHostingPod(
Spec: corev1api.PodSpec{
NodeSelector: nodeSelector,
OS: &podOS,
Affinity: podAffinity,
Containers: []corev1api.Container{
{
Name: containerName,

View File

@@ -235,12 +235,28 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1api.DaemonSet
if c.forWindows {
daemonSet.Spec.Template.Spec.SecurityContext = nil
daemonSet.Spec.Template.Spec.Containers[0].SecurityContext = nil
daemonSet.Spec.Template.Spec.NodeSelector = map[string]string{
"kubernetes.io/os": "windows",
}
daemonSet.Spec.Template.Spec.OS = &corev1api.PodOS{
Name: "windows",
}
daemonSet.Spec.Template.Spec.Affinity = &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpIn,
},
},
},
},
},
},
}
daemonSet.Spec.Template.Spec.Tolerations = []corev1api.Toleration{
{
Key: "os",
@@ -256,11 +272,22 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1api.DaemonSet
},
}
} else {
daemonSet.Spec.Template.Spec.NodeSelector = map[string]string{
"kubernetes.io/os": "linux",
}
daemonSet.Spec.Template.Spec.OS = &corev1api.PodOS{
Name: "linux",
}
daemonSet.Spec.Template.Spec.Affinity = &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
}
}
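The pattern repeated throughout these diffs — replacing a `linux` node selector with a `kubernetes.io/os NotIn [windows]` affinity requirement — can be sketched with local stand-in types (the struct below mirrors `corev1api.NodeSelectorRequirement` but is defined here for the example; one plausible motivation for NotIn over In is that Linux pods stay schedulable on nodes missing the OS label entirely):

```go
package main

import "fmt"

// NodeSelectorRequirement is a local stand-in for the corev1 type.
type NodeSelectorRequirement struct {
	Key      string
	Operator string // "In" or "NotIn"
	Values   []string
}

// osRequirement builds the match expression for a pod targeting the
// given OS: Windows pods require windows nodes, everything else
// excludes them.
func osRequirement(forWindows bool) NodeSelectorRequirement {
	op := "NotIn"
	if forWindows {
		op = "In"
	}
	return NodeSelectorRequirement{Key: "kubernetes.io/os", Operator: op, Values: []string{"windows"}}
}

func main() {
	fmt.Println(osRequirement(true).Operator)  // In
	fmt.Println(osRequirement(false).Operator) // NotIn
}
```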

View File

@@ -34,8 +34,23 @@ func TestDaemonSet(t *testing.T) {
assert.Equal(t, "velero", ds.ObjectMeta.Namespace)
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["name"])
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["role"])
assert.Equal(t, "linux", ds.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "linux", string(ds.Spec.Template.Spec.OS.Name))
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
}, ds.Spec.Template.Spec.Affinity)
assert.Equal(t, corev1api.PodSecurityContext{RunAsUser: &userID}, *ds.Spec.Template.Spec.SecurityContext)
assert.Equal(t, corev1api.SecurityContext{Privileged: &boolFalse}, *ds.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, ds.Spec.Template.Spec.Volumes, 3)
@@ -80,8 +95,24 @@ func TestDaemonSet(t *testing.T) {
assert.Equal(t, "velero", ds.ObjectMeta.Namespace)
assert.Equal(t, "node-agent-windows", ds.Spec.Template.ObjectMeta.Labels["name"])
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["role"])
assert.Equal(t, "windows", ds.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "windows", string(ds.Spec.Template.Spec.OS.Name))
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpIn,
},
},
},
},
},
},
}, ds.Spec.Template.Spec.Affinity)
assert.Equal(t, (*corev1api.PodSecurityContext)(nil), ds.Spec.Template.Spec.SecurityContext)
assert.Equal(t, (*corev1api.SecurityContext)(nil), ds.Spec.Template.Spec.Containers[0].SecurityContext)
}

View File

@@ -364,12 +364,26 @@ func Deployment(namespace string, opts ...podTemplateOption) *appsv1api.Deployme
Spec: corev1api.PodSpec{
RestartPolicy: corev1api.RestartPolicyAlways,
ServiceAccountName: c.serviceAccountName,
NodeSelector: map[string]string{
"kubernetes.io/os": "linux",
},
OS: &corev1api.PodOS{
Name: "linux",
},
Affinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
},
Containers: []corev1api.Container{
{
Name: "velero",

View File

@@ -100,8 +100,23 @@ func TestDeployment(t *testing.T) {
assert.Len(t, deploy.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--repo-maintenance-job-configmap=test-repo-maintenance-config", deploy.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "linux", deploy.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "linux", string(deploy.Spec.Template.Spec.OS.Name))
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
}, deploy.Spec.Template.Spec.Affinity)
}
func TestDeploymentWithPriorityClassName(t *testing.T) {

View File

@@ -80,6 +80,9 @@ const (
DataDownloadFailureTotal = "data_download_failure_total"
DataDownloadCancelTotal = "data_download_cancel_total"
// schedule metrics
scheduleExpectedIntervalSeconds = "schedule_expected_interval_seconds"
// repo maintenance metrics
repoMaintenanceSuccessTotal = "repo_maintenance_success_total"
repoMaintenanceFailureTotal = "repo_maintenance_failure_total"
@@ -347,6 +350,14 @@ func NewServerMetrics() *ServerMetrics {
},
[]string{scheduleLabel, backupNameLabel},
),
scheduleExpectedIntervalSeconds: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: metricNamespace,
Name: scheduleExpectedIntervalSeconds,
Help: "Expected interval between consecutive scheduled backups, in seconds",
},
[]string{scheduleLabel},
),
repoMaintenanceSuccessTotal: prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: metricNamespace,
@@ -644,6 +655,9 @@ func (m *ServerMetrics) RemoveSchedule(scheduleName string) {
if c, ok := m.metrics[csiSnapshotFailureTotal].(*prometheus.CounterVec); ok {
c.DeleteLabelValues(scheduleName, "")
}
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.DeleteLabelValues(scheduleName)
}
}
// InitMetricsForNode initializes counter metrics for a node.
@@ -758,6 +772,14 @@ func (m *ServerMetrics) SetBackupLastSuccessfulTimestamp(backupSchedule string,
}
}
// SetScheduleExpectedIntervalSeconds records the expected interval, in seconds,
// between consecutive backups for a schedule.
func (m *ServerMetrics) SetScheduleExpectedIntervalSeconds(scheduleName string, seconds float64) {
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.WithLabelValues(scheduleName).Set(seconds)
}
}
// SetBackupTotal records the current number of existent backups.
func (m *ServerMetrics) SetBackupTotal(numberOfBackups int64) {
if g, ok := m.metrics[backupTotal].(prometheus.Gauge); ok {
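The `scheduleExpectedIntervalSeconds` gauge follows the usual labeled-gauge lifecycle: set per schedule, deleted in `RemoveSchedule` so stale series don't linger. A stdlib-only sketch of those Set/Delete semantics (a real implementation uses `prometheus.GaugeVec`, as in the diff):

```go
package main

import "fmt"

// scheduleGauge mimics the WithLabelValues(...).Set / DeleteLabelValues
// semantics of a prometheus.GaugeVec keyed by schedule name.
type scheduleGauge map[string]float64

func (g scheduleGauge) Set(schedule string, seconds float64) { g[schedule] = seconds }
func (g scheduleGauge) Delete(schedule string)               { delete(g, schedule) }

func main() {
	g := scheduleGauge{}
	g.Set("daily-backup", 86400) // 24h expected interval
	fmt.Println(len(g))          // 1
	g.Delete("daily-backup")     // RemoveSchedule must clean this up
	fmt.Println(len(g))          // 0
}
```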

View File

@@ -259,6 +259,90 @@ func TestMultipleAdhocBackupsShareMetrics(t *testing.T) {
assert.Equal(t, float64(1), validationFailureMetric, "All adhoc validation failures should be counted together")
}
// TestSetScheduleExpectedIntervalSeconds verifies that the expected interval metric
// is properly recorded for schedules.
func TestSetScheduleExpectedIntervalSeconds(t *testing.T) {
tests := []struct {
name string
scheduleName string
intervalSeconds float64
description string
}{
{
name: "every 5 minutes schedule",
scheduleName: "frequent-backup",
intervalSeconds: 300,
description: "Expected interval should be 5m in seconds",
},
{
name: "daily schedule",
scheduleName: "daily-backup",
intervalSeconds: 86400,
description: "Expected interval should be 24h in seconds",
},
{
name: "monthly schedule",
scheduleName: "monthly-backup",
intervalSeconds: 2678400, // 31 days in seconds
description: "Expected interval should be 31 days in seconds",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
m := NewServerMetrics()
m.SetScheduleExpectedIntervalSeconds(tc.scheduleName, tc.intervalSeconds)
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), tc.scheduleName)
assert.Equal(t, tc.intervalSeconds, metric, tc.description)
})
}
}
// TestScheduleExpectedIntervalNotInitializedByDefault verifies that the expected
// interval metric is not initialized by InitSchedule, so it only appears for
// schedules with a valid cron expression.
func TestScheduleExpectedIntervalNotInitializedByDefault(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
// The metric should not have any values after InitSchedule
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should not be initialized by InitSchedule")
}
// TestRemoveScheduleCleansUpExpectedInterval verifies that RemoveSchedule
// cleans up the expected interval metric.
func TestRemoveScheduleCleansUpExpectedInterval(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
m.SetScheduleExpectedIntervalSeconds("test-schedule", 3600)
// Verify metric exists
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), "test-schedule")
assert.Equal(t, float64(3600), metric)
// Remove schedule and verify metric is cleaned up
m.RemoveSchedule("test-schedule")
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should be removed after RemoveSchedule")
}
// TestInitScheduleWithEmptyName verifies that InitSchedule works correctly
// with an empty schedule name (for adhoc backups).
func TestInitScheduleWithEmptyName(t *testing.T) {

View File

@@ -149,7 +149,8 @@ func (b *objectBackupStoreGetter) Get(location *velerov1api.BackupStorageLocatio
// if there are any slashes in the middle of 'bucket', the user
// probably put <bucket>/<prefix> in the bucket field, which we
// don't support.
if strings.Contains(bucket, "/") {
// Exception: MRAP ARNs (arn:aws:s3::...) legitimately contain slashes.
if strings.Contains(bucket, "/") && !strings.HasPrefix(bucket, "arn:aws:s3:") {
return nil, errors.Errorf("backup storage location's bucket name %q must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)", location.Spec.ObjectStorage.Bucket)
}
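The bucket-field validation above can be exercised in isolation. This is a minimal stand-alone sketch of the slash check with the MRAP ARN exception (the function name `validateBucketName` is illustrative, not Velero's API):

```go
package main

import (
	"fmt"
	"strings"
)

// validateBucketName rejects bucket names containing '/', except for
// S3 Multi-Region Access Point ARNs (arn:aws:s3::...), which
// legitimately contain slashes.
func validateBucketName(bucket string) error {
	if strings.Contains(bucket, "/") && !strings.HasPrefix(bucket, "arn:aws:s3:") {
		return fmt.Errorf("bucket name %q must not contain a '/' (put any prefix in the 'Prefix' field)", bucket)
	}
	return nil
}

func main() {
	fmt.Println(validateBucketName("bucket/prefix"))                                    // non-nil error
	fmt.Println(validateBucketName("arn:aws:s3::123456789012:accesspoint/example.mrap")) // <nil>
}
```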

View File

@@ -943,6 +943,24 @@ func TestNewObjectBackupStoreGetter(t *testing.T) {
wantBucket: "bucket",
wantPrefix: "prefix/",
},
{
name: "when the Bucket field is an MRAP ARN, it should be valid",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
{
name: "when the Bucket field is an MRAP ARN with trailing slash, it should be valid and trimmed",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap/").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
}
for _, tc := range tests {

View File

@@ -210,11 +210,9 @@ func resultsKey(ns, name string) string {
func (b *backupper) getMatchAction(resPolicies *resourcepolicies.Policies, pvc *corev1api.PersistentVolumeClaim, volume *corev1api.Volume) (*resourcepolicies.Action, error) {
if pvc != nil {
pv := new(corev1api.PersistentVolume)
err := b.crClient.Get(context.TODO(), ctrlclient.ObjectKey{Name: pvc.Spec.VolumeName}, pv)
if err != nil {
return nil, errors.Wrapf(err, "error getting pv for pvc %s", pvc.Spec.VolumeName)
}
// Ignore the error: if the PV is not available (Pending/Lost PVC, or the PV fetch failed), try matching with the PVC only.
// GetPVForPVC returns nil for all error cases.
pv, _ := kube.GetPVForPVC(pvc, b.crClient)
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
return resPolicies.GetMatchAction(vfd)
}

View File

@@ -309,8 +309,8 @@ func createNodeObj() *corev1api.Node {
func TestBackupPodVolumes(t *testing.T) {
scheme := runtime.NewScheme()
velerov1api.AddToScheme(scheme)
corev1api.AddToScheme(scheme)
require.NoError(t, velerov1api.AddToScheme(scheme))
require.NoError(t, corev1api.AddToScheme(scheme))
log := logrus.New()
tests := []struct {
@@ -778,7 +778,7 @@ func TestWaitAllPodVolumesProcessed(t *testing.T) {
backuper := newBackupper(c.ctx, log, nil, nil, informer, nil, "", &velerov1api.Backup{})
if c.pvb != nil {
backuper.pvbIndexer.Add(c.pvb)
require.NoError(t, backuper.pvbIndexer.Add(c.pvb))
backuper.wg.Add(1)
}
@@ -833,3 +833,185 @@ func TestPVCBackupSummary(t *testing.T) {
assert.Empty(t, pbs.Skipped)
assert.Len(t, pbs.Backedup, 2)
}
func TestGetMatchAction_PendingPVC(t *testing.T) {
// Create resource policies that skip Pending/Lost PVCs
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
require.NoError(t, err)
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
volume *corev1api.Volume
pv *corev1api.PersistentVolume
expectedAction *resourcepolicies.Action
expectError bool
}{
{
name: "Pending PVC with pvcPhase skip policy should return skip action",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pending-pvc",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Lost PVC with pvcPhase skip policy should return skip action",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimLost).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "lost-pvc",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Bound PVC with matching PV should not match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
StorageClass("test-sc").
VolumeName("test-pv").
Phase(corev1api.ClaimBound).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "bound-pvc",
},
},
},
pv: builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
expectedAction: nil,
expectError: false,
},
{
name: "Pending PVC with no matching policy should return nil action",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc-no-match").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pending-pvc-no-match",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip}, // Will match the pvcPhase policy
expectError: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Build fake client with PV if present
var objs []runtime.Object
if tc.pv != nil {
objs = append(objs, tc.pv)
}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
b := &backupper{
crClient: fakeClient,
}
action, err := b.getMatchAction(policies, tc.pvc, tc.volume)
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
}
if tc.expectedAction == nil {
assert.Nil(t, action)
} else {
require.NotNil(t, action)
assert.Equal(t, tc.expectedAction.Type, action.Type)
}
})
}
}
func TestGetMatchAction_PVCWithoutPVLookupError(t *testing.T) {
// Test that when a PVC has a VolumeName but the PV doesn't exist,
// the function ignores the error and tries to match with PVC only
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
require.NoError(t, err)
// Pending PVC without a matching PV in the cluster
pvc := builder.ForPersistentVolumeClaim("ns", "pending-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result()
volume := &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pending-pvc",
},
},
}
// Empty client - no PV exists
fakeClient := velerotest.NewFakeControllerRuntimeClient(t)
b := &backupper{
crClient: fakeClient,
}
// Should succeed even though PV lookup would fail
// because the function ignores PV lookup errors and uses PVC-only matching
action, err := b.getMatchAction(policies, pvc, volume)
require.NoError(t, err)
require.NotNil(t, action)
assert.Equal(t, resourcepolicies.Skip, action.Type)
}

View File

@@ -38,6 +38,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/constant"
velerolabel "github.com/vmware-tanzu/velero/pkg/label"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
@@ -574,15 +575,32 @@ func buildJob(
// Set resource limits and requests
cpuRequest := DefaultMaintenanceJobCPURequest
memRequest := DefaultMaintenanceJobMemRequest
ephemeralStorageRequest := constant.DefaultEphemeralStorageRequest
cpuLimit := DefaultMaintenanceJobCPULimit
memLimit := DefaultMaintenanceJobMemLimit
ephemeralStorageLimit := constant.DefaultEphemeralStorageLimit
if config != nil && config.PodResources != nil {
cpuRequest = config.PodResources.CPURequest
memRequest = config.PodResources.MemoryRequest
cpuLimit = config.PodResources.CPULimit
memLimit = config.PodResources.MemoryLimit
// To keep PodResources ConfigMaps that omit the ephemeral storage request/limit backward compatible,
// avoid setting the value to an empty string, because an empty string causes a parsing error.
if config.PodResources.EphemeralStorageRequest != "" {
ephemeralStorageRequest = config.PodResources.EphemeralStorageRequest
}
if config.PodResources.EphemeralStorageLimit != "" {
ephemeralStorageLimit = config.PodResources.EphemeralStorageLimit
}
}
resources, err := kube.ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit)
resources, err := kube.ParseResourceRequirements(
cpuRequest,
memRequest,
ephemeralStorageRequest,
cpuLimit,
memLimit,
ephemeralStorageLimit,
)
if err != nil {
return nil, errors.Wrap(err, "failed to parse resource requirements for maintenance job")
}
@@ -671,8 +689,7 @@ func buildJob(
}
if config != nil && len(config.LoadAffinities) > 0 {
// Maintenance job only takes the first loadAffinity.
affinity := kube.ToSystemAffinity([]*kube.LoadAffinity{config.LoadAffinities[0]})
affinity := kube.ToSystemAffinity(config.LoadAffinities[0], nil)
job.Spec.Template.Spec.Affinity = affinity
}
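The empty-string fallback for the new ephemeral-storage fields can be sketched in isolation. The default values below are placeholders for illustration, not Velero's actual constants:

```go
package main

import "fmt"

// Placeholder defaults; the real values live in Velero's constant package.
const (
	defaultEphemeralStorageRequest = "300Mi"
	defaultEphemeralStorageLimit   = "1Gi"
)

// orDefault keeps older PodResources ConfigMaps working: fields that were
// never set arrive as empty strings, and an empty string would fail
// quantity parsing, so the default is substituted instead.
func orDefault(configured, fallback string) string {
	if configured == "" {
		return fallback
	}
	return configured
}

func main() {
	fmt.Println(orDefault("", defaultEphemeralStorageRequest))
	fmt.Println(orDefault("512Mi", defaultEphemeralStorageLimit))
}
```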

View File

@@ -163,12 +163,19 @@ func (a *PodVolumeRestoreAction) Execute(input *velero.RestoreItemActionExecuteI
memLimit = defaultMemRequestLimit
}
resourceReqs, err := kube.ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit)
resourceReqs, err := kube.ParseCPUAndMemoryResources(
cpuRequest,
memRequest,
cpuLimit,
memLimit,
)
if err != nil {
log.Errorf("couldn't parse resource requirements: %s.", err)
resourceReqs, _ = kube.ParseResourceRequirements(
defaultCPURequestLimit, defaultMemRequestLimit, // requests
defaultCPURequestLimit, defaultMemRequestLimit, // limits
resourceReqs, _ = kube.ParseCPUAndMemoryResources(
defaultCPURequestLimit,
defaultMemRequestLimit,
defaultCPURequestLimit,
defaultMemRequestLimit,
)
}

View File

@@ -117,9 +117,11 @@ func TestGetImage(t *testing.T) {
// TestPodVolumeRestoreActionExecute tests the pod volume restore item action plugin's Execute method.
func TestPodVolumeRestoreActionExecute(t *testing.T) {
resourceReqs, _ := kube.ParseResourceRequirements(
defaultCPURequestLimit, defaultMemRequestLimit, // requests
defaultCPURequestLimit, defaultMemRequestLimit, // limits
resourceReqs, _ := kube.ParseCPUAndMemoryResources(
defaultCPURequestLimit,
defaultMemRequestLimit,
defaultCPURequestLimit,
defaultMemRequestLimit,
)
id := int64(1000)
securityContext := corev1api.SecurityContext{

View File

@@ -35,6 +35,7 @@ type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
targetFile *os.File
}
var _ restore.Output = &BlockOutput{}
@@ -52,7 +53,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
if err != nil {
return errors.Wrapf(err, "failed to open file %s", o.targetFileName)
}
defer targetFile.Close()
o.targetFile = targetFile
buffer := make([]byte, bufferSize)
@@ -101,3 +102,23 @@ func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e
return nil
}
func (o *BlockOutput) Flush() error {
if o.targetFile != nil {
if err := o.targetFile.Sync(); err != nil {
return errors.Wrapf(err, "error syncing block dev %v", o.targetFileName)
}
}
return nil
}
func (o *BlockOutput) Terminate() error {
if o.targetFile != nil {
if err := o.targetFile.Close(); err != nil {
return errors.Wrapf(err, "error closing block dev %v", o.targetFileName)
}
}
return nil
}

View File

@@ -40,3 +40,11 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
return fmt.Errorf("block mode is not supported for Windows")
}
func (o *BlockOutput) Flush() error {
return flushVolume(o.targetFileName)
}
func (o *BlockOutput) Terminate() error {
return nil
}

View File

@@ -0,0 +1,50 @@
//go:build linux
// +build linux
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"os"
"github.com/pkg/errors"
"golang.org/x/sys/unix"
)
func flushVolume(dirPath string) error {
dir, err := os.Open(dirPath)
if err != nil {
return errors.Wrapf(err, "error opening dir %v", dirPath)
}
defer dir.Close()
raw, err := dir.SyscallConn()
if err != nil {
return errors.Wrapf(err, "error getting handle of dir %v", dirPath)
}
var syncErr error
if err := raw.Control(func(fd uintptr) {
if e := unix.Syncfs(int(fd)); e != nil {
syncErr = e
}
}); err != nil {
return errors.Wrapf(err, "error calling fs sync from %v", dirPath)
}
return errors.Wrapf(syncErr, "error syncing fs from %v", dirPath)
}

View File

@@ -0,0 +1,24 @@
//go:build !linux
// +build !linux
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
func flushVolume(_ string) error {
return errFlushUnsupported
}

View File

@@ -0,0 +1,30 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"github.com/kopia/kopia/snapshot/restore"
"github.com/pkg/errors"
)
var errFlushUnsupported = errors.New("flush is not supported")
type RestoreOutput interface {
restore.Output
Flush() error
Terminate() error
}
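The RestoreOutput interface simply embeds kopia's restore.Output and adds two lifecycle hooks. A self-contained sketch of the same shape, with a hypothetical baseOutput standing in for restore.Output:

```go
package main

import "fmt"

// baseOutput is a stand-in for kopia's restore.Output interface.
type baseOutput interface {
	WriteFile(relativePath string) error
}

// restoreOutput mirrors the new RestoreOutput: it embeds the base
// interface and adds two lifecycle hooks, so an existing Output
// implementation only needs Flush and Terminate to satisfy it.
type restoreOutput interface {
	baseOutput
	Flush() error
	Terminate() error
}

type fsOutput struct{ flushed bool }

func (o *fsOutput) WriteFile(string) error { return nil }
func (o *fsOutput) Flush() error           { o.flushed = true; return nil }
func (o *fsOutput) Terminate() error       { return nil }

func main() {
	var out restoreOutput = &fsOutput{}
	_ = out.WriteFile("data")
	_ = out.Flush()
	fmt.Println(out.(*fsOutput).flushed)
}
```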

View File

@@ -53,6 +53,7 @@ var loadSnapshotFunc = snapshot.LoadSnapshot
var listSnapshotsFunc = snapshot.ListSnapshots
var filesystemEntryFunc = snapshotfs.FilesystemEntryFromIDWithPath
var restoreEntryFunc = restore.Entry
var flushVolumeFunc = flushVolume
const UploaderConfigMultipartKey = "uploader-multipart"
const MaxErrorReported = 10
@@ -375,6 +376,18 @@ func findPreviousSnapshotManifest(ctx context.Context, rep repo.Repository, sour
return result, nil
}
type fileSystemRestoreOutput struct {
*restore.FilesystemOutput
}
func (o *fileSystemRestoreOutput) Flush() error {
return flushVolumeFunc(o.TargetPath)
}
func (o *fileSystemRestoreOutput) Terminate() error {
return nil
}
// Restore restores the specific sourcePath with the given snapshotID and updates progress
func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress, snapshotID, dest string, volMode uploader.PersistentVolumeMode, uploaderCfg map[string]string,
log logrus.FieldLogger, cancleCh chan struct{}) (int64, int32, error) {
@@ -434,13 +447,23 @@ func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress,
return 0, 0, errors.Wrap(err, "error to init output")
}
var output restore.Output = fsOutput
var output RestoreOutput
if volMode == uploader.PersistentVolumeBlock {
output = &BlockOutput{
FilesystemOutput: fsOutput,
}
} else {
output = &fileSystemRestoreOutput{
FilesystemOutput: fsOutput,
}
}
defer func() {
if err := output.Terminate(); err != nil {
log.Warnf("error terminating restore output for %v: %v", path, err)
}
}()
stat, err := restoreEntryFunc(kopiaCtx, rep, output, rootEntry, restore.Options{
Parallel: restoreConcurrency,
RestoreDirEntryAtDepth: math.MaxInt32,
@@ -453,5 +476,16 @@ func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress,
if err != nil {
return 0, 0, errors.Wrapf(err, "Failed to copy snapshot data to the target")
}
if err := output.Flush(); err != nil {
if err == errFlushUnsupported {
log.Warnf("Skip flushing data for %v under the current OS %v", path, runtime.GOOS)
} else {
return 0, 0, errors.Wrapf(err, "Failed to flush data to target")
}
} else {
log.Infof("Flush done for volume dir %v", path)
}
return stat.RestoredTotalFileSize, stat.RestoredFileCount, nil
}

View File

@@ -675,6 +675,7 @@ func TestRestore(t *testing.T) {
invalidManifestType bool
filesystemEntryFunc func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error)
restoreEntryFunc func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error)
flushVolumeFunc func(string) error
dest string
expectedBytes int64
expectedCount int32
@@ -757,6 +758,30 @@ func TestRestore(t *testing.T) {
volMode: uploader.PersistentVolumeBlock,
dest: "/tmp",
},
{
name: "Flush is not supported",
filesystemEntryFunc: func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error) {
return snapshotfs.EntryFromDirEntry(rep, &snapshot.DirEntry{Type: snapshot.EntryTypeFile}), nil
},
restoreEntryFunc: func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error) {
return restore.Stats{}, nil
},
flushVolumeFunc: func(string) error { return errFlushUnsupported },
snapshotID: "snapshot-123",
expectedError: nil,
},
{
name: "Flush fails",
filesystemEntryFunc: func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error) {
return snapshotfs.EntryFromDirEntry(rep, &snapshot.DirEntry{Type: snapshot.EntryTypeFile}), nil
},
restoreEntryFunc: func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error) {
return restore.Stats{}, nil
},
flushVolumeFunc: func(string) error { return errors.New("fake-flush-error") },
snapshotID: "snapshot-123",
expectedError: errors.New("fake-flush-error"),
},
}
em := &manifest.EntryMetadata{
@@ -784,6 +809,10 @@ func TestRestore(t *testing.T) {
restoreEntryFunc = tc.restoreEntryFunc
}
if tc.flushVolumeFunc != nil {
flushVolumeFunc = tc.flushVolumeFunc
}
repoWriterMock := &repomocks.RepositoryWriter{}
repoWriterMock.On("GetManifest", mock.Anything, mock.Anything, mock.Anything).Return(em, nil)
repoWriterMock.On("OpenObject", mock.Anything, mock.Anything).Return(em, nil)

View File

@@ -666,10 +666,22 @@ func validateNamespaceName(ns string) []error {
return nil
}
// Kubernetes does not allow asterisks in namespaces but Velero uses them as
// wildcards. Replace asterisks with an arbitrary letter to pass Kubernetes
// validation.
tmpNamespace := strings.ReplaceAll(ns, "*", "x")
// Validate the namespace name to ensure it is a valid wildcard pattern
if err := wildcard.ValidateNamespaceName(ns); err != nil {
return []error{err}
}
// Kubernetes does not allow wildcard characters in namespaces but Velero uses them
// for glob patterns. Replace wildcard characters with valid characters to pass
// Kubernetes validation.
tmpNamespace := ns
// Replace glob wildcard characters with valid alphanumeric characters
// Note: Validation of wildcard patterns is handled by the wildcard package.
tmpNamespace = strings.ReplaceAll(tmpNamespace, "*", "x") // matches any sequence
tmpNamespace = strings.ReplaceAll(tmpNamespace, "?", "x") // matches single character
tmpNamespace = strings.ReplaceAll(tmpNamespace, "[", "x") // character class start
tmpNamespace = strings.ReplaceAll(tmpNamespace, "]", "x") // character class end
if errMsgs := validation.ValidateNamespaceName(tmpNamespace, false); errMsgs != nil {
for _, msg := range errMsgs {
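The substitution trick above can be distilled into a runnable sketch; the rfc1123 regexp here is a simplified stand-in for Kubernetes' real namespace-name validation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rfc1123 approximates the DNS-label rule Kubernetes applies to namespace names.
var rfc1123 = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validForK8s substitutes each glob metacharacter with "x" (an arbitrary
// valid letter) so that a Velero wildcard pattern like "kube-*" can be
// run through name validation, while unsupported characters still fail.
func validForK8s(pattern string) bool {
	tmp := pattern
	for _, meta := range []string{"*", "?", "[", "]"} {
		tmp = strings.ReplaceAll(tmp, meta, "x")
	}
	return rfc1123.MatchString(tmp)
}

func main() {
	fmt.Println(validForK8s("kube-*"))      // glob wildcard passes
	fmt.Println(validForK8s("ns-[0-9]"))    // character class passes
	fmt.Println(validForK8s("bad|pattern")) // pipe is rejected
}
```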

View File

@@ -289,6 +289,54 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
excludes: []string{"bar"},
wantErr: true,
},
{
name: "glob characters in includes should not error",
includes: []string{"kube-*", "test-?", "ns-[0-9]"},
excludes: []string{},
wantErr: false,
},
{
name: "glob characters in excludes should not error",
includes: []string{"default"},
excludes: []string{"test-*", "app-?", "ns-[1-5]"},
wantErr: false,
},
{
name: "character class in includes should not error",
includes: []string{"ns-[abc]", "test-[0-9]"},
excludes: []string{},
wantErr: false,
},
{
name: "mixed glob patterns should not error",
includes: []string{"kube-*", "test-?"},
excludes: []string{"*-test", "debug-[0-9]"},
wantErr: false,
},
{
name: "pipe character in includes should error",
includes: []string{"namespace|other"},
excludes: []string{},
wantErr: true,
},
{
name: "parentheses in includes should error",
includes: []string{"namespace(prod)", "test-(dev)"},
excludes: []string{},
wantErr: true,
},
{
name: "exclamation mark in includes should error",
includes: []string{"!namespace", "test!"},
excludes: []string{},
wantErr: true,
},
{
name: "unsupported characters in excludes should error",
includes: []string{"default"},
excludes: []string{"test|prod", "app(staging)"},
wantErr: true,
},
}
for _, tc := range tests {
@@ -1082,16 +1130,6 @@ func TestExpandIncludesExcludes(t *testing.T) {
expectedWildcardExpanded: true,
expectError: false,
},
{
name: "brace wildcard pattern",
includes: []string{"app-{prod,dev}"},
excludes: []string{},
activeNamespaces: []string{"app-prod", "app-dev", "app-test", "default"},
expectedIncludes: []string{"app-prod", "app-dev"},
expectedExcludes: []string{},
expectedWildcardExpanded: true,
expectError: false,
},
{
name: "empty activeNamespaces with wildcards",
includes: []string{"kube-*"},
@@ -1233,13 +1271,6 @@ func TestResolveNamespaceList(t *testing.T) {
expectedNamespaces: []string{"kube-system", "kube-public"},
preExpandWildcards: true,
},
{
name: "complex wildcard pattern",
includes: []string{"app-{prod,dev}", "kube-*"},
excludes: []string{"*-test"},
activeNamespaces: []string{"app-prod", "app-dev", "app-test", "kube-system", "kube-test", "default"},
expectedNamespaces: []string{"app-prod", "app-dev", "kube-system"},
},
{
name: "question mark wildcard pattern",
includes: []string{"ns-?"},

View File

@@ -17,7 +17,6 @@ package kube
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -34,6 +33,11 @@ const (
NodeOSLabel = "kubernetes.io/os"
)
var realNodeOSMap = map[string]string{
"linux": NodeOSLinux,
"windows": NodeOSWindows,
}
func IsLinuxNode(ctx context.Context, nodeName string, client client.Client) error {
node := &corev1api.Node{}
if err := client.Get(ctx, types.NamespacedName{Name: nodeName}, node); err != nil {
@@ -41,12 +45,11 @@ func IsLinuxNode(ctx context.Context, nodeName string, client client.Client) err
}
os, found := node.Labels[NodeOSLabel]
if !found {
return errors.Errorf("no os type label for node %s", nodeName)
}
if os != NodeOSLinux {
if getRealOS(os) != NodeOSLinux {
return errors.Errorf("os type %s for node %s is not linux", os, nodeName)
}
@@ -72,7 +75,7 @@ func withOSNode(ctx context.Context, client client.Client, osType string, log lo
for _, node := range nodeList.Items {
os, found := node.Labels[NodeOSLabel]
if os == osType {
if getRealOS(os) == osType {
return true
}
@@ -98,7 +101,7 @@ func GetNodeOS(ctx context.Context, nodeName string, nodeClient corev1client.Cor
return "", nil
}
return node.Labels[NodeOSLabel], nil
return getRealOS(node.Labels[NodeOSLabel]), nil
}
func HasNodeWithOS(ctx context.Context, os string, nodeClient corev1client.CoreV1Interface) error {
@@ -106,14 +109,29 @@ func HasNodeWithOS(ctx context.Context, os string, nodeClient corev1client.CoreV
return errors.New("invalid node OS")
}
nodes, err := nodeClient.Nodes().List(ctx, metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s", NodeOSLabel, os)})
nodes, err := nodeClient.Nodes().List(ctx, metav1.ListOptions{})
if err != nil {
return errors.Wrapf(err, "error listing nodes with OS %s", os)
}
if len(nodes.Items) == 0 {
return errors.Errorf("node with OS %s doesn't exist", os)
for _, node := range nodes.Items {
osLabel, found := node.Labels[NodeOSLabel]
if !found {
continue
}
if getRealOS(osLabel) == os {
return nil
}
}
return nil
return errors.Errorf("node with OS %s doesn't exist", os)
}
func getRealOS(osLabel string) string {
if os, found := realNodeOSMap[osLabel]; found {
return os
}
return NodeOSLinux
}
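A runnable distillation of the canonicalization above, with local constants standing in for the package's NodeOS values:

```go
package main

import "fmt"

const (
	nodeOSLinux   = "linux"
	nodeOSWindows = "windows"
)

// realNodeOSMap mirrors the lookup table: only recognized label values
// map to a canonical OS.
var realNodeOSMap = map[string]string{
	"linux":   nodeOSLinux,
	"windows": nodeOSWindows,
}

// getRealOS reproduces the fallback: any unrecognized label (including an
// empty one) is treated as Linux.
func getRealOS(osLabel string) string {
	if os, found := realNodeOSMap[osLabel]; found {
		return os
	}
	return nodeOSLinux
}

func main() {
	for _, label := range []string{"linux", "windows", "", "freebsd"} {
		fmt.Printf("%q -> %s\n", label, getRealOS(label))
	}
}
```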

View File

@@ -40,10 +40,12 @@ type LoadAffinity struct {
}
type PodResources struct {
CPURequest string `json:"cpuRequest,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
CPURequest string `json:"cpuRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
EphemeralStorageRequest string `json:"ephemeralStorageRequest,omitempty"`
EphemeralStorageLimit string `json:"ephemeralStorageLimit,omitempty"`
}
// IsPodRunning does a well-rounded check to make sure the specified pod is running stably.
@@ -230,14 +232,9 @@ func CollectPodLogs(ctx context.Context, podGetter corev1client.CoreV1Interface,
return nil
}
func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
if len(loadAffinities) == 0 {
return nil
}
nodeSelectorTermList := make([]corev1api.NodeSelectorTerm, 0)
for _, loadAffinity := range loadAffinities {
requirements := []corev1api.NodeSelectorRequirement{}
func ToSystemAffinity(loadAffinity *LoadAffinity, volumeTopology *corev1api.NodeSelector) *corev1api.Affinity {
requirements := []corev1api.NodeSelectorRequirement{}
if loadAffinity != nil {
for k, v := range loadAffinity.NodeSelector.MatchLabels {
requirements = append(requirements, corev1api.NodeSelectorRequirement{
Key: k,
@@ -253,25 +250,25 @@ func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
Operator: corev1api.NodeSelectorOperator(exp.Operator),
})
}
nodeSelectorTermList = append(
nodeSelectorTermList,
corev1api.NodeSelectorTerm{
MatchExpressions: requirements,
},
)
}
if len(nodeSelectorTermList) > 0 {
result := new(corev1api.Affinity)
result.NodeAffinity = new(corev1api.NodeAffinity)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = new(corev1api.NodeSelector)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = nodeSelectorTermList
result := new(corev1api.Affinity)
result.NodeAffinity = new(corev1api.NodeAffinity)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = new(corev1api.NodeSelector)
return result
if volumeTopology != nil {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = append(result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms, volumeTopology.NodeSelectorTerms...)
} else if len(requirements) > 0 {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = make([]corev1api.NodeSelectorTerm, 1)
} else {
return nil
}
return nil
for i := range result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[i].MatchExpressions = append(result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[i].MatchExpressions, requirements...)
}
return result
}
func DiagnosePod(pod *corev1api.Pod, events *corev1api.EventList) string {
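The merging rule in the reworked ToSystemAffinity can be sketched with simplified types: node-selector terms OR together while the requirements inside one term AND together, so the load-affinity requirements have to be appended to every topology term. A sketch with plain string requirements standing in for corev1 types:

```go
package main

import "fmt"

// term is a simplified stand-in for corev1.NodeSelectorTerm.
type term struct{ reqs []string }

// merge mirrors the new behavior: with volume topology present, the
// load-affinity requirements are appended to every topology term so each
// alternative placement still honors the load affinity; without topology,
// the requirements form a single term; with neither, the result is nil.
func merge(loadReqs []string, topology []term) []term {
	if len(topology) > 0 {
		out := make([]term, len(topology))
		for i, t := range topology {
			out[i] = term{reqs: append(append([]string{}, t.reqs...), loadReqs...)}
		}
		return out
	}
	if len(loadReqs) > 0 {
		return []term{{reqs: loadReqs}}
	}
	return nil
}

func main() {
	merged := merge([]string{"instance-type=large"}, []term{
		{reqs: []string{"zone=a"}},
		{reqs: []string{"zone=b"}},
	})
	for _, t := range merged {
		fmt.Println(t.reqs)
	}
}
```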

View File

@@ -747,24 +747,23 @@ func TestCollectPodLogs(t *testing.T) {
func TestToSystemAffinity(t *testing.T) {
tests := []struct {
name string
loadAffinities []*LoadAffinity
loadAffinity *LoadAffinity
volumeTopology *corev1api.NodeSelector
expected *corev1api.Affinity
}{
{
name: "loadAffinity is nil",
},
{
name: "loadAffinity is empty",
loadAffinities: []*LoadAffinity{},
name: "loadAffinity is empty",
loadAffinity: &LoadAffinity{},
},
{
name: "with match label",
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
},
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
},
},
},
@@ -788,23 +787,21 @@ func TestToSystemAffinity(t *testing.T) {
},
{
name: "with match expression",
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
},
},
},
@@ -838,19 +835,49 @@ func TestToSystemAffinity(t *testing.T) {
},
},
{
name: "multiple load affinities",
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
name: "with volume topology",
volumeTopology: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
@@ -862,10 +889,177 @@ func TestToSystemAffinity(t *testing.T) {
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-1",
Values: []string{"value-1"},
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
},
},
},
},
{
name: "with match expression and volume topology",
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
},
},
},
},
volumeTopology: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
},
expected: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-2",
Values: []string{"value-2"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-2",
Values: []string{"value-2"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
},
{
@@ -875,6 +1069,28 @@ func TestToSystemAffinity(t *testing.T) {
Values: []string{"value-2"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
@@ -886,7 +1102,7 @@ func TestToSystemAffinity(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
affinity := ToSystemAffinity(test.loadAffinities)
affinity := ToSystemAffinity(test.loadAffinity, test.volumeTopology)
assert.True(t, reflect.DeepEqual(affinity, test.expected))
})
}

View File

@@ -417,19 +417,19 @@ func MakePodPVCAttachment(volumeName string, volumeMode *corev1api.PersistentVol
return volumeMounts, volumeDevices, volumePath
}
// GetPVForPVC returns the PersistentVolume backing a PVC.
// The returned PV is nil on error.
func GetPVForPVC(
pvc *corev1api.PersistentVolumeClaim,
crClient crclient.Client,
) (*corev1api.PersistentVolume, error) {
if pvc.Spec.VolumeName == "" {
return nil, errors.Errorf("PVC %s/%s has no volume backing this claim",
pvc.Namespace, pvc.Name)
return nil, errors.Errorf("PVC %s/%s has no volume backing this claim", pvc.Namespace, pvc.Name)
}
if pvc.Status.Phase != corev1api.ClaimBound {
// TODO: confirm if this PVC should be snapshotted if it has no PV bound
return nil,
errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
pvc.Namespace, pvc.Name, pvc.Status.Phase)
return nil, errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
pvc.Namespace, pvc.Name, pvc.Status.Phase)
}
pv := &corev1api.PersistentVolume{}
@@ -580,3 +580,29 @@ func GetPVAttachedNodes(ctx context.Context, pv string, storageClient storagev1.
return nodes, nil
}
func GetVolumeTopology(ctx context.Context, volumeClient corev1client.CoreV1Interface, storageClient storagev1.StorageV1Interface, pvName string, scName string) (*corev1api.NodeSelector, error) {
if pvName == "" || scName == "" {
return nil, errors.Errorf("invalid parameter, pv %s, sc %s", pvName, scName)
}
sc, err := storageClient.StorageClasses().Get(ctx, scName, metav1.GetOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error getting storage class %s", scName)
}
if sc.VolumeBindingMode == nil || *sc.VolumeBindingMode != storagev1api.VolumeBindingWaitForFirstConsumer {
return nil, nil
}
pv, err := volumeClient.PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error getting PV %s", pvName)
}
if pv.Spec.NodeAffinity == nil {
return nil, nil
}
return pv.Spec.NodeAffinity.Required, nil
}
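The gate at the top of GetVolumeTopology can be distilled into a tiny sketch; bindingMode is a local stand-in for storagev1.VolumeBindingMode:

```go
package main

import "fmt"

// bindingMode stands in for storagev1.VolumeBindingMode.
type bindingMode string

const waitForFirstConsumer bindingMode = "WaitForFirstConsumer"

// topologyRelevant captures the gate: a nil binding mode or Immediate
// binding yields no topology, because only WaitForFirstConsumer volumes
// are pinned to a node's topology and therefore constrain scheduling.
func topologyRelevant(mode *bindingMode) bool {
	return mode != nil && *mode == waitForFirstConsumer
}

func main() {
	wffc := waitForFirstConsumer
	immediate := bindingMode("Immediate")
	fmt.Println(topologyRelevant(&wffc), topologyRelevant(&immediate), topologyRelevant(nil))
}
```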

View File

@@ -1909,3 +1909,143 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
})
}
}
func TestGetVolumeTopology(t *testing.T) {
pvWithoutNodeAffinity := &corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pv",
},
}
pvWithNodeAffinity := &corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pv",
},
Spec: corev1api.PersistentVolumeSpec{
NodeAffinity: &corev1api.VolumeNodeAffinity{
Required: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "fake-key",
},
},
},
},
},
},
},
}
scObjWithoutVolumeBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
}
volumeBindImmediate := storagev1api.VolumeBindingImmediate
scObjWithImmediateBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
VolumeBindingMode: &volumeBindImmediate,
}
volumeBindWffc := storagev1api.VolumeBindingWaitForFirstConsumer
scObjWithWffcBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
VolumeBindingMode: &volumeBindWffc,
}
tests := []struct {
name string
pvName string
scName string
kubeClientObj []runtime.Object
expectedErr string
expected *corev1api.NodeSelector
}{
{
name: "invalid pvName",
scName: "fake-storage-class",
expectedErr: "invalid parameter, pv , sc fake-storage-class",
},
{
name: "invalid scName",
pvName: "fake-pv",
expectedErr: "invalid parameter, pv fake-pv, sc ",
},
{
name: "no sc",
pvName: "fake-pv",
scName: "fake-storage-class",
expectedErr: "error getting storage class fake-storage-class: storageclasses.storage.k8s.io \"fake-storage-class\" not found",
},
{
name: "sc without binding mode",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithoutVolumeBind},
},
{
name: "sc with immediate binding mode",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithImmediateBind},
},
{
name: "get pv fail",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithWffcBind},
expectedErr: "error getting PV fake-pv: persistentvolumes \"fake-pv\" not found",
},
{
name: "pv with no affinity",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{
scObjWithWffcBind,
pvWithoutNodeAffinity,
},
},
{
name: "pv with affinity",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{
scObjWithWffcBind,
pvWithNodeAffinity,
},
expected: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "fake-key",
},
},
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
fakeKubeClient := fake.NewSimpleClientset(test.kubeClientObj...)
var kubeClient kubernetes.Interface = fakeKubeClient
affinity, err := GetVolumeTopology(t.Context(), kubeClient.CoreV1(), kubeClient.StorageV1(), test.pvName, test.scName)
if test.expectedErr != "" {
assert.EqualError(t, err, test.expectedErr)
} else {
assert.Equal(t, test.expected, affinity)
}
})
}
}

View File

@@ -20,12 +20,34 @@ import (
"github.com/pkg/errors"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/vmware-tanzu/velero/pkg/constant"
)
// ParseResourceRequirements takes a set of CPU and memory requests and limit string
// ParseCPUAndMemoryResources is a helper function that parses CPU and memory requests and limits,
// using default values for ephemeral storage.
func ParseCPUAndMemoryResources(cpuRequest, memRequest, cpuLimit, memLimit string) (corev1api.ResourceRequirements, error) {
return ParseResourceRequirements(
cpuRequest,
memRequest,
constant.DefaultEphemeralStorageRequest,
cpuLimit,
memLimit,
constant.DefaultEphemeralStorageLimit,
)
}
// ParseResourceRequirements takes a set of CPU, memory, and ephemeral storage request and limit string
// values and returns a ResourceRequirements struct to be used in a Container.
// An error is returned if we cannot parse the request/limit.
func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string) (corev1api.ResourceRequirements, error) {
func ParseResourceRequirements(
cpuRequest,
memRequest,
ephemeralStorageRequest,
cpuLimit,
memLimit,
ephemeralStorageLimit string,
) (corev1api.ResourceRequirements, error) {
resources := corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{},
Limits: corev1api.ResourceList{},
@@ -41,6 +63,11 @@ func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string
return resources, errors.Wrapf(err, `couldn't parse memory request "%s"`, memRequest)
}
parsedEphemeralStorageRequest, err := resource.ParseQuantity(ephemeralStorageRequest)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse ephemeral storage request "%s"`, ephemeralStorageRequest)
}
parsedCPULimit, err := resource.ParseQuantity(cpuLimit)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse CPU limit "%s"`, cpuLimit)
@@ -51,6 +78,11 @@ func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string
return resources, errors.Wrapf(err, `couldn't parse memory limit "%s"`, memLimit)
}
parsedEphemeralStorageLimit, err := resource.ParseQuantity(ephemeralStorageLimit)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse ephemeral storage limit "%s"`, ephemeralStorageLimit)
}
// A quantity of 0 is treated as unbounded
unbounded := resource.MustParse("0")
@@ -62,6 +94,10 @@ func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string
return resources, errors.WithStack(errors.Errorf(`Memory request "%s" must be less than or equal to Memory limit "%s"`, memRequest, memLimit))
}
if parsedEphemeralStorageLimit != unbounded && parsedEphemeralStorageRequest.Cmp(parsedEphemeralStorageLimit) > 0 {
return resources, errors.WithStack(errors.Errorf(`Ephemeral storage request "%s" must be less than or equal to Ephemeral storage limit "%s"`, ephemeralStorageRequest, ephemeralStorageLimit))
}
// Only set resources if they are not unbounded
if parsedCPURequest != unbounded {
resources.Requests[corev1api.ResourceCPU] = parsedCPURequest
@@ -69,12 +105,18 @@ func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string
if parsedMemRequest != unbounded {
resources.Requests[corev1api.ResourceMemory] = parsedMemRequest
}
if parsedEphemeralStorageRequest != unbounded {
resources.Requests[corev1api.ResourceEphemeralStorage] = parsedEphemeralStorageRequest
}
if parsedCPULimit != unbounded {
resources.Limits[corev1api.ResourceCPU] = parsedCPULimit
}
if parsedMemLimit != unbounded {
resources.Limits[corev1api.ResourceMemory] = parsedMemLimit
}
if parsedEphemeralStorageLimit != unbounded {
resources.Limits[corev1api.ResourceEphemeralStorage] = parsedEphemeralStorageLimit
}
return resources, nil
}
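The request/limit rule enforced above can be sketched independently of the Kubernetes quantity types. The snippet below is a minimal stand-alone illustration, not the Velero implementation: `checkPair` is a hypothetical helper that mirrors the two rules — a limit of 0 is treated as unbounded, and otherwise a request must not exceed its limit.

```go
package main

import (
	"errors"
	"fmt"
)

// checkPair mirrors the validation rule from ParseResourceRequirements:
// a limit of 0 is treated as unbounded, otherwise the request must be
// less than or equal to the limit. Hypothetical helper for illustration.
func checkPair(name string, request, limit int64) error {
	if limit != 0 && request > limit {
		return errors.New(name + " request must be less than or equal to " + name + " limit")
	}
	return nil
}

func main() {
	fmt.Println(checkPair("ephemeral storage", 5, 10)) // ok: 5Gi request, 10Gi limit
	fmt.Println(checkPair("ephemeral storage", 10, 0)) // ok: a limit of 0 is unbounded
	fmt.Println(checkPair("ephemeral storage", 10, 5)) // error: request exceeds limit
}
```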

View File

@@ -27,10 +27,12 @@ import (
func TestParseResourceRequirements(t *testing.T) {
type args struct {
cpuRequest string
memRequest string
cpuLimit string
memLimit string
cpuRequest string
memRequest string
ephemeralStorageRequest string
cpuLimit string
memLimit string
ephemeralStorageLimit string
}
tests := []struct {
name string
@@ -38,43 +40,61 @@ func TestParseResourceRequirements(t *testing.T) {
wantErr bool
expected *corev1api.ResourceRequirements
}{
{"unbounded quantities", args{"0", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
{"unbounded quantities", args{"0", "0", "0", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{},
Limits: corev1api.ResourceList{},
}},
{"valid quantities", args{"100m", "128Mi", "200m", "256Mi"}, false, nil},
{"CPU request with unbounded limit", args{"100m", "128Mi", "0", "256Mi"}, false, &corev1api.ResourceRequirements{
{"valid quantities", args{"100m", "128Mi", "5Gi", "200m", "256Mi", "10Gi"}, false, nil},
{"CPU request with unbounded limit", args{"100m", "128Mi", "5Gi", "0", "256Mi", "10Gi"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceMemory: resource.MustParse("256Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("10Gi"),
},
}},
{"Mem request with unbounded limit", args{"100m", "128Mi", "5Gi", "200m", "0", "10Gi"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
corev1api.ResourceEphemeralStorage: resource.MustParse("10Gi"),
},
}},
{"Ephemeral storage request with unbounded limit", args{"100m", "128Mi", "5Gi", "200m", "256Mi", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
corev1api.ResourceMemory: resource.MustParse("256Mi"),
},
}},
{"Mem request with unbounded limit", args{"100m", "128Mi", "200m", "0"}, false, &corev1api.ResourceRequirements{
{"CPU/Mem/EphemeralStorage requests with unbounded limits", args{"100m", "128Mi", "5Gi", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
},
}},
{"CPU/Mem requests with unbounded limits", args{"100m", "128Mi", "0", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{},
}},
{"invalid quantity", args{"100m", "invalid", "200m", "256Mi"}, true, nil},
{"CPU request greater than limit", args{"300m", "128Mi", "200m", "256Mi"}, true, nil},
{"memory request greater than limit", args{"100m", "512Mi", "200m", "256Mi"}, true, nil},
{"invalid quantity", args{"100m", "invalid", "1Gi", "200m", "256Mi", "valid"}, true, nil},
{"CPU request greater than limit", args{"300m", "128Mi", "5Gi", "200m", "256Mi", "10Gi"}, true, nil},
{"memory request greater than limit", args{"100m", "512Mi", "5Gi", "200m", "256Mi", "10Gi"}, true, nil},
{"ephemeral storage request greater than limit", args{"100m", "128Mi", "10Gi", "200m", "256Mi", "5Gi"}, true, nil},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseResourceRequirements(tt.args.cpuRequest, tt.args.memRequest, tt.args.cpuLimit, tt.args.memLimit)
got, err := ParseResourceRequirements(tt.args.cpuRequest, tt.args.memRequest, tt.args.ephemeralStorageRequest, tt.args.cpuLimit, tt.args.memLimit, tt.args.ephemeralStorageLimit)
if tt.wantErr {
assert.Error(t, err)
return
@@ -85,12 +105,14 @@ func TestParseResourceRequirements(t *testing.T) {
if tt.expected == nil {
expected = corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuRequest),
corev1api.ResourceMemory: resource.MustParse(tt.args.memRequest),
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuRequest),
corev1api.ResourceMemory: resource.MustParse(tt.args.memRequest),
corev1api.ResourceEphemeralStorage: resource.MustParse(tt.args.ephemeralStorageRequest),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuLimit),
corev1api.ResourceMemory: resource.MustParse(tt.args.memLimit),
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuLimit),
corev1api.ResourceMemory: resource.MustParse(tt.args.memLimit),
corev1api.ResourceEphemeralStorage: resource.MustParse(tt.args.ephemeralStorageLimit),
},
}
} else {

View File

@@ -156,7 +156,7 @@ func TestGetVolumesByPod(t *testing.T) {
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from PVB because colume mounting default service account token
/// Excluded from PVB because volume mounting default service account token
{Name: "default-token-5xq45"},
},
},

View File

@@ -31,70 +31,77 @@ func ShouldExpandWildcards(includes []string, excludes []string) bool {
}
// containsWildcardPattern checks if a pattern contains any wildcard symbols
// Supported patterns: *, ?, [abc], {a,b,c}
// Supported patterns: *, ?, [abc]
// Note: . and + are treated as literal characters (not wildcards)
// Note: ** and consecutive asterisks are NOT supported (will cause validation error)
func containsWildcardPattern(pattern string) bool {
return strings.ContainsAny(pattern, "*?[{")
return strings.ContainsAny(pattern, "*?[")
}
func validateWildcardPatterns(patterns []string) error {
for _, pattern := range patterns {
// Check for invalid regex-only patterns that we don't support
if strings.ContainsAny(pattern, "|()") {
return errors.New("wildcard pattern contains unsupported regex symbols: |, (, )")
}
// Check for consecutive asterisks (2 or more)
if strings.Contains(pattern, "**") {
return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
}
// Check for malformed brace patterns
if err := validateBracePatterns(pattern); err != nil {
if err := ValidateNamespaceName(pattern); err != nil {
return err
}
}
return nil
}
func ValidateNamespaceName(pattern string) error {
// Check for invalid characters that are not supported in glob patterns
if strings.ContainsAny(pattern, "|()!{},") {
return errors.New("wildcard pattern contains unsupported characters: |, (, ), !, {, }, ,")
}
// Check for consecutive asterisks (2 or more)
if strings.Contains(pattern, "**") {
return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
}
// Check for malformed brace patterns
if err := validateBracePatterns(pattern); err != nil {
return err
}
return nil
}
// validateBracePatterns checks for malformed brace patterns like unclosed braces or empty braces
// Also validates bracket patterns [] for character classes
func validateBracePatterns(pattern string) error {
depth := 0
bracketDepth := 0
for i := 0; i < len(pattern); i++ {
if pattern[i] == '{' {
braceStart := i
depth++
if pattern[i] == '[' {
bracketStart := i
bracketDepth++
// Scan ahead to find the matching closing brace and validate content
for j := i + 1; j < len(pattern) && depth > 0; j++ {
if pattern[j] == '{' {
depth++
} else if pattern[j] == '}' {
depth--
if depth == 0 {
// Found matching closing brace - validate content
content := pattern[braceStart+1 : j]
if strings.Trim(content, ", \t") == "" {
return errors.New("wildcard pattern contains empty brace pattern '{}'")
// Scan ahead to find the matching closing bracket and validate content
for j := i + 1; j < len(pattern) && bracketDepth > 0; j++ {
if pattern[j] == ']' {
bracketDepth--
if bracketDepth == 0 {
// Found matching closing bracket - validate content
content := pattern[bracketStart+1 : j]
if content == "" {
return errors.New("wildcard pattern contains empty bracket pattern '[]'")
}
// Skip to the closing brace
// Skip to the closing bracket
i = j
break
}
}
}
// If we exited the loop without finding a match (depth > 0), brace is unclosed
if depth > 0 {
return errors.New("wildcard pattern contains unclosed brace '{'")
// If we exited the loop without finding a match (bracketDepth > 0), bracket is unclosed
if bracketDepth > 0 {
return errors.New("wildcard pattern contains unclosed bracket '['")
}
// i is now positioned at the closing brace; the outer loop will increment it
} else if pattern[i] == '}' {
// Found a closing brace without a matching opening brace
return errors.New("wildcard pattern contains unmatched closing brace '}'")
// i is now positioned at the closing bracket; the outer loop will increment it
} else if pattern[i] == ']' {
// Found a closing bracket without a matching opening bracket
return errors.New("wildcard pattern contains unmatched closing bracket ']'")
}
}

View File

@@ -90,7 +90,7 @@ func TestShouldExpandWildcards(t *testing.T) {
name: "brace alternatives wildcard",
includes: []string{"ns{prod,staging}"},
excludes: []string{},
expected: true, // brace alternatives are considered wildcard
expected: false, // brace alternatives are not supported
},
{
name: "dot is literal - not wildcard",
@@ -237,9 +237,9 @@ func TestExpandWildcards(t *testing.T) {
activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "db-prod"},
includes: []string{"app-{prod,staging}"},
excludes: []string{},
expectedIncludes: []string{"app-prod", "app-staging"}, // {prod,staging} matches either
expectedIncludes: nil,
expectedExcludes: nil,
expectError: false,
expectError: true,
},
{
name: "literal dot and plus patterns",
@@ -259,33 +259,6 @@ func TestExpandWildcards(t *testing.T) {
expectedExcludes: nil,
expectError: true, // |, (, ) are not supported
},
{
name: "unclosed brace patterns should error",
activeNamespaces: []string{"app-prod"},
includes: []string{"app-{prod,staging"},
excludes: []string{},
expectedIncludes: nil,
expectedExcludes: nil,
expectError: true, // unclosed brace
},
{
name: "empty brace patterns should error",
activeNamespaces: []string{"app-prod"},
includes: []string{"app-{}"},
excludes: []string{},
expectedIncludes: nil,
expectedExcludes: nil,
expectError: true, // empty braces
},
{
name: "unmatched closing brace should error",
activeNamespaces: []string{"app-prod"},
includes: []string{"app-prod}"},
excludes: []string{},
expectedIncludes: nil,
expectedExcludes: nil,
expectError: true, // unmatched closing brace
},
}
for _, tt := range tests {
@@ -354,13 +327,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
expected: []string{}, // returns empty slice, not nil
expectError: false,
},
{
name: "brace patterns work correctly",
patterns: []string{"app-{prod,staging}"},
activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "app-{prod,staging}"},
expected: []string{"app-prod", "app-staging"}, // brace patterns do expand
expectError: false,
},
{
name: "duplicate matches from multiple patterns",
patterns: []string{"app-*", "*-prod"},
@@ -389,20 +355,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
expected: []string{"nsa", "nsb", "nsc"}, // [a-c] matches a to c
expectError: false,
},
{
name: "negated character class",
patterns: []string{"ns[!abc]"},
activeNamespaces: []string{"nsa", "nsb", "nsc", "nsd", "ns1"},
expected: []string{"nsd", "ns1"}, // [!abc] matches anything except a, b, c
expectError: false,
},
{
name: "brace alternatives",
patterns: []string{"app-{prod,test}"},
activeNamespaces: []string{"app-prod", "app-test", "app-staging", "db-prod"},
expected: []string{"app-prod", "app-test"}, // {prod,test} matches either
expectError: false,
},
{
name: "double asterisk should error",
patterns: []string{"**"},
@@ -410,13 +362,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
expected: nil,
expectError: true, // ** is not allowed
},
{
name: "literal dot and plus",
patterns: []string{"app.prod", "service+"},
activeNamespaces: []string{"app.prod", "appXprod", "service+", "service"},
expected: []string{"app.prod", "service+"}, // . and + are literal
expectError: false,
},
{
name: "unsupported regex symbols should error",
patterns: []string{"ns(1|2)"},
@@ -468,153 +413,101 @@ func TestValidateBracePatterns(t *testing.T) {
expectError bool
errorMsg string
}{
// Valid patterns
// Valid square bracket patterns
{
name: "valid single brace pattern",
pattern: "app-{prod,staging}",
name: "valid square bracket pattern",
pattern: "ns[abc]",
expectError: false,
},
{
name: "valid brace with single option",
pattern: "app-{prod}",
name: "valid square bracket pattern with range",
pattern: "ns[a-z]",
expectError: false,
},
{
name: "valid brace with three options",
pattern: "app-{prod,staging,dev}",
name: "valid square bracket pattern with numbers",
pattern: "ns[0-9]",
expectError: false,
},
{
name: "valid pattern with text before and after brace",
pattern: "prefix-{a,b}-suffix",
name: "valid square bracket pattern with mixed",
pattern: "ns[a-z0-9]",
expectError: false,
},
{
name: "valid pattern with no braces",
pattern: "app-prod",
name: "valid square bracket pattern with single character",
pattern: "ns[a]",
expectError: false,
},
{
name: "valid pattern with asterisk",
pattern: "app-*",
name: "valid square bracket pattern with text before and after",
pattern: "prefix-[abc]-suffix",
expectError: false,
},
// Unclosed opening brackets
{
name: "valid brace with spaces around content",
pattern: "app-{ prod , staging }",
expectError: false,
name: "unclosed opening bracket at end",
pattern: "ns[abc",
expectError: true,
errorMsg: "unclosed bracket",
},
{
name: "valid brace with numbers",
pattern: "ns-{1,2,3}",
expectError: false,
name: "unclosed opening bracket at start",
pattern: "[abc",
expectError: true,
errorMsg: "unclosed bracket",
},
{
name: "valid brace with hyphens in options",
pattern: "{app-prod,db-staging}",
expectError: false,
name: "unclosed opening bracket in middle",
pattern: "ns[abc-test",
expectError: true,
errorMsg: "unclosed bracket",
},
// Unclosed opening braces
// Unmatched closing brackets
{
name: "unclosed opening brace at end",
pattern: "app-{prod,staging",
name: "unmatched closing bracket at end",
pattern: "ns-abc]",
expectError: true,
errorMsg: "unclosed brace",
errorMsg: "unmatched closing bracket",
},
{
name: "unclosed opening brace at start",
pattern: "{prod,staging",
name: "unmatched closing bracket at start",
pattern: "]ns-abc",
expectError: true,
errorMsg: "unclosed brace",
errorMsg: "unmatched closing bracket",
},
{
name: "unclosed opening brace in middle",
pattern: "app-{prod-test",
name: "unmatched closing bracket in middle",
pattern: "ns-]abc",
expectError: true,
errorMsg: "unclosed brace",
errorMsg: "unmatched closing bracket",
},
{
name: "multiple unclosed braces",
pattern: "app-{prod-{staging",
name: "extra closing bracket after valid pair",
pattern: "ns[abc]]",
expectError: true,
errorMsg: "unclosed brace",
errorMsg: "unmatched closing bracket",
},
// Unmatched closing braces
// Empty bracket patterns
{
name: "unmatched closing brace at end",
pattern: "app-prod}",
name: "completely empty brackets",
pattern: "ns[]",
expectError: true,
errorMsg: "unmatched closing brace",
errorMsg: "empty bracket pattern",
},
{
name: "unmatched closing brace at start",
pattern: "}app-prod",
name: "empty brackets at start",
pattern: "[]ns",
expectError: true,
errorMsg: "unmatched closing brace",
errorMsg: "empty bracket pattern",
},
{
name: "unmatched closing brace in middle",
pattern: "app-}prod",
name: "empty brackets standalone",
pattern: "[]",
expectError: true,
errorMsg: "unmatched closing brace",
},
{
name: "extra closing brace after valid pair",
pattern: "app-{prod,staging}}",
expectError: true,
errorMsg: "unmatched closing brace",
},
// Empty brace patterns
{
name: "completely empty braces",
pattern: "app-{}",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "braces with only spaces",
pattern: "app-{ }",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "braces with only comma",
pattern: "app-{,}",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "braces with only commas",
pattern: "app-{,,,}",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "braces with commas and spaces",
pattern: "app-{ , , }",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "braces with tabs and commas",
pattern: "app-{\t,\t}",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "empty braces at start",
pattern: "{}app-prod",
expectError: true,
errorMsg: "empty brace pattern",
},
{
name: "empty braces standalone",
pattern: "{}",
expectError: true,
errorMsg: "empty brace pattern",
errorMsg: "empty bracket pattern",
},
// Edge cases
@@ -623,58 +516,6 @@ func TestValidateBracePatterns(t *testing.T) {
pattern: "",
expectError: false,
},
{
name: "pattern with only opening brace",
pattern: "{",
expectError: true,
errorMsg: "unclosed brace",
},
{
name: "pattern with only closing brace",
pattern: "}",
expectError: true,
errorMsg: "unmatched closing brace",
},
{
name: "valid brace with special characters inside",
pattern: "app-{prod-1,staging_2,dev.3}",
expectError: false,
},
{
name: "brace with asterisk inside option",
pattern: "app-{prod*,staging}",
expectError: false,
},
{
name: "multiple valid brace patterns",
pattern: "{app,db}-{prod,staging}",
expectError: false,
},
{
name: "brace with single character",
pattern: "app-{a}",
expectError: false,
},
{
name: "brace with trailing comma but has content",
pattern: "app-{prod,staging,}",
expectError: false, // Has content, so it's valid
},
{
name: "brace with leading comma but has content",
pattern: "app-{,prod,staging}",
expectError: false, // Has content, so it's valid
},
{
name: "brace with leading comma but has content",
pattern: "app-{{,prod,staging}",
expectError: true, // unclosed brace
},
{
name: "brace with leading comma but has content",
pattern: "app-{,prod,staging}}",
expectError: true, // unmatched closing brace
},
}
for _, tt := range tests {
@@ -723,20 +564,6 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
assert.ElementsMatch(t, []string{"ns-1", "ns_2", "ns.3", "ns@4"}, result)
})
t.Run("complex glob combinations", func(t *testing.T) {
activeNamespaces := []string{"app1-prod", "app2-prod", "app1-test", "db-prod", "service"}
result, err := expandWildcards([]string{"app?-{prod,test}"}, activeNamespaces)
require.NoError(t, err)
assert.ElementsMatch(t, []string{"app1-prod", "app2-prod", "app1-test"}, result)
})
t.Run("escaped characters", func(t *testing.T) {
activeNamespaces := []string{"app*", "app-prod", "app?test", "app-test"}
result, err := expandWildcards([]string{"app\\*"}, activeNamespaces)
require.NoError(t, err)
assert.ElementsMatch(t, []string{"app*"}, result)
})
t.Run("mixed literal and wildcard patterns", func(t *testing.T) {
activeNamespaces := []string{"app.prod", "app-prod", "app_prod", "test.ns"}
result, err := expandWildcards([]string{"app.prod", "app?prod"}, activeNamespaces)
@@ -777,12 +604,8 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
shouldError bool
}{
{"unclosed bracket", "ns[abc", true},
{"unclosed brace", "app-{prod,staging", true},
{"nested unclosed", "ns[a{bc", true},
{"valid bracket", "ns[abc]", false},
{"valid brace", "app-{prod,staging}", false},
{"empty bracket", "ns[]", true}, // empty brackets are invalid
{"empty brace", "app-{}", true}, // empty braces are invalid
}
for _, tt := range tests {

View File

@@ -1,90 +0,0 @@
new Crawler({
rateLimit: 8,
maxDepth: 10,
startUrls: ["https://velero.io/docs", "https://velero.io/"],
renderJavaScript: false,
sitemaps: ["https://velero.io/sitemap.xml"],
ignoreCanonicalTo: false,
discoveryPatterns: ["https://velero.io/**"],
schedule: "at 6:39 PM on Friday",
actions: [
{
indexName: "velero_new",
pathsToMatch: ["https://velero.io/docs**/**"],
recordExtractor: ({ helpers }) => {
return helpers.docsearch({
recordProps: {
lvl1: ["header h1", "article h1", "main h1", "h1", "head > title"],
content: ["article p, article li", "main p, main li", "p, li"],
lvl0: {
defaultValue: "Documentation",
},
lvl2: ["article h2", "main h2", "h2"],
lvl3: ["article h3", "main h3", "h3"],
lvl4: ["article h4", "main h4", "h4"],
lvl5: ["article h5", "main h5", "h5"],
lvl6: ["article h6", "main h6", "h6"],
version: "#dropdownMenuButton",
},
aggregateContent: true,
recordVersion: "v3",
});
},
},
],
initialIndexSettings: {
velero_new: {
attributesForFaceting: ["type", "lang", "version"],
attributesToRetrieve: [
"hierarchy",
"content",
"anchor",
"url",
"url_without_anchor",
"type",
"version",
],
attributesToHighlight: ["hierarchy", "content"],
attributesToSnippet: ["content:10"],
camelCaseAttributes: ["hierarchy", "content"],
searchableAttributes: [
"unordered(hierarchy.lvl0)",
"unordered(hierarchy.lvl1)",
"unordered(hierarchy.lvl2)",
"unordered(hierarchy.lvl3)",
"unordered(hierarchy.lvl4)",
"unordered(hierarchy.lvl5)",
"unordered(hierarchy.lvl6)",
"content",
],
distinct: true,
attributeForDistinct: "url",
customRanking: [
"desc(weight.pageRank)",
"desc(weight.level)",
"asc(weight.position)",
],
ranking: [
"words",
"filters",
"typo",
"attribute",
"proximity",
"exact",
"custom",
],
highlightPreTag: '<span class="algolia-docsearch-suggestion--highlight">',
highlightPostTag: "</span>",
minWordSizefor1Typo: 3,
minWordSizefor2Typos: 7,
allowTyposOnNumericTokens: false,
minProximity: 1,
ignorePlurals: true,
advancedSyntax: true,
attributeCriteriaComputedByMinProximity: true,
removeWordsIfNoResults: "allOptional",
},
},
appId: "9ASKQJ1HR3",
apiKey: "6392a5916af73b73df2406d3aef5ca45",
});

View File

@@ -12,7 +12,7 @@ params:
hero:
backgroundColor: med-blue
versioning: true
latest: v1.17
latest: v1.18
versions:
- main
- v1.18

View File

@@ -16,6 +16,8 @@ Backup belongs to the API group version `velero.io/v1`.
Here is a sample `Backup` object with each of the fields documented:
**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -42,11 +44,12 @@ spec:
resourcePolicy:
kind: configmap
name: resource-policy-configmap
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
# Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
# Note: '*' alone is reserved for empty fields, which means all namespaces.
# If unspecified, all namespaces are included. Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
# Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')

View File

@@ -16,6 +16,8 @@ Restore belongs to the API group version `velero.io/v1`.
Here is a sample `Restore` object with each of the fields documented:
**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -45,11 +47,11 @@ spec:
writeSparseFiles: true
# ParallelFilesDownload is the concurrency number setting for restore
parallelFilesDownload: 10
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
# Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
# If unspecified, all namespaces are included. Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
# Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')

View File

@@ -63,6 +63,10 @@ spec:
# CSI VolumeSnapshot status turns to ReadyToUse during creation, before
# returning error as timeout. The default value is 10 minute.
csiSnapshotTimeout: 10m
# ItemOperationTimeout specifies the time used to wait for
# asynchronous BackupItemAction operations.
# The default value is 4 hours.
itemOperationTimeout: 4h
# resourcePolicy specifies the referenced resource policies that backup should follow
# optional
resourcePolicy:

View File

@@ -7,7 +7,7 @@ During [CSI Snapshot Data Movement][1], Velero built-in data mover launches data
During [fs-backup][2], Velero also launches data mover pods to run the data transfer.
The data transfer is a time and resource consuming activity.
Velero by default uses the [BestEffort QoS][2] for the data mover pods, which guarantees the best performance of the data movement activities. On the other hand, it may take lots of cluster resource, i.e., CPU, memory, and how many resources are taken is decided by the concurrency and the scale of data to be moved.
Velero by default uses the [BestEffort QoS][2] for the data mover pods, which guarantees the best performance of the data movement activities. On the other hand, it may consume a large amount of cluster resources, i.e., CPU, memory, and ephemeral storage; how much is consumed depends on the concurrency and the scale of the data to be moved.
If the cluster nodes don't have sufficient resources, Velero also allows you to customize the resources for the data mover pods.
Note: If fewer resources are assigned to the data mover pods, the data movement activities may take longer; or the data mover pods may be OOM killed if the assigned memory doesn't meet the requirements. Consequently, the DataUpload/DataDownload may run longer or fail.
@@ -25,6 +25,8 @@ Here is a sample of the configMap with ```podResources```:
"podResources": {
"cpuRequest": "1000m",
"cpuLimit": "1000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "512Mi",
"memoryLimit": "1Gi"
}

View File

@@ -0,0 +1,71 @@
---
title: "Namespace Glob Patterns"
layout: docs
---
When using `--include-namespaces` and `--exclude-namespaces` flags with backup and restore commands, you can use glob patterns to match multiple namespaces.
## Supported Patterns
Velero supports the following glob pattern characters:
- `*` - Matches any sequence of characters
```bash
velero backup create my-backup --include-namespaces "app-*"
# Matches: app-prod, app-staging, app-dev, etc.
```
- `?` - Matches exactly one character
```bash
velero backup create my-backup --include-namespaces "ns?"
# Matches: ns1, ns2, nsa, but NOT ns10
```
- `[abc]` - Matches any single character in the brackets
```bash
velero backup create my-backup --include-namespaces "ns[123]"
# Matches: ns1, ns2, ns3
```
- `[a-z]` - Matches any single character in the range
```bash
velero backup create my-backup --include-namespaces "ns[a-c]"
# Matches: nsa, nsb, nsc
```
## Unsupported Patterns
The following patterns are **not supported** and will cause validation errors:
- `**` - Consecutive asterisks
- `|` - Alternation (regex operator)
- `()` - Grouping (regex operators)
- `!` - Negation
- `{}` - Brace expansion
- `,` - Comma (used in brace expansion)
## Special Cases
- `*` alone means "all namespaces" and is not expanded
- Empty brackets `[]` are invalid
- Unmatched or unclosed brackets will cause validation errors
## Examples
Combine patterns with include and exclude flags:
```bash
# Backup all production namespaces except test
velero backup create prod-backup \
--include-namespaces "*-prod" \
--exclude-namespaces "test-*"
# Backup specific numbered namespaces
velero backup create numbered-backup \
--include-namespaces "app-[0-9]"
# Restore namespaces matching multiple patterns
velero restore create my-restore \
--from-backup my-backup \
--include-namespaces "frontend-*,backend-*"
```

View File

@@ -72,6 +72,8 @@ data:
"podResources": {
"cpuRequest": "100m",
"cpuLimit": "200m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "100Mi",
"memoryLimit": "200Mi"
},
@@ -99,6 +101,8 @@ data:
"podResources": {
"cpuRequest": "200m",
"cpuLimit": "400m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "200Mi",
"memoryLimit": "400Mi"
},

View File

@@ -17,7 +17,11 @@ Wildcard takes precedence when both a wildcard and specific resource are include
### --include-namespaces
Namespaces to include. Default is `*`, all namespaces.
Namespaces to include. Accepts glob patterns (`*`, `?`, `[abc]`). Default is `*`, all namespaces.
See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.
Note: `*` alone is reserved to mean all namespaces (equivalent to leaving the field empty).
* Backup a namespace and its objects.
@@ -158,7 +162,9 @@ Wildcard excludes are ignored.
### --exclude-namespaces
Namespaces to exclude.
Namespaces to exclude. Accepts glob patterns (`*`, `?`, `[abc]`).
See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.
* Exclude kube-system from the cluster backup.

View File

@@ -224,7 +224,7 @@ Configure different node selection rules for specific storage classes:
```
### Pod Resources (`podResources`)
Configure CPU and memory resources for Data Mover Pods to optimize performance and prevent resource conflict.
Configure CPU, memory, and ephemeral storage resources for Data Mover Pods to optimize performance and prevent resource conflicts.
The configurations work for PodVolumeBackup, PodVolumeRestore, DataUpload, and DataDownload pods.
@@ -233,6 +233,8 @@ The configurations work for PodVolumeBackup, PodVolumeRestore, DataUpload, and D
"podResources": {
"cpuRequest": "1000m",
"cpuLimit": "2000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "1Gi",
"memoryLimit": "4Gi"
}
@@ -535,6 +537,8 @@ Here's a comprehensive example showing how all configuration sections work toget
"podResources": {
"cpuRequest": "500m",
"cpuLimit": "1000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "1Gi",
"memoryLimit": "2Gi"
},

View File

@@ -1,13 +1,13 @@
---
title: "Upgrading to Velero 1.17"
title: "Upgrading to Velero 1.18"
layout: docs
---
## Prerequisites
- Velero [v1.16.x][9] installed.
- Velero [v1.17.x][9] installed.
If you're not yet running at least Velero v1.16, see the following:
If you're not yet running at least Velero v1.17, see the following:
- [Upgrading to v1.8][1]
- [Upgrading to v1.9][2]
@@ -18,13 +18,14 @@ If you're not yet running at least Velero v1.16, see the following:
- [Upgrading to v1.14][7]
- [Upgrading to v1.15][8]
- [Upgrading to v1.16][9]
- [Upgrading to v1.17][10]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
### Upgrade from v1.16
1. Install the Velero v1.17 command-line interface (CLI) by following the [instructions here][0].
### Upgrade from v1.17
1. Install the Velero v1.18 command-line interface (CLI) by following the [instructions here][0].
Verify that you've properly installed it by running:
@@ -36,7 +37,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.17.0
Version: v1.18.0
Git commit: <git SHA>
```
@@ -46,28 +47,21 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
3. (optional) Update the `uploader-type` to `kopia` if you are using `restic`:
```bash
kubectl get deploy -n velero -ojson \
| sed "s/\"--uploader-type=restic\"/\"--uploader-type=kopia\"/g" \
| kubectl apply -f -
```
4. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
3. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
```bash
# set the container and image of the init container for plugin accordingly,
# if you are using other plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.17.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.13.0 \
velero=velero/velero:v1.18.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.14.0 \
--namespace velero
# optional, if using the node agent daemonset
kubectl set image daemonset/node-agent \
node-agent=velero/velero:v1.17.0 \
node-agent=velero/velero:v1.18.0 \
--namespace velero
```
5. Confirm that the deployment is up and running with the correct version by running:
4. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
@@ -77,11 +71,11 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.17.0
Version: v1.18.0
Git commit: <git SHA>
Server:
Version: v1.17.0
Version: v1.18.0
```
[0]: basic-install.md#install-the-cli
@@ -93,4 +87,5 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
[6]: https://velero.io/docs/v1.13/upgrade-to-1.13
[7]: https://velero.io/docs/v1.14/upgrade-to-1.14
[8]: https://velero.io/docs/v1.15/upgrade-to-1.15
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[10]: https://velero.io/docs/v1.17/upgrade-to-1.17

View File

@@ -16,6 +16,8 @@ Backup belongs to the API group version `velero.io/v1`.
Here is a sample `Backup` object with each of the fields documented:
**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -42,11 +44,11 @@ spec:
resourcePolicy:
kind: configmap
name: resource-policy-configmap
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
# Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
# If unspecified, all namespaces are included. Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
# Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')

View File

@@ -16,6 +16,8 @@ Restore belongs to the API group version `velero.io/v1`.
Here is a sample `Restore` object with each of the fields documented:
**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -45,11 +47,11 @@ spec:
writeSparseFiles: true
# ParallelFilesDownload is the concurrency number setting for restore
parallelFilesDownload: 10
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
# Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
# If unspecified, all namespaces are included. Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
# Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')

View File

@@ -0,0 +1,71 @@
---
title: "Namespace Glob Patterns"
layout: docs
---
When using `--include-namespaces` and `--exclude-namespaces` flags with backup and restore commands, you can use glob patterns to match multiple namespaces.
## Supported Patterns
Velero supports the following glob pattern characters:
- `*` - Matches any sequence of characters
```bash
velero backup create my-backup --include-namespaces "app-*"
# Matches: app-prod, app-staging, app-dev, etc.
```
- `?` - Matches exactly one character
```bash
velero backup create my-backup --include-namespaces "ns?"
# Matches: ns1, ns2, nsa, but NOT ns10
```
- `[abc]` - Matches any single character in the brackets
```bash
velero backup create my-backup --include-namespaces "ns[123]"
# Matches: ns1, ns2, ns3
```
- `[a-z]` - Matches any single character in the range
```bash
velero backup create my-backup --include-namespaces "ns[a-c]"
# Matches: nsa, nsb, nsc
```
## Unsupported Patterns
The following patterns are **not supported** and will cause validation errors:
- `**` - Consecutive asterisks
- `|` - Alternation (regex operator)
- `()` - Grouping (regex operators)
- `!` - Negation
- `{}` - Brace expansion
- `,` - Comma (used in brace expansion)
## Special Cases
- `*` alone means "all namespaces" and is not expanded
- Empty brackets `[]` are invalid
- Unmatched or unclosed brackets will cause validation errors
## Examples
Combine patterns with include and exclude flags:
```bash
# Backup all production namespaces except test
velero backup create prod-backup \
--include-namespaces "*-prod" \
--exclude-namespaces "test-*"
# Backup specific numbered namespaces
velero backup create numbered-backup \
--include-namespaces "app-[0-9]"
# Restore namespaces matching multiple patterns
velero restore create my-restore \
--from-backup my-backup \
--include-namespaces "frontend-*,backend-*"
```

View File

@@ -17,7 +17,11 @@ Wildcard takes precedence when both a wildcard and specific resource are include
### --include-namespaces
Namespaces to include. Default is `*`, all namespaces.
Namespaces to include. Accepts glob patterns (`*`, `?`, `[abc]`). Default is `*`, all namespaces.
See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.
Note: `*` alone is reserved to mean all namespaces (equivalent to leaving the field empty).
* Backup a namespace and its objects.
@@ -158,7 +162,9 @@ Wildcard excludes are ignored.
### --exclude-namespaces
Namespaces to exclude.
Namespaces to exclude. Accepts glob patterns (`*`, `?`, `[abc]`).
See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.
* Exclude kube-system from the cluster backup.

View File

@@ -1,13 +1,13 @@
---
title: "Upgrading to Velero 1.17"
title: "Upgrading to Velero 1.18"
layout: docs
---
## Prerequisites
- Velero [v1.16.x][9] installed.
- Velero [v1.17.x][9] installed.
If you're not yet running at least Velero v1.16, see the following:
If you're not yet running at least Velero v1.17, see the following:
- [Upgrading to v1.8][1]
- [Upgrading to v1.9][2]
@@ -18,13 +18,14 @@ If you're not yet running at least Velero v1.16, see the following:
- [Upgrading to v1.14][7]
- [Upgrading to v1.15][8]
- [Upgrading to v1.16][9]
- [Upgrading to v1.17][10]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
### Upgrade from v1.16
1. Install the Velero v1.17 command-line interface (CLI) by following the [instructions here][0].
### Upgrade from v1.17
1. Install the Velero v1.18 command-line interface (CLI) by following the [instructions here][0].
Verify that you've properly installed it by running:
@@ -36,7 +37,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.17.0
Version: v1.18.0
Git commit: <git SHA>
```
@@ -46,28 +47,21 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
3. (optional) Update the `uploader-type` to `kopia` if you are using `restic`:
```bash
kubectl get deploy -n velero -ojson \
| sed "s/\"--uploader-type=restic\"/\"--uploader-type=kopia\"/g" \
| kubectl apply -f -
```
4. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
3. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
```bash
# set the container and image of the init container for plugin accordingly,
# if you are using other plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.17.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.13.0 \
velero=velero/velero:v1.18.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.14.0 \
--namespace velero
# optional, if using the node agent daemonset
kubectl set image daemonset/node-agent \
node-agent=velero/velero:v1.17.0 \
node-agent=velero/velero:v1.18.0 \
--namespace velero
```
5. Confirm that the deployment is up and running with the correct version by running:
4. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
@@ -77,11 +71,11 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.17.0
Version: v1.18.0
Git commit: <git SHA>
Server:
Version: v1.17.0
Version: v1.18.0
```
[0]: basic-install.md#install-the-cli
@@ -93,4 +87,5 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
[6]: https://velero.io/docs/v1.13/upgrade-to-1.13
[7]: https://velero.io/docs/v1.14/upgrade-to-1.14
[8]: https://velero.io/docs/v1.15/upgrade-to-1.15
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[10]: https://velero.io/docs/v1.17/upgrade-to-1.17

View File

@@ -13,8 +13,8 @@ toc:
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.17
url: /upgrade-to-1.17
- page: Upgrade to 1.18
url: /upgrade-to-1.18
- page: Supported providers
url: /supported-providers
- page: Evaluation install
@@ -33,6 +33,8 @@ toc:
url: /enable-api-group-versions-feature
- page: Resource filtering
url: /resource-filtering
- page: Namespace glob patterns
url: /namespace-glob-patterns
- page: Backup reference
url: /backup-reference
- page: Backup hooks

View File

@@ -13,8 +13,8 @@ toc:
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.17
url: /upgrade-to-1.17
- page: Upgrade to 1.18
url: /upgrade-to-1.18
- page: Supported providers
url: /supported-providers
- page: Evaluation install
@@ -33,6 +33,8 @@ toc:
url: /enable-api-group-versions-feature
- page: Resource filtering
url: /resource-filtering
- page: Namespace glob patterns
url: /namespace-glob-patterns
- page: Backup reference
url: /backup-reference
- page: Backup hooks

View File

@@ -27,16 +27,6 @@
<div class="col-md-3 toc">
{{ .Render "versions" }}
<br/>
<div id="docsearch">
<!-- <form class="d-flex align-items-center">
<span class="algolia-autocomplete" style="position: relative; display: inline-block; direction: ltr;">
<input type="search" class="form-control docsearch" id="search-input" placeholder="Search..."
aria-label="Search for..." autocomplete="off" spellcheck="false" role="combobox"
aria-autocomplete="list" aria-expanded="false" aria-owns="algolia-autocomplete-listbox-0"
dir="auto" style="position: relative; vertical-align: top;">
</span>
</form> -->
</div>
{{ .Render "nav" }}
</div>
<div class="col-md-8">
@@ -58,16 +48,6 @@
{{ .Render "footer" }}
</div>
</div>
<script src="https://cdn.jsdelivr.net/npm/@docsearch/js@3"></script>
<script type="text/javascript"> docsearch({
appId: '9ASKQJ1HR3',
apiKey: '170ba79bfa16cebfdf10726ae4771d7e',
indexName: 'velero_new',
container: '#docsearch',
searchParameters: {
facetFilters: ["version:{{ .CurrentSection.Params.version }}"]},
});
</script>
</body>
</html>

View File

@@ -8,6 +8,4 @@
{{ $styles := resources.Get "styles.scss" | toCSS $options | resources.Fingerprint }}
<link rel="stylesheet" href="{{ $styles.RelPermalink }}" integrity="{{ $styles.Data.Integrity }}">
{{/* TODO {% seo %}*/}}
<link rel="preconnect" href="https://9ASKQJ1HR3-dsn.algolia.net" crossorigin />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@docsearch/css@3" />
</head>

View File

@@ -76,7 +76,7 @@ HAS_VSPHERE_PLUGIN ?= false
RESTORE_HELPER_IMAGE ?=
#Released version only
UPGRADE_FROM_VELERO_VERSION ?= v1.15.2,v1.16.2
UPGRADE_FROM_VELERO_VERSION ?= v1.16.2,v1.17.2
# UPGRADE_FROM_VELERO_CLI can have the same format (a comma-separated list) as UPGRADE_FROM_VELERO_VERSION
# Upgrade tests will be executed sequentially according to the list in UPGRADE_FROM_VELERO_VERSION
@@ -85,7 +85,7 @@ UPGRADE_FROM_VELERO_VERSION ?= v1.15.2,v1.16.2
# to the end, nil string will be set if UPGRADE_FROM_VELERO_CLI is shorter than UPGRADE_FROM_VELERO_VERSION
UPGRADE_FROM_VELERO_CLI ?=
MIGRATE_FROM_VELERO_VERSION ?= v1.16.2,$(VERSION)
MIGRATE_FROM_VELERO_VERSION ?= v1.17.2,$(VERSION)
MIGRATE_FROM_VELERO_CLI ?=
VELERO_NAMESPACE ?= velero

View File

@@ -0,0 +1,150 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package basic
import (
"fmt"
"path"
"strings"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/vmware-tanzu/velero/test/e2e/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
"github.com/vmware-tanzu/velero/test/util/common"
. "github.com/vmware-tanzu/velero/test/util/k8s"
)
// RestoreExecHooks tests that a pod with multiple restore exec hooks does not hang
// at the Finalizing phase during restore (Issue #9359 / PR #9366).
type RestoreExecHooks struct {
TestCase
podName string
}
var RestoreExecHooksTest func() = test.TestFunc(&RestoreExecHooks{})
func (r *RestoreExecHooks) Init() error {
Expect(r.TestCase.Init()).To(Succeed())
r.CaseBaseName = "restore-exec-hooks-" + r.UUIDgen
r.BackupName = "backup-" + r.CaseBaseName
r.RestoreName = "restore-" + r.CaseBaseName
r.podName = "pod-multiple-hooks"
r.NamespacesTotal = 1
r.NSIncluded = &[]string{}
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
*r.NSIncluded = append(*r.NSIncluded, createNSName)
}
r.TestMsg = &test.TestMSG{
Desc: "Restore pod with multiple restore exec hooks",
Text: "Should successfully backup and restore without hanging at Finalizing phase",
FailedMSG: "Failed to successfully backup and restore pod with multiple hooks",
}
r.BackupArgs = []string{
"create", "--namespace", r.VeleroCfg.VeleroNamespace, "backup", r.BackupName,
"--include-namespaces", strings.Join(*r.NSIncluded, ","),
"--default-volumes-to-fs-backup", "--wait",
}
r.RestoreArgs = []string{
"create", "--namespace", r.VeleroCfg.VeleroNamespace, "restore", r.RestoreName,
"--from-backup", r.BackupName, "--wait",
}
return nil
}
func (r *RestoreExecHooks) CreateResources() error {
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
By(fmt.Sprintf("Creating namespace %s", createNSName), func() {
Expect(CreateNamespace(r.Ctx, r.Client, createNSName)).
To(Succeed(), fmt.Sprintf("Failed to create namespace %s", createNSName))
})
// Prepare images and commands adaptively for the target OS
imageAddress := LinuxTestImage
initCommand := `["/bin/sh", "-c", "echo init-hook-done"]`
execCommand1 := `["/bin/sh", "-c", "echo hook1"]`
execCommand2 := `["/bin/sh", "-c", "echo hook2"]`
if r.VeleroCfg.WorkerOS == common.WorkerOSLinux && r.VeleroCfg.ImageRegistryProxy != "" {
imageAddress = path.Join(r.VeleroCfg.ImageRegistryProxy, LinuxTestImage)
} else if r.VeleroCfg.WorkerOS == common.WorkerOSWindows {
imageAddress = WindowTestImage
initCommand = `["cmd", "/c", "echo init-hook-done"]`
execCommand1 = `["cmd", "/c", "echo hook1"]`
execCommand2 = `["cmd", "/c", "echo hook2"]`
}
// Inject a mix of an InitContainer hook and multiple Exec post-restore hooks.
// This guarantees that the loop index 'i' mismatches 'hook.hookIndex' (Issue #9359),
// ensuring the bug is properly reproduced and the fix is verified.
ann := map[string]string{
// Inject InitContainer Restore Hook
"init.hook.restore.velero.io/container-image": imageAddress,
"init.hook.restore.velero.io/container-name": "test-init-hook",
"init.hook.restore.velero.io/command": initCommand,
// Inject multiple Exec Restore Hooks
"post.hook.restore.velero.io/test1.command": execCommand1,
"post.hook.restore.velero.io/test1.container": r.podName,
"post.hook.restore.velero.io/test2.command": execCommand2,
"post.hook.restore.velero.io/test2.container": r.podName,
}
By(fmt.Sprintf("Creating pod %s with multiple restore hooks in namespace %s", r.podName, createNSName), func() {
_, err := CreatePod(
r.Client,
createNSName,
r.podName,
"", // No storage class needed
"", // No PVC needed
[]string{}, // No volumes
nil,
ann,
r.VeleroCfg.ImageRegistryProxy,
r.VeleroCfg.WorkerOS,
)
Expect(err).To(Succeed(), fmt.Sprintf("Failed to create pod with hooks in namespace %s", createNSName))
})
By(fmt.Sprintf("Waiting for pod %s to be ready", r.podName), func() {
err := WaitForPods(r.Ctx, r.Client, createNSName, []string{r.podName})
Expect(err).To(Succeed(), fmt.Sprintf("Failed to wait for pod %s in namespace %s", r.podName, createNSName))
})
}
return nil
}
func (r *RestoreExecHooks) Verify() error {
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
By(fmt.Sprintf("Verifying pod %s in namespace %s after restore", r.podName, createNSName), func() {
err := WaitForPods(r.Ctx, r.Client, createNSName, []string{r.podName})
Expect(err).To(Succeed(), fmt.Sprintf("Failed to verify pod %s in namespace %s after restore", r.podName, createNSName))
})
}
return nil
}

View File

@@ -440,6 +440,12 @@ var _ = Describe(
StorageClasssChangingTest,
)
var _ = Describe(
"Restore phase does not block at Finalizing when a container has multiple exec hooks",
Label("Basic", "Hooks"),
RestoreExecHooksTest,
)
var _ = Describe(
"Backup/restore of 2500 namespaces",
Label("Scale", "LongTime"),
@@ -494,6 +500,11 @@ var _ = Describe(
Label("ResourceFiltering", "IncludeNamespaces", "Restore"),
RestoreWithIncludeNamespaces,
)
var _ = Describe(
"Velero test on backup/restore with wildcard namespaces",
Label("ResourceFiltering", "WildcardNamespaces"),
WildcardNamespacesTest,
)
var _ = Describe(
"Velero test on include resources from the cluster backup",
Label("ResourceFiltering", "IncludeResources", "Backup"),

View File

@@ -36,7 +36,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util/kube"
velerokubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
k8sutil "github.com/vmware-tanzu/velero/test/util/k8s"
@@ -240,9 +239,13 @@ func (n *NodeAgentConfigTestCase) Backup() error {
Expect(backupPodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In backup, only the second element of LoadAffinity array should be used.
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[1:])
expectedLabelKey, _, ok := popFromMap(n.nodeAgentConfigs.LoadAffinity[1].NodeSelector.MatchLabels)
Expect(ok).To(BeTrue(), "Expected LoadAffinity's MatchLabels should at least have one key-value pair")
Expect(backupPodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
// Since 1.18.1, Velero adds some default affinity to the backup/restore pod,
// so we can't directly compare the whole affinity,
// but we can verify that the expected affinity is contained in the pod affinity.
Expect(backupPodList.Items[0].Spec.Affinity.String()).To(ContainSubstring(expectedLabelKey))
fmt.Println("backupPod content verification completed successfully.")
@@ -317,9 +320,13 @@ func (n *NodeAgentConfigTestCase) Restore() error {
Expect(restorePodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In restore, only the first element of LoadAffinity array should be used.
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[:1])
expectedLabelKey, _, ok := popFromMap(n.nodeAgentConfigs.LoadAffinity[0].NodeSelector.MatchLabels)
Expect(ok).To(BeTrue(), "Expected LoadAffinity's MatchLabels should at least have one key-value pair")
Expect(restorePodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
// Since 1.18.1, Velero adds some default affinity to the backup/restore pod,
// so we can't directly compare the whole affinity,
// but we can verify that the expected affinity is contained in the pod affinity.
Expect(restorePodList.Items[0].Spec.Affinity.String()).To(ContainSubstring(expectedLabelKey))
fmt.Println("restorePod content verification completed successfully.")
@@ -345,3 +352,12 @@ func (n *NodeAgentConfigTestCase) Restore() error {
return nil
}
func popFromMap[K comparable, V any](m map[K]V) (k K, v V, ok bool) {
for key, val := range m {
delete(m, key)
return key, val, true
}
return
}

View File

@@ -70,10 +70,12 @@ var SpecificRepoMaintenanceTest func() = TestFunc(&RepoMaintenanceTestCase{
jobConfigs: velerotypes.JobConfigs{
KeepLatestMaintenanceJobs: &keepJobNum,
PodResources: &velerokubeutil.PodResources{
CPURequest: "100m",
MemoryRequest: "100Mi",
CPULimit: "200m",
MemoryLimit: "200Mi",
CPURequest: "100m",
MemoryRequest: "100Mi",
EphemeralStorageRequest: "5Gi",
CPULimit: "200m",
MemoryLimit: "200Mi",
EphemeralStorageLimit: "10Gi",
},
PriorityClassName: test.PriorityClassNameForRepoMaintenance,
},
@@ -230,8 +232,10 @@ func (r *RepoMaintenanceTestCase) Verify() error {
resources, err := kube.ParseResourceRequirements(
r.jobConfigs.PodResources.CPURequest,
r.jobConfigs.PodResources.MemoryRequest,
r.jobConfigs.PodResources.EphemeralStorageRequest,
r.jobConfigs.PodResources.CPULimit,
r.jobConfigs.PodResources.MemoryLimit,
r.jobConfigs.PodResources.EphemeralStorageLimit,
)
if err != nil {
return errors.Wrap(err, "failed to parse resource requirements for maintenance job")

View File

@@ -0,0 +1,143 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package filtering
import (
"fmt"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
apierrors "k8s.io/apimachinery/pkg/api/errors"
. "github.com/vmware-tanzu/velero/test/e2e/test"
. "github.com/vmware-tanzu/velero/test/util/k8s"
)
// WildcardNamespaces tests the inclusion and exclusion of namespaces using wildcards
// introduced in PR #9255 (Issue #1874). It verifies filtering at both Backup and Restore stages.
type WildcardNamespaces struct {
TestCase // Inherit from basic TestCase instead of FilteringCase to customize a single flow
restoredNS []string
excludedByBackupNS []string
excludedByRestoreNS []string
}
// Register as a single E2E test
var WildcardNamespacesTest func() = TestFunc(&WildcardNamespaces{})
func (w *WildcardNamespaces) Init() error {
Expect(w.TestCase.Init()).To(Succeed())
w.CaseBaseName = "wildcard-ns-" + w.UUIDgen
w.BackupName = "backup-" + w.CaseBaseName
w.RestoreName = "restore-" + w.CaseBaseName
// 1. Define namespaces for different filtering lifecycle scenarios
nsIncBoth := w.CaseBaseName + "-inc-both" // Included in both backup and restore
nsExact := w.CaseBaseName + "-exact" // Included exactly without wildcards
nsIncExc := w.CaseBaseName + "-inc-exc" // Included in backup, but excluded during restore
nsBakExc := w.CaseBaseName + "-test-bak" // Excluded during backup
// Group namespaces for validation
w.restoredNS = []string{nsIncBoth, nsExact}
w.excludedByRestoreNS = []string{nsIncExc}
w.excludedByBackupNS = []string{nsBakExc}
w.TestMsg = &TestMSG{
Desc: "Backup and restore with wildcard namespaces",
Text: "Should correctly filter namespaces using wildcards during both backup and restore stages",
FailedMSG: "Failed to properly filter namespaces using wildcards",
}
// 2. Setup Backup Args
backupIncWildcard1 := fmt.Sprintf("%s-inc-*", w.CaseBaseName) // Matches nsIncBoth, nsIncExc
backupIncWildcard2 := fmt.Sprintf("%s-test-*", w.CaseBaseName) // Matches nsBakExc
backupExcWildcard := fmt.Sprintf("%s-test-bak", w.CaseBaseName) // Excludes nsBakExc
nonExistentWildcard := "non-existent-ns-*" // Tests zero-match boundary condition
w.BackupArgs = []string{
"create", "--namespace", w.VeleroCfg.VeleroNamespace, "backup", w.BackupName,
// Use broad wildcards for inclusion to bypass Velero CLI's literal string collision validation
"--include-namespaces", fmt.Sprintf("%s,%s,%s,%s", backupIncWildcard1, backupIncWildcard2, nsExact, nonExistentWildcard),
"--exclude-namespaces", backupExcWildcard,
"--default-volumes-to-fs-backup", "--wait",
}
// 3. Setup Restore Args
restoreExcWildcard := fmt.Sprintf("%s-*-exc", w.CaseBaseName) // Excludes nsIncExc
w.RestoreArgs = []string{
"create", "--namespace", w.VeleroCfg.VeleroNamespace, "restore", w.RestoreName,
"--from-backup", w.BackupName,
"--include-namespaces", fmt.Sprintf("%s,%s,%s", backupIncWildcard1, nsExact, nonExistentWildcard),
"--exclude-namespaces", restoreExcWildcard,
"--wait",
}
return nil
}
func (w *WildcardNamespaces) CreateResources() error {
allNamespaces := append(w.restoredNS, w.excludedByRestoreNS...)
allNamespaces = append(allNamespaces, w.excludedByBackupNS...)
for _, ns := range allNamespaces {
By(fmt.Sprintf("Creating namespace %s", ns), func() {
Expect(CreateNamespace(w.Ctx, w.Client, ns)).To(Succeed(), fmt.Sprintf("Failed to create namespace %s", ns))
})
// Create a ConfigMap in each namespace to verify resource restoration
cmName := "configmap-" + ns
By(fmt.Sprintf("Creating ConfigMap %s in namespace %s", cmName, ns), func() {
_, err := CreateConfigMap(w.Client.ClientGo, ns, cmName, map[string]string{"wildcard-test": "true"}, nil)
Expect(err).To(Succeed(), fmt.Sprintf("Failed to create configmap in namespace %s", ns))
})
}
return nil
}
func (w *WildcardNamespaces) Verify() error {
// 1. Verify namespaces that should be successfully restored
for _, ns := range w.restoredNS {
By(fmt.Sprintf("Checking included namespace %s exists", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(Succeed(), fmt.Sprintf("Included namespace %s should exist after restore", ns))
_, err = GetConfigMap(w.Client.ClientGo, ns, "configmap-"+ns)
Expect(err).To(Succeed(), fmt.Sprintf("ConfigMap in included namespace %s should exist", ns))
})
}
// 2. Verify namespaces excluded during Backup
for _, ns := range w.excludedByBackupNS {
By(fmt.Sprintf("Checking namespace %s excluded by backup does NOT exist", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(HaveOccurred(), fmt.Sprintf("Namespace %s excluded by backup should NOT exist after restore", ns))
Expect(apierrors.IsNotFound(err)).To(BeTrue(), "Error should be NotFound")
})
}
// 3. Verify namespaces excluded during Restore
for _, ns := range w.excludedByRestoreNS {
By(fmt.Sprintf("Checking namespace %s excluded by restore does NOT exist", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(HaveOccurred(), fmt.Sprintf("Namespace %s excluded by restore should NOT exist after restore", ns))
Expect(apierrors.IsNotFound(err)).To(BeTrue(), "Error should be NotFound")
})
}
return nil
}
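The `--include-namespaces`/`--exclude-namespaces` values built above use glob-style wildcards such as `%s-*-exc`. How such a pattern selects namespaces can be sketched with the standard library's `path.Match`; the helper and the pattern/namespace names below are illustrative, not Velero's actual filter code:

```go
package main

import (
	"fmt"
	"path"
)

// matchesAny reports whether namespace ns matches at least one glob
// pattern, e.g. "test-ns-*-exc" matches "test-ns-1-exc".
// Illustrative sketch only, not Velero's include/exclude implementation.
func matchesAny(ns string, patterns []string) bool {
	for _, p := range patterns {
		if ok, err := path.Match(p, ns); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"test-ns-*-exc"}
	fmt.Println(matchesAny("test-ns-1-exc", patterns)) // matches the wildcard
	fmt.Println(matchesAny("test-ns-keep", patterns))  // does not match
}
```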


@@ -365,7 +365,7 @@ func VersionNoOlderThan(version string, targetVersion string) (bool, error) {
matches := tagRe.FindStringSubmatch(targetVersion)
targetMajor := matches[1]
targetMinor := matches[2]
-if major > targetMajor && minor >= targetMinor {
+if major >= targetMajor && minor >= targetMinor {
return true, nil
} else {
return false, nil
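The hunk above changes `>` to `>=` in the major-version comparison: previously a version with the same major as the target (e.g. v1.18 vs v1.16) failed the `major > targetMajor` check and was wrongly reported as older. A minimal sketch of the corrected check, using integers for clarity (the real `VersionNoOlderThan` compares the strings captured by its regexp):

```go
package main

import "fmt"

// noOlderThan mirrors the corrected condition: a version is no older than
// the target when both its major and minor are at least the target's.
// Illustrative only; not the real VersionNoOlderThan implementation.
func noOlderThan(major, minor, targetMajor, targetMinor int) bool {
	return major >= targetMajor && minor >= targetMinor
}

func main() {
	// v1.18 vs v1.16: same major, newer minor.
	// The old `major > targetMajor` condition wrongly returned false here.
	fmt.Println(noOlderThan(1, 18, 1, 16)) // true
	fmt.Println(noOlderThan(1, 15, 1, 16)) // false
}
```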


@@ -0,0 +1,65 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package velero
import (
"testing"
"github.com/stretchr/testify/require"
)
func Test_VersionNoOlderThan(t *testing.T) {
type versionTest struct {
caseName string
version string
targetVersion string
result bool
err error
}
tests := []versionTest{
{
caseName: "branch version compare",
version: "release-1.18",
targetVersion: "v1.16",
result: true,
err: nil,
},
{
caseName: "tag version compare",
version: "v1.18.0",
targetVersion: "v1.16",
result: true,
err: nil,
},
{
caseName: "main version compare",
version: "main",
targetVersion: "v1.15",
result: true,
err: nil,
},
}
for _, test := range tests {
t.Run(test.caseName, func(t *testing.T) {
res, err := VersionNoOlderThan(test.version, test.targetVersion)
require.Equal(t, test.err, err)
require.Equal(t, test.result, res)
})
}
}


@@ -99,6 +99,15 @@ var ImagesMatrix = map[string]map[string][]string{
"velero": {"velero/velero:v1.16.2"},
"velero-restore-helper": {"velero/velero:v1.16.2"},
},
"v1.17": {
"aws": {"velero/velero-plugin-for-aws:v1.13.2"},
"azure": {"velero/velero-plugin-for-microsoft-azure:v1.13.2"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:v1.13.2"},
"datamover": {"velero/velero-plugin-for-aws:v1.13.2"},
"velero": {"velero/velero:v1.17.2"},
"velero-restore-helper": {"velero/velero:v1.17.2"},
},
"main": {
"aws": {"velero/velero-plugin-for-aws:main"},
"azure": {"velero/velero-plugin-for-microsoft-azure:main"},
@@ -128,16 +137,13 @@ func SetImagesToDefaultValues(config VeleroConfig, version string) (VeleroConfig
ret.Plugins = ""
-versionWithoutPatch := "main"
-if version != "main" {
-versionWithoutPatch = semver.MajorMinor(version)
-}
+versionWithoutPatch := getVersionWithoutPatch(version)
// Read migration case needs images from the PluginsMatrix map.
images, ok := ImagesMatrix[versionWithoutPatch]
if !ok {
-return config, fmt.Errorf("fail to read the images for version %s from the ImagesMatrix",
-versionWithoutPatch)
+fmt.Printf("Cannot read the images for version %s from the ImagesMatrix. Use the original values.\n", versionWithoutPatch)
+return config, nil
}
ret.VeleroImage = images[Velero][0]
@@ -164,6 +170,27 @@ func SetImagesToDefaultValues(config VeleroConfig, version string) (VeleroConfig
return ret, nil
}
func getVersionWithoutPatch(version string) string {
versionWithoutPatch := ""
mainRe := regexp.MustCompile(`^main$`)
releaseRe := regexp.MustCompile(`^release-(\d+)\.(\d+)(-dev)?$`)
switch {
case mainRe.MatchString(version):
versionWithoutPatch = "main"
case releaseRe.MatchString(version):
matches := releaseRe.FindStringSubmatch(version)
versionWithoutPatch = fmt.Sprintf("v%s.%s", matches[1], matches[2])
default:
versionWithoutPatch = semver.MajorMinor(version)
}
fmt.Println("The version is ", versionWithoutPatch)
return versionWithoutPatch
}
func getPluginsByVersion(version string, cloudProvider string, needDataMoverPlugin bool) ([]string, error) {
var cloudMap map[string][]string
arr := strings.Split(version, ".")


@@ -0,0 +1,54 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package velero
import (
"testing"
"github.com/stretchr/testify/require"
)
func Test_getVersionWithoutPatch(t *testing.T) {
versionTests := []struct {
caseName string
version string
result string
}{
{
caseName: "main version",
version: "main",
result: "main",
},
{
caseName: "release version",
version: "release-1.18-dev",
result: "v1.18",
},
{
caseName: "tag version",
version: "v1.17.2",
result: "v1.17",
},
}
for _, test := range versionTests {
t.Run(test.caseName, func(t *testing.T) {
res := getVersionWithoutPatch(test.version)
require.Equal(t, test.result, res)
})
}
}
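The branch/tag-to-minor-version mapping exercised by `Test_getVersionWithoutPatch` above can be sketched with the standard library alone. The tag case is approximated here with a regexp instead of `semver.MajorMinor`, so this is an illustration of the mapping, not the function under test:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// Same release-branch pattern as getVersionWithoutPatch.
	releaseRe = regexp.MustCompile(`^release-(\d+)\.(\d+)(-dev)?$`)
	// Approximation of semver.MajorMinor for "vX.Y[.Z]" tags.
	tagRe = regexp.MustCompile(`^v(\d+)\.(\d+)`)
)

// versionWithoutPatch maps "main", release branches, and version tags
// to a "vMAJOR.MINOR" key, approximating getVersionWithoutPatch.
func versionWithoutPatch(version string) string {
	switch {
	case version == "main":
		return "main"
	case releaseRe.MatchString(version):
		m := releaseRe.FindStringSubmatch(version)
		return fmt.Sprintf("v%s.%s", m[1], m[2])
	case tagRe.MatchString(version):
		m := tagRe.FindStringSubmatch(version)
		return fmt.Sprintf("v%s.%s", m[1], m[2])
	default:
		return ""
	}
}

func main() {
	fmt.Println(versionWithoutPatch("main"))             // main
	fmt.Println(versionWithoutPatch("release-1.18-dev")) // v1.18
	fmt.Println(versionWithoutPatch("v1.17.2"))          // v1.17
}
```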