Compare commits

..

17 Commits

Author SHA1 Message Date
Xun Jiang/Bruce Jiang
6adcf06b5b Merge pull request #9572 from blackpiglet/xj014661/v1.18/CVE-2026-24051
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m0s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 2s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 15s
Main CI / Build (push) Has been skipped
Xj014661/v1.18/CVE 2026 24051
2026-03-02 17:42:25 +08:00
Xun Jiang
ffa65605a6 Bump go.opentelemetry.io/otel/sdk to 1.40.0 to fix CVE-2026-24051.
Bump Linux base image to paketobuildpacks/run-jammy-tiny:0.2.104

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-02 17:09:23 +08:00
Xun Jiang/Bruce Jiang
bd8dfe9ee2 Merge branch 'vmware-tanzu:release-1.18' into release-1.18 2026-03-02 17:04:43 +08:00
lyndon-li
54783fbe28 Merge pull request #9546 from Lyndon-Li/release-1.18
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1h5m30s
Run the E2E test on kind / setup-test-matrix (push) Successful in 4s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 21s
Main CI / Build (push) Has been skipped
Update base image for 1.18.0
2026-02-13 17:11:09 +08:00
Lyndon-Li
cb5f56265a update base image for 1.18.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-13 16:43:12 +08:00
Tiger Kaovilai
0c7b89a44e Fix VolumePolicy PVC phase condition filter for unbound PVCs
Use a typed-error approach: make GetPVForPVC return ErrPVNotFoundForPVC
when the PV is not expected to be found (an unbound PVC), then use errors.Is
to check for this error. When a matching policy exists (e.g.,
pvcPhase: [Pending, Lost] with action: skip), apply the action without
raising an error. When no policy matches, return the original error to
preserve the default behavior.

Changes:
- Add ErrPVNotFoundForPVC sentinel error to pvc_pv.go
- Update ShouldPerformSnapshot to handle unbound PVCs with policies
- Update ShouldPerformFSBackup to handle unbound PVCs with policies
- Update item_backupper.go to handle Lost PVCs in tracking functions
- Remove checkPVCOnlySkip helper (no longer needed)
- Update tests to reflect new behavior

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-12 16:18:56 +08:00
Xun Jiang/Bruce Jiang
aa89713559 Merge pull request #9537 from kaovilai/1.18-9508
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m33s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 29s
Main CI / Build (push) Has been skipped
release-1.18: Fix VolumePolicy PVC phase condition filter for unbound PVCs
2026-02-12 11:41:25 +08:00
Xun Jiang/Bruce Jiang
5db4c65a92 Merge branch 'release-1.18' into 1.18-9508 2026-02-12 11:32:06 +08:00
Xun Jiang/Bruce Jiang
87db850f66 Merge pull request #9539 from Joeavaikath/1.18-9502
release-1.18: Support all glob wildcard characters in namespace validation (#9502)
2026-02-12 10:50:55 +08:00
Xun Jiang/Bruce Jiang
c7631fc4a4 Merge branch 'release-1.18' into 1.18-9502 2026-02-12 10:41:26 +08:00
Wenkai Yin(尹文开)
9a37478cc2 Merge pull request #9538 from vmware-tanzu/topic/xj014661/update_migration_test_cases
Update the migration and upgrade test cases.
2026-02-12 10:19:43 +08:00
Tiger Kaovilai
5b54ccd2e0 Fix VolumePolicy PVC phase condition filter for unbound PVCs
Use a typed-error approach: make GetPVForPVC return ErrPVNotFoundForPVC
when the PV is not expected to be found (an unbound PVC), then use errors.Is
to check for this error. When a matching policy exists (e.g.,
pvcPhase: [Pending, Lost] with action: skip), apply the action without
raising an error. When no policy matches, return the original error to
preserve the default behavior.

Changes:
- Add ErrPVNotFoundForPVC sentinel error to pvc_pv.go
- Update ShouldPerformSnapshot to handle unbound PVCs with policies
- Update ShouldPerformFSBackup to handle unbound PVCs with policies
- Update item_backupper.go to handle Lost PVCs in tracking functions
- Remove checkPVCOnlySkip helper (no longer needed)
- Update tests to reflect new behavior

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-11 15:29:14 -05:00
Joseph Antony Vaikath
43b926a58b Support all glob wildcard characters in namespace validation (#9502)
* Support all glob wildcard characters in namespace validation

Expand namespace validation to allow all valid glob pattern characters
(*, ?, {}, [], ,) by replacing them with valid characters during RFC 1123
validation. The actual glob pattern validation is handled separately by
the wildcard package.

Also add validation to reject unsupported characters (|, (), !) that are
not valid in glob patterns, and update terminology from "regex" to "glob"
for clarity since this implementation uses glob patterns, not regex.

Changes:
- Replace all glob wildcard characters in validateNamespaceName
- Add test coverage for valid glob patterns in includes/excludes
- Add test coverage for unsupported characters
- Reject exclamation mark (!) in wildcard patterns
- Clarify comments and error messages about glob vs regex

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Changelog

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add documentation: glob patterns are now accepted

Signed-off-by: Joseph <jvaikath@redhat.com>

* Error message fix

Signed-off-by: Joseph <jvaikath@redhat.com>

* Remove negation glob char test

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add bracket pattern validation for namespace glob patterns

Extends wildcard validation to support square bracket patterns [] used in glob character classes. Validates bracket syntax including empty brackets, unclosed brackets, and unmatched brackets. Extracts ValidateNamespaceName as a public function to enable reuse in namespace validation logic.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>
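The bracket checks this step describes (empty, unclosed, and unmatched brackets) can be sketched as follows. The function name and error wording are illustrative, not velero's actual implementation.

```go
package main

import "fmt"

// validateBrackets checks glob character-class syntax: it rejects empty
// "[]", an unclosed "[", and an unmatched "]". Nested brackets are also
// rejected, since glob character classes do not nest.
func validateBrackets(pattern string) error {
	depth, open := 0, -1
	for i, r := range pattern {
		switch r {
		case '[':
			if depth > 0 {
				return fmt.Errorf("nested '[' at index %d", i)
			}
			depth, open = 1, i
		case ']':
			if depth == 0 {
				return fmt.Errorf("unmatched ']' at index %d", i)
			}
			if i == open+1 {
				return fmt.Errorf("empty brackets at index %d", open)
			}
			depth = 0
		}
	}
	if depth != 0 {
		return fmt.Errorf("unclosed '[' at index %d", open)
	}
	return nil
}

func main() {
	for _, p := range []string{"team-[abc]", "a[]b", "a[bc", "ab]c"} {
		fmt.Println(p, "->", validateBrackets(p))
	}
}
```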

* Reduce scope to *, ?, [ and ]

Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix tests

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add namespace glob patterns documentation page

Adds dedicated documentation explaining supported glob patterns
for namespace include/exclude filtering to help users understand
the wildcard syntax.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix build-image Dockerfile envtest download

Replace inaccessible go.kubebuilder.io URL with setup-envtest and update envtest version to 1.33.0 to match Kubernetes v0.33.3 dependencies.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* kubebuilder binaries mv

Signed-off-by: Joseph <jvaikath@redhat.com>

* Reject brace patterns and update documentation

Add {, }, and , to unsupported characters list to explicitly reject
brace expansion patterns. Remove { from wildcard detection since these
patterns are not supported in the 1.18 release.

Update all documentation to show supported patterns inline (*, ?, [abc])
with clickable links to the detailed namespace-glob-patterns page.
Simplify YAML comments by removing non-clickable URLs.

Update tests to expect errors when brace patterns are used.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Document brace expansion as unsupported

Add {} and , to the unsupported patterns section to clarify that
brace expansion patterns like {a,b,c} are not supported.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Update tests to expect brace pattern rejection

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

---------

Signed-off-by: Joseph <jvaikath@redhat.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-11 15:14:06 -05:00
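The validation approach the commit above describes, with the final reduced scope of *, ?, [ and ] and explicit rejection of braces, can be sketched as below. The function name and error messages are hypothetical; only the technique (substitute supported wildcards before the RFC 1123 check, reject unsupported characters outright) follows the commit message.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rfc1123Label matches a valid DNS-1123 label, the format Kubernetes
// requires for namespace names.
var rfc1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateNamespacePattern sketches the commit's approach: reject
// characters that are not valid even in glob patterns (including the
// brace-expansion set), substitute the supported wildcards (*, ?, [, ])
// with a placeholder letter, then run the usual RFC 1123 label check on
// the result. Actual glob syntax validation happens elsewhere.
func validateNamespacePattern(name string) error {
	if strings.ContainsAny(name, "|()!{},") {
		return fmt.Errorf("invalid namespace %q: unsupported wildcard character", name)
	}
	replacer := strings.NewReplacer("*", "x", "?", "x", "[", "x", "]", "x")
	if !rfc1123Label.MatchString(replacer.Replace(name)) {
		return fmt.Errorf("invalid namespace %q: not a valid DNS-1123 label", name)
	}
	return nil
}

func main() {
	for _, n := range []string{"prod-*", "team-[abc]", "{a,b}", "Bad_Name"} {
		fmt.Println(n, "->", validateNamespacePattern(n))
	}
}
```

Substituting rather than stripping the wildcards keeps length and adjacency checks meaningful: "prod-*" validates as "prod-x", so a pattern that would collapse to an empty or malformed label still fails.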
Xun Jiang
9bfc78e769 Update the migration and upgrade test cases.
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m34s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Modify Dockerfile to fix GitHub CI action error.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-11 14:56:55 +08:00
Xun Jiang/Bruce Jiang
c9e26256fa Bump Golang version to v1.25.7 (#9536)
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m26s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 16s
Main CI / Build (push) Has been skipped
Fix test case issue and add UT.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-10 15:50:23 -05:00
Wenkai Yin(尹文开)
6e315c32e2 Merge pull request #9530 from Lyndon-Li/release-1.18
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m55s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 12s
Main CI / Build (push) Has been skipped
[1.18] Move implemented design for 1.18
2026-02-09 11:18:40 +08:00
Lyndon-Li
91cbc40956 move implemented design for 1.18
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-06 18:56:05 +08:00
78 changed files with 486 additions and 2288 deletions

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.104
LABEL maintainer="Xun Jiang <jxun@vmware.com>"

View File

@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
ARG GOPROXY
ARG BIN

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25 as tilt-helper
FROM golang:1.25.7 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -1 +0,0 @@
Fix issue #9343, include PV topology to data mover pod affinities

View File

@@ -1 +0,0 @@
Fix issue #9496, support customized host os

View File

@@ -0,0 +1 @@
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)

View File

@@ -1 +0,0 @@
If a BIA returns updateObj with SkipFromBackupAnnotation, treat it as skipping the resource from the backup.

View File

@@ -1 +0,0 @@
Issue #9544: Add test coverage for S3 bucket name in MRAP ARN notation and fix bucket validation to accept ARN format

View File

@@ -1 +0,0 @@
Fix issue #9475, use node-selector instead of nodeName for generic restore

View File

@@ -1 +0,0 @@
Fix issue #9460, flush buffer before data mover completes

View File

@@ -1 +0,0 @@
Add schedule_expected_interval_seconds metric for dynamic backup alerting thresholds (#9559)

View File

@@ -1 +0,0 @@
Add ephemeral storage limit and request support for data mover and maintenance job

View File

@@ -1 +0,0 @@
Fix DBR stuck when CSI snapshot no longer exists in cloud provider

View File

@@ -1 +0,0 @@
Implement original VolumeSnapshotContent deletion for legacy backups

View File

@@ -1 +0,0 @@
Fix issue #9636, fix configmap lookup in non-default namespaces

View File

@@ -1 +0,0 @@
Fix issue #9641, Remove redundant ReadyToUse polling in CSI VolumeSnapshotContent delete plugin

34
go.mod
View File

@@ -1,6 +1,6 @@
module github.com/vmware-tanzu/velero
go 1.25.0
go 1.25.7
require (
cloud.google.com/go/storage v1.57.2
@@ -42,11 +42,10 @@ require (
github.com/vmware-tanzu/crash-diagnostics v0.3.7
go.uber.org/zap v1.27.1
golang.org/x/mod v0.30.0
golang.org/x/oauth2 v0.34.0
golang.org/x/sys v0.40.0
golang.org/x/text v0.32.0
golang.org/x/oauth2 v0.33.0
golang.org/x/text v0.31.0
google.golang.org/api v0.256.0
google.golang.org/grpc v1.79.3
google.golang.org/grpc v1.77.0
google.golang.org/protobuf v1.36.10
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.33.3
@@ -64,7 +63,7 @@ require (
)
require (
cel.dev/expr v0.25.1 // indirect
cel.dev/expr v0.24.0 // indirect
cloud.google.com/go v0.121.6 // indirect
cloud.google.com/go/auth v0.17.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
@@ -94,13 +93,13 @@ require (
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 // indirect
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/edsrzf/mmap-go v1.2.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.36.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.3.0 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
@@ -169,7 +168,7 @@ require (
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/blake3 v0.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.39.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.40.0 // indirect
@@ -180,17 +179,18 @@ require (
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/term v0.38.0 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.39.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect

64
go.sum
View File

@@ -1,7 +1,7 @@
al.essio.dev/pkg/shellescape v1.5.1 h1:86HrALUujYS/h+GtqoB26SBEdkWfmMI6FubjXlsXyho=
al.essio.dev/pkg/shellescape v1.5.1/go.mod h1:6sIqp7X2P6mThCQ7twERpZTuigpr6KbZWtls1U8I890=
cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=
cel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -189,8 +189,8 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 h1:6xNmx7iTtyBRev0+D/Tv1FZd4SCg8axKApyNyRsAt/w=
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y+bSQAYZnetRJ70VMVKm5CKI0=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -227,15 +227,15 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.14.0 h1:hbG2kr4RuFj222B6+7T83thSPqLjwBIfQawTkC++2HA=
github.com/envoyproxy/go-control-plane v0.14.0/go.mod h1:NcS5X47pLl/hfqxU70yPwL9ZMkUlwlKxtAohpi2wBEU=
github.com/envoyproxy/go-control-plane/envoy v1.36.0 h1:yg/JjO5E7ubRyKX3m07GF3reDNEnfOboJ0QySbH736g=
github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329 h1:K+fnvUM0VZ7ZFJf0n4L/BRlnsb9pL/GuDG6FqaH+PwM=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329/go.mod h1:Alz8LEClvR7xKsrq3qzoc4N0guvVNSS8KmSChGYr9hs=
github.com/envoyproxy/go-control-plane/envoy v1.35.0 h1:ixjkELDE+ru6idPxcHLj8LBVc2bFP7iBytj353BoHUo=
github.com/envoyproxy/go-control-plane/envoy v1.35.0/go.mod h1:09qwbGVuSWWAyN5t/b3iyVfz5+z8QWGrzkoqm/8SbEs=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v1.3.0 h1:TvGH1wof4H33rezVKWSpqKz5NXWg5VPuZ0uONDT6eb4=
github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@@ -742,8 +742,8 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/detectors/gcp v1.39.0 h1:kWRNZMsfBHZ+uHjiH4y7Etn2FK26LAGkNFw7RHv1DhE=
go.opentelemetry.io/contrib/detectors/gcp v1.39.0/go.mod h1:t/OGqzHBa5v6RHZwrDBJ2OirWc+4q/w2fTbLZwAKjTk=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 h1:ZoYbqX7OaA/TAikspPl3ozPI6iY6LiIY9I8cUfm+pJs=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0/go.mod h1:SU+iU7nu5ud4oCb3LQOhIZ3nRLj6FNVrKgtflbaf2ts=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
@@ -790,8 +790,8 @@ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -876,8 +876,8 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -891,8 +891,8 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -904,8 +904,8 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -975,8 +975,8 @@ golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXR
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -986,8 +986,8 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1047,8 +1047,8 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1134,10 +1134,10 @@ google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaE
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 h1:mepRgnBZa07I4TRuomDE4sTIYieg/osKmzIf4USdWS4=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8/go.mod h1:fDMmzKV90WSg1NbozdqrE64fkuTv6mlq2zxo9ad+3yo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -1159,8 +1159,8 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$TARGETPLATFORM golang:1.25-bookworm
FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm
ARG GOPROXY

View File

@@ -18,6 +18,7 @@ package csi
import (
"context"
"time"
"github.com/google/uuid"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1"
@@ -26,11 +27,14 @@ import (
corev1api "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/wait"
crclient "sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/client"
plugincommon "github.com/vmware-tanzu/velero/pkg/plugin/framework/common"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
)
@@ -67,7 +71,7 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
// So skip deleting VolumeSnapshotContent that does not have the backup name
// in its labels.
if !kubeutil.HasBackupLabel(&snapCont.ObjectMeta, input.Backup.Name) {
p.log.Infof(
p.log.Info(
"VolumeSnapshotContent %s was not taken by backup %s, skipping deletion",
snapCont.Name,
input.Backup.Name,
@@ -77,17 +81,6 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
p.log.Infof("Deleting VolumeSnapshotContent %s", snapCont.Name)
// Try to delete the original VSC from the cluster first.
// This handles legacy (pre-1.15) backups where the original VSC
// with DeletionPolicy=Retain still exists in the cluster.
originalVSCName := snapCont.Name
if cleaned := p.tryDeleteOriginalVSC(context.TODO(), originalVSCName); cleaned {
p.log.Infof("Successfully deleted original VolumeSnapshotContent %s from cluster, skipping temp VSC creation", originalVSCName)
return nil
}
// create a temp VSC to trigger cloud snapshot deletion
// (for backups where the original VSC no longer exists in cluster)
uuid, err := uuid.NewRandom()
if err != nil {
p.log.WithError(err).Errorf("Failed to generate the UUID to create VSC %s", snapCont.Name)
@@ -121,62 +114,60 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
if err := p.crClient.Create(context.TODO(), &snapCont); err != nil {
return errors.Wrapf(err, "fail to create VolumeSnapshotContent %s", snapCont.Name)
}
p.log.Infof("Created temp VolumeSnapshotContent %s with DeletionPolicy=Delete to trigger cloud snapshot cleanup", snapCont.Name)
// Delete the temp VSC immediately to trigger cloud snapshot removal.
// The CSI driver will handle the actual cloud snapshot deletion.
// Read resource timeout from backup annotation, if not set, use default value.
timeout, err := time.ParseDuration(
input.Backup.Annotations[velerov1api.ResourceTimeoutAnnotation])
if err != nil {
p.log.Warnf("fail to parse resource timeout annotation %s: %s",
input.Backup.Annotations[velerov1api.ResourceTimeoutAnnotation], err.Error())
timeout = 10 * time.Minute
}
p.log.Debugf("resource timeout is set to %s", timeout.String())
interval := 5 * time.Second
// Wait until VSC created and ReadyToUse is true.
if err := wait.PollUntilContextTimeout(
context.Background(),
interval,
timeout,
true,
func(ctx context.Context) (bool, error) {
return checkVSCReadiness(ctx, &snapCont, p.crClient)
},
); err != nil {
return errors.Wrapf(err, "fail to wait for VolumeSnapshotContent %s to become ready", snapCont.Name)
}
if err := p.crClient.Delete(
context.TODO(),
&snapCont,
); err != nil && !apierrors.IsNotFound(err) {
p.log.WithError(err).Errorf("Failed to delete temp VolumeSnapshotContent %s", snapCont.Name)
p.log.Infof("VolumeSnapshotContent %s not found", snapCont.Name)
return err
}
p.log.Infof("Successfully triggered deletion of VolumeSnapshotContent %s and its cloud snapshot", snapCont.Name)
return nil
}
// tryDeleteOriginalVSC attempts to find and delete the original VSC from
// the cluster (legacy pre-1.15 backups). It patches the DeletionPolicy to
// Delete so the CSI driver also removes the cloud snapshot, then deletes
// the VSC object itself.
// Returns true if the original VSC was found and deletion was initiated.
func (p *volumeSnapshotContentDeleteItemAction) tryDeleteOriginalVSC(
var checkVSCReadiness = func(
ctx context.Context,
vscName string,
) bool {
existing := new(snapshotv1api.VolumeSnapshotContent)
if err := p.crClient.Get(ctx, crclient.ObjectKey{Name: vscName}, existing); err != nil {
if apierrors.IsNotFound(err) {
p.log.Debugf("Original VolumeSnapshotContent %s not found in cluster, will use temp VSC flow", vscName)
} else {
p.log.WithError(err).Warnf("Error looking up original VolumeSnapshotContent %s, will use temp VSC flow", vscName)
}
return false
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
tmpVSC := new(snapshotv1api.VolumeSnapshotContent)
if err := client.Get(ctx, crclient.ObjectKeyFromObject(vsc), tmpVSC); err != nil {
return false, errors.Wrapf(
err, "failed to get VolumeSnapshotContent %s", vsc.Name,
)
}
p.log.Debugf("Found original VolumeSnapshotContent %s in cluster (legacy backup), cleaning up directly", vscName)
// Patch DeletionPolicy to Delete so the CSI driver removes the cloud snapshot
if existing.Spec.DeletionPolicy != snapshotv1api.VolumeSnapshotContentDelete {
original := existing.DeepCopy()
existing.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentDelete
if err := p.crClient.Patch(ctx, existing, crclient.MergeFrom(original)); err != nil {
p.log.WithError(err).Warnf("Failed to patch DeletionPolicy on original VSC %s, will use temp VSC flow", vscName)
return false
}
p.log.Debugf("Patched DeletionPolicy to Delete on original VolumeSnapshotContent %s", vscName)
if tmpVSC.Status != nil && boolptr.IsSetToTrue(tmpVSC.Status.ReadyToUse) {
return true, nil
}
// Delete the original VSC — the CSI driver will clean up the cloud snapshot
if err := p.crClient.Delete(ctx, existing); err != nil && !apierrors.IsNotFound(err) {
p.log.WithError(err).Warnf("Failed to delete original VolumeSnapshotContent %s, will use temp VSC flow", vscName)
return false
}
p.log.Infof("Deleted original VolumeSnapshotContent %s with DeletionPolicy=Delete, CSI driver will remove cloud snapshot", vscName)
return true
return false, nil
}
func NewVolumeSnapshotContentDeleteItemAction(

View File

@@ -22,10 +22,9 @@ import (
"testing"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
corev1api "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -38,44 +37,19 @@ import (
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
// fakeClientWithErrors wraps a real client and injects errors for specific operations.
type fakeClientWithErrors struct {
crclient.Client
getError error
patchError error
deleteError error
}
func (c *fakeClientWithErrors) Get(ctx context.Context, key crclient.ObjectKey, obj crclient.Object, opts ...crclient.GetOption) error {
if c.getError != nil {
return c.getError
}
return c.Client.Get(ctx, key, obj, opts...)
}
func (c *fakeClientWithErrors) Patch(ctx context.Context, obj crclient.Object, patch crclient.Patch, opts ...crclient.PatchOption) error {
if c.patchError != nil {
return c.patchError
}
return c.Client.Patch(ctx, obj, patch, opts...)
}
func (c *fakeClientWithErrors) Delete(ctx context.Context, obj crclient.Object, opts ...crclient.DeleteOption) error {
if c.deleteError != nil {
return c.deleteError
}
return c.Client.Delete(ctx, obj, opts...)
}
func TestVSCExecute(t *testing.T) {
snapshotHandleStr := "test"
tests := []struct {
name string
item runtime.Unstructured
vsc *snapshotv1api.VolumeSnapshotContent
backup *velerov1api.Backup
preExistingVSC *snapshotv1api.VolumeSnapshotContent
expectErr bool
name string
item runtime.Unstructured
vsc *snapshotv1api.VolumeSnapshotContent
backup *velerov1api.Backup
function func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error)
expectErr bool
}{
{
name: "VolumeSnapshotContent doesn't have backup label",
@@ -97,22 +71,27 @@ func TestVSCExecute(t *testing.T) {
{
name: "Normal case, VolumeSnapshot should be deleted",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).VolumeSnapshotClassName("volumesnapshotclass").Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: false,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return true, nil
},
},
{
name: "Original VSC exists in cluster, cleaned up directly",
name: "Error case, deletion fails",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").Result(),
expectErr: false,
preExistingVSC: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "bar"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-123")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-1", Namespace: "default"},
},
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: true,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return false, errors.Errorf("test error case")
},
},
}
@@ -121,10 +100,7 @@ func TestVSCExecute(t *testing.T) {
t.Run(test.name, func(t *testing.T) {
crClient := velerotest.NewFakeControllerRuntimeClient(t)
logger := logrus.StandardLogger()
if test.preExistingVSC != nil {
require.NoError(t, crClient.Create(t.Context(), test.preExistingVSC))
}
checkVSCReadiness = test.function
p := volumeSnapshotContentDeleteItemAction{log: logger, crClient: crClient}
@@ -182,153 +158,52 @@ func TestNewVolumeSnapshotContentDeleteItemAction(t *testing.T) {
require.NoError(t, err1)
}
func TestTryDeleteOriginalVSC(t *testing.T) {
func TestCheckVSCReadiness(t *testing.T) {
tests := []struct {
name string
vscName string
existing *snapshotv1api.VolumeSnapshotContent
createIt bool
expectRet bool
vsc *snapshotv1api.VolumeSnapshotContent
createVSC bool
expectErr bool
ready bool
}{
{
name: "VSC not found in cluster, returns false",
vscName: "not-found",
expectRet: false,
},
{
name: "VSC found with Retain policy, patches and deletes",
vscName: "legacy-vsc",
existing: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "legacy-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{
SnapshotHandle: stringPtr("snap-123"),
},
VolumeSnapshotRef: corev1api.ObjectReference{
Name: "vs-1",
Namespace: "default",
},
name: "VSC not exist",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
},
createIt: true,
expectRet: true,
createVSC: false,
expectErr: true,
ready: false,
},
{
name: "VSC found with Delete policy already, just deletes",
vscName: "already-delete-vsc",
existing: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "already-delete-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{
SnapshotHandle: stringPtr("snap-456"),
},
VolumeSnapshotRef: corev1api.ObjectReference{
Name: "vs-2",
Namespace: "default",
},
name: "VSC not ready",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
},
createIt: true,
expectRet: true,
createVSC: true,
expectErr: false,
ready: false,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
crClient := velerotest.NewFakeControllerRuntimeClient(t)
logger := logrus.StandardLogger()
p := &volumeSnapshotContentDeleteItemAction{
log: logger,
crClient: crClient,
if test.createVSC {
require.NoError(t, crClient.Create(t.Context(), test.vsc))
}
if test.createIt && test.existing != nil {
require.NoError(t, crClient.Create(t.Context(), test.existing))
}
result := p.tryDeleteOriginalVSC(t.Context(), test.vscName)
require.Equal(t, test.expectRet, result)
// If cleanup succeeded, verify the VSC is gone
if test.expectRet {
err := crClient.Get(t.Context(), crclient.ObjectKey{Name: test.vscName},
&snapshotv1api.VolumeSnapshotContent{})
require.True(t, apierrors.IsNotFound(err),
"VSC should have been deleted from cluster")
ready, err := checkVSCReadiness(t.Context(), test.vsc, crClient)
require.Equal(t, test.ready, ready)
if test.expectErr {
require.Error(t, err)
}
})
}
// Error injection tests for tryDeleteOriginalVSC
t.Run("Get returns non-NotFound error, returns false", func(t *testing.T) {
errClient := &fakeClientWithErrors{
Client: velerotest.NewFakeControllerRuntimeClient(t),
getError: fmt.Errorf("connection refused"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "some-vsc"))
})
t.Run("Patch fails, returns false", func(t *testing.T) {
realClient := velerotest.NewFakeControllerRuntimeClient(t)
vsc := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "patch-fail-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-789")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-3", Namespace: "default"},
},
}
require.NoError(t, realClient.Create(t.Context(), vsc))
errClient := &fakeClientWithErrors{
Client: realClient,
patchError: fmt.Errorf("patch forbidden"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "patch-fail-vsc"))
})
t.Run("Delete fails, returns false", func(t *testing.T) {
realClient := velerotest.NewFakeControllerRuntimeClient(t)
vsc := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "delete-fail-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-999")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-4", Namespace: "default"},
},
}
require.NoError(t, realClient.Create(t.Context(), vsc))
errClient := &fakeClientWithErrors{
Client: realClient,
deleteError: fmt.Errorf("delete forbidden"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "delete-fail-vsc"))
})
}
func boolPtr(b bool) *bool {
return &b
}
func stringPtr(s string) *string {
return &s
}

View File

@@ -102,15 +102,6 @@ const (
// even if the resource contains a matching selector label.
ExcludeFromBackupLabel = "velero.io/exclude-from-backup"
// SkipFromBackupAnnotation is the annotation used by internal BackupItemActions
// to indicate that a resource should be skipped from backup,
// even if it doesn't have the ExcludeFromBackupLabel.
// This is used in cases where we want to skip backup of a resource based on some logic in a plugin.
//
// Notice: SkipFromBackupAnnotation's priority is higher than MustIncludeAdditionalItemAnnotation.
// If SkipFromBackupAnnotation is set, the resource will be skipped even if MustIncludeAdditionalItemAnnotation is set.
SkipFromBackupAnnotation = "velero.io/skip-from-backup"
// defaultVGSLabelKey is the default label key used to group PVCs under a VolumeGroupSnapshot
DefaultVGSLabelKey = "velero.io/volume-group"

View File

@@ -98,14 +98,6 @@ func (m *backedUpItemsMap) AddItem(key itemKey) {
m.totalItems[key] = struct{}{}
}
func (m *backedUpItemsMap) DeleteItem(key itemKey) {
m.Lock()
defer m.Unlock()
delete(m.backedUpItems, key)
delete(m.totalItems, key)
}
func (m *backedUpItemsMap) AddItemToTotal(key itemKey) {
m.Lock()
defer m.Unlock()

View File

@@ -244,14 +244,6 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
return false, itemFiles, kubeerrs.NewAggregate(backupErrs)
}
// If err is nil and updatedObj is nil, the item was skipped by a plugin action,
// so we return here to avoid backing up the item and a potential NPE in the following code.
if updatedObj == nil {
log.Infof("Remove item from the backup's backupItems list and totalItems list because it's skipped by plugin action.")
ib.backupRequest.BackedUpItems.DeleteItem(key)
return false, itemFiles, nil
}
itemFiles = append(itemFiles, additionalItemFiles...)
obj = updatedObj
if metadata, err = meta.Accessor(obj); err != nil {
@@ -406,13 +398,6 @@ func (ib *itemBackupper) executeActions(
}
u := &unstructured.Unstructured{Object: updatedItem.UnstructuredContent()}
if _, ok := u.GetAnnotations()[velerov1api.SkipFromBackupAnnotation]; ok {
log.Infof("Resource (groupResource=%s, namespace=%s, name=%s) is skipped from backup by action %s.",
groupResource.String(), namespace, name, actionName)
return nil, itemFiles, nil
}
if actionName == csiBIAPluginName {
if additionalItemIdentifiers == nil && u.GetAnnotations()[velerov1api.SkippedNoCSIPVAnnotation] == "true" {
// snapshot was skipped by CSI plugin

View File

@@ -275,21 +275,11 @@ func (o *Options) AsVeleroOptions() (*install.VeleroOptions, error) {
return nil, err
}
}
veleroPodResources, err := kubeutil.ParseCPUAndMemoryResources(
o.VeleroPodCPURequest,
o.VeleroPodMemRequest,
o.VeleroPodCPULimit,
o.VeleroPodMemLimit,
)
veleroPodResources, err := kubeutil.ParseResourceRequirements(o.VeleroPodCPURequest, o.VeleroPodMemRequest, o.VeleroPodCPULimit, o.VeleroPodMemLimit)
if err != nil {
return nil, err
}
nodeAgentPodResources, err := kubeutil.ParseCPUAndMemoryResources(
o.NodeAgentPodCPURequest,
o.NodeAgentPodMemRequest,
o.NodeAgentPodCPULimit,
o.NodeAgentPodMemLimit,
)
nodeAgentPodResources, err := kubeutil.ParseResourceRequirements(o.NodeAgentPodCPURequest, o.NodeAgentPodMemRequest, o.NodeAgentPodCPULimit, o.NodeAgentPodMemLimit)
if err != nil {
return nil, err
}
@@ -381,8 +371,8 @@ This is useful as a starting point for more customized installations.
# velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.0 --bucket $BLOB_CONTAINER --secret-file ./credentials-velero --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID[,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID] --snapshot-location-config apiTimeout=<YOUR_TIMEOUT>[,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID]`,
Run: func(c *cobra.Command, args []string) {
cmd.CheckError(o.Complete(args, f))
cmd.CheckError(o.Validate(c, args, f))
cmd.CheckError(o.Complete(args, f))
cmd.CheckError(o.Run(c, f))
},
}

View File

@@ -17,18 +17,11 @@ limitations under the License.
package install
import (
"context"
"testing"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
factorymocks "github.com/vmware-tanzu/velero/pkg/client/mocks"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
func TestPriorityClassNameFlag(t *testing.T) {
@@ -98,168 +91,3 @@ func TestPriorityClassNameFlag(t *testing.T) {
})
}
}
// makeValidateCmd returns a minimal *cobra.Command that satisfies output.ValidateFlags.
func makeValidateCmd() *cobra.Command {
c := &cobra.Command{}
// output.ValidateFlags only inspects the "output" flag; add it so validation passes.
c.Flags().StringP("output", "o", "", "output format")
return c
}
// configMapInNamespace builds a ConfigMap with a single JSON data entry in the given namespace.
func configMapInNamespace(namespace, name, jsonValue string) *corev1api.ConfigMap {
return &corev1api.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: name,
},
Data: map[string]string{
"config": jsonValue,
},
}
}
// TestValidateConfigMapsUseFactoryNamespace verifies that Validate resolves the target
// namespace correctly for all three ConfigMap flags.
//
// The fix (Option B) calls Complete before Validate in NewCommand so that o.Namespace is
// populated from f.Namespace() before VerifyJSONConfigs runs. Tests mirror that order by
// calling Complete before Validate.
func TestValidateConfigMapsUseFactoryNamespace(t *testing.T) {
const targetNS = "tenant-b"
const defaultNS = "default"
// Shared options that satisfy every other validation gate:
// - NoDefaultBackupLocation=true + UseVolumeSnapshots=false skips provider/bucket/plugins checks
// - NoSecret=true satisfies the secret-file check
baseOptions := func() *Options {
o := NewInstallOptions()
o.NoDefaultBackupLocation = true
o.UseVolumeSnapshots = false
o.NoSecret = true
return o
}
tests := []struct {
name string
setupOpts func(o *Options, cmName string)
cmJSON string
wantErrMsg string // substring expected in error; empty means success
}{
{
name: "NodeAgentConfigMap found in factory namespace",
setupOpts: func(o *Options, cmName string) {
o.NodeAgentConfigMap = cmName
},
cmJSON: `{}`,
},
{
name: "NodeAgentConfigMap not found when only in default namespace",
setupOpts: func(o *Options, cmName string) {
o.NodeAgentConfigMap = cmName
},
cmJSON: `{}`,
wantErrMsg: "--node-agent-configmap specified ConfigMap",
},
{
name: "RepoMaintenanceJobConfigMap found in factory namespace",
setupOpts: func(o *Options, cmName string) {
o.RepoMaintenanceJobConfigMap = cmName
},
cmJSON: `{}`,
},
{
name: "RepoMaintenanceJobConfigMap not found when only in default namespace",
setupOpts: func(o *Options, cmName string) {
o.RepoMaintenanceJobConfigMap = cmName
},
cmJSON: `{}`,
wantErrMsg: "--repo-maintenance-job-configmap specified ConfigMap",
},
{
name: "BackupRepoConfigMap found in factory namespace",
setupOpts: func(o *Options, cmName string) {
o.BackupRepoConfigMap = cmName
},
cmJSON: `{}`,
},
{
name: "BackupRepoConfigMap not found when only in default namespace",
setupOpts: func(o *Options, cmName string) {
o.BackupRepoConfigMap = cmName
},
cmJSON: `{}`,
wantErrMsg: "--backup-repository-configmap specified ConfigMap",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
const cmName = "my-config"
// Decide where to place the ConfigMap:
// "not found" cases put it in "default", so the factory namespace lookup misses it.
cmNamespace := targetNS
if tc.wantErrMsg != "" {
cmNamespace = defaultNS
}
cm := configMapInNamespace(cmNamespace, cmName, tc.cmJSON)
kbClient := velerotest.NewFakeControllerRuntimeClient(t, cm)
f := &factorymocks.Factory{}
f.On("Namespace").Return(targetNS)
f.On("KubebuilderClient").Return(kbClient, nil)
o := baseOptions()
tc.setupOpts(o, cmName)
// Mirror the NewCommand call order: Complete populates o.Namespace before Validate runs.
require.NoError(t, o.Complete([]string{}, f))
c := makeValidateCmd()
c.SetContext(context.Background())
err := o.Validate(c, []string{}, f)
if tc.wantErrMsg == "" {
require.NoError(t, err)
} else {
require.Error(t, err)
assert.Contains(t, err.Error(), tc.wantErrMsg)
}
})
}
}
// TestNewCommandRunClosureOrder covers the Run closure in NewCommand (the lines that were
// reordered by the fix: Complete → Validate → Run).
//
// The closure uses CheckError which calls os.Exit on any error, so the only safe path is one
// where all three steps return nil. DryRun=true causes o.Run to return after PrintWithFormat
// (which is a no-op when no --output flag is set) without touching any cluster clients.
func TestNewCommandRunClosureOrder(t *testing.T) {
const targetNS = "tenant-b"
const cmName = "my-config"
cm := configMapInNamespace(targetNS, cmName, `{}`)
kbClient := velerotest.NewFakeControllerRuntimeClient(t, cm)
f := &factorymocks.Factory{}
f.On("Namespace").Return(targetNS)
f.On("KubebuilderClient").Return(kbClient, nil)
c := NewCommand(f)
c.SetArgs([]string{
"--no-default-backup-location",
"--use-volume-snapshots=false",
"--no-secret",
"--dry-run",
"--node-agent-configmap", cmName,
})
// Execute drives the full Run closure: Complete populates o.Namespace, Validate
// looks up the ConfigMap in targetNS (succeeds), Run returns early via DryRun.
require.NoError(t, c.Execute())
}

View File

@@ -323,25 +323,7 @@ func (s *nodeAgentServer) run() {
podResources := corev1api.ResourceRequirements{}
if s.dataPathConfigs != nil && s.dataPathConfigs.PodResources != nil {
// To make the PodResources ConfigMap without ephemeral storage request/limit backward compatible,
// we need to avoid setting the value to empty, because an empty string will cause a parsing error.
ephemeralStorageRequest := constant.DefaultEphemeralStorageRequest
if s.dataPathConfigs.PodResources.EphemeralStorageRequest != "" {
ephemeralStorageRequest = s.dataPathConfigs.PodResources.EphemeralStorageRequest
}
ephemeralStorageLimit := constant.DefaultEphemeralStorageLimit
if s.dataPathConfigs.PodResources.EphemeralStorageLimit != "" {
ephemeralStorageLimit = s.dataPathConfigs.PodResources.EphemeralStorageLimit
}
if res, err := kube.ParseResourceRequirements(
s.dataPathConfigs.PodResources.CPURequest,
s.dataPathConfigs.PodResources.MemoryRequest,
ephemeralStorageRequest,
s.dataPathConfigs.PodResources.CPULimit,
s.dataPathConfigs.PodResources.MemoryLimit,
ephemeralStorageLimit,
); err != nil {
if res, err := kube.ParseResourceRequirements(s.dataPathConfigs.PodResources.CPURequest, s.dataPathConfigs.PodResources.MemoryRequest, s.dataPathConfigs.PodResources.CPULimit, s.dataPathConfigs.PodResources.MemoryLimit); err != nil {
s.logger.WithError(err).Warn("Pod resource requirements are invalid, ignore")
} else {
podResources = res

View File

@@ -23,7 +23,4 @@ const (
PluginCSIPVCRestoreRIA = "velero.io/csi-pvc-restorer"
PluginCsiVolumeSnapshotRestoreRIA = "velero.io/csi-volumesnapshot-restorer"
DefaultEphemeralStorageRequest = "0"
DefaultEphemeralStorageLimit = "0"
)

View File

@@ -129,13 +129,6 @@ func (c *scheduleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
} else {
schedule.Status.Phase = velerov1.SchedulePhaseEnabled
schedule.Status.ValidationErrors = nil
// Compute expected interval between consecutive scheduled backup runs.
// Only meaningful when the cron expression is valid.
now := c.clock.Now()
nextRun := cronSchedule.Next(now)
nextNextRun := cronSchedule.Next(nextRun)
c.metrics.SetScheduleExpectedIntervalSeconds(schedule.Name, nextNextRun.Sub(nextRun).Seconds())
}
scheduleNeedsPatch := false

View File

@@ -124,15 +124,6 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
"owner": ownerObject.Name,
})
volumeTopology, err := kube.GetVolumeTopology(ctx, e.kubeClient.CoreV1(), e.kubeClient.StorageV1(), csiExposeParam.SourcePVName, csiExposeParam.StorageClass)
if err != nil {
return errors.Wrapf(err, "error getting volume topology for PV %s, storage class %s", csiExposeParam.SourcePVName, csiExposeParam.StorageClass)
}
if volumeTopology != nil {
curLog.Infof("Using volume topology %v", volumeTopology)
}
curLog.Info("Exposing CSI snapshot")
volumeSnapshot, err := csi.WaitVolumeSnapshotReady(ctx, e.csiSnapshotClient, csiExposeParam.SnapshotName, csiExposeParam.SourceNamespace, csiExposeParam.ExposeTimeout, curLog)
@@ -263,7 +254,6 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
csiExposeParam.NodeOS,
csiExposeParam.PriorityClassName,
intoleratableNodes,
volumeTopology,
)
if err != nil {
return errors.Wrap(err, "error to create backup pod")
@@ -330,8 +320,7 @@ func (e *csiSnapshotExposer) GetExposed(ctx context.Context, ownerObject corev1a
curLog.WithField("pod", pod.Name).Infof("Backup volume is found in pod at index %v", i)
var nodeOS *string
if pod.Spec.OS != nil {
os := string(pod.Spec.OS.Name)
if os, found := pod.Spec.NodeSelector[kube.NodeOSLabel]; found {
nodeOS = &os
}
@@ -599,7 +588,6 @@ func (e *csiSnapshotExposer) createBackupPod(
nodeOS string,
priorityClassName string,
intoleratableNodes []string,
volumeTopology *corev1api.NodeSelector,
) (*corev1api.Pod, error) {
podName := ownerObject.Name
@@ -655,10 +643,6 @@ func (e *csiSnapshotExposer) createBackupPod(
args = append(args, podInfo.logFormatArgs...)
args = append(args, podInfo.logLevelArgs...)
if affinity == nil {
affinity = &kube.LoadAffinity{}
}
var securityCtx *corev1api.PodSecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
@@ -670,14 +654,9 @@ func (e *csiSnapshotExposer) createBackupPod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -704,15 +683,11 @@ func (e *csiSnapshotExposer) createBackupPod(
}
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
var podAffinity *corev1api.Affinity
if len(intoleratableNodes) > 0 {
if affinity == nil {
affinity = &kube.LoadAffinity{}
@@ -725,7 +700,9 @@ func (e *csiSnapshotExposer) createBackupPod(
})
}
podAffinity := kube.ToSystemAffinity(affinity, volumeTopology)
if affinity != nil {
podAffinity = kube.ToSystemAffinity([]*kube.LoadAffinity{affinity})
}
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{

View File

@@ -154,7 +154,6 @@ func TestCreateBackupPodWithPriorityClass(t *testing.T) {
kube.NodeOSLinux,
tc.expectedPriorityClass,
nil,
nil,
)
require.NoError(t, err, tc.description)
@@ -240,7 +239,6 @@ func TestCreateBackupPodWithMissingConfigMap(t *testing.T) {
kube.NodeOSLinux,
"", // empty priority class since config map is missing
nil,
nil,
)
// Should succeed even when config map is missing

View File

@@ -68,12 +68,6 @@ func TestExpose(t *testing.T) {
var restoreSize int64 = 123456
scObj := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-sc",
},
}
snapshotClass := "fake-snapshot-class"
vsObject := &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
@@ -205,18 +199,6 @@ func TestExpose(t *testing.T) {
expectedAffinity *corev1api.Affinity
expectedPVCAnnotation map[string]string
}{
{
name: "get volume topology fail",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
err: "error getting volume topology for PV fake-pv, storage class fake-sc: error getting storage class fake-sc: storageclasses.storage.k8s.io \"fake-sc\" not found",
},
{
name: "wait vs ready fail",
ownerBackup: backup,
@@ -224,11 +206,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error wait volume snapshot ready: error to get VolumeSnapshot /fake-vs: volumesnapshots.snapshot.storage.k8s.io \"fake-vs\" not found",
},
@@ -240,15 +217,10 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to get volume snapshot content: error getting volume snapshot content from API: volumesnapshotcontents.snapshot.storage.k8s.io \"fake-vsc\" not found",
},
{
@@ -259,8 +231,6 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -275,9 +245,6 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to delete volume snapshot: error to delete volume snapshot: fake-delete-error",
},
{
@@ -288,8 +255,6 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -304,9 +269,6 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to delete volume snapshot content: error to delete volume snapshot content: fake-delete-error",
},
{
@@ -317,8 +279,6 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -333,9 +293,6 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup volume snapshot: fake-create-error",
},
{
@@ -346,8 +303,6 @@ func TestExpose(t *testing.T) {
SourceNamespace: "fake-ns",
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -362,9 +317,6 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup volume snapshot content: fake-create-error",
},
{
@@ -374,16 +326,11 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
AccessMode: "fake-mode",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup pvc: unsupported access mode fake-mode",
},
{
@@ -395,8 +342,6 @@ func TestExpose(t *testing.T) {
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
AccessMode: AccessModeFileSystem,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -411,9 +356,6 @@ func TestExpose(t *testing.T) {
},
},
},
kubeClientObj: []runtime.Object{
scObj,
},
err: "error to create backup pvc: error to create pvc: fake-create-error",
},
{
@@ -425,8 +367,6 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -434,7 +374,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
kubeReactors: []reactor{
{
@@ -456,8 +395,6 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -465,24 +402,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
@@ -494,8 +413,6 @@ func TestExpose(t *testing.T) {
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObject,
@@ -503,24 +420,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
@@ -533,8 +432,6 @@ func TestExpose(t *testing.T) {
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
VolumeSize: *resource.NewQuantity(567890, ""),
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
},
snapshotClientObj: []runtime.Object{
vsObjectWithoutRestoreSize,
@@ -542,26 +439,8 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedVolumeSize: resource.NewQuantity(567890, ""),
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts read only backupPVC",
@@ -570,7 +449,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -587,26 +465,8 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedReadOnlyPVC: true,
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts read only backupPVC and storageClass specified in backupPVC config",
@@ -615,7 +475,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -632,27 +491,9 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedReadOnlyPVC: true,
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "backupPod mounts backupPVC with storageClass specified in backupPVC config",
@@ -661,7 +502,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -677,26 +517,8 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
},
{
name: "Affinity per StorageClass",
@@ -705,7 +527,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -730,7 +551,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
@@ -743,11 +563,6 @@ func TestExpose(t *testing.T) {
Operator: corev1api.NodeSelectorOpIn,
Values: []string{"Linux"},
},
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
@@ -762,7 +577,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -792,7 +606,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
@@ -806,11 +619,6 @@ func TestExpose(t *testing.T) {
Operator: corev1api.NodeSelectorOpIn,
Values: []string{"amd64"},
},
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
@@ -825,7 +633,6 @@ func TestExpose(t *testing.T) {
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
StorageClass: "fake-sc",
SourcePVName: "fake-pv",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
@@ -842,26 +649,9 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
expectedAffinity: nil,
},
{
name: "IntolerateSourceNode, get source node fail",
@@ -887,7 +677,6 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
kubeReactors: []reactor{
{
@@ -898,23 +687,7 @@ func TestExpose(t *testing.T) {
},
},
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
expectedAffinity: nil,
expectedPVCAnnotation: nil,
},
{
@@ -941,25 +714,8 @@ func TestExpose(t *testing.T) {
},
kubeClientObj: []runtime.Object{
daemonSet,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
},
},
},
},
},
},
expectedAffinity: nil,
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
{
@@ -988,7 +744,6 @@ func TestExpose(t *testing.T) {
daemonSet,
volumeAttachement1,
volumeAttachement2,
scObj,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
@@ -996,11 +751,6 @@ func TestExpose(t *testing.T) {
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"windows"},
},
{
Key: "kubernetes.io/hostname",
Operator: corev1api.NodeSelectorOpNotIn,
@@ -1094,8 +844,6 @@ func TestExpose(t *testing.T) {
if test.expectedAffinity != nil {
assert.Equal(t, test.expectedAffinity, backupPod.Spec.Affinity)
} else {
assert.Nil(t, backupPod.Spec.Affinity)
}
if test.expectedPVCAnnotation != nil {

View File

@@ -493,15 +493,13 @@ func (e *genericRestoreExposer) createRestorePod(
containerName := string(ownerObject.UID)
volumeName := string(ownerObject.UID)
nodeSelector := map[string]string{}
if selectedNode != "" {
affinity = nil
nodeSelector["kubernetes.io/hostname"] = selectedNode
e.log.Infof("Selected node for restore pod. Ignore affinity from the node-agent config.")
}
var podAffinity *corev1api.Affinity
if selectedNode == "" {
e.log.Infof("No selected node for restore pod. Try to get affinity from the node-agent config.")
if affinity == nil {
affinity = &kube.LoadAffinity{}
if affinity != nil {
podAffinity = kube.ToSystemAffinity([]*kube.LoadAffinity{affinity})
}
}
podInfo, err := getInheritedPodInfo(ctx, e.kubeClient, ownerObject.Namespace, nodeOS)
@@ -568,6 +566,7 @@ func (e *genericRestoreExposer) createRestorePod(
args = append(args, podInfo.logLevelArgs...)
var securityCtx *corev1api.PodSecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
if nodeOS == kube.NodeOSWindows {
userID := "ContainerAdministrator"
@@ -577,14 +576,9 @@ func (e *genericRestoreExposer) createRestorePod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -605,17 +599,10 @@ func (e *genericRestoreExposer) createRestorePod(
RunAsUser: &userID,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
podAffinity := kube.ToSystemAffinity(affinity, nil)
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: restorePodName,
@@ -669,6 +656,7 @@ func (e *genericRestoreExposer) createRestorePod(
ServiceAccountName: podInfo.serviceAccount,
TerminationGracePeriodSeconds: &gracePeriod,
Volumes: volumes,
NodeName: selectedNode,
RestartPolicy: corev1api.RestartPolicyNever,
SecurityContext: securityCtx,
Tolerations: toleration,
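The hunks above drop the per-OS affinity match expressions from `createRestorePod` in favor of a plain nodeSelector plus an explicit pod-level OS field. A minimal sketch of that selection logic, with simplified stand-ins for the Kubernetes types (the real code uses `corev1api.PodOS` and `kube.NodeOSLabel`):

```go
package main

import "fmt"

// podOS is a simplified stand-in for corev1api.PodOS.
type podOS struct{ Name string }

// osScheduling mirrors the pattern in createRestorePod: instead of appending
// kubernetes.io/os match expressions to the affinity, set a nodeSelector
// entry and the pod-level OS field for the target node OS.
func osScheduling(nodeOS string) (map[string]string, podOS) {
	nodeSelector := map[string]string{}
	os := podOS{}
	if nodeOS == "windows" {
		nodeSelector["kubernetes.io/os"] = "windows"
		os.Name = "windows"
	} else {
		nodeSelector["kubernetes.io/os"] = "linux"
		os.Name = "linux"
	}
	return nodeSelector, os
}

func main() {
	sel, os := osScheduling("linux")
	fmt.Println(sel["kubernetes.io/os"], os.Name) // linux linux
}
```

The nodeSelector and `OS` field express the same constraint the removed NotIn/In match expressions did, but let the scheduler and kubelet enforce it natively.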

View File

@@ -434,8 +434,6 @@ func (e *podVolumeExposer) createHostingPod(
args = append(args, podInfo.logFormatArgs...)
args = append(args, podInfo.logLevelArgs...)
affinity := &kube.LoadAffinity{}
var securityCtx *corev1api.PodSecurityContext
var containerSecurityCtx *corev1api.SecurityContext
nodeSelector := map[string]string{}
@@ -448,14 +446,9 @@ func (e *podVolumeExposer) createHostingPod(
},
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSWindows
podOS.Name = kube.NodeOSWindows
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpIn,
})
toleration = append(toleration, []corev1api.Toleration{
{
Key: "os",
@@ -479,17 +472,10 @@ func (e *podVolumeExposer) createHostingPod(
Privileged: &privileged,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: kube.NodeOSLabel,
Values: []string{kube.NodeOSWindows},
Operator: metav1.LabelSelectorOpNotIn,
})
}
podAffinity := kube.ToSystemAffinity(affinity, nil)
pod := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: hostingPodName,
@@ -509,7 +495,6 @@ func (e *podVolumeExposer) createHostingPod(
Spec: corev1api.PodSpec{
NodeSelector: nodeSelector,
OS: &podOS,
Affinity: podAffinity,
Containers: []corev1api.Container{
{
Name: containerName,

View File

@@ -235,28 +235,12 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1api.DaemonSet
if c.forWindows {
daemonSet.Spec.Template.Spec.SecurityContext = nil
daemonSet.Spec.Template.Spec.Containers[0].SecurityContext = nil
daemonSet.Spec.Template.Spec.NodeSelector = map[string]string{
"kubernetes.io/os": "windows",
}
daemonSet.Spec.Template.Spec.OS = &corev1api.PodOS{
Name: "windows",
}
daemonSet.Spec.Template.Spec.Affinity = &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpIn,
},
},
},
},
},
},
}
daemonSet.Spec.Template.Spec.Tolerations = []corev1api.Toleration{
{
Key: "os",
@@ -272,22 +256,11 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1api.DaemonSet
},
}
} else {
daemonSet.Spec.Template.Spec.Affinity = &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
daemonSet.Spec.Template.Spec.NodeSelector = map[string]string{
"kubernetes.io/os": "linux",
}
daemonSet.Spec.Template.Spec.OS = &corev1api.PodOS{
Name: "linux",
}
}

View File

@@ -34,23 +34,8 @@ func TestDaemonSet(t *testing.T) {
assert.Equal(t, "velero", ds.ObjectMeta.Namespace)
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["name"])
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["role"])
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
}, ds.Spec.Template.Spec.Affinity)
assert.Equal(t, "linux", ds.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "linux", string(ds.Spec.Template.Spec.OS.Name))
assert.Equal(t, corev1api.PodSecurityContext{RunAsUser: &userID}, *ds.Spec.Template.Spec.SecurityContext)
assert.Equal(t, corev1api.SecurityContext{Privileged: &boolFalse}, *ds.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, ds.Spec.Template.Spec.Volumes, 3)
@@ -95,24 +80,8 @@ func TestDaemonSet(t *testing.T) {
assert.Equal(t, "velero", ds.ObjectMeta.Namespace)
assert.Equal(t, "node-agent-windows", ds.Spec.Template.ObjectMeta.Labels["name"])
assert.Equal(t, "node-agent", ds.Spec.Template.ObjectMeta.Labels["role"])
assert.Equal(t, "windows", ds.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "windows", string(ds.Spec.Template.Spec.OS.Name))
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpIn,
},
},
},
},
},
},
}, ds.Spec.Template.Spec.Affinity)
assert.Equal(t, (*corev1api.PodSecurityContext)(nil), ds.Spec.Template.Spec.SecurityContext)
assert.Equal(t, (*corev1api.SecurityContext)(nil), ds.Spec.Template.Spec.Containers[0].SecurityContext)
}

View File

@@ -364,26 +364,12 @@ func Deployment(namespace string, opts ...podTemplateOption) *appsv1api.Deployme
Spec: corev1api.PodSpec{
RestartPolicy: corev1api.RestartPolicyAlways,
ServiceAccountName: c.serviceAccountName,
NodeSelector: map[string]string{
"kubernetes.io/os": "linux",
},
OS: &corev1api.PodOS{
Name: "linux",
},
Affinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
},
Containers: []corev1api.Container{
{
Name: "velero",

View File

@@ -100,23 +100,8 @@ func TestDeployment(t *testing.T) {
assert.Len(t, deploy.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--repo-maintenance-job-configmap=test-repo-maintenance-config", deploy.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Values: []string{"windows"},
Operator: corev1api.NodeSelectorOpNotIn,
},
},
},
},
},
},
}, deploy.Spec.Template.Spec.Affinity)
assert.Equal(t, "linux", deploy.Spec.Template.Spec.NodeSelector["kubernetes.io/os"])
assert.Equal(t, "linux", string(deploy.Spec.Template.Spec.OS.Name))
}
func TestDeploymentWithPriorityClassName(t *testing.T) {

View File

@@ -80,9 +80,6 @@ const (
DataDownloadFailureTotal = "data_download_failure_total"
DataDownloadCancelTotal = "data_download_cancel_total"
// schedule metrics
scheduleExpectedIntervalSeconds = "schedule_expected_interval_seconds"
// repo maintenance metrics
repoMaintenanceSuccessTotal = "repo_maintenance_success_total"
repoMaintenanceFailureTotal = "repo_maintenance_failure_total"
@@ -350,14 +347,6 @@ func NewServerMetrics() *ServerMetrics {
},
[]string{scheduleLabel, backupNameLabel},
),
scheduleExpectedIntervalSeconds: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: metricNamespace,
Name: scheduleExpectedIntervalSeconds,
Help: "Expected interval between consecutive scheduled backups, in seconds",
},
[]string{scheduleLabel},
),
repoMaintenanceSuccessTotal: prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: metricNamespace,
@@ -655,9 +644,6 @@ func (m *ServerMetrics) RemoveSchedule(scheduleName string) {
if c, ok := m.metrics[csiSnapshotFailureTotal].(*prometheus.CounterVec); ok {
c.DeleteLabelValues(scheduleName, "")
}
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.DeleteLabelValues(scheduleName)
}
}
// InitMetricsForNode initializes counter metrics for a node.
@@ -772,14 +758,6 @@ func (m *ServerMetrics) SetBackupLastSuccessfulTimestamp(backupSchedule string,
}
}
// SetScheduleExpectedIntervalSeconds records the expected interval, in seconds,
// between consecutive backups for a schedule.
func (m *ServerMetrics) SetScheduleExpectedIntervalSeconds(scheduleName string, seconds float64) {
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.WithLabelValues(scheduleName).Set(seconds)
}
}
// SetBackupTotal records the current number of existent backups.
func (m *ServerMetrics) SetBackupTotal(numberOfBackups int64) {
if g, ok := m.metrics[backupTotal].(prometheus.Gauge); ok {
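The removed `scheduleExpectedIntervalSeconds` gauge followed the usual per-schedule metric lifecycle: set when the schedule's interval is computed, deleted in `RemoveSchedule` so the series does not linger. A toy, map-backed stand-in for that `prometheus.GaugeVec` lifecycle (not the real client library):

```go
package main

import "fmt"

// gaugeVec is a toy stand-in for prometheus.GaugeVec keyed by one label.
type gaugeVec struct{ values map[string]float64 }

func newGaugeVec() *gaugeVec { return &gaugeVec{values: map[string]float64{}} }

// Set mirrors WithLabelValues(schedule).Set(seconds).
func (g *gaugeVec) Set(schedule string, seconds float64) { g.values[schedule] = seconds }

// Delete mirrors DeleteLabelValues(schedule), called from RemoveSchedule so
// the series is cleaned up when the Schedule object is removed.
func (g *gaugeVec) Delete(schedule string) { delete(g.values, schedule) }

func main() {
	g := newGaugeVec()
	g.Set("daily-backup", 86400)
	fmt.Println(len(g.values)) // 1
	g.Delete("daily-backup")
	fmt.Println(len(g.values)) // 0
}
```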

View File

@@ -259,90 +259,6 @@ func TestMultipleAdhocBackupsShareMetrics(t *testing.T) {
assert.Equal(t, float64(1), validationFailureMetric, "All adhoc validation failures should be counted together")
}
// TestSetScheduleExpectedIntervalSeconds verifies that the expected interval metric
// is properly recorded for schedules.
func TestSetScheduleExpectedIntervalSeconds(t *testing.T) {
tests := []struct {
name string
scheduleName string
intervalSeconds float64
description string
}{
{
name: "every 5 minutes schedule",
scheduleName: "frequent-backup",
intervalSeconds: 300,
description: "Expected interval should be 5m in seconds",
},
{
name: "daily schedule",
scheduleName: "daily-backup",
intervalSeconds: 86400,
description: "Expected interval should be 24h in seconds",
},
{
name: "monthly schedule",
scheduleName: "monthly-backup",
intervalSeconds: 2678400, // 31 days in seconds
description: "Expected interval should be 31 days in seconds",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
m := NewServerMetrics()
m.SetScheduleExpectedIntervalSeconds(tc.scheduleName, tc.intervalSeconds)
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), tc.scheduleName)
assert.Equal(t, tc.intervalSeconds, metric, tc.description)
})
}
}
// TestScheduleExpectedIntervalNotInitializedByDefault verifies that the expected
// interval metric is not initialized by InitSchedule, so it only appears for
// schedules with a valid cron expression.
func TestScheduleExpectedIntervalNotInitializedByDefault(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
// The metric should not have any values after InitSchedule
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should not be initialized by InitSchedule")
}
// TestRemoveScheduleCleansUpExpectedInterval verifies that RemoveSchedule
// cleans up the expected interval metric.
func TestRemoveScheduleCleansUpExpectedInterval(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
m.SetScheduleExpectedIntervalSeconds("test-schedule", 3600)
// Verify metric exists
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), "test-schedule")
assert.Equal(t, float64(3600), metric)
// Remove schedule and verify metric is cleaned up
m.RemoveSchedule("test-schedule")
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should be removed after RemoveSchedule")
}
// TestInitScheduleWithEmptyName verifies that InitSchedule works correctly
// with an empty schedule name (for adhoc backups).
func TestInitScheduleWithEmptyName(t *testing.T) {

View File

@@ -149,8 +149,7 @@ func (b *objectBackupStoreGetter) Get(location *velerov1api.BackupStorageLocatio
// if there are any slashes in the middle of 'bucket', the user
// probably put <bucket>/<prefix> in the bucket field, which we
// don't support.
// Exception: MRAP ARNs (arn:aws:s3::...) legitimately contain slashes.
if strings.Contains(bucket, "/") && !strings.HasPrefix(bucket, "arn:aws:s3:") {
if strings.Contains(bucket, "/") {
return nil, errors.Errorf("backup storage location's bucket name %q must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)", location.Spec.ObjectStorage.Bucket)
}
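With the MRAP exception reverted, the check is back to a plain substring test. A self-contained sketch of the restored validation (hypothetical helper name; the real code wraps the error with github.com/pkg/errors and reads the bucket from the BSL spec):

```go
package main

import (
	"fmt"
	"strings"
)

// validateBucket sketches the restored check above: any '/' in the bucket
// name is rejected, with no carve-out for ARN-style names.
func validateBucket(bucket string) error {
	if strings.Contains(bucket, "/") {
		return fmt.Errorf("backup storage location's bucket name %q must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)", bucket)
	}
	return nil
}

func main() {
	fmt.Println(validateBucket("my-bucket"))                // <nil>
	fmt.Println(validateBucket("my-bucket/backups") != nil) // true
}
```

Note the restored check also rejects MRAP ARNs again, since those legitimately contain slashes.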

View File

@@ -943,24 +943,6 @@ func TestNewObjectBackupStoreGetter(t *testing.T) {
wantBucket: "bucket",
wantPrefix: "prefix/",
},
{
name: "when the Bucket field is an MRAP ARN, it should be valid",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
{
name: "when the Bucket field is an MRAP ARN with trailing slash, it should be valid and trimmed",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap/").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
}
for _, tc := range tests {

View File

@@ -38,7 +38,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/constant"
velerolabel "github.com/vmware-tanzu/velero/pkg/label"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
@@ -575,32 +574,15 @@ func buildJob(
// Set resource limits and requests
cpuRequest := DefaultMaintenanceJobCPURequest
memRequest := DefaultMaintenanceJobMemRequest
ephemeralStorageRequest := constant.DefaultEphemeralStorageRequest
cpuLimit := DefaultMaintenanceJobCPULimit
memLimit := DefaultMaintenanceJobMemLimit
ephemeralStorageLimit := constant.DefaultEphemeralStorageLimit
if config != nil && config.PodResources != nil {
cpuRequest = config.PodResources.CPURequest
memRequest = config.PodResources.MemoryRequest
cpuLimit = config.PodResources.CPULimit
memLimit = config.PodResources.MemoryLimit
// To keep PodResources ConfigMaps that omit the ephemeral storage request/limit backward compatible,
// avoid setting the value to an empty string, because an empty string fails to parse.
if config.PodResources.EphemeralStorageRequest != "" {
ephemeralStorageRequest = config.PodResources.EphemeralStorageRequest
}
if config.PodResources.EphemeralStorageLimit != "" {
ephemeralStorageLimit = config.PodResources.EphemeralStorageLimit
}
}
resources, err := kube.ParseResourceRequirements(
cpuRequest,
memRequest,
ephemeralStorageRequest,
cpuLimit,
memLimit,
ephemeralStorageLimit,
)
resources, err := kube.ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit)
if err != nil {
return nil, errors.Wrap(err, "failed to parse resource requirements for maintenance job")
}
@@ -689,7 +671,8 @@ func buildJob(
}
if config != nil && len(config.LoadAffinities) > 0 {
affinity := kube.ToSystemAffinity(config.LoadAffinities[0], nil)
// Maintenance job only takes the first loadAffinity.
affinity := kube.ToSystemAffinity([]*kube.LoadAffinity{config.LoadAffinities[0]})
job.Spec.Template.Spec.Affinity = affinity
}
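`ToSystemAffinity` now takes a slice of `LoadAffinity` entries, and the maintenance job passes only the first. A simplified sketch of the slice-based conversion with local stand-in types, assuming each entry maps to its own node-selector term (the real helper lives in pkg/util/kube and emits corev1 NodeAffinity):

```go
package main

import "fmt"

// Local stand-ins; the real types are kube.LoadAffinity (wrapping a
// metav1.LabelSelector) and corev1api.Affinity.
type requirement struct {
	Key      string
	Operator string
	Values   []string
}

type loadAffinity struct{ MatchExpressions []requirement }

// toSystemAffinity sketches the new slice-based signature: each non-empty
// LoadAffinity becomes one node-selector term (terms are ORed under
// Kubernetes node-affinity semantics).
func toSystemAffinity(affinities []*loadAffinity) [][]requirement {
	var terms [][]requirement
	for _, a := range affinities {
		if a == nil || len(a.MatchExpressions) == 0 {
			continue
		}
		terms = append(terms, a.MatchExpressions)
	}
	return terms
}

func main() {
	first := &loadAffinity{MatchExpressions: []requirement{
		{Key: "kubernetes.io/arch", Operator: "In", Values: []string{"amd64"}},
	}}
	// The maintenance job only takes the first loadAffinity.
	terms := toSystemAffinity([]*loadAffinity{first})
	fmt.Println(len(terms)) // 1
}
```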

View File

@@ -163,19 +163,12 @@ func (a *PodVolumeRestoreAction) Execute(input *velero.RestoreItemActionExecuteI
memLimit = defaultMemRequestLimit
}
resourceReqs, err := kube.ParseCPUAndMemoryResources(
cpuRequest,
memRequest,
cpuLimit,
memLimit,
)
resourceReqs, err := kube.ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit)
if err != nil {
log.Errorf("couldn't parse resource requirements: %s.", err)
resourceReqs, _ = kube.ParseCPUAndMemoryResources(
defaultCPURequestLimit,
defaultMemRequestLimit,
defaultCPURequestLimit,
defaultMemRequestLimit,
resourceReqs, _ = kube.ParseResourceRequirements(
defaultCPURequestLimit, defaultMemRequestLimit, // requests
defaultCPURequestLimit, defaultMemRequestLimit, // limits
)
}
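The plugin now parses the configured requests/limits via `kube.ParseResourceRequirements` and, on a parse error, logs and falls back to the defaults instead of failing the restore. A toy sketch of that parse-then-fallback pattern, with a simplified CPU-quantity parser standing in for the Kubernetes `resource.Quantity` parsing:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUMillis is a toy stand-in for resource.ParseQuantity on CPU values:
// it accepts millicores like "500m" or whole cores like "2".
func parseCPUMillis(s string) (int64, error) {
	if strings.HasSuffix(s, "m") {
		return strconv.ParseInt(strings.TrimSuffix(s, "m"), 10, 64)
	}
	cores, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("couldn't parse CPU quantity %q: %w", s, err)
	}
	return cores * 1000, nil
}

// cpuOrDefault mirrors the plugin's behavior: on a parse error, fall back to
// the default rather than failing the restore.
func cpuOrDefault(configured, def string) int64 {
	if v, err := parseCPUMillis(configured); err == nil {
		return v
	}
	v, _ := parseCPUMillis(def) // defaults are assumed to be well-formed
	return v
}

func main() {
	fmt.Println(cpuOrDefault("250m", "100m")) // 250
	fmt.Println(cpuOrDefault("oops", "100m")) // 100
}
```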

View File

@@ -117,11 +117,9 @@ func TestGetImage(t *testing.T) {
// TestPodVolumeRestoreActionExecute tests the pod volume restore item action plugin's Execute method.
func TestPodVolumeRestoreActionExecute(t *testing.T) {
resourceReqs, _ := kube.ParseCPUAndMemoryResources(
defaultCPURequestLimit,
defaultMemRequestLimit,
defaultCPURequestLimit,
defaultMemRequestLimit,
resourceReqs, _ := kube.ParseResourceRequirements(
defaultCPURequestLimit, defaultMemRequestLimit, // requests
defaultCPURequestLimit, defaultMemRequestLimit, // limits
)
id := int64(1000)
securityContext := corev1api.SecurityContext{


@@ -35,7 +35,6 @@ type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
targetFile *os.File
}
var _ restore.Output = &BlockOutput{}
@@ -53,7 +52,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
if err != nil {
return errors.Wrapf(err, "failed to open file %s", o.targetFileName)
}
o.targetFile = targetFile
defer targetFile.Close()
buffer := make([]byte, bufferSize)
@@ -102,23 +101,3 @@ func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e
return nil
}
func (o *BlockOutput) Flush() error {
if o.targetFile != nil {
if err := o.targetFile.Sync(); err != nil {
return errors.Wrapf(err, "error syncing block dev %v", o.targetFileName)
}
}
return nil
}
func (o *BlockOutput) Terminate() error {
if o.targetFile != nil {
if err := o.targetFile.Close(); err != nil {
return errors.Wrapf(err, "error closing block dev %v", o.targetFileName)
}
}
return nil
}


@@ -40,11 +40,3 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
return fmt.Errorf("block mode is not supported for Windows")
}
func (o *BlockOutput) Flush() error {
return flushVolume(o.targetFileName)
}
func (o *BlockOutput) Terminate() error {
return nil
}


@@ -1,50 +0,0 @@
//go:build linux
// +build linux
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"os"
"github.com/pkg/errors"
"golang.org/x/sys/unix"
)
func flushVolume(dirPath string) error {
dir, err := os.Open(dirPath)
if err != nil {
return errors.Wrapf(err, "error opening dir %v", dirPath)
}
raw, err := dir.SyscallConn()
if err != nil {
return errors.Wrapf(err, "error getting handle of dir %v", dirPath)
}
var syncErr error
if err := raw.Control(func(fd uintptr) {
if e := unix.Syncfs(int(fd)); e != nil {
syncErr = e
}
}); err != nil {
return errors.Wrapf(err, "error calling fs sync from %v", dirPath)
}
return errors.Wrapf(syncErr, "error syncing fs from %v", dirPath)
}
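
The removed flushVolume reaches the directory's raw file descriptor through SyscallConn/Control before calling unix.Syncfs on it. A minimal stdlib-only sketch of that fd-extraction pattern (the Syncfs call itself is omitted here, since it lives in golang.org/x/sys/unix):

```go
package main

import (
	"fmt"
	"os"
)

// rawFD demonstrates the SyscallConn/Control pattern the removed
// flushVolume used to obtain a descriptor for unix.Syncfs; the actual
// syscall is left out to keep the sketch standard-library only.
func rawFD(path string) (uintptr, error) {
	dir, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer dir.Close()
	raw, err := dir.SyscallConn()
	if err != nil {
		return 0, err
	}
	var fd uintptr
	// Control runs the callback with the descriptor held valid,
	// which is where flushVolume invoked unix.Syncfs(int(fd)).
	if err := raw.Control(func(d uintptr) { fd = d }); err != nil {
		return 0, err
	}
	return fd, nil
}

func main() {
	fd, err := rawFD(".")
	fmt.Println(fd != 0, err == nil)
}
```

The Control callback is required because the descriptor is only guaranteed valid for the callback's duration; copying it out, as above, is for illustration only.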


@@ -1,24 +0,0 @@
//go:build !linux
// +build !linux
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
func flushVolume(_ string) error {
return errFlushUnsupported
}


@@ -1,30 +0,0 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"github.com/kopia/kopia/snapshot/restore"
"github.com/pkg/errors"
)
var errFlushUnsupported = errors.New("flush is not supported")
type RestoreOutput interface {
restore.Output
Flush() error
Terminate() error
}


@@ -53,7 +53,6 @@ var loadSnapshotFunc = snapshot.LoadSnapshot
var listSnapshotsFunc = snapshot.ListSnapshots
var filesystemEntryFunc = snapshotfs.FilesystemEntryFromIDWithPath
var restoreEntryFunc = restore.Entry
var flushVolumeFunc = flushVolume
const UploaderConfigMultipartKey = "uploader-multipart"
const MaxErrorReported = 10
@@ -376,18 +375,6 @@ func findPreviousSnapshotManifest(ctx context.Context, rep repo.Repository, sour
return result, nil
}
type fileSystemRestoreOutput struct {
*restore.FilesystemOutput
}
func (o *fileSystemRestoreOutput) Flush() error {
return flushVolumeFunc(o.TargetPath)
}
func (o *fileSystemRestoreOutput) Terminate() error {
return nil
}
// Restore restores the specific sourcePath with the given snapshotID and updates progress
func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress, snapshotID, dest string, volMode uploader.PersistentVolumeMode, uploaderCfg map[string]string,
log logrus.FieldLogger, cancleCh chan struct{}) (int64, int32, error) {
@@ -447,23 +434,13 @@ func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress,
return 0, 0, errors.Wrap(err, "error to init output")
}
var output RestoreOutput
var output restore.Output = fsOutput
if volMode == uploader.PersistentVolumeBlock {
output = &BlockOutput{
FilesystemOutput: fsOutput,
}
} else {
output = &fileSystemRestoreOutput{
FilesystemOutput: fsOutput,
}
}
defer func() {
if err := output.Terminate(); err != nil {
log.Warnf("error terminating restore output for %v", path)
}
}()
stat, err := restoreEntryFunc(kopiaCtx, rep, output, rootEntry, restore.Options{
Parallel: restoreConcurrency,
RestoreDirEntryAtDepth: math.MaxInt32,
@@ -476,16 +453,5 @@ func Restore(ctx context.Context, rep repo.RepositoryWriter, progress *Progress,
if err != nil {
return 0, 0, errors.Wrapf(err, "Failed to copy snapshot data to the target")
}
if err := output.Flush(); err != nil {
if err == errFlushUnsupported {
log.Warnf("Skip flushing data for %v under the current OS %v", path, runtime.GOOS)
} else {
return 0, 0, errors.Wrapf(err, "Failed to flush data to target")
}
} else {
log.Infof("Flush done for volume dir %v", path)
}
return stat.RestoredTotalFileSize, stat.RestoredFileCount, nil
}
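
After the diff, Restore no longer wraps the filesystem output in a Flush/Terminate interface: the kopia FilesystemOutput is used directly, and only block mode substitutes a wrapper. A self-contained sketch of that selection, using simplified stand-ins for kopia's restore.Output types (the real interface has more methods):

```go
package main

import "fmt"

// Output is a simplified stand-in for kopia's restore.Output interface.
type Output interface {
	Kind() string
}

type FilesystemOutput struct{}

func (FilesystemOutput) Kind() string { return "filesystem" }

// BlockOutput embeds FilesystemOutput, as in the Velero code, and
// overrides behavior for block-mode volumes.
type BlockOutput struct{ FilesystemOutput }

func (BlockOutput) Kind() string { return "block" }

// pickOutput mirrors the simplified logic after the change: the
// filesystem output is used as-is unless the volume mode is block.
func pickOutput(volMode string) Output {
	fs := FilesystemOutput{}
	if volMode == "Block" {
		return BlockOutput{FilesystemOutput: fs}
	}
	return fs
}

func main() {
	fmt.Println(pickOutput("Block").Kind())      // block
	fmt.Println(pickOutput("Filesystem").Kind()) // filesystem
}
```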


@@ -675,7 +675,6 @@ func TestRestore(t *testing.T) {
invalidManifestType bool
filesystemEntryFunc func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error)
restoreEntryFunc func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error)
flushVolumeFunc func(string) error
dest string
expectedBytes int64
expectedCount int32
@@ -758,30 +757,6 @@ func TestRestore(t *testing.T) {
volMode: uploader.PersistentVolumeBlock,
dest: "/tmp",
},
{
name: "Flush is not supported",
filesystemEntryFunc: func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error) {
return snapshotfs.EntryFromDirEntry(rep, &snapshot.DirEntry{Type: snapshot.EntryTypeFile}), nil
},
restoreEntryFunc: func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error) {
return restore.Stats{}, nil
},
flushVolumeFunc: func(string) error { return errFlushUnsupported },
snapshotID: "snapshot-123",
expectedError: nil,
},
{
name: "Flush fails",
filesystemEntryFunc: func(ctx context.Context, rep repo.Repository, rootID string, consistentAttributes bool) (fs.Entry, error) {
return snapshotfs.EntryFromDirEntry(rep, &snapshot.DirEntry{Type: snapshot.EntryTypeFile}), nil
},
restoreEntryFunc: func(ctx context.Context, rep repo.Repository, output restore.Output, rootEntry fs.Entry, options restore.Options) (restore.Stats, error) {
return restore.Stats{}, nil
},
flushVolumeFunc: func(string) error { return errors.New("fake-flush-error") },
snapshotID: "snapshot-123",
expectedError: errors.New("fake-flush-error"),
},
}
em := &manifest.EntryMetadata{
@@ -809,10 +784,6 @@ func TestRestore(t *testing.T) {
restoreEntryFunc = tc.restoreEntryFunc
}
if tc.flushVolumeFunc != nil {
flushVolumeFunc = tc.flushVolumeFunc
}
repoWriterMock := &repomocks.RepositoryWriter{}
repoWriterMock.On("GetManifest", mock.Anything, mock.Anything, mock.Anything).Return(em, nil)
repoWriterMock.On("OpenObject", mock.Anything, mock.Anything).Return(em, nil)


@@ -17,6 +17,7 @@ package kube
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -33,11 +34,6 @@ const (
NodeOSLabel = "kubernetes.io/os"
)
var realNodeOSMap = map[string]string{
"linux": NodeOSLinux,
"windows": NodeOSWindows,
}
func IsLinuxNode(ctx context.Context, nodeName string, client client.Client) error {
node := &corev1api.Node{}
if err := client.Get(ctx, types.NamespacedName{Name: nodeName}, node); err != nil {
@@ -45,11 +41,12 @@ func IsLinuxNode(ctx context.Context, nodeName string, client client.Client) err
}
os, found := node.Labels[NodeOSLabel]
if !found {
return errors.Errorf("no os type label for node %s", nodeName)
}
if getRealOS(os) != NodeOSLinux {
if os != NodeOSLinux {
return errors.Errorf("os type %s for node %s is not linux", os, nodeName)
}
@@ -75,7 +72,7 @@ func withOSNode(ctx context.Context, client client.Client, osType string, log lo
for _, node := range nodeList.Items {
os, found := node.Labels[NodeOSLabel]
if getRealOS(os) == osType {
if os == osType {
return true
}
@@ -101,7 +98,7 @@ func GetNodeOS(ctx context.Context, nodeName string, nodeClient corev1client.Cor
return "", nil
}
return getRealOS(node.Labels[NodeOSLabel]), nil
return node.Labels[NodeOSLabel], nil
}
func HasNodeWithOS(ctx context.Context, os string, nodeClient corev1client.CoreV1Interface) error {
@@ -109,29 +106,14 @@ func HasNodeWithOS(ctx context.Context, os string, nodeClient corev1client.CoreV
return errors.New("invalid node OS")
}
nodes, err := nodeClient.Nodes().List(ctx, metav1.ListOptions{})
nodes, err := nodeClient.Nodes().List(ctx, metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s", NodeOSLabel, os)})
if err != nil {
return errors.Wrapf(err, "error listing nodes with OS %s", os)
}
for _, node := range nodes.Items {
osLabel, found := node.Labels[NodeOSLabel]
if !found {
continue
}
if getRealOS(osLabel) == os {
return nil
}
if len(nodes.Items) == 0 {
return errors.Errorf("node with OS %s doesn't exist", os)
}
return errors.Errorf("node with OS %s doesn't exist", os)
}
func getRealOS(osLabel string) string {
if os, found := realNodeOSMap[osLabel]; !found {
return NodeOSLinux
} else {
return os
}
return nil
}
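
The HasNodeWithOS change moves filtering to the API server: instead of listing all nodes and normalizing labels client-side with getRealOS, it lists with a label selector and checks for an empty result. A stdlib-only sketch of the two pieces involved:

```go
package main

import "fmt"

const NodeOSLabel = "kubernetes.io/os"

// osSelector builds the server-side label selector the new
// HasNodeWithOS passes to the List call, e.g. "kubernetes.io/os=linux".
func osSelector(os string) string {
	return fmt.Sprintf("%s=%s", NodeOSLabel, os)
}

// hasNodeWithOS mimics the new check against an already-filtered list:
// because the API server returns only matching nodes, an empty result
// means no node with that OS exists.
func hasNodeWithOS(filteredCount int, os string) error {
	if filteredCount == 0 {
		return fmt.Errorf("node with OS %s doesn't exist", os)
	}
	return nil
}

func main() {
	fmt.Println(osSelector("linux")) // kubernetes.io/os=linux
	fmt.Println(hasNodeWithOS(0, "windows"))
}
```

One consequence of dropping getRealOS: nodes whose os label carries an unrecognized value are no longer mapped to linux by default, so only exact label matches count.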


@@ -40,12 +40,10 @@ type LoadAffinity struct {
}
type PodResources struct {
CPURequest string `json:"cpuRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
EphemeralStorageRequest string `json:"ephemeralStorageRequest,omitempty"`
EphemeralStorageLimit string `json:"ephemeralStorageLimit,omitempty"`
CPURequest string `json:"cpuRequest,omitempty"`
MemoryRequest string `json:"memoryRequest,omitempty"`
CPULimit string `json:"cpuLimit,omitempty"`
MemoryLimit string `json:"memoryLimit,omitempty"`
}
// IsPodRunning does a well-rounded check to make sure the specified pod is running stably.
@@ -232,9 +230,14 @@ func CollectPodLogs(ctx context.Context, podGetter corev1client.CoreV1Interface,
return nil
}
func ToSystemAffinity(loadAffinity *LoadAffinity, volumeTopology *corev1api.NodeSelector) *corev1api.Affinity {
requirements := []corev1api.NodeSelectorRequirement{}
if loadAffinity != nil {
func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
if len(loadAffinities) == 0 {
return nil
}
nodeSelectorTermList := make([]corev1api.NodeSelectorTerm, 0)
for _, loadAffinity := range loadAffinities {
requirements := []corev1api.NodeSelectorRequirement{}
for k, v := range loadAffinity.NodeSelector.MatchLabels {
requirements = append(requirements, corev1api.NodeSelectorRequirement{
Key: k,
@@ -250,25 +253,25 @@ func ToSystemAffinity(loadAffinity *LoadAffinity, volumeTopology *corev1api.Node
Operator: corev1api.NodeSelectorOperator(exp.Operator),
})
}
nodeSelectorTermList = append(
nodeSelectorTermList,
corev1api.NodeSelectorTerm{
MatchExpressions: requirements,
},
)
}
result := new(corev1api.Affinity)
result.NodeAffinity = new(corev1api.NodeAffinity)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = new(corev1api.NodeSelector)
if len(nodeSelectorTermList) > 0 {
result := new(corev1api.Affinity)
result.NodeAffinity = new(corev1api.NodeAffinity)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = new(corev1api.NodeSelector)
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = nodeSelectorTermList
if volumeTopology != nil {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = append(result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms, volumeTopology.NodeSelectorTerms...)
} else if len(requirements) > 0 {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = make([]corev1api.NodeSelectorTerm, 1)
} else {
return nil
return result
}
for i := range result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[i].MatchExpressions = append(result.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[i].MatchExpressions, requirements...)
}
return result
return nil
}
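
The new ToSystemAffinity signature folds each LoadAffinity into its own NodeSelectorTerm; since the scheduler ORs terms together, passing multiple affinities widens the set of eligible nodes. A minimal sketch of that mapping, using simplified stand-in types rather than the real corev1api/kube ones:

```go
package main

import "fmt"

// Simplified stand-ins for kube.LoadAffinity and the corev1api node
// selector types (the real structs carry more fields).
type LoadAffinity struct {
	MatchLabels map[string]string
}

type NodeSelectorRequirement struct {
	Key, Operator string
	Values        []string
}

type NodeSelectorTerm struct {
	MatchExpressions []NodeSelectorRequirement
}

// toTerms mirrors the loop in ToSystemAffinity: one term per affinity,
// so separate affinities are ORed while requirements within one
// affinity are ANDed.
func toTerms(affinities []*LoadAffinity) []NodeSelectorTerm {
	if len(affinities) == 0 {
		return nil
	}
	terms := make([]NodeSelectorTerm, 0, len(affinities))
	for _, a := range affinities {
		reqs := []NodeSelectorRequirement{}
		for k, v := range a.MatchLabels {
			reqs = append(reqs, NodeSelectorRequirement{Key: k, Operator: "In", Values: []string{v}})
		}
		terms = append(terms, NodeSelectorTerm{MatchExpressions: reqs})
	}
	return terms
}

func main() {
	out := toTerms([]*LoadAffinity{
		{MatchLabels: map[string]string{"zone": "a"}},
		{MatchLabels: map[string]string{"zone": "b"}},
	})
	fmt.Println(len(out)) // 2
}
```

This OR semantics is why the previous version could merge a single affinity's requirements into each volume-topology term, while the new version keeps each affinity as a separate term.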
func DiagnosePod(pod *corev1api.Pod, events *corev1api.EventList) string {


@@ -747,23 +747,24 @@ func TestCollectPodLogs(t *testing.T) {
func TestToSystemAffinity(t *testing.T) {
tests := []struct {
name string
loadAffinity *LoadAffinity
volumeTopology *corev1api.NodeSelector
loadAffinities []*LoadAffinity
expected *corev1api.Affinity
}{
{
name: "loadAffinity is nil",
},
{
name: "loadAffinity is empty",
loadAffinity: &LoadAffinity{},
name: "loadAffinity is empty",
loadAffinities: []*LoadAffinity{},
},
{
name: "with match label",
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
},
},
},
},
@@ -787,21 +788,23 @@ func TestToSystemAffinity(t *testing.T) {
},
{
name: "with match expression",
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
},
},
},
},
@@ -835,49 +838,19 @@ func TestToSystemAffinity(t *testing.T) {
},
},
{
name: "with volume topology",
volumeTopology: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
name: "multiple load affinities",
loadAffinities: []*LoadAffinity{
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-1": "value-1",
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
},
},
@@ -889,177 +862,10 @@ func TestToSystemAffinity(t *testing.T) {
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
},
},
},
},
{
name: "with match expression and volume topology",
loadAffinity: &LoadAffinity{
NodeSelector: metav1.LabelSelector{
MatchLabels: map[string]string{
"key-2": "value-2",
},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: metav1.LabelSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: metav1.LabelSelectorOpDoesNotExist,
},
},
},
},
volumeTopology: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
{
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
},
expected: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-5",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-6",
Values: []string{"value-5-1", "value-5-2", "value-5-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-2",
Values: []string{"value-2"},
Key: "key-1",
Values: []string{"value-1"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
},
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "key-7",
Values: []string{"value-7-1", "value-7-2", "value-7-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-8",
Values: []string{"value-8-1", "value-8-2", "value-8-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-2",
Values: []string{"value-2"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
},
{
@@ -1069,28 +875,6 @@ func TestToSystemAffinity(t *testing.T) {
Values: []string{"value-2"},
Operator: corev1api.NodeSelectorOpIn,
},
{
Key: "key-3",
Values: []string{"value-3-1", "value-3-2"},
Operator: corev1api.NodeSelectorOpNotIn,
},
{
Key: "key-4",
Values: []string{"value-4-1", "value-4-2", "value-4-3"},
Operator: corev1api.NodeSelectorOpDoesNotExist,
},
},
MatchFields: []corev1api.NodeSelectorRequirement{
{
Key: "key-9",
Values: []string{"value-9-1", "value-9-2", "value-9-3"},
Operator: corev1api.NodeSelectorOpGt,
},
{
Key: "key-a",
Values: []string{"value-a-1", "value-a-2", "value-a-3"},
Operator: corev1api.NodeSelectorOpGt,
},
},
},
},
@@ -1102,7 +886,7 @@ func TestToSystemAffinity(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
affinity := ToSystemAffinity(test.loadAffinity, test.volumeTopology)
affinity := ToSystemAffinity(test.loadAffinities)
assert.True(t, reflect.DeepEqual(affinity, test.expected))
})
}


@@ -580,29 +580,3 @@ func GetPVAttachedNodes(ctx context.Context, pv string, storageClient storagev1.
return nodes, nil
}
func GetVolumeTopology(ctx context.Context, volumeClient corev1client.CoreV1Interface, storageClient storagev1.StorageV1Interface, pvName string, scName string) (*corev1api.NodeSelector, error) {
if pvName == "" || scName == "" {
return nil, errors.Errorf("invalid parameter, pv %s, sc %s", pvName, scName)
}
sc, err := storageClient.StorageClasses().Get(ctx, scName, metav1.GetOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error getting storage class %s", scName)
}
if sc.VolumeBindingMode == nil || *sc.VolumeBindingMode != storagev1api.VolumeBindingWaitForFirstConsumer {
return nil, nil
}
pv, err := volumeClient.PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error getting PV %s", pvName)
}
if pv.Spec.NodeAffinity == nil {
return nil, nil
}
return pv.Spec.NodeAffinity.Required, nil
}


@@ -1909,143 +1909,3 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
})
}
}
func TestGetVolumeTopology(t *testing.T) {
pvWithoutNodeAffinity := &corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pv",
},
}
pvWithNodeAffinity := &corev1api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-pv",
},
Spec: corev1api.PersistentVolumeSpec{
NodeAffinity: &corev1api.VolumeNodeAffinity{
Required: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "fake-key",
},
},
},
},
},
},
},
}
scObjWithoutVolumeBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
}
volumeBindImmediate := storagev1api.VolumeBindingImmediate
scObjWithImeediateBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
VolumeBindingMode: &volumeBindImmediate,
}
volumeBindWffc := storagev1api.VolumeBindingWaitForFirstConsumer
scObjWithWffcBind := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
VolumeBindingMode: &volumeBindWffc,
}
tests := []struct {
name string
pvName string
scName string
kubeClientObj []runtime.Object
expectedErr string
expected *corev1api.NodeSelector
}{
{
name: "invalid pvName",
scName: "fake-storage-class",
expectedErr: "invalid parameter, pv , sc fake-storage-class",
},
{
name: "invalid scName",
pvName: "fake-pv",
expectedErr: "invalid parameter, pv fake-pv, sc ",
},
{
name: "no sc",
pvName: "fake-pv",
scName: "fake-storage-class",
expectedErr: "error getting storage class fake-storage-class: storageclasses.storage.k8s.io \"fake-storage-class\" not found",
},
{
name: "sc without binding mode",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithoutVolumeBind},
},
{
name: "sc with immediate binding mode",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithImeediateBind},
},
{
name: "get pv fail",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{scObjWithWffcBind},
expectedErr: "error getting PV fake-pv: persistentvolumes \"fake-pv\" not found",
},
{
name: "pv with no affinity",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{
scObjWithWffcBind,
pvWithoutNodeAffinity,
},
},
{
name: "pv with affinity",
pvName: "fake-pv",
scName: "fake-storage-class",
kubeClientObj: []runtime.Object{
scObjWithWffcBind,
pvWithNodeAffinity,
},
expected: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "fake-key",
},
},
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
fakeKubeClient := fake.NewSimpleClientset(test.kubeClientObj...)
var kubeClient kubernetes.Interface = fakeKubeClient
affinity, err := GetVolumeTopology(t.Context(), kubeClient.CoreV1(), kubeClient.StorageV1(), test.pvName, test.scName)
if test.expectedErr != "" {
assert.EqualError(t, err, test.expectedErr)
} else {
assert.Equal(t, test.expected, affinity)
}
})
}
}


@@ -20,34 +20,12 @@ import (
"github.com/pkg/errors"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/vmware-tanzu/velero/pkg/constant"
)
// ParseCPUAndMemoryResources is a helper function that parses CPU and memory requests and limits,
// using default values for ephemeral storage.
func ParseCPUAndMemoryResources(cpuRequest, memRequest, cpuLimit, memLimit string) (corev1api.ResourceRequirements, error) {
return ParseResourceRequirements(
cpuRequest,
memRequest,
constant.DefaultEphemeralStorageRequest,
cpuLimit,
memLimit,
constant.DefaultEphemeralStorageLimit,
)
}
// ParseResourceRequirements takes a set of CPU, memory, ephemeral storage requests and limit string
// ParseResourceRequirements takes a set of CPU and memory requests and limit string
// values and returns a ResourceRequirements struct to be used in a Container.
// An error is returned if we cannot parse the request/limit.
func ParseResourceRequirements(
cpuRequest,
memRequest,
ephemeralStorageRequest,
cpuLimit,
memLimit,
ephemeralStorageLimit string,
) (corev1api.ResourceRequirements, error) {
func ParseResourceRequirements(cpuRequest, memRequest, cpuLimit, memLimit string) (corev1api.ResourceRequirements, error) {
resources := corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{},
Limits: corev1api.ResourceList{},
@@ -63,11 +41,6 @@ func ParseResourceRequirements(
return resources, errors.Wrapf(err, `couldn't parse memory request "%s"`, memRequest)
}
parsedEphemeralStorageRequest, err := resource.ParseQuantity(ephemeralStorageRequest)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse ephemeral storage request "%s"`, ephemeralStorageRequest)
}
parsedCPULimit, err := resource.ParseQuantity(cpuLimit)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse CPU limit "%s"`, cpuLimit)
@@ -78,11 +51,6 @@ func ParseResourceRequirements(
return resources, errors.Wrapf(err, `couldn't parse memory limit "%s"`, memLimit)
}
parsedEphemeralStorageLimit, err := resource.ParseQuantity(ephemeralStorageLimit)
if err != nil {
return resources, errors.Wrapf(err, `couldn't parse ephemeral storage limit "%s"`, ephemeralStorageLimit)
}
// A quantity of 0 is treated as unbounded
unbounded := resource.MustParse("0")
@@ -94,10 +62,6 @@ func ParseResourceRequirements(
return resources, errors.WithStack(errors.Errorf(`Memory request "%s" must be less than or equal to Memory limit "%s"`, memRequest, memLimit))
}
if parsedEphemeralStorageLimit != unbounded && parsedEphemeralStorageRequest.Cmp(parsedEphemeralStorageLimit) > 0 {
return resources, errors.WithStack(errors.Errorf(`Ephemeral storage request "%s" must be less than or equal to Ephemeral storage limit "%s"`, ephemeralStorageRequest, ephemeralStorageLimit))
}
// Only set resources if they are not unbounded
if parsedCPURequest != unbounded {
resources.Requests[corev1api.ResourceCPU] = parsedCPURequest
@@ -105,18 +69,12 @@ func ParseResourceRequirements(
if parsedMemRequest != unbounded {
resources.Requests[corev1api.ResourceMemory] = parsedMemRequest
}
if parsedEphemeralStorageRequest != unbounded {
resources.Requests[corev1api.ResourceEphemeralStorage] = parsedEphemeralStorageRequest
}
if parsedCPULimit != unbounded {
resources.Limits[corev1api.ResourceCPU] = parsedCPULimit
}
if parsedMemLimit != unbounded {
resources.Limits[corev1api.ResourceMemory] = parsedMemLimit
}
if parsedEphemeralStorageLimit != unbounded {
resources.Limits[corev1api.ResourceEphemeralStorage] = parsedEphemeralStorageLimit
}
return resources, nil
}
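
ParseResourceRequirements treats a quantity of "0" as unbounded and simply omits that resource from the resulting lists. A stdlib-only sketch of that rule, using plain strings in place of the real resource.Quantity values (the real code compares parsed quantities, not raw strings):

```go
package main

import "fmt"

// buildResourceList mirrors the "0 means unbounded" rule in
// ParseResourceRequirements: unbounded resources are left unset rather
// than recorded as zero.
func buildResourceList(values map[string]string) map[string]string {
	out := map[string]string{}
	for name, v := range values {
		if v == "0" { // unbounded: leave the resource unset
			continue
		}
		out[name] = v
	}
	return out
}

func main() {
	reqs := buildResourceList(map[string]string{"cpu": "100m", "memory": "0"})
	fmt.Println(reqs) // map[cpu:100m]
}
```

Leaving unbounded resources unset matters because an explicit zero request or limit has different scheduling semantics in Kubernetes than an absent one.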


@@ -27,12 +27,10 @@ import (
func TestParseResourceRequirements(t *testing.T) {
type args struct {
cpuRequest string
memRequest string
ephemeralStorageRequest string
cpuLimit string
memLimit string
ephemeralStorageLimit string
cpuRequest string
memRequest string
cpuLimit string
memLimit string
}
tests := []struct {
name string
@@ -40,61 +38,43 @@ func TestParseResourceRequirements(t *testing.T) {
wantErr bool
expected *corev1api.ResourceRequirements
}{
{"unbounded quantities", args{"0", "0", "0", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
{"unbounded quantities", args{"0", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{},
Limits: corev1api.ResourceList{},
}},
{"valid quantities", args{"100m", "128Mi", "5Gi", "200m", "256Mi", "10Gi"}, false, nil},
{"CPU request with unbounded limit", args{"100m", "128Mi", "5Gi", "0", "256Mi", "10Gi"}, false, &corev1api.ResourceRequirements{
{"valid quantities", args{"100m", "128Mi", "200m", "256Mi"}, false, nil},
{"CPU request with unbounded limit", args{"100m", "128Mi", "0", "256Mi"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceMemory: resource.MustParse("256Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("10Gi"),
},
}},
{"Mem request with unbounded limit", args{"100m", "128Mi", "5Gi", "200m", "0", "10Gi"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
corev1api.ResourceEphemeralStorage: resource.MustParse("10Gi"),
},
}},
{"Ephemeral storage request with unbounded limit", args{"100m", "128Mi", "5Gi", "200m", "256Mi", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
corev1api.ResourceMemory: resource.MustParse("256Mi"),
},
}},
{"CPU/Mem/EphemeralStorage requests with unbounded limits", args{"100m", "128Mi", "5Gi", "0", "0", "0"}, false, &corev1api.ResourceRequirements{
{"Mem request with unbounded limit", args{"100m", "128Mi", "200m", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
corev1api.ResourceEphemeralStorage: resource.MustParse("5Gi"),
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("200m"),
},
}},
{"CPU/Mem requests with unbounded limits", args{"100m", "128Mi", "0", "0"}, false, &corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse("100m"),
corev1api.ResourceMemory: resource.MustParse("128Mi"),
},
Limits: corev1api.ResourceList{},
}},
{"invalid quantity", args{"100m", "invalid", "1Gi", "200m", "256Mi", "valid"}, true, nil},
{"CPU request greater than limit", args{"300m", "128Mi", "5Gi", "200m", "256Mi", "10Gi"}, true, nil},
{"memory request greater than limit", args{"100m", "512Mi", "5Gi", "200m", "256Mi", "10Gi"}, true, nil},
{"ephemeral storage request greater than limit", args{"100m", "128Mi", "10Gi", "200m", "256Mi", "5Gi"}, true, nil},
{"invalid quantity", args{"100m", "invalid", "200m", "256Mi"}, true, nil},
{"CPU request greater than limit", args{"300m", "128Mi", "200m", "256Mi"}, true, nil},
{"memory request greater than limit", args{"100m", "512Mi", "200m", "256Mi"}, true, nil},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseResourceRequirements(tt.args.cpuRequest, tt.args.memRequest, tt.args.ephemeralStorageRequest, tt.args.cpuLimit, tt.args.memLimit, tt.args.ephemeralStorageLimit)
got, err := ParseResourceRequirements(tt.args.cpuRequest, tt.args.memRequest, tt.args.cpuLimit, tt.args.memLimit)
if tt.wantErr {
assert.Error(t, err)
return
@@ -105,14 +85,12 @@ func TestParseResourceRequirements(t *testing.T) {
if tt.expected == nil {
expected = corev1api.ResourceRequirements{
Requests: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuRequest),
corev1api.ResourceMemory: resource.MustParse(tt.args.memRequest),
corev1api.ResourceEphemeralStorage: resource.MustParse(tt.args.ephemeralStorageRequest),
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuRequest),
corev1api.ResourceMemory: resource.MustParse(tt.args.memRequest),
},
Limits: corev1api.ResourceList{
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuLimit),
corev1api.ResourceMemory: resource.MustParse(tt.args.memLimit),
corev1api.ResourceEphemeralStorage: resource.MustParse(tt.args.ephemeralStorageLimit),
corev1api.ResourceCPU: resource.MustParse(tt.args.cpuLimit),
corev1api.ResourceMemory: resource.MustParse(tt.args.memLimit),
},
}
} else {


@@ -156,7 +156,7 @@ func TestGetVolumesByPod(t *testing.T) {
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from PVB because volume mounting default service account token
/// Excluded from PVB because volume mounting default service account token
{Name: "default-token-5xq45"},
},
},

site/algolia-crawler.json Normal file

@@ -0,0 +1,90 @@
new Crawler({
rateLimit: 8,
maxDepth: 10,
startUrls: ["https://velero.io/docs", "https://velero.io/"],
renderJavaScript: false,
sitemaps: ["https://velero.io/sitemap.xml"],
ignoreCanonicalTo: false,
discoveryPatterns: ["https://velero.io/**"],
schedule: "at 6:39 PM on Friday",
actions: [
{
indexName: "velero_new",
pathsToMatch: ["https://velero.io/docs**/**"],
recordExtractor: ({ helpers }) => {
return helpers.docsearch({
recordProps: {
lvl1: ["header h1", "article h1", "main h1", "h1", "head > title"],
content: ["article p, article li", "main p, main li", "p, li"],
lvl0: {
defaultValue: "Documentation",
},
lvl2: ["article h2", "main h2", "h2"],
lvl3: ["article h3", "main h3", "h3"],
lvl4: ["article h4", "main h4", "h4"],
lvl5: ["article h5", "main h5", "h5"],
lvl6: ["article h6", "main h6", "h6"],
version: "#dropdownMenuButton",
},
aggregateContent: true,
recordVersion: "v3",
});
},
},
],
initialIndexSettings: {
velero_new: {
attributesForFaceting: ["type", "lang", "version"],
attributesToRetrieve: [
"hierarchy",
"content",
"anchor",
"url",
"url_without_anchor",
"type",
"version",
],
attributesToHighlight: ["hierarchy", "content"],
attributesToSnippet: ["content:10"],
camelCaseAttributes: ["hierarchy", "content"],
searchableAttributes: [
"unordered(hierarchy.lvl0)",
"unordered(hierarchy.lvl1)",
"unordered(hierarchy.lvl2)",
"unordered(hierarchy.lvl3)",
"unordered(hierarchy.lvl4)",
"unordered(hierarchy.lvl5)",
"unordered(hierarchy.lvl6)",
"content",
],
distinct: true,
attributeForDistinct: "url",
customRanking: [
"desc(weight.pageRank)",
"desc(weight.level)",
"asc(weight.position)",
],
ranking: [
"words",
"filters",
"typo",
"attribute",
"proximity",
"exact",
"custom",
],
highlightPreTag: '<span class="algolia-docsearch-suggestion--highlight">',
highlightPostTag: "</span>",
minWordSizefor1Typo: 3,
minWordSizefor2Typos: 7,
allowTyposOnNumericTokens: false,
minProximity: 1,
ignorePlurals: true,
advancedSyntax: true,
attributeCriteriaComputedByMinProximity: true,
removeWordsIfNoResults: "allOptional",
},
},
appId: "9ASKQJ1HR3",
apiKey: "6392a5916af73b73df2406d3aef5ca45",
});


@@ -12,7 +12,7 @@ params:
hero:
backgroundColor: med-blue
versioning: true
latest: v1.18
latest: v1.17
versions:
- main
- v1.18


@@ -63,10 +63,6 @@ spec:
# CSI VolumeSnapshot status turns to ReadyToUse during creation, before
# returning error as timeout. The default value is 10 minutes.
csiSnapshotTimeout: 10m
# ItemOperationTimeout specifies the time used to wait for
# asynchronous BackupItemAction operations
# The default value is 4 hours.
itemOperationTimeout: 4h
# resourcePolicy specifies the referenced resource policies that backup should follow
# optional
resourcePolicy:


@@ -7,7 +7,7 @@ During [CSI Snapshot Data Movement][1], Velero built-in data mover launches data
During [fs-backup][2], Velero also launches data mover pods to run the data transfer.
The data transfer is a time and resource consuming activity.
Velero by default uses the [BestEffort QoS][2] for the data mover pods, which guarantees the best performance of the data movement activities. On the other hand, it may consume a lot of cluster resources, i.e., CPU, memory, and ephemeral storage; how much is consumed depends on the concurrency and the scale of the data to be moved.
Velero by default uses the [BestEffort QoS][2] for the data mover pods, which guarantees the best performance of the data movement activities. On the other hand, it may consume a lot of cluster resources, i.e., CPU and memory; how much is consumed depends on the concurrency and the scale of the data to be moved.
If the cluster nodes don't have sufficient resources, Velero also allows you to customize the resources for the data mover pods.
Note: If fewer resources are assigned to data mover pods, the data movement activities may take longer; or the data mover pods may be OOM killed if the assigned memory doesn't meet the requirements. Consequently, the dataUpload/dataDownload may run longer or fail.
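The request/limit validation behind these settings can be sketched in plain Go. This is a simplified illustration, not Velero's actual `ParseResourceRequirements`: the hypothetical `parseQuantity` helper handles only the quantity suffixes used in the samples here (`m`, `Mi`, `Gi`), and `"0"` or an empty string is treated as an unbounded limit, matching the test cases above.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseQuantity converts a small subset of Kubernetes quantity strings
// ("100m" CPU millicores, "128Mi"/"5Gi" binary memory sizes) to a
// comparable value in base units. Simplified stand-in for
// k8s.io/apimachinery's resource.ParseQuantity, for illustration only.
func parseQuantity(s string) (float64, error) {
	switch {
	case strings.HasSuffix(s, "m"):
		n, err := strconv.ParseFloat(strings.TrimSuffix(s, "m"), 64)
		return n / 1000, err
	case strings.HasSuffix(s, "Mi"):
		n, err := strconv.ParseFloat(strings.TrimSuffix(s, "Mi"), 64)
		return n * 1024 * 1024, err
	case strings.HasSuffix(s, "Gi"):
		n, err := strconv.ParseFloat(strings.TrimSuffix(s, "Gi"), 64)
		return n * 1024 * 1024 * 1024, err
	default:
		return strconv.ParseFloat(s, 64)
	}
}

// validate mirrors the documented rule: a request must not exceed its
// limit, and a limit of "0" (or empty) means the limit is unbounded.
func validate(request, limit string) error {
	if limit == "" || limit == "0" {
		return nil // unbounded limit: any request is acceptable
	}
	req, err := parseQuantity(request)
	if err != nil {
		return err
	}
	lim, err := parseQuantity(limit)
	if err != nil {
		return err
	}
	if req > lim {
		return fmt.Errorf("request %s exceeds limit %s", request, limit)
	}
	return nil
}

func main() {
	fmt.Println(validate("100m", "200m")) // within limit: <nil>
	fmt.Println(validate("512Mi", "0"))   // unbounded limit: <nil>
	fmt.Println(validate("300m", "200m")) // error: request exceeds limit
}
```

This matches the behavior exercised by the test table above: a request greater than its limit is rejected, while an unbounded (`"0"`) limit accepts any request.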
@@ -25,8 +25,6 @@ Here is a sample of the configMap with ```podResources```:
"podResources": {
"cpuRequest": "1000m",
"cpuLimit": "1000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "512Mi",
"memoryLimit": "1Gi"
}


@@ -72,8 +72,6 @@ data:
"podResources": {
"cpuRequest": "100m",
"cpuLimit": "200m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "100Mi",
"memoryLimit": "200Mi"
},
@@ -101,8 +99,6 @@ data:
"podResources": {
"cpuRequest": "200m",
"cpuLimit": "400m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "200Mi",
"memoryLimit": "400Mi"
},


@@ -224,7 +224,7 @@ Configure different node selection rules for specific storage classes:
```
### Pod Resources (`podResources`)
Configure CPU, memory, and ephemeral storage resources for Data Mover Pods to optimize performance and prevent resource conflicts.
Configure CPU and memory resources for Data Mover Pods to optimize performance and prevent resource conflicts.
The configurations work for PodVolumeBackup, PodVolumeRestore, DataUpload, and DataDownload pods.
@@ -233,8 +233,6 @@ The configurations work for PodVolumeBackup, PodVolumeRestore, DataUpload, and D
"podResources": {
"cpuRequest": "1000m",
"cpuLimit": "2000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "1Gi",
"memoryLimit": "4Gi"
}
@@ -537,8 +535,6 @@ Here's a comprehensive example showing how all configuration sections work toget
"podResources": {
"cpuRequest": "500m",
"cpuLimit": "1000m",
"ephemeralStorageRequest": "5Gi",
"ephemeralStorageLimit": "10Gi",
"memoryRequest": "1Gi",
"memoryLimit": "2Gi"
},


@@ -1,13 +1,13 @@
---
title: "Upgrading to Velero 1.18"
title: "Upgrading to Velero 1.17"
layout: docs
---
## Prerequisites
- Velero [v1.17.x][9] installed.
- Velero [v1.16.x][9] installed.
If you're not yet running at least Velero v1.17, see the following:
If you're not yet running at least Velero v1.16, see the following:
- [Upgrading to v1.8][1]
- [Upgrading to v1.9][2]
@@ -18,14 +18,13 @@ If you're not yet running at least Velero v1.17, see the following:
- [Upgrading to v1.14][7]
- [Upgrading to v1.15][8]
- [Upgrading to v1.16][9]
- [Upgrading to v1.17][10]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
### Upgrade from v1.17
1. Install the Velero v1.18 command-line interface (CLI) by following the [instructions here][0].
### Upgrade from v1.16
1. Install the Velero v1.17 command-line interface (CLI) by following the [instructions here][0].
Verify that you've properly installed it by running:
@@ -37,7 +36,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.18.0
Version: v1.17.0
Git commit: <git SHA>
```
@@ -47,21 +46,28 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
3. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
3. (optional) Update the `uploader-type` to `kopia` if you are using `restic`:
```bash
kubectl get deploy -n velero -ojson \
| sed "s/\"--uploader-type=restic\"/\"--uploader-type=kopia\"/g" \
| kubectl apply -f -
```
4. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
```bash
# set the container and image of the init container for plugin accordingly,
# if you are using another plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.18.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.14.0 \
velero=velero/velero:v1.17.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.13.0 \
--namespace velero
# optional, if using the node agent daemonset
kubectl set image daemonset/node-agent \
node-agent=velero/velero:v1.18.0 \
node-agent=velero/velero:v1.17.0 \
--namespace velero
```
4. Confirm that the deployment is up and running with the correct version by running:
5. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
@@ -71,11 +77,11 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.18.0
Version: v1.17.0
Git commit: <git SHA>
Server:
Version: v1.18.0
Version: v1.17.0
```
[0]: basic-install.md#install-the-cli
@@ -87,5 +93,4 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
[6]: https://velero.io/docs/v1.13/upgrade-to-1.13
[7]: https://velero.io/docs/v1.14/upgrade-to-1.14
[8]: https://velero.io/docs/v1.15/upgrade-to-1.15
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[10]: https://velero.io/docs/v1.17/upgrade-to-1.17
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16


@@ -1,13 +1,13 @@
---
title: "Upgrading to Velero 1.18"
title: "Upgrading to Velero 1.17"
layout: docs
---
## Prerequisites
- Velero [v1.17.x][9] installed.
- Velero [v1.16.x][9] installed.
If you're not yet running at least Velero v1.17, see the following:
If you're not yet running at least Velero v1.16, see the following:
- [Upgrading to v1.8][1]
- [Upgrading to v1.9][2]
@@ -18,14 +18,13 @@ If you're not yet running at least Velero v1.17, see the following:
- [Upgrading to v1.14][7]
- [Upgrading to v1.15][8]
- [Upgrading to v1.16][9]
- [Upgrading to v1.17][10]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
## Instructions
### Upgrade from v1.17
1. Install the Velero v1.18 command-line interface (CLI) by following the [instructions here][0].
### Upgrade from v1.16
1. Install the Velero v1.17 command-line interface (CLI) by following the [instructions here][0].
Verify that you've properly installed it by running:
@@ -37,7 +36,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.18.0
Version: v1.17.0
Git commit: <git SHA>
```
@@ -47,21 +46,28 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
3. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
3. (optional) Update the `uploader-type` to `kopia` if you are using `restic`:
```bash
kubectl get deploy -n velero -ojson \
| sed "s/\"--uploader-type=restic\"/\"--uploader-type=kopia\"/g" \
| kubectl apply -f -
```
4. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
```bash
# set the container and image of the init container for plugin accordingly,
# if you are using another plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.18.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.14.0 \
velero=velero/velero:v1.17.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.13.0 \
--namespace velero
# optional, if using the node agent daemonset
kubectl set image daemonset/node-agent \
node-agent=velero/velero:v1.18.0 \
node-agent=velero/velero:v1.17.0 \
--namespace velero
```
4. Confirm that the deployment is up and running with the correct version by running:
5. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
@@ -71,11 +77,11 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.18.0
Version: v1.17.0
Git commit: <git SHA>
Server:
Version: v1.18.0
Version: v1.17.0
```
[0]: basic-install.md#install-the-cli
@@ -87,5 +93,4 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
[6]: https://velero.io/docs/v1.13/upgrade-to-1.13
[7]: https://velero.io/docs/v1.14/upgrade-to-1.14
[8]: https://velero.io/docs/v1.15/upgrade-to-1.15
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16
[10]: https://velero.io/docs/v1.17/upgrade-to-1.17
[9]: https://velero.io/docs/v1.16/upgrade-to-1.16


@@ -13,8 +13,8 @@ toc:
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.18
url: /upgrade-to-1.18
- page: Upgrade to 1.17
url: /upgrade-to-1.17
- page: Supported providers
url: /supported-providers
- page: Evaluation install


@@ -13,8 +13,8 @@ toc:
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.18
url: /upgrade-to-1.18
- page: Upgrade to 1.17
url: /upgrade-to-1.17
- page: Supported providers
url: /supported-providers
- page: Evaluation install


@@ -27,6 +27,16 @@
<div class="col-md-3 toc">
{{ .Render "versions" }}
<br/>
<div id="docsearch">
<!-- <form class="d-flex align-items-center">
<span class="algolia-autocomplete" style="position: relative; display: inline-block; direction: ltr;">
<input type="search" class="form-control docsearch" id="search-input" placeholder="Search..."
aria-label="Search for..." autocomplete="off" spellcheck="false" role="combobox"
aria-autocomplete="list" aria-expanded="false" aria-owns="algolia-autocomplete-listbox-0"
dir="auto" style="position: relative; vertical-align: top;">
</span>
</form> -->
</div>
{{ .Render "nav" }}
</div>
<div class="col-md-8">
@@ -48,6 +58,16 @@
{{ .Render "footer" }}
</div>
</div>
<script src="https://cdn.jsdelivr.net/npm/@docsearch/js@3"></script>
<script type="text/javascript"> docsearch({
appId: '9ASKQJ1HR3',
apiKey: '170ba79bfa16cebfdf10726ae4771d7e',
indexName: 'velero_new',
container: '#docsearch',
searchParameters: {
facetFilters: ["version:{{ .CurrentSection.Params.version }}"]},
});
</script>
</body>
</html>


@@ -8,4 +8,6 @@
{{ $styles := resources.Get "styles.scss" | toCSS $options | resources.Fingerprint }}
<link rel="stylesheet" href="{{ $styles.RelPermalink }}" integrity="{{ $styles.Data.Integrity }}">
{{/* TODO {% seo %}*/}}
<link rel="preconnect" href="https://9ASKQJ1HR3-dsn.algolia.net" crossorigin />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@docsearch/css@3" />
</head>


@@ -1,150 +0,0 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package basic
import (
"fmt"
"path"
"strings"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/vmware-tanzu/velero/test/e2e/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
"github.com/vmware-tanzu/velero/test/util/common"
. "github.com/vmware-tanzu/velero/test/util/k8s"
)
// RestoreExecHooks tests that a pod with multiple restore exec hooks does not hang
// at the Finalizing phase during restore (Issue #9359 / PR #9366).
type RestoreExecHooks struct {
TestCase
podName string
}
var RestoreExecHooksTest func() = test.TestFunc(&RestoreExecHooks{})
func (r *RestoreExecHooks) Init() error {
Expect(r.TestCase.Init()).To(Succeed())
r.CaseBaseName = "restore-exec-hooks-" + r.UUIDgen
r.BackupName = "backup-" + r.CaseBaseName
r.RestoreName = "restore-" + r.CaseBaseName
r.podName = "pod-multiple-hooks"
r.NamespacesTotal = 1
r.NSIncluded = &[]string{}
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
*r.NSIncluded = append(*r.NSIncluded, createNSName)
}
r.TestMsg = &test.TestMSG{
Desc: "Restore pod with multiple restore exec hooks",
Text: "Should successfully backup and restore without hanging at Finalizing phase",
FailedMSG: "Failed to successfully backup and restore pod with multiple hooks",
}
r.BackupArgs = []string{
"create", "--namespace", r.VeleroCfg.VeleroNamespace, "backup", r.BackupName,
"--include-namespaces", strings.Join(*r.NSIncluded, ","),
"--default-volumes-to-fs-backup", "--wait",
}
r.RestoreArgs = []string{
"create", "--namespace", r.VeleroCfg.VeleroNamespace, "restore", r.RestoreName,
"--from-backup", r.BackupName, "--wait",
}
return nil
}
func (r *RestoreExecHooks) CreateResources() error {
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
By(fmt.Sprintf("Creating namespace %s", createNSName), func() {
Expect(CreateNamespace(r.Ctx, r.Client, createNSName)).
To(Succeed(), fmt.Sprintf("Failed to create namespace %s", createNSName))
})
// Prepare images and commands adaptively for the target OS
imageAddress := LinuxTestImage
initCommand := `["/bin/sh", "-c", "echo init-hook-done"]`
execCommand1 := `["/bin/sh", "-c", "echo hook1"]`
execCommand2 := `["/bin/sh", "-c", "echo hook2"]`
if r.VeleroCfg.WorkerOS == common.WorkerOSLinux && r.VeleroCfg.ImageRegistryProxy != "" {
imageAddress = path.Join(r.VeleroCfg.ImageRegistryProxy, LinuxTestImage)
} else if r.VeleroCfg.WorkerOS == common.WorkerOSWindows {
imageAddress = WindowTestImage
initCommand = `["cmd", "/c", "echo init-hook-done"]`
execCommand1 = `["cmd", "/c", "echo hook1"]`
execCommand2 = `["cmd", "/c", "echo hook2"]`
}
// Inject a mix of an InitContainer restore hook and multiple Exec post-restore hooks.
// This guarantees that the loop index 'i' mismatches 'hook.hookIndex' (Issue #9359),
// ensuring the bug is properly reproduced and the fix is verified.
ann := map[string]string{
// Inject InitContainer Restore Hook
"init.hook.restore.velero.io/container-image": imageAddress,
"init.hook.restore.velero.io/container-name": "test-init-hook",
"init.hook.restore.velero.io/command": initCommand,
// Inject multiple Exec Restore Hooks
"post.hook.restore.velero.io/test1.command": execCommand1,
"post.hook.restore.velero.io/test1.container": r.podName,
"post.hook.restore.velero.io/test2.command": execCommand2,
"post.hook.restore.velero.io/test2.container": r.podName,
}
By(fmt.Sprintf("Creating pod %s with multiple restore hooks in namespace %s", r.podName, createNSName), func() {
_, err := CreatePod(
r.Client,
createNSName,
r.podName,
"", // No storage class needed
"", // No PVC needed
[]string{}, // No volumes
nil,
ann,
r.VeleroCfg.ImageRegistryProxy,
r.VeleroCfg.WorkerOS,
)
Expect(err).To(Succeed(), fmt.Sprintf("Failed to create pod with hooks in namespace %s", createNSName))
})
By(fmt.Sprintf("Waiting for pod %s to be ready", r.podName), func() {
err := WaitForPods(r.Ctx, r.Client, createNSName, []string{r.podName})
Expect(err).To(Succeed(), fmt.Sprintf("Failed to wait for pod %s in namespace %s", r.podName, createNSName))
})
}
return nil
}
func (r *RestoreExecHooks) Verify() error {
for nsNum := 0; nsNum < r.NamespacesTotal; nsNum++ {
createNSName := fmt.Sprintf("%s-%00000d", r.CaseBaseName, nsNum)
By(fmt.Sprintf("Verifying pod %s in namespace %s after restore", r.podName, createNSName), func() {
err := WaitForPods(r.Ctx, r.Client, createNSName, []string{r.podName})
Expect(err).To(Succeed(), fmt.Sprintf("Failed to verify pod %s in namespace %s after restore", r.podName, createNSName))
})
}
return nil
}


@@ -440,12 +440,6 @@ var _ = Describe(
StorageClasssChangingTest,
)
var _ = Describe(
"Restore phase does not block at Finalizing when a container has multiple exec hooks",
Label("Basic", "Hooks"),
RestoreExecHooksTest,
)
var _ = Describe(
"Backup/restore of 2500 namespaces",
Label("Scale", "LongTime"),
@@ -500,11 +494,6 @@ var _ = Describe(
Label("ResourceFiltering", "IncludeNamespaces", "Restore"),
RestoreWithIncludeNamespaces,
)
var _ = Describe(
"Velero test on backup/restore with wildcard namespaces",
Label("ResourceFiltering", "WildcardNamespaces"),
WildcardNamespacesTest,
)
var _ = Describe(
"Velero test on include resources from the cluster backup",
Label("ResourceFiltering", "IncludeResources", "Backup"),


@@ -36,6 +36,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util/kube"
velerokubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/test"
. "github.com/vmware-tanzu/velero/test/e2e/test"
k8sutil "github.com/vmware-tanzu/velero/test/util/k8s"
@@ -239,13 +240,9 @@ func (n *NodeAgentConfigTestCase) Backup() error {
Expect(backupPodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In backup, only the second element of LoadAffinity array should be used.
expectedLabelKey, _, ok := popFromMap(n.nodeAgentConfigs.LoadAffinity[1].NodeSelector.MatchLabels)
Expect(ok).To(BeTrue(), "Expected LoadAffinity's MatchLabels should at least have one key-value pair")
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[1:])
// From 1.18.1, Velero adds some default affinity in the backup/restore pod,
// so we can't directly compare the whole affinity,
// but we can verify that the expected affinity is contained in the pod affinity.
Expect(backupPodList.Items[0].Spec.Affinity.String()).To(ContainSubstring(expectedLabelKey))
Expect(backupPodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
fmt.Println("backupPod content verification completed successfully.")
@@ -320,13 +317,9 @@ func (n *NodeAgentConfigTestCase) Restore() error {
Expect(restorePodList.Items[0].Spec.PriorityClassName).To(Equal(n.nodeAgentConfigs.PriorityClassName))
// In restore, only the first element of LoadAffinity array should be used.
expectedLabelKey, _, ok := popFromMap(n.nodeAgentConfigs.LoadAffinity[0].NodeSelector.MatchLabels)
Expect(ok).To(BeTrue(), "Expected LoadAffinity's MatchLabels should at least have one key-value pair")
expectedAffinity := velerokubeutil.ToSystemAffinity(n.nodeAgentConfigs.LoadAffinity[:1])
// From 1.18.1, Velero adds some default affinity in the backup/restore pod,
// so we can't directly compare the whole affinity,
// but we can verify that the expected affinity is contained in the pod affinity.
Expect(restorePodList.Items[0].Spec.Affinity.String()).To(ContainSubstring(expectedLabelKey))
Expect(restorePodList.Items[0].Spec.Affinity).To(Equal(expectedAffinity))
fmt.Println("restorePod content verification completed successfully.")
@@ -352,12 +345,3 @@ func (n *NodeAgentConfigTestCase) Restore() error {
return nil
}
func popFromMap[K comparable, V any](m map[K]V) (k K, v V, ok bool) {
for key, val := range m {
delete(m, key)
return key, val, true
}
return
}


@@ -70,12 +70,10 @@ var SpecificRepoMaintenanceTest func() = TestFunc(&RepoMaintenanceTestCase{
jobConfigs: velerotypes.JobConfigs{
KeepLatestMaintenanceJobs: &keepJobNum,
PodResources: &velerokubeutil.PodResources{
CPURequest: "100m",
MemoryRequest: "100Mi",
EphemeralStorageRequest: "5Gi",
CPULimit: "200m",
MemoryLimit: "200Mi",
EphemeralStorageLimit: "10Gi",
CPURequest: "100m",
MemoryRequest: "100Mi",
CPULimit: "200m",
MemoryLimit: "200Mi",
},
PriorityClassName: test.PriorityClassNameForRepoMaintenance,
},
@@ -232,10 +230,8 @@ func (r *RepoMaintenanceTestCase) Verify() error {
resources, err := kube.ParseResourceRequirements(
r.jobConfigs.PodResources.CPURequest,
r.jobConfigs.PodResources.MemoryRequest,
r.jobConfigs.PodResources.EphemeralStorageRequest,
r.jobConfigs.PodResources.CPULimit,
r.jobConfigs.PodResources.MemoryLimit,
r.jobConfigs.PodResources.EphemeralStorageLimit,
)
if err != nil {
return errors.Wrap(err, "failed to parse resource requirements for maintenance job")


@@ -1,143 +0,0 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package filtering
import (
"fmt"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
apierrors "k8s.io/apimachinery/pkg/api/errors"
. "github.com/vmware-tanzu/velero/test/e2e/test"
. "github.com/vmware-tanzu/velero/test/util/k8s"
)
// WildcardNamespaces tests the inclusion and exclusion of namespaces using wildcards
// introduced in PR #9255 (Issue #1874). It verifies filtering at both Backup and Restore stages.
type WildcardNamespaces struct {
TestCase // Inherit from basic TestCase instead of FilteringCase to customize a single flow
restoredNS []string
excludedByBackupNS []string
excludedByRestoreNS []string
}
// Register as a single E2E test
var WildcardNamespacesTest func() = TestFunc(&WildcardNamespaces{})
func (w *WildcardNamespaces) Init() error {
Expect(w.TestCase.Init()).To(Succeed())
w.CaseBaseName = "wildcard-ns-" + w.UUIDgen
w.BackupName = "backup-" + w.CaseBaseName
w.RestoreName = "restore-" + w.CaseBaseName
// 1. Define namespaces for different filtering lifecycle scenarios
nsIncBoth := w.CaseBaseName + "-inc-both" // Included in both backup and restore
nsExact := w.CaseBaseName + "-exact" // Included exactly without wildcards
nsIncExc := w.CaseBaseName + "-inc-exc" // Included in backup, but excluded during restore
nsBakExc := w.CaseBaseName + "-test-bak" // Excluded during backup
// Group namespaces for validation
w.restoredNS = []string{nsIncBoth, nsExact}
w.excludedByRestoreNS = []string{nsIncExc}
w.excludedByBackupNS = []string{nsBakExc}
w.TestMsg = &TestMSG{
Desc: "Backup and restore with wildcard namespaces",
Text: "Should correctly filter namespaces using wildcards during both backup and restore stages",
FailedMSG: "Failed to properly filter namespaces using wildcards",
}
// 2. Setup Backup Args
backupIncWildcard1 := fmt.Sprintf("%s-inc-*", w.CaseBaseName) // Matches nsIncBoth, nsIncExc
backupIncWildcard2 := fmt.Sprintf("%s-test-*", w.CaseBaseName) // Matches nsBakExc
backupExcWildcard := fmt.Sprintf("%s-test-bak", w.CaseBaseName) // Excludes nsBakExc
nonExistentWildcard := "non-existent-ns-*" // Tests zero-match boundary condition
w.BackupArgs = []string{
"create", "--namespace", w.VeleroCfg.VeleroNamespace, "backup", w.BackupName,
// Use broad wildcards for inclusion to bypass Velero CLI's literal string collision validation
"--include-namespaces", fmt.Sprintf("%s,%s,%s,%s", backupIncWildcard1, backupIncWildcard2, nsExact, nonExistentWildcard),
"--exclude-namespaces", backupExcWildcard,
"--default-volumes-to-fs-backup", "--wait",
}
// 3. Setup Restore Args
restoreExcWildcard := fmt.Sprintf("%s-*-exc", w.CaseBaseName) // Excludes nsIncExc
w.RestoreArgs = []string{
"create", "--namespace", w.VeleroCfg.VeleroNamespace, "restore", w.RestoreName,
"--from-backup", w.BackupName,
"--include-namespaces", fmt.Sprintf("%s,%s,%s", backupIncWildcard1, nsExact, nonExistentWildcard),
"--exclude-namespaces", restoreExcWildcard,
"--wait",
}
return nil
}
func (w *WildcardNamespaces) CreateResources() error {
allNamespaces := append(w.restoredNS, w.excludedByRestoreNS...)
allNamespaces = append(allNamespaces, w.excludedByBackupNS...)
for _, ns := range allNamespaces {
By(fmt.Sprintf("Creating namespace %s", ns), func() {
Expect(CreateNamespace(w.Ctx, w.Client, ns)).To(Succeed(), fmt.Sprintf("Failed to create namespace %s", ns))
})
// Create a ConfigMap in each namespace to verify resource restoration
cmName := "configmap-" + ns
By(fmt.Sprintf("Creating ConfigMap %s in namespace %s", cmName, ns), func() {
_, err := CreateConfigMap(w.Client.ClientGo, ns, cmName, map[string]string{"wildcard-test": "true"}, nil)
Expect(err).To(Succeed(), fmt.Sprintf("Failed to create configmap in namespace %s", ns))
})
}
return nil
}
func (w *WildcardNamespaces) Verify() error {
// 1. Verify namespaces that should be successfully restored
for _, ns := range w.restoredNS {
By(fmt.Sprintf("Checking included namespace %s exists", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(Succeed(), fmt.Sprintf("Included namespace %s should exist after restore", ns))
_, err = GetConfigMap(w.Client.ClientGo, ns, "configmap-"+ns)
Expect(err).To(Succeed(), fmt.Sprintf("ConfigMap in included namespace %s should exist", ns))
})
}
// 2. Verify namespaces excluded during Backup
for _, ns := range w.excludedByBackupNS {
By(fmt.Sprintf("Checking namespace %s excluded by backup does NOT exist", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(HaveOccurred(), fmt.Sprintf("Namespace %s excluded by backup should NOT exist after restore", ns))
Expect(apierrors.IsNotFound(err)).To(BeTrue(), "Error should be NotFound")
})
}
// 3. Verify namespaces excluded during Restore
for _, ns := range w.excludedByRestoreNS {
By(fmt.Sprintf("Checking namespace %s excluded by restore does NOT exist", ns), func() {
_, err := GetNamespace(w.Ctx, w.Client, ns)
Expect(err).To(HaveOccurred(), fmt.Sprintf("Namespace %s excluded by restore should NOT exist after restore", ns))
Expect(apierrors.IsNotFound(err)).To(BeTrue(), "Error should be NotFound")
})
}
return nil
}
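The include/exclude wildcard semantics this test exercises can be sketched with Go's standard-library `path.Match`. This is an illustrative approximation under stated assumptions, not Velero's actual implementation: `filterNamespaces` is a hypothetical helper that keeps a namespace when it matches any include pattern and no exclude pattern.

```go
package main

import (
	"fmt"
	"path"
)

// filterNamespaces returns the namespaces matched by at least one include
// pattern and not matched by any exclude pattern. Patterns use shell-style
// wildcards via path.Match. Illustrative approximation of wildcard
// namespace filtering; not Velero's actual implementation.
func filterNamespaces(all, includes, excludes []string) []string {
	matchAny := func(patterns []string, ns string) bool {
		for _, p := range patterns {
			if ok, _ := path.Match(p, ns); ok {
				return true
			}
		}
		return false
	}
	var out []string
	for _, ns := range all {
		if matchAny(includes, ns) && !matchAny(excludes, ns) {
			out = append(out, ns)
		}
	}
	return out
}

func main() {
	// Mirrors the test's namespace groups: included by both stages,
	// included exactly, excluded at restore, excluded at backup.
	all := []string{"demo-inc-both", "demo-inc-exc", "demo-test-bak", "demo-exact"}
	got := filterNamespaces(all,
		[]string{"demo-inc-*", "demo-exact"}, // include patterns
		[]string{"demo-*-exc"})               // exclude pattern
	fmt.Println(got) // [demo-inc-both demo-exact]
}
```

Applying the include patterns first and then removing exclude matches reproduces the test's expectation: `demo-inc-exc` is included by `demo-inc-*` but then dropped by `demo-*-exc`, while a pattern that matches nothing (like the test's `non-existent-ns-*`) is simply a no-op.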