Compare commits

...

41 Commits

Author SHA1 Message Date
Xun Jiang
8704b4d7f8 Add Windows support for release dev branch.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-31 11:45:21 +08:00
lyndon-li
4ce4a4803d Merge pull request #9376 from Lyndon-Li/release-1.17
issue 9365: prevent multiple update of PVR
2025-10-29 15:51:18 +08:00
Lyndon-Li
ec7fe10816 issue 9365: prevent multiple update of PVR
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-10-29 15:01:33 +08:00
Wenkai Yin(尹文开)
3ae7183473 Merge pull request #9371 from blackpiglet/1.17.1_bump
Bump base image and Golang version for v1.17.1
2025-10-28 17:58:42 +08:00
Xun Jiang
bd4c53d13e Bump base image and Golang version for v1.17.1
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-28 15:36:01 +08:00
lyndon-li
988bfa55d4 Merge pull request #9341 from Lyndon-Li/release-1.17
[1.17] issue 9332: make bytesDone correct for incremental backup
2025-10-17 11:08:40 +08:00
Lyndon-Li
71ad893618 issue 9332: make bytesDone correct for incremental backup
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-10-17 10:45:41 +08:00
Xun Jiang/Bruce Jiang
1f32333aaa VerifyJSONConfigs verifies every element in Data. (#9303)
Add error message in the velero install CLI output if VerifyJSONConfigs fails.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-04 23:45:26 -04:00
lyndon-li
8ad7827f05 Merge pull request #9300 from sseago/privileged-fs-backup-pods-1.17
[release-1.17] Privileged fs backup pods 1.17
2025-09-28 11:37:06 +08:00
lyndon-li
d0c176077b Merge branch 'release-1.17' into privileged-fs-backup-pods-1.17 2025-09-28 10:54:14 +08:00
Scott Seago
1ca4c54c60 Add option for privileged fs-backup pod
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-26 13:40:39 -04:00
Shubham Pampattiwar
d2eafe63ed Fix maintenance jobs toleration inheritance from Velero deployment (#9299)
fix codespell and add changelog file

(cherry picked from commit 5ba00dfb09)

update changelog filename

update changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-26 11:14:23 -04:00
lyndon-li
14e2e25801 Merge pull request #9297 from Lyndon-Li/release-1.17
[1.17] backupPVC to different node
2025-09-25 13:33:28 +08:00
Lyndon-Li
cf9e7c5fcb backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-25 11:21:18 +08:00
Lyndon-Li
2e00746550 backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-25 11:18:34 +08:00
Shubham Pampattiwar
99b2c57511 Merge pull request #9292 from Lyndon-Li/release-1.17
[1.17] Issue #9247: Protect VolumeSnapshot field from race condition
2025-09-23 06:45:44 -07:00
0xLeo258
d1f7f152b7 Add built-in mutex for SynchronizedVSList && Update unit tests
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:56:16 +08:00
0xLeo258
d82af8d8b5 add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:51:32 +08:00
0xLeo258
a46b86fa29 fix9247: Protect VolumeSnapshot field
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:51:21 +08:00
lyndon-li
96d5fb7210 Merge pull request #9290 from Lyndon-Li/release-1.17
[1.17] Issue #9234: Fix plugin reentry with safe VolumeSnapshotterCache
2025-09-23 13:50:22 +08:00
0xLeo258
c71e065863 add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:19:36 +08:00
0xLeo258
60338d9740 fix 9234: Add safe VolumeSnapshotterCache
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:15:19 +08:00
lyndon-li
afa71e9e03 Merge pull request #9277 from shubham-pampattiwar/fix-backup-q-accum-cp
Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
2025-09-19 11:59:11 +08:00
lyndon-li
fc877dd2dc Merge branch 'release-1.17' into fix-backup-q-accum-cp 2025-09-19 11:30:11 +08:00
lyndon-li
bb147b972b Merge pull request #9285 from priyansh17/release-1.17
Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
2025-09-19 11:26:17 +08:00
lyndon-li
079394cd4f Merge branch 'release-1.17' into release-1.17 2025-09-19 10:54:48 +08:00
lyndon-li
fc4394964f Merge pull request #9282 from kaovilai/bitnamiminio-1.17
1.17: Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
2025-09-19 10:54:15 +08:00
Priyansh Choudhary
85f2f23076 Added changelog
Signed-off-by: Priyansh Choudhary im1706@gmail.com
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:20:14 +05:30
Priyansh Choudhary
aa71b53490 Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:20:13 +05:30
Tiger Kaovilai
30cf11a6b1 Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
The Bitnami MinIO image bitnami/minio:2021.6.17-debian-10-r7 is no longer
available on Docker Hub, causing E2E tests to fail.

This change implements a solution to build the MinIO image locally from
Bitnami's public Dockerfile and cache it for subsequent runs:
- Fetches the latest commit hash of the Bitnami MinIO Dockerfile
- Uses GitHub Actions cache to store/retrieve built images
- Only rebuilds when the upstream Dockerfile changes
- Maintains compatibility with existing environment variables

Fixes #9279

🤖 Generated with [Claude Code](https://claude.ai/code)

Update .github/workflows/e2e-test-kind.yaml

Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-09-18 08:53:28 -04:00
Shubham Pampattiwar
f404ff207d Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 59289fba76)

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-17 09:44:44 -07:00
lyndon-li
690b074891 Merge pull request #9266 from sseago/iba-perf-1.17
[release-1.17] Get pod list once per namespace in pvc IBA
2025-09-17 11:15:16 +08:00
Scott Seago
c188c454d7 Get pod list once per namespace in pvc IBA
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-16 17:39:30 -04:00
Wenkai Yin(尹文开)
18a690d69e Merge pull request #9220 from kaovilai/9173-release-1.17
release-1.17: feat: Permit specifying annotations for the BackupPVC #9173
2025-09-15 17:05:10 +08:00
lyndon-li
6986cde4d3 Merge branch 'release-1.17' into 9173-release-1.17 2025-09-15 15:46:44 +08:00
lyndon-li
3172d9f99c Merge pull request #9228 from vmware-tanzu/bump_k8s_lib_to_1.33
Bump k8s library to v1.33.
2025-09-09 17:33:34 +08:00
Xun Jiang
c34865a5fc Bump k8s library to v1.33.
Replace deprecated EventExpansion method with WithContext methods.
Modify UTs.
Align the E2E ginkgo CLI version with go.mod

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-08 20:37:51 +08:00
Clément Nussbaumer
344b09a582 test: fix backuppvc annotations test case
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 08:57:12 -05:00
Clément Nussbaumer
53c46b01c7 feat: Permit specifying annotations for the BackupPVC
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 08:57:12 -05:00
lyndon-li
a10f413cab Merge pull request #9216 from Lyndon-Li/release-1.17
Pin version of golang and base image
2025-08-28 15:34:43 +08:00
Lyndon-Li
de1cd6dcbf pin version of golang and base image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-08-28 15:04:51 +08:00
56 changed files with 922 additions and 209 deletions


@@ -11,6 +11,8 @@ jobs:
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v5
@@ -44,6 +46,26 @@ jobs:
run: |
IMAGE=velero VERSION=pr-test BUILD_OUTPUT_TYPE=docker make container
docker save velero:pr-test-linux-amd64 -o ./velero.tar
# Check and build MinIO image once for all e2e tests
- name: Check Bitnami MinIO Dockerfile version
id: minio-version
run: |
DOCKERFILE_SHA=$(curl -s https://api.github.com/repos/bitnami/containers/commits?path=bitnami/minio/2025/debian-12/Dockerfile\&per_page=1 | jq -r '.[0].sha')
echo "dockerfile_sha=${DOCKERFILE_SHA}" >> $GITHUB_OUTPUT
- name: Cache MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ steps.minio-version.outputs.dockerfile_sha }}
- name: Build MinIO Image from Bitnami Dockerfile
if: steps.minio-cache.outputs.cache-hit != 'true'
run: |
echo "Building MinIO image from Bitnami Dockerfile..."
git clone --depth 1 https://github.com/bitnami/containers.git /tmp/bitnami-containers
cd /tmp/bitnami-containers/bitnami/minio/2025/debian-12
docker build -t bitnami/minio:local .
docker save bitnami/minio:local > ${{ github.workspace }}/minio-image.tar
# Create json of k8s versions to test
# from guide: https://stackoverflow.com/a/65094398/4590470
setup-test-matrix:
@@ -86,9 +108,20 @@ jobs:
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ needs.build.outputs.minio-dockerfile-sha }}
- name: Load MinIO Image
run: |
echo "Loading MinIO image..."
docker load < ./minio-image.tar
- name: Install MinIO
run:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
run: |
docker run -d --rm -p 9000:9000 -e "MINIO_ROOT_USER=minio" -e "MINIO_ROOT_PASSWORD=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:local
- uses: engineerd/setup-kind@v0.6.2
with:
skipClusterLogsExport: true


@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.24.9-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.24.9-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.78
LABEL maintainer="Xun Jiang <jxun@vmware.com>"


@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.24.9-bookworm AS velero-builder
ARG GOPROXY
ARG BIN


@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.24 as tilt-helper
FROM golang:1.24.9 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \


@@ -0,0 +1 @@
feat: Permit specifying annotations for the BackupPVC


@@ -0,0 +1 @@
Update AzureAD Microsoft Authentication Library to v1.5.0


@@ -0,0 +1 @@
Get pod list once per namespace in pvc IBA


@@ -0,0 +1 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases


@@ -0,0 +1 @@
Backport to 1.17 (PR#9244 Update AzureAD Microsoft Authentication Library to v1.5.0)


@@ -0,0 +1 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.


@@ -0,0 +1 @@
Protect VolumeSnapshot field from race condition during multi-thread backup


@@ -0,0 +1 @@
Fix issue #9229, don't attach backupPVC to the source node


@@ -0,0 +1 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment


@@ -0,0 +1 @@
Add option for privileged fs-backup pod


@@ -0,0 +1 @@
VerifyJSONConfigs verifies every element in Data.


@@ -0,0 +1 @@
Fix issue #9332, add bytesDone for cache files


@@ -0,0 +1 @@
Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR

go.mod (60 changed lines)

@@ -1,6 +1,8 @@
module github.com/vmware-tanzu/velero
go 1.24
go 1.24.0
toolchain go1.24.9
require (
cloud.google.com/go/storage v1.55.0
@@ -17,7 +19,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/s3 v1.48.0
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7
github.com/bombsimon/logrusr/v3 v3.0.0
github.com/evanphx/json-patch/v5 v5.9.0
github.com/evanphx/json-patch/v5 v5.9.11
github.com/fatih/color v1.18.0
github.com/gobwas/glob v0.2.3
github.com/google/go-cmp v0.7.0
@@ -27,8 +29,8 @@ require (
github.com/joho/godotenv v1.3.0
github.com/kopia/kopia v0.16.0
github.com/kubernetes-csi/external-snapshotter/client/v8 v8.2.0
github.com/onsi/ginkgo/v2 v2.19.0
github.com/onsi/gomega v1.33.1
github.com/onsi/ginkgo/v2 v2.22.0
github.com/onsi/gomega v1.36.1
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.22.0
@@ -49,17 +51,17 @@ require (
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.31.3
k8s.io/apiextensions-apiserver v0.31.3
k8s.io/apimachinery v0.31.3
k8s.io/cli-runtime v0.31.3
k8s.io/client-go v0.31.3
k8s.io/api v0.33.3
k8s.io/apiextensions-apiserver v0.33.3
k8s.io/apimachinery v0.33.3
k8s.io/cli-runtime v0.33.3
k8s.io/client-go v0.33.3
k8s.io/klog/v2 v2.130.1
k8s.io/kube-aggregator v0.31.3
k8s.io/metrics v0.31.3
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
sigs.k8s.io/controller-runtime v0.19.3
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
k8s.io/kube-aggregator v0.33.3
k8s.io/metrics v0.33.3
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738
sigs.k8s.io/controller-runtime v0.21.0
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3
sigs.k8s.io/yaml v1.4.0
)
@@ -72,8 +74,8 @@ require (
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 // indirect
@@ -91,6 +93,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.6 // indirect
github.com/aws/smithy-go v1.19.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect
@@ -101,32 +104,31 @@ require (
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.14.2 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
@@ -144,7 +146,7 @@ require (
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.94 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.4.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
@@ -181,7 +183,7 @@ require (
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
@@ -193,9 +195,9 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777

go.sum (106 changed lines)

@@ -84,8 +84,8 @@ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
@@ -95,8 +95,8 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5 h1:IEjq88XO4PuBDcvmjQJcQGg+w+UaafSy8G5Kcb5tBhI=
@@ -170,6 +170,8 @@ github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6r
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bombsimon/logrusr/v3 v3.0.0 h1:tcAoLfuAhKP9npBxWzSdpsvKPQt1XV02nSf2lZA82TQ=
github.com/bombsimon/logrusr/v3 v3.0.0/go.mod h1:PksPPgSFEL2I52pla2glgCyyd2OqOHAnFF5E+g8Ixco=
github.com/bufbuild/protocompile v0.4.0 h1:LbFKd2XowZvQ/kajzguUp2DC9UEIQhIq77fZZlaQsNA=
@@ -239,8 +241,8 @@ github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2T
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
@@ -282,8 +284,9 @@ github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
@@ -291,8 +294,8 @@ github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
@@ -318,7 +321,6 @@ github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -350,8 +352,10 @@ github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -389,8 +393,8 @@ github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af h1:kmjWCqn2qkEml422C2Rrd27c3VGxi6a/6HNq8QmHRKM=
github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
@@ -413,8 +417,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -455,8 +459,6 @@ github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
@@ -550,8 +552,8 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/spdystream v0.4.0 h1:Vy79D6mHeJJjiPdFEL2yku1kl0chZpJfZcPpb16BRl8=
github.com/moby/spdystream v0.4.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -584,13 +586,13 @@ github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA=
github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To=
github.com/onsi/ginkgo/v2 v2.22.0 h1:Yed107/8DjTr0lKCNt7Dn8yQ6ybuDRQoMGrNFKzMfHg=
github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.33.1 h1:dsYjIxxSR755MDmKVsaFQTE22ChNBcuuTWgkUDSubOk=
github.com/onsi/gomega v1.33.1/go.mod h1:U4R44UsT+9eLIaYRB2a5qajjtQYn0hauxvRm16AVYg0=
github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw=
github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
@@ -804,8 +806,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1206,7 +1208,6 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -1217,47 +1218,50 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.22.2/go.mod h1:y3ydYpLJAaDI+BbSe2xmGcqxiWHmWjkEeIbiwHvnPR8=
k8s.io/api v0.31.3 h1:umzm5o8lFbdN/hIXbrK9oRpOproJO62CV1zqxXrLgk8=
k8s.io/api v0.31.3/go.mod h1:UJrkIp9pnMOI9K2nlL6vwpxRzzEX5sWgn8kGQe92kCE=
k8s.io/apiextensions-apiserver v0.31.3 h1:+GFGj2qFiU7rGCsA5o+p/rul1OQIq6oYpQw4+u+nciE=
k8s.io/apiextensions-apiserver v0.31.3/go.mod h1:2DSpFhUZZJmn/cr/RweH1cEVVbzFw9YBu4T+U3mf1e4=
k8s.io/api v0.33.3 h1:SRd5t//hhkI1buzxb288fy2xvjubstenEKL9K51KBI8=
k8s.io/api v0.33.3/go.mod h1:01Y/iLUjNBM3TAvypct7DIj0M0NIZc+PzAHCIo0CYGE=
k8s.io/apiextensions-apiserver v0.33.3 h1:qmOcAHN6DjfD0v9kxL5udB27SRP6SG/MTopmge3MwEs=
k8s.io/apiextensions-apiserver v0.33.3/go.mod h1:oROuctgo27mUsyp9+Obahos6CWcMISSAPzQ77CAQGz8=
k8s.io/apimachinery v0.22.2/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
k8s.io/apimachinery v0.31.3 h1:6l0WhcYgasZ/wk9ktLq5vLaoXJJr5ts6lkaQzgeYPq4=
k8s.io/apimachinery v0.31.3/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/apimachinery v0.33.3 h1:4ZSrmNa0c/ZpZJhAgRdcsFcZOw1PQU1bALVQ0B3I5LA=
k8s.io/apimachinery v0.33.3/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/cli-runtime v0.22.2/go.mod h1:tkm2YeORFpbgQHEK/igqttvPTRIHFRz5kATlw53zlMI=
k8s.io/cli-runtime v0.31.3 h1:fEQD9Xokir78y7pVK/fCJN090/iYNrLHpFbGU4ul9TI=
k8s.io/cli-runtime v0.31.3/go.mod h1:Q2jkyTpl+f6AtodQvgDI8io3jrfr+Z0LyQBPJJ2Btq8=
k8s.io/cli-runtime v0.33.3 h1:Dgy4vPjNIu8LMJBSvs8W0LcdV0PX/8aGG1DA1W8lklA=
k8s.io/cli-runtime v0.33.3/go.mod h1:yklhLklD4vLS8HNGgC9wGiuHWze4g7x6XQZ+8edsKEo=
k8s.io/client-go v0.22.2/go.mod h1:sAlhrkVDf50ZHx6z4K0S40wISNTarf1r800F+RlCF6U=
k8s.io/client-go v0.31.3 h1:CAlZuM+PH2cm+86LOBemaJI/lQ5linJ6UFxKX/SoG+4=
k8s.io/client-go v0.31.3/go.mod h1:2CgjPUTpv3fE5dNygAr2NcM8nhHzXvxB8KL5gYc3kJs=
k8s.io/client-go v0.33.3 h1:M5AfDnKfYmVJif92ngN532gFqakcGi6RvaOF16efrpA=
k8s.io/client-go v0.33.3/go.mod h1:luqKBQggEf3shbxHY4uVENAxrDISLOarxpTKMiUuujg=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-aggregator v0.31.3 h1:DqHPdTglJHgOfB884AaroyxrML/aL82ASYOh65m7MSk=
k8s.io/kube-aggregator v0.31.3/go.mod h1:Kx59Xjnf0SnY47qf9Or++4y3XCHQ3kR0xk1Di6KFiFU=
k8s.io/kube-aggregator v0.33.3 h1:Pa6hQpKJMX0p0D2wwcxXJgu02++gYcGWXoW1z1ZJDfo=
k8s.io/kube-aggregator v0.33.3/go.mod h1:hwvkUoQ8q6gv0+SgNnlmQ3eUue1zHhJKTHsX7BwxwSE=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
k8s.io/metrics v0.31.3 h1:DkT9I3gFlb2/z+/4BMY7WrQ/PnbukuV4Yli82v/KBCM=
k8s.io/metrics v0.31.3/go.mod h1:2w9gpd8z+13oJmaPR6p3kDyrDqnxSyoKpnOw2qLIdhI=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/metrics v0.33.3 h1:9CcqBz15JZfISqwca33gdHS8I6XfsK1vA8WUdEnG70g=
k8s.io/metrics v0.33.3/go.mod h1:Aw+cdg4AYHw0HvUY+lCyq40FOO84awrqvJRTw0cmXDs=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/controller-runtime v0.19.3 h1:XO2GvC9OPftRst6xWCpTgBZO04S2cbp0Qqkj8bX1sPw=
sigs.k8s.io/controller-runtime v0.19.3/go.mod h1:j4j87DqtsThvwTv5/Tc5NFRyyF/RF0ip4+62tbTSIUM=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/controller-runtime v0.21.0 h1:CYfjpEuicjUecRk+KAeyYh+ouUBn4llGyDYytIGcJS8=
sigs.k8s.io/controller-runtime v0.21.0/go.mod h1:OSg14+F65eWqIu4DceX7k/+QRAbTTvxeQSNSOQpukWM=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/kustomize/api v0.8.11/go.mod h1:a77Ls36JdfCWojpUqR6m60pdGY1AYFix4AH83nJtY1g=
sigs.k8s.io/kustomize/kyaml v0.11.0/go.mod h1:GNMwjim4Ypgp/MueD3zXHLRJEjz7RvtPae0AwlvEMFM=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$TARGETPLATFORM golang:1.24-bookworm
FROM --platform=$TARGETPLATFORM golang:1.24.9-bookworm
ARG GOPROXY


@@ -366,7 +366,7 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
discoveryHelper: kb.discoveryHelper,
podVolumeBackupper: podVolumeBackupper,
podVolumeSnapshotTracker: podvolume.NewTracker(),
volumeSnapshotterGetter: volumeSnapshotterGetter,
volumeSnapshotterCache: NewVolumeSnapshotterCache(volumeSnapshotterGetter),
itemHookHandler: &hook.DefaultItemHookHandler{
PodCommandExecutor: kb.podCommandExecutor,
},


@@ -3269,7 +3269,7 @@ func TestBackupWithSnapshots(t *testing.T) {
err := h.backupper.Backup(h.log, tc.req, backupFile, nil, nil, tc.snapshotterGetter)
require.NoError(t, err)
assert.Equal(t, tc.want, tc.req.VolumeSnapshots)
assert.Equal(t, tc.want, tc.req.VolumeSnapshots.Get())
})
}
}
@@ -4213,7 +4213,7 @@ func TestBackupWithPodVolume(t *testing.T) {
assert.Equal(t, tc.want, req.PodVolumeBackups)
// this assumes that we don't have any test cases where some PVs should be snapshotted using a VolumeSnapshotter
assert.Nil(t, req.VolumeSnapshots)
assert.Nil(t, req.VolumeSnapshots.Get())
})
}
}


@@ -70,13 +70,11 @@ type itemBackupper struct {
discoveryHelper discovery.Helper
podVolumeBackupper podvolume.Backupper
podVolumeSnapshotTracker *podvolume.Tracker
volumeSnapshotterGetter VolumeSnapshotterGetter
kubernetesBackupper *kubernetesBackupper
itemHookHandler hook.ItemHookHandler
snapshotLocationVolumeSnapshotters map[string]vsv1.VolumeSnapshotter
hookTracker *hook.HookTracker
volumeHelperImpl volumehelper.VolumeHelper
volumeSnapshotterCache *VolumeSnapshotterCache
itemHookHandler hook.ItemHookHandler
hookTracker *hook.HookTracker
volumeHelperImpl volumehelper.VolumeHelper
}
type FileForArchive struct {
@@ -502,30 +500,6 @@ func (ib *itemBackupper) executeActions(
return obj, itemFiles, nil
}
// volumeSnapshotter instantiates and initializes a VolumeSnapshotter given a VolumeSnapshotLocation,
// or returns an existing one if one's already been initialized for the location.
func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (vsv1.VolumeSnapshotter, error) {
if bs, ok := ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name]; ok {
return bs, nil
}
bs, err := ib.volumeSnapshotterGetter.GetVolumeSnapshotter(snapshotLocation.Spec.Provider)
if err != nil {
return nil, err
}
if err := bs.Init(snapshotLocation.Spec.Config); err != nil {
return nil, err
}
if ib.snapshotLocationVolumeSnapshotters == nil {
ib.snapshotLocationVolumeSnapshotters = make(map[string]vsv1.VolumeSnapshotter)
}
ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name] = bs
return bs, nil
}
// zoneLabelDeprecated is the label that stores availability-zone info
// on PVs this is deprecated on Kubernetes >= 1.17.0
// zoneLabel is the label that stores availability-zone info
@@ -641,7 +615,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
for _, snapshotLocation := range ib.backupRequest.SnapshotLocations {
log := log.WithField("volumeSnapshotLocation", snapshotLocation.Name)
bs, err := ib.volumeSnapshotter(snapshotLocation)
bs, err := ib.volumeSnapshotterCache.SetNX(snapshotLocation)
if err != nil {
log.WithError(err).Error("Error getting volume snapshotter for volume snapshot location")
continue
@@ -699,7 +673,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
snapshot.Status.Phase = volume.SnapshotPhaseCompleted
snapshot.Status.ProviderSnapshotID = snapshotID
}
ib.backupRequest.VolumeSnapshots = append(ib.backupRequest.VolumeSnapshots, snapshot)
ib.backupRequest.VolumeSnapshots.Add(snapshot)
// nil errors are automatically removed
return kubeerrs.NewAggregate(errs)


@@ -17,6 +17,8 @@ limitations under the License.
package backup
import (
"sync"
"github.com/vmware-tanzu/velero/internal/hook"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/internal/volume"
@@ -32,11 +34,27 @@ type itemKey struct {
name string
}
type SynchronizedVSList struct {
sync.Mutex
VolumeSnapshotList []*volume.Snapshot
}
func (s *SynchronizedVSList) Add(vs *volume.Snapshot) {
s.Lock()
defer s.Unlock()
s.VolumeSnapshotList = append(s.VolumeSnapshotList, vs)
}
func (s *SynchronizedVSList) Get() []*volume.Snapshot {
s.Lock()
defer s.Unlock()
return s.VolumeSnapshotList
}
// Request is a request for a backup, with all references to other objects
// materialized (e.g. backup/snapshot locations, includes/excludes, etc.)
type Request struct {
*velerov1api.Backup
StorageLocation *velerov1api.BackupStorageLocation
SnapshotLocations []*velerov1api.VolumeSnapshotLocation
NamespaceIncludesExcludes *collections.IncludesExcludes
@@ -44,7 +62,7 @@ type Request struct {
ResourceHooks []hook.ResourceHook
ResolvedActions []framework.BackupItemResolvedActionV2
ResolvedItemBlockActions []framework.ItemBlockResolvedAction
VolumeSnapshots []*volume.Snapshot
VolumeSnapshots SynchronizedVSList
PodVolumeBackups []*velerov1api.PodVolumeBackup
BackedUpItems *backedUpItemsMap
itemOperationsList *[]*itemoperation.BackupOperation
@@ -80,7 +98,7 @@ func (r *Request) FillVolumesInformation() {
}
r.VolumesInformation.SkippedPVs = skippedPVMap
r.VolumesInformation.NativeSnapshots = r.VolumeSnapshots
r.VolumesInformation.NativeSnapshots = r.VolumeSnapshots.Get()
r.VolumesInformation.PodVolumeBackups = r.PodVolumeBackups
r.VolumesInformation.BackupOperations = *r.GetItemOperationsList()
r.VolumesInformation.BackupName = r.Backup.Name
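
The race fix for issue #9247 hinges on the SynchronizedVSList type added above: item backuppers running in parallel used to append to a bare []*volume.Snapshot, which is a data race. A minimal, self-contained Go sketch of the same pattern (with a stand-in Snapshot type, not Velero's real one) shows the mutex-guarded list absorbing concurrent Adds:

package main

import (
	"fmt"
	"sync"
)

// Stand-in for velero's volume.Snapshot; the SynchronizedVSList below mirrors
// the shape added in the diff above.
type Snapshot struct{ ProviderSnapshotID string }

type SynchronizedVSList struct {
	sync.Mutex
	VolumeSnapshotList []*Snapshot
}

func (s *SynchronizedVSList) Add(vs *Snapshot) {
	s.Lock()
	defer s.Unlock()
	s.VolumeSnapshotList = append(s.VolumeSnapshotList, vs)
}

func (s *SynchronizedVSList) Get() []*Snapshot {
	s.Lock()
	defer s.Unlock()
	return s.VolumeSnapshotList
}

func main() {
	var list SynchronizedVSList
	var wg sync.WaitGroup

	// Concurrent backuppers appending snapshots; with a plain slice this
	// append would race and could drop entries.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			list.Add(&Snapshot{ProviderSnapshotID: fmt.Sprintf("snap-%d", i)})
		}(i)
	}
	wg.Wait()

	fmt.Println(len(list.Get())) // always 10
}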


@@ -0,0 +1,42 @@
package backup
import (
"sync"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
)
type VolumeSnapshotterCache struct {
cache map[string]vsv1.VolumeSnapshotter
mutex sync.Mutex
getter VolumeSnapshotterGetter
}
func NewVolumeSnapshotterCache(getter VolumeSnapshotterGetter) *VolumeSnapshotterCache {
return &VolumeSnapshotterCache{
cache: make(map[string]vsv1.VolumeSnapshotter),
getter: getter,
}
}
func (c *VolumeSnapshotterCache) SetNX(location *velerov1api.VolumeSnapshotLocation) (vsv1.VolumeSnapshotter, error) {
c.mutex.Lock()
defer c.mutex.Unlock()
if snapshotter, exists := c.cache[location.Name]; exists {
return snapshotter, nil
}
snapshotter, err := c.getter.GetVolumeSnapshotter(location.Spec.Provider)
if err != nil {
return nil, err
}
if err := snapshotter.Init(location.Spec.Config); err != nil {
return nil, err
}
c.cache[location.Name] = snapshotter
return snapshotter, nil
}
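
The plugin-reentry fix for issue #9234 replaces the per-itemBackupper snapshotter map (removed above) with this shared, lock-guarded cache. A minimal sketch using hypothetical stand-in types (the real interfaces live in pkg/plugin/velero/volumesnapshotter/v1) shows the SetNX semantics: each location's snapshotter is initialized exactly once, even under concurrent callers:

package main

import (
	"fmt"
	"sync"
)

// Stand-ins for the plugin types used above; assumed shapes, not Velero's.
type VolumeSnapshotter interface{ Init(config map[string]string) error }

type fakeSnapshotter struct{ provider string }

func (f *fakeSnapshotter) Init(config map[string]string) error {
	fmt.Println("Init called once for", f.provider)
	return nil
}

type Location struct {
	Name     string
	Provider string
}

type Cache struct {
	mu    sync.Mutex
	cache map[string]VolumeSnapshotter
}

// SetNX mirrors the diff: return the cached snapshotter if one exists,
// otherwise create, Init, and store it, all under one lock.
func (c *Cache) SetNX(loc Location) (VolumeSnapshotter, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if s, ok := c.cache[loc.Name]; ok {
		return s, nil
	}
	s := &fakeSnapshotter{provider: loc.Provider}
	if err := s.Init(nil); err != nil {
		return nil, err
	}
	c.cache[loc.Name] = s
	return s, nil
}

func main() {
	c := &Cache{cache: map[string]VolumeSnapshotter{}}
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.SetNX(Location{Name: "default", Provider: "aws"})
		}()
	}
	wg.Wait() // "Init called once for aws" prints exactly once
}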


@@ -545,24 +545,22 @@ func (o *Options) Validate(c *cobra.Command, args []string, f client.Factory) er
return fmt.Errorf("fail to create go-client %w", err)
}
// If either Linux or Windows node-agent is installed, and the node-agent-configmap
// is specified, need to validate the ConfigMap.
if (o.UseNodeAgent || o.UseNodeAgentWindows) && len(o.NodeAgentConfigMap) > 0 {
if len(o.NodeAgentConfigMap) > 0 {
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.NodeAgentConfigMap, &velerotypes.NodeAgentConfigs{}); err != nil {
return fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid", o.NodeAgentConfigMap)
return fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid: %w", o.NodeAgentConfigMap, err)
}
}
if len(o.RepoMaintenanceJobConfigMap) > 0 {
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.RepoMaintenanceJobConfigMap, &velerotypes.JobConfigs{}); err != nil {
return fmt.Errorf("--repo-maintenance-job-configmap specified ConfigMap %s is invalid", o.RepoMaintenanceJobConfigMap)
return fmt.Errorf("--repo-maintenance-job-configmap specified ConfigMap %s is invalid: %w", o.RepoMaintenanceJobConfigMap, err)
}
}
if len(o.BackupRepoConfigMap) > 0 {
config := make(map[string]any)
if err := kubeutil.VerifyJSONConfigs(c.Context(), o.Namespace, crClient, o.BackupRepoConfigMap, &config); err != nil {
return fmt.Errorf("--backup-repository-configmap specified ConfigMap %s is invalid", o.BackupRepoConfigMap)
return fmt.Errorf("--backup-repository-configmap specified ConfigMap %s is invalid: %w", o.BackupRepoConfigMap, err)
}
}
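
The validation change above appends ": %w" so the underlying parse error reaches the velero install CLI output instead of being swallowed. A small stdlib illustration of the difference, using a made-up cause:

package main

import (
	"errors"
	"fmt"
)

func main() {
	// Hypothetical underlying failure from VerifyJSONConfigs.
	cause := errors.New(`json: unknown field "loadAffinityXX"`)

	// Before: the cause is dropped, so the CLI only reports "is invalid".
	flat := fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid", "node-agent-config")

	// After: %w wraps the cause, so it shows up in the message and stays
	// inspectable with errors.Is / errors.As.
	wrapped := fmt.Errorf("--node-agent-configmap specified ConfigMap %s is invalid: %w", "node-agent-config", cause)

	fmt.Println(flat)
	fmt.Println(wrapped)
	fmt.Println(errors.Is(wrapped, cause)) // true
}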


@@ -308,6 +308,8 @@ func (s *nodeAgentServer) run() {
s.logger.Infof("Using customized backupPVC config %v", backupPVCConfig)
}
privilegedFsBackup := s.dataPathConfigs != nil && s.dataPathConfigs.PrivilegedFsBackup
podResources := corev1api.ResourceRequirements{}
if s.dataPathConfigs != nil && s.dataPathConfigs.PodResources != nil {
if res, err := kube.ParseResourceRequirements(s.dataPathConfigs.PodResources.CPURequest, s.dataPathConfigs.PodResources.MemoryRequest, s.dataPathConfigs.PodResources.CPULimit, s.dataPathConfigs.PodResources.MemoryLimit); err != nil {
@@ -327,12 +329,12 @@ func (s *nodeAgentServer) run() {
}
}
pvbReconciler := controller.NewPodVolumeBackupReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.metrics, s.logger, dataMovePriorityClass)
pvbReconciler := controller.NewPodVolumeBackupReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.metrics, s.logger, dataMovePriorityClass, privilegedFsBackup)
if err := pvbReconciler.SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", constant.ControllerPodVolumeBackup)
}
pvrReconciler := controller.NewPodVolumeRestoreReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.logger, dataMovePriorityClass)
pvrReconciler := controller.NewPodVolumeRestoreReconciler(s.mgr.GetClient(), s.mgr, s.kubeClient, s.dataPathMgr, s.vgdpCounter, s.nodeName, s.config.dataMoverPrepareTimeout, s.config.resourceTimeout, podResources, s.logger, dataMovePriorityClass, privilegedFsBackup)
if err := pvrReconciler.SetupWithManager(s.mgr); err != nil {
s.logger.WithError(err).Fatal("Unable to create the pod volume restore controller")
}


@@ -734,8 +734,8 @@ func (b *backupReconciler) runBackup(backup *pkgbackup.Request) error {
// native snapshots phase will either be failed or completed right away
// https://github.com/vmware-tanzu/velero/blob/de3ea52f0cc478e99efa7b9524c7f353514261a4/pkg/backup/item_backupper.go#L632-L639
backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots)
for _, snap := range backup.VolumeSnapshots {
backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots.Get())
for _, snap := range backup.VolumeSnapshots.Get() {
if snap.Status.Phase == volume.SnapshotPhaseCompleted {
backup.Status.VolumeSnapshotsCompleted++
}
@@ -882,7 +882,7 @@ func persistBackup(backup *pkgbackup.Request,
}
// Velero-native volume snapshots (as opposed to CSI ones)
nativeVolumeSnapshots, errs := encode.ToJSONGzip(backup.VolumeSnapshots, "native volumesnapshots list")
nativeVolumeSnapshots, errs := encode.ToJSONGzip(backup.VolumeSnapshots.Get(), "native volumesnapshots list")
if errs != nil {
persistErrs = append(persistErrs, errs...)
}


@@ -1047,7 +1047,7 @@ func TestRecallMaintenance(t *testing.T) {
{
name: "wait completion error",
runtimeScheme: schemeFail,
expectedErr: "error waiting incomplete repo maintenance job for repo repo: error listing maintenance job for repo repo: no kind is registered for the type v1.JobList in scheme \"pkg/runtime/scheme.go:100\"",
expectedErr: "error waiting incomplete repo maintenance job for repo repo: error listing maintenance job for repo repo: no kind is registered for the type v1.JobList in scheme",
},
{
name: "no consolidate result",
@@ -1105,7 +1105,7 @@ func TestRecallMaintenance(t *testing.T) {
err := r.recallMaintenance(t.Context(), backupRepo, velerotest.NewLogger())
if test.expectedErr != "" {
- assert.EqualError(t, err, test.expectedErr)
+ assert.ErrorContains(t, err, test.expectedErr)
} else {
assert.NoError(t, err)

View File

@@ -916,6 +916,13 @@ func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload
return nil, errors.Wrapf(err, "failed to get PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)
}
pv := &corev1api.PersistentVolume{}
if err := r.client.Get(context.Background(), types.NamespacedName{
Name: pvc.Spec.VolumeName,
}, pv); err != nil {
return nil, errors.Wrapf(err, "failed to get source PV %s", pvc.Spec.VolumeName)
}
nodeOS := kube.GetPVCAttachingNodeOS(pvc, r.kubeClient.CoreV1(), r.kubeClient.StorageV1(), log)
if err := kube.HasNodeWithOS(context.Background(), nodeOS, r.kubeClient.CoreV1()); err != nil {
@@ -963,6 +970,8 @@ func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload
return &exposer.CSISnapshotExposeParam{
SnapshotName: du.Spec.CSISnapshot.VolumeSnapshot,
SourceNamespace: du.Spec.SourceNamespace,
SourcePVCName: pvc.Name,
SourcePVName: pv.Name,
StorageClass: du.Spec.CSISnapshot.StorageClass,
HostingPodLabels: hostingPodLabels,
HostingPodAnnotations: hostingPodAnnotation,

View File

@@ -60,7 +60,7 @@ const (
// NewPodVolumeBackupReconciler creates the PodVolumeBackupReconciler instance
func NewPodVolumeBackupReconciler(client client.Client, mgr manager.Manager, kubeClient kubernetes.Interface, dataPathMgr *datapath.Manager,
counter *exposer.VgdpCounter, nodeName string, preparingTimeout time.Duration, resourceTimeout time.Duration, podResources corev1api.ResourceRequirements,
- metrics *metrics.ServerMetrics, logger logrus.FieldLogger, dataMovePriorityClass string) *PodVolumeBackupReconciler {
+ metrics *metrics.ServerMetrics, logger logrus.FieldLogger, dataMovePriorityClass string, privileged bool) *PodVolumeBackupReconciler {
return &PodVolumeBackupReconciler{
client: client,
mgr: mgr,
@@ -77,6 +77,7 @@ func NewPodVolumeBackupReconciler(client client.Client, mgr manager.Manager, kub
exposer: exposer.NewPodVolumeExposer(kubeClient, logger),
cancelledPVB: make(map[string]time.Time),
dataMovePriorityClass: dataMovePriorityClass,
privileged: privileged,
}
}
@@ -97,6 +98,7 @@ type PodVolumeBackupReconciler struct {
resourceTimeout time.Duration
cancelledPVB map[string]time.Time
dataMovePriorityClass string
privileged bool
}
// +kubebuilder:rbac:groups=velero.io,resources=podvolumebackups,verbs=get;list;watch;create;update;patch;delete
@@ -837,6 +839,7 @@ func (r *PodVolumeBackupReconciler) setupExposeParam(pvb *velerov1api.PodVolumeB
Resources: r.podResources,
// Priority class name for the data mover pod, retrieved from node-agent-configmap
PriorityClassName: r.dataMovePriorityClass,
Privileged: r.privileged,
}
}

View File

@@ -151,7 +151,8 @@ func initPVBReconcilerWithError(needError ...error) (*PodVolumeBackupReconciler,
corev1api.ResourceRequirements{},
metrics.NewServerMetrics(),
velerotest.NewLogger(),
"", // dataMovePriorityClass
"", // dataMovePriorityClass
false, // privileged
), nil
}

View File

@@ -56,7 +56,7 @@ import (
func NewPodVolumeRestoreReconciler(client client.Client, mgr manager.Manager, kubeClient kubernetes.Interface, dataPathMgr *datapath.Manager,
counter *exposer.VgdpCounter, nodeName string, preparingTimeout time.Duration, resourceTimeout time.Duration, podResources corev1api.ResourceRequirements,
- logger logrus.FieldLogger, dataMovePriorityClass string) *PodVolumeRestoreReconciler {
+ logger logrus.FieldLogger, dataMovePriorityClass string, privileged bool) *PodVolumeRestoreReconciler {
return &PodVolumeRestoreReconciler{
client: client,
mgr: mgr,
@@ -72,6 +72,7 @@ func NewPodVolumeRestoreReconciler(client client.Client, mgr manager.Manager, ku
exposer: exposer.NewPodVolumeExposer(kubeClient, logger),
cancelledPVR: make(map[string]time.Time),
dataMovePriorityClass: dataMovePriorityClass,
privileged: privileged,
}
}
@@ -90,6 +91,7 @@ type PodVolumeRestoreReconciler struct {
resourceTimeout time.Duration
cancelledPVR map[string]time.Time
dataMovePriorityClass string
privileged bool
}
// +kubebuilder:rbac:groups=velero.io,resources=podvolumerestores,verbs=get;list;watch;create;update;patch;delete
@@ -896,6 +898,7 @@ func (r *PodVolumeRestoreReconciler) setupExposeParam(pvr *velerov1api.PodVolume
Resources: r.podResources,
// Priority class name for the data mover pod, retrieved from node-agent-configmap
PriorityClassName: r.dataMovePriorityClass,
Privileged: r.privileged,
}
}

View File

@@ -617,7 +617,7 @@ func initPodVolumeRestoreReconcilerWithError(objects []runtime.Object, cliObj []
dataPathMgr := datapath.NewManager(1)
- return NewPodVolumeRestoreReconciler(fakeClient, nil, fakeKubeClient, dataPathMgr, nil, "test-node", time.Minute*5, time.Minute, corev1api.ResourceRequirements{}, velerotest.NewLogger(), ""), nil
+ return NewPodVolumeRestoreReconciler(fakeClient, nil, fakeKubeClient, dataPathMgr, nil, "test-node", time.Minute*5, time.Minute, corev1api.ResourceRequirements{}, velerotest.NewLogger(), "", false), nil
}
func TestPodVolumeRestoreReconcile(t *testing.T) {

View File

@@ -229,7 +229,7 @@ func (c *scheduleReconciler) checkIfBackupInNewOrProgress(schedule *velerov1.Sch
}
for _, backup := range backupList.Items {
- if backup.Status.Phase == velerov1.BackupPhaseNew || backup.Status.Phase == velerov1.BackupPhaseInProgress {
+ if backup.Status.Phase == "" || backup.Status.Phase == velerov1.BackupPhaseNew || backup.Status.Phase == velerov1.BackupPhaseInProgress {
log.Debugf("%s/%s still has backups that are in InProgress or New...", schedule.Namespace, schedule.Name)
return true
}

View File

@@ -149,6 +149,13 @@ func TestReconcileOfSchedule(t *testing.T) {
expectedPhase: string(velerov1.SchedulePhaseEnabled),
backup: builder.ForBackup("ns", "name-20220905120000").ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).Phase(velerov1.BackupPhaseNew).Result(),
},
{
name: "schedule already has backup with empty phase (not yet reconciled).",
schedule: newScheduleBuilder(velerov1.SchedulePhaseEnabled).CronSchedule("@every 5m").LastBackupTime("2000-01-01 00:00:00").Result(),
fakeClockTime: "2017-01-01 12:00:00",
expectedPhase: string(velerov1.SchedulePhaseEnabled),
backup: builder.ForBackup("ns", "name-20220905120000").ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).Phase("").Result(),
},
}
for _, test := range tests {
@@ -215,10 +222,10 @@ func TestReconcileOfSchedule(t *testing.T) {
backups := &velerov1.BackupList{}
require.NoError(t, client.List(ctx, backups))
- // If backup associated with schedule's status is in New or InProgress,
+ // If the backup associated with the schedule's status is in New, InProgress, or empty phase,
// new backup shouldn't be submitted.
if test.backup != nil &&
- (test.backup.Status.Phase == velerov1.BackupPhaseNew || test.backup.Status.Phase == velerov1.BackupPhaseInProgress) {
+ (test.backup.Status.Phase == "" || test.backup.Status.Phase == velerov1.BackupPhaseNew || test.backup.Status.Phase == velerov1.BackupPhaseInProgress) {
assert.Len(t, backups.Items, 1)
require.NoError(t, client.Delete(ctx, test.backup))
}
@@ -479,4 +486,19 @@ func TestCheckIfBackupInNewOrProgress(t *testing.T) {
reconciler = NewScheduleReconciler("namespace", logger, client, metrics.NewServerMetrics(), false)
result = reconciler.checkIfBackupInNewOrProgress(testSchedule)
assert.True(t, result)
// Clean backup in InProgress phase.
err = client.Delete(ctx, inProgressBackup)
require.NoError(t, err, "fail to delete backup in InProgress phase in TestCheckIfBackupInNewOrProgress: %v", err)
// Create backup with empty phase (not yet reconciled).
emptyPhaseBackup := builder.ForBackup("ns", "backup-3").
ObjectMeta(builder.WithLabels(velerov1.ScheduleNameLabel, "name")).
Phase("").Result()
err = client.Create(ctx, emptyPhaseBackup)
require.NoError(t, err, "fail to create backup with empty phase in TestCheckIfBackupInNewOrProgress: %v", err)
reconciler = NewScheduleReconciler("namespace", logger, client, metrics.NewServerMetrics(), false)
result = reconciler.checkIfBackupInNewOrProgress(testSchedule)
assert.True(t, result)
}

View File

@@ -35,6 +35,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/nodeagent"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/csi"
"github.com/vmware-tanzu/velero/pkg/util/kube"
@@ -48,6 +49,12 @@ type CSISnapshotExposeParam struct {
// SourceNamespace is the original namespace of the volume that the snapshot is taken for
SourceNamespace string
// SourcePVCName is the original name of the PVC that the snapshot is taken for
SourcePVCName string
// SourcePVName is the name of PV for SourcePVC
SourcePVName string
// AccessMode defines the mode to access the snapshot
AccessMode string
@@ -188,6 +195,8 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
backupPVCStorageClass := csiExposeParam.StorageClass
backupPVCReadOnly := false
spcNoRelabeling := false
backupPVCAnnotations := map[string]string{}
intoleratableNodes := []string{}
if value, exists := csiExposeParam.BackupPVCConfig[csiExposeParam.StorageClass]; exists {
if value.StorageClass != "" {
backupPVCStorageClass = value.StorageClass
@@ -201,9 +210,22 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
curLog.WithField("vs name", volumeSnapshot.Name).Warn("Ignoring spcNoRelabling for read-write volume")
}
}
if len(value.Annotations) > 0 {
backupPVCAnnotations = value.Annotations
}
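// When the vSphere CNS fast-clone annotation is requested on the backupPVC, look up the nodes
// the source PV is attached to so the backup pod can be scheduled away from them; if the
// lookup fails, drop the annotation rather than failing the backup.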
if _, found := backupPVCAnnotations[util.VSphereCNSFastCloneAnno]; found {
if n, err := kube.GetPVAttachedNodes(ctx, csiExposeParam.SourcePVName, e.kubeClient.StorageV1()); err != nil {
curLog.WithField("source PV", csiExposeParam.SourcePVName).WithError(err).Warnf("Failed to get attached node for source PV, ignore %s annotation", util.VSphereCNSFastCloneAnno)
delete(backupPVCAnnotations, util.VSphereCNSFastCloneAnno)
} else {
intoleratableNodes = n
}
}
}
- backupPVC, err := e.createBackupPVC(ctx, ownerObject, backupVS.Name, backupPVCStorageClass, csiExposeParam.AccessMode, volumeSize, backupPVCReadOnly)
+ backupPVC, err := e.createBackupPVC(ctx, ownerObject, backupVS.Name, backupPVCStorageClass, csiExposeParam.AccessMode, volumeSize, backupPVCReadOnly, backupPVCAnnotations)
if err != nil {
return errors.Wrap(err, "error to create backup pvc")
}
@@ -231,6 +253,7 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
spcNoRelabeling,
csiExposeParam.NodeOS,
csiExposeParam.PriorityClassName,
intoleratableNodes,
)
if err != nil {
return errors.Wrap(err, "error to create backup pod")
@@ -485,7 +508,7 @@ func (e *csiSnapshotExposer) createBackupVSC(ctx context.Context, ownerObject co
return e.csiSnapshotClient.VolumeSnapshotContents().Create(ctx, vsc, metav1.CreateOptions{})
}
- func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject corev1api.ObjectReference, backupVS, storageClass, accessMode string, resource resource.Quantity, readOnly bool) (*corev1api.PersistentVolumeClaim, error) {
+ func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject corev1api.ObjectReference, backupVS, storageClass, accessMode string, resource resource.Quantity, readOnly bool, annotations map[string]string) (*corev1api.PersistentVolumeClaim, error) {
backupPVCName := ownerObject.Name
volumeMode, err := getVolumeModeByAccessMode(accessMode)
@@ -507,8 +530,9 @@ func (e *csiSnapshotExposer) createBackupPVC(ctx context.Context, ownerObject co
pvc := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
- Namespace: ownerObject.Namespace,
- Name: backupPVCName,
+ Namespace: ownerObject.Namespace,
+ Name: backupPVCName,
+ Annotations: annotations,
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: ownerObject.APIVersion,
@@ -558,6 +582,7 @@ func (e *csiSnapshotExposer) createBackupPod(
spcNoRelabeling bool,
nodeOS string,
priorityClassName string,
intoleratableNodes []string,
) (*corev1api.Pod, error) {
podName := ownerObject.Name
@@ -658,6 +683,18 @@ func (e *csiSnapshotExposer) createBackupPod(
}
var podAffinity *corev1api.Affinity
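// Keep the backup pod off the intoleratable nodes (where the source PV is attached) by
// appending a NotIn requirement on the kubernetes.io/hostname label to the load affinity.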
if len(intoleratableNodes) > 0 {
if affinity == nil {
affinity = &kube.LoadAffinity{}
}
affinity.NodeSelector.MatchExpressions = append(affinity.NodeSelector.MatchExpressions, metav1.LabelSelectorRequirement{
Key: "kubernetes.io/hostname",
Values: intoleratableNodes,
Operator: metav1.LabelSelectorOpNotIn,
})
}
if affinity != nil {
podAffinity = kube.ToSystemAffinity([]*kube.LoadAffinity{affinity})
}

View File

@@ -153,6 +153,7 @@ func TestCreateBackupPodWithPriorityClass(t *testing.T) {
false, // spcNoRelabeling
kube.NodeOSLinux,
tc.expectedPriorityClass,
nil,
)
require.NoError(t, err, tc.description)
@@ -237,6 +238,7 @@ func TestCreateBackupPodWithMissingConfigMap(t *testing.T) {
false, // spcNoRelabeling
kube.NodeOSLinux,
"", // empty priority class since config map is missing
nil,
)
// Should succeed even when config map is missing

View File

@@ -39,8 +39,11 @@ import (
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
velerotypes "github.com/vmware-tanzu/velero/pkg/types"
"github.com/vmware-tanzu/velero/pkg/util"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/kube"
storagev1api "k8s.io/api/storage/v1"
)
type reactor struct {
@@ -156,6 +159,31 @@ func TestExpose(t *testing.T) {
},
}
pvName := "pv-1"
volumeAttachment1 := &storagev1api.VolumeAttachment{
ObjectMeta: metav1.ObjectMeta{
Name: "va1",
},
Spec: storagev1api.VolumeAttachmentSpec{
Source: storagev1api.VolumeAttachmentSource{
PersistentVolumeName: &pvName,
},
NodeName: "node-1",
},
}
volumeAttachment2 := &storagev1api.VolumeAttachment{
ObjectMeta: metav1.ObjectMeta{
Name: "va2",
},
Spec: storagev1api.VolumeAttachmentSpec{
Source: storagev1api.VolumeAttachmentSource{
PersistentVolumeName: &pvName,
},
NodeName: "node-2",
},
}
tests := []struct {
name string
snapshotClientObj []runtime.Object
@@ -169,6 +197,7 @@ func TestExpose(t *testing.T) {
expectedReadOnlyPVC bool
expectedBackupPVCStorageClass string
expectedAffinity *corev1api.Affinity
expectedPVCAnnotation map[string]string
}{
{
name: "wait vs ready fail",
@@ -624,6 +653,117 @@ func TestExpose(t *testing.T) {
expectedBackupPVCStorageClass: "fake-sc-read-only",
expectedAffinity: nil,
},
{
name: "IntolerateSourceNode, get source node fail",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
},
kubeReactors: []reactor{
{
verb: "list",
resource: "volumeattachments",
reactorFunc: func(action clientTesting.Action) (handled bool, ret runtime.Object, err error) {
return true, nil, errors.New("fake-create-error")
},
},
},
expectedAffinity: nil,
expectedPVCAnnotation: nil,
},
{
name: "IntolerateSourceNode, get empty source node",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
},
expectedAffinity: nil,
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
{
name: "IntolerateSourceNode, get source nodes",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
SourcePVName: pvName,
StorageClass: "fake-sc",
AccessMode: AccessModeFileSystem,
OperationTimeout: time.Millisecond,
ExposeTimeout: time.Millisecond,
BackupPVCConfig: map[string]velerotypes.BackupPVC{
"fake-sc": {
Annotations: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
},
Affinity: nil,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
volumeAttachment1,
volumeAttachment2,
},
expectedAffinity: &corev1api.Affinity{
NodeAffinity: &corev1api.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1api.NodeSelector{
NodeSelectorTerms: []corev1api.NodeSelectorTerm{
{
MatchExpressions: []corev1api.NodeSelectorRequirement{
{
Key: "kubernetes.io/hostname",
Operator: corev1api.NodeSelectorOpNotIn,
Values: []string{"node-1", "node-2"},
},
},
},
},
},
},
},
expectedPVCAnnotation: map[string]string{util.VSphereCNSFastCloneAnno: "true"},
},
}
for _, test := range tests {
@@ -705,6 +845,12 @@ func TestExpose(t *testing.T) {
if test.expectedAffinity != nil {
assert.Equal(t, test.expectedAffinity, backupPod.Spec.Affinity)
}
if test.expectedPVCAnnotation != nil {
assert.Equal(t, test.expectedPVCAnnotation, backupPVC.Annotations)
} else {
assert.Empty(t, backupPVC.Annotations)
}
} else {
assert.EqualError(t, err, test.err)
}
@@ -1001,8 +1147,9 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
backupPVC := corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
- Namespace: velerov1.DefaultNamespace,
- Name: "fake-backup",
+ Namespace: velerov1.DefaultNamespace,
+ Name: "fake-backup",
+ Annotations: map[string]string{},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1031,8 +1178,9 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
backupPVCReadOnly := corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
- Namespace: velerov1.DefaultNamespace,
- Name: "fake-backup",
+ Namespace: velerov1.DefaultNamespace,
+ Name: "fake-backup",
+ Annotations: map[string]string{},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backup.APIVersion,
@@ -1114,7 +1262,7 @@ func Test_csiSnapshotExposer_createBackupPVC(t *testing.T) {
APIVersion: tt.ownerBackup.APIVersion,
}
}
- got, err := e.createBackupPVC(t.Context(), ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly)
+ got, err := e.createBackupPVC(t.Context(), ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly, map[string]string{})
if !tt.wantErr(t, err, fmt.Sprintf("createBackupPVC(%v, %v, %v, %v, %v, %v)", ownerObject, tt.backupVS, tt.storageClass, tt.accessMode, tt.resource, tt.readOnly)) {
return
}

View File

@@ -73,6 +73,9 @@ type PodVolumeExposeParam struct {
// PriorityClassName is the priority class name for the data mover pod
PriorityClassName string
// Privileged indicates whether to create the pod with a privileged container
Privileged bool
}
// PodVolumeExposer is the interfaces for a pod volume exposer
@@ -153,7 +156,7 @@ func (e *podVolumeExposer) Expose(ctx context.Context, ownerObject corev1api.Obj
curLog.WithField("path", path).Infof("Host path is retrieved for pod %s, volume %s", param.ClientPodName, param.ClientPodVolume)
- hostingPod, err := e.createHostingPod(ctx, ownerObject, param.Type, path.ByPath, param.OperationTimeout, param.HostingPodLabels, param.HostingPodAnnotations, param.HostingPodTolerations, pod.Spec.NodeName, param.Resources, nodeOS, param.PriorityClassName)
+ hostingPod, err := e.createHostingPod(ctx, ownerObject, param.Type, path.ByPath, param.OperationTimeout, param.HostingPodLabels, param.HostingPodAnnotations, param.HostingPodTolerations, pod.Spec.NodeName, param.Resources, nodeOS, param.PriorityClassName, param.Privileged)
if err != nil {
return errors.Wrapf(err, "error to create hosting pod")
}
@@ -269,7 +272,7 @@ func (e *podVolumeExposer) CleanUp(ctx context.Context, ownerObject corev1api.Ob
}
func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject corev1api.ObjectReference, exposeType string, hostPath string,
- operationTimeout time.Duration, label map[string]string, annotation map[string]string, toleration []corev1api.Toleration, selectedNode string, resources corev1api.ResourceRequirements, nodeOS string, priorityClassName string) (*corev1api.Pod, error) {
+ operationTimeout time.Duration, label map[string]string, annotation map[string]string, toleration []corev1api.Toleration, selectedNode string, resources corev1api.ResourceRequirements, nodeOS string, priorityClassName string, privileged bool) (*corev1api.Pod, error) {
hostingPodName := ownerObject.Name
containerName := string(ownerObject.UID)
@@ -327,6 +330,7 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
args = append(args, podInfo.logLevelArgs...)
var securityCtx *corev1api.PodSecurityContext
var containerSecurityCtx *corev1api.SecurityContext
nodeSelector := map[string]string{}
podOS := corev1api.PodOS{}
if nodeOS == kube.NodeOSWindows {
@@ -359,6 +363,9 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
securityCtx = &corev1api.PodSecurityContext{
RunAsUser: &userID,
}
containerSecurityCtx = &corev1api.SecurityContext{
Privileged: &privileged,
}
nodeSelector[kube.NodeOSLabel] = kube.NodeOSLinux
podOS.Name = kube.NodeOSLinux
@@ -394,6 +401,7 @@ func (e *podVolumeExposer) createHostingPod(ctx context.Context, ownerObject cor
Env: podInfo.env,
EnvFrom: podInfo.envFrom,
Resources: resources,
SecurityContext: containerSecurityCtx,
},
},
PriorityClassName: priorityClassName,

View File

@@ -190,6 +190,29 @@ func TestPodVolumeExpose(t *testing.T) {
return "/var/lib/kubelet/pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount", nil
},
},
{
name: "succeed with privileged pod",
ownerBackup: backup,
exposeParam: PodVolumeExposeParam{
ClientNamespace: "fake-ns",
ClientPodName: "fake-client-pod",
ClientPodVolume: "fake-client-volume",
Privileged: true,
},
kubeClientObj: []runtime.Object{
podWithNode,
node,
daemonSet,
},
funcGetPodVolumeHostPath: func(context.Context, *corev1api.Pod, string, kubernetes.Interface, filesystem.Interface, logrus.FieldLogger) (datapath.AccessPoint, error) {
return datapath.AccessPoint{
ByPath: "/host_pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount",
}, nil
},
funcExtractPodVolumeHostPath: func(context.Context, string, kubernetes.Interface, string, string) (string, error) {
return "/var/lib/kubelet/pods/pod-id-xxx/volumes/kubernetes.io~csi/pvc-id-xxx/mount", nil
},
},
}
for _, test := range tests {

View File

@@ -39,6 +39,8 @@ import (
type PVCAction struct {
log logrus.FieldLogger
crClient crclient.Client
// nsPVCs caches, per namespace, a map from PVC name to the names of the running pods that mount it: map[namespace]map[pvcName][]podName
nsPVCs map[string]map[string][]string
}
func NewPVCAction(f client.Factory) plugincommon.HandlerInitializer {
@@ -78,31 +80,18 @@ func (a *PVCAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup
// Adds pods mounting this PVC to ensure that multiple pods mounting the same RWX
// volume get backed up together.
- pods := new(corev1api.PodList)
- err := a.crClient.List(context.Background(), pods, crclient.InNamespace(pvc.Namespace))
+ pvcs, err := a.getPVCList(pvc.Namespace)
if err != nil {
- return nil, errors.Wrap(err, "failed to list pods")
+ return nil, err
}
- for i := range pods.Items {
- for _, volume := range pods.Items[i].Spec.Volumes {
- if volume.VolumeSource.PersistentVolumeClaim == nil {
- continue
- }
- if volume.PersistentVolumeClaim.ClaimName == pvc.Name {
- if kube.IsPodRunning(&pods.Items[i]) != nil {
- a.log.Infof("Related pod %s is not running, not adding to ItemBlock for PVC %s", pods.Items[i].Name, pvc.Name)
- } else {
- a.log.Infof("Adding related Pod %s to PVC %s", pods.Items[i].Name, pvc.Name)
- relatedItems = append(relatedItems, velero.ResourceIdentifier{
- GroupResource: kuberesource.Pods,
- Namespace: pods.Items[i].Namespace,
- Name: pods.Items[i].Name,
- })
- }
- break
- }
- }
+ for _, pod := range pvcs[pvc.Name] {
+ a.log.Infof("Adding related Pod %s to PVC %s", pod, pvc.Name)
+ relatedItems = append(relatedItems, velero.ResourceIdentifier{
+ GroupResource: kuberesource.Pods,
+ Namespace: pvc.Namespace,
+ Name: pod,
+ })
+ }
// Gather groupedPVCs based on VGS label provided in the backup
@@ -117,6 +106,35 @@ func (a *PVCAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup
return relatedItems, nil
}
func (a *PVCAction) getPVCList(ns string) (map[string][]string, error) {
if a.nsPVCs == nil {
a.nsPVCs = make(map[string]map[string][]string)
}
pvcList, ok := a.nsPVCs[ns]
if ok {
return pvcList, nil
}
pvcList = make(map[string][]string)
pods := new(corev1api.PodList)
err := a.crClient.List(context.Background(), pods, crclient.InNamespace(ns))
if err != nil {
return nil, errors.Wrap(err, "failed to list pods")
}
for i := range pods.Items {
if kube.IsPodRunning(&pods.Items[i]) != nil {
a.log.Debugf("Pod %s is not running, not adding to Pod list for PVC IBA plugin", pods.Items[i].Name)
continue
}
for _, volume := range pods.Items[i].Spec.Volumes {
if volume.VolumeSource.PersistentVolumeClaim != nil {
pvcList[volume.VolumeSource.PersistentVolumeClaim.ClaimName] = append(pvcList[volume.VolumeSource.PersistentVolumeClaim.ClaimName], pods.Items[i].Name)
}
}
}
a.nsPVCs[ns] = pvcList
return pvcList, nil
}
func (a *PVCAction) Name() string {
return "PVCItemBlockAction"
}

View File

@@ -92,12 +92,18 @@ func newRestorer(
_, _ = pvrInformer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
- UpdateFunc: func(_, obj any) {
- pvr := obj.(*velerov1api.PodVolumeRestore)
+ UpdateFunc: func(oldObj, newObj any) {
+ pvr := newObj.(*velerov1api.PodVolumeRestore)
+ pvrOld := oldObj.(*velerov1api.PodVolumeRestore)
if pvr.GetLabels()[velerov1api.RestoreUIDLabel] != string(restore.UID) {
return
}
+ if pvr.Status.Phase == pvrOld.Status.Phase {
+ return
+ }
if pvr.Status.Phase == velerov1api.PodVolumeRestorePhaseCompleted || pvr.Status.Phase == velerov1api.PodVolumeRestorePhaseFailed || pvr.Status.Phase == velerov1api.PodVolumeRestorePhaseCanceled {
r.resultsLock.Lock()
defer r.resultsLock.Unlock()

View File

@@ -81,7 +81,7 @@ func TestEnsureRepo(t *testing.T) {
namespace: "fake-ns",
bsl: "fake-bsl",
repositoryType: "fake-repo-type",
err: "error getting backup repository list: no kind is registered for the type v1.BackupRepositoryList in scheme \"pkg/runtime/scheme.go:100\"",
err: "error getting backup repository list: no kind is registered for the type v1.BackupRepositoryList in scheme",
},
{
name: "success on existing repo",
@@ -128,7 +128,7 @@ func TestEnsureRepo(t *testing.T) {
repo, err := ensurer.EnsureRepo(t.Context(), velerov1.DefaultNamespace, test.namespace, test.bsl, test.repositoryType)
if err != nil {
- require.EqualError(t, err, test.err)
+ require.ErrorContains(t, err, test.err)
} else {
require.NoError(t, err)
}
@@ -190,7 +190,7 @@ func TestCreateBackupRepositoryAndWait(t *testing.T) {
namespace: "fake-ns",
bsl: "fake-bsl",
repositoryType: "fake-repo-type",
err: "unable to create backup repository resource: no kind is registered for the type v1.BackupRepository in scheme \"pkg/runtime/scheme.go:100\"",
err: "unable to create backup repository resource: no kind is registered for the type v1.BackupRepository in scheme",
},
{
name: "get repo fail",
@@ -252,7 +252,7 @@ func TestCreateBackupRepositoryAndWait(t *testing.T) {
RepositoryType: test.repositoryType,
})
if err != nil {
- require.EqualError(t, err, test.err)
+ require.ErrorContains(t, err, test.err)
} else {
require.NoError(t, err)
}

View File

@@ -449,6 +449,35 @@ func StartNewJob(
return maintenanceJob.Name, nil
}
// buildTolerationsForMaintenanceJob builds the tolerations for maintenance jobs.
// It includes the required Windows toleration for backward compatibility and filters
// tolerations from the Velero deployment to only include those with keys that are
// in the ThirdPartyTolerations allowlist, following the same pattern as labels and annotations.
func buildTolerationsForMaintenanceJob(deployment *appsv1api.Deployment) []corev1api.Toleration {
// Start with the Windows toleration for backward compatibility
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
result := []corev1api.Toleration{windowsToleration}
// Filter tolerations from the Velero deployment to only include allowed ones
// Only tolerations that exist on the deployment AND have keys in the allowlist are inherited
deploymentTolerations := veleroutil.GetTolerationsFromVeleroServer(deployment)
for _, k := range util.ThirdPartyTolerations {
for _, toleration := range deploymentTolerations {
if toleration.Key == k {
result = append(result, toleration)
break // Only add the first matching toleration for each allowed key
}
}
}
return result
}
func getPriorityClassName(ctx context.Context, cli client.Client, config *velerotypes.JobConfigs, logger logrus.FieldLogger) string {
// Use the priority class name from the global job configuration if available
// Note: Priority class is only read from global config, not per-repository
@@ -593,15 +622,8 @@ func buildJob(
SecurityContext: podSecurityContext,
Volumes: volumes,
ServiceAccountName: serviceAccount,
- Tolerations: []corev1api.Toleration{
- {
- Key: "os",
- Operator: "Equal",
- Effect: "NoSchedule",
- Value: "windows",
- },
- },
- ImagePullSecrets: imagePullSecrets,
+ Tolerations: buildTolerationsForMaintenanceJob(deployment),
+ ImagePullSecrets: imagePullSecrets,
},
},
},

View File

@@ -698,7 +698,7 @@ func TestWaitAllJobsComplete(t *testing.T) {
{
name: "list job error",
runtimeScheme: schemeFail,
expectedError: "error listing maintenance job for repo fake-repo: no kind is registered for the type v1.JobList in scheme \"pkg/runtime/scheme.go:100\"",
expectedError: "error listing maintenance job for repo fake-repo: no kind is registered for the type v1.JobList in scheme",
},
{
name: "job not exist",
@@ -847,7 +847,7 @@ func TestWaitAllJobsComplete(t *testing.T) {
history, err := WaitAllJobsComplete(test.ctx, fakeClient, repo, 3, velerotest.NewLogger())
if test.expectedError != "" {
- require.EqualError(t, err, test.expectedError)
+ require.ErrorContains(t, err, test.expectedError)
} else {
require.NoError(t, err)
}
@@ -1481,3 +1481,291 @@ func TestBuildJobWithPriorityClassName(t *testing.T) {
})
}
}
func TestBuildTolerationsForMaintenanceJob(t *testing.T) {
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
testCases := []struct {
name string
deploymentTolerations []corev1api.Toleration
expectedTolerations []corev1api.Toleration
}{
{
name: "no tolerations should only include Windows toleration",
deploymentTolerations: nil,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "empty tolerations should only include Windows toleration",
deploymentTolerations: []corev1api.Toleration{},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "non-allowed toleration should not be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "vng-ondemand",
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "allowed toleration should be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
},
},
{
name: "mixed allowed and non-allowed tolerations should only inherit allowed",
deploymentTolerations: []corev1api.Toleration{
{
Key: "vng-ondemand", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
{
Key: "CriticalAddonsOnly", // in allowlist
Operator: "Exists",
Effect: "NoSchedule",
},
{
Key: "custom-key", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "custom-value",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
{
name: "multiple allowed tolerations should all be inherited",
deploymentTolerations: []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a deployment with the specified tolerations
deployment := &appsv1api.Deployment{
Spec: appsv1api.DeploymentSpec{
Template: corev1api.PodTemplateSpec{
Spec: corev1api.PodSpec{
Tolerations: tc.deploymentTolerations,
},
},
},
}
result := buildTolerationsForMaintenanceJob(deployment)
assert.Equal(t, tc.expectedTolerations, result)
})
}
}
func TestBuildJobWithTolerationsInheritance(t *testing.T) {
// Define allowed tolerations that would be set on Velero deployment
allowedTolerations := []corev1api.Toleration{
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
}
// Mixed tolerations (allowed and non-allowed)
mixedTolerations := []corev1api.Toleration{
{
Key: "vng-ondemand", // not in allowlist
Operator: "Equal",
Effect: "NoSchedule",
Value: "amd64",
},
{
Key: "CriticalAddonsOnly", // in allowlist
Operator: "Exists",
Effect: "NoSchedule",
},
}
// Windows toleration that should always be present
windowsToleration := corev1api.Toleration{
Key: "os",
Operator: "Equal",
Effect: "NoSchedule",
Value: "windows",
}
testCases := []struct {
name string
deploymentTolerations []corev1api.Toleration
expectedTolerations []corev1api.Toleration
}{
{
name: "no tolerations on deployment should only have Windows toleration",
deploymentTolerations: nil,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
},
},
{
name: "allowed tolerations should be inherited along with Windows toleration",
deploymentTolerations: allowedTolerations,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "kubernetes.azure.com/scalesetpriority",
Operator: "Equal",
Effect: "NoSchedule",
Value: "spot",
},
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
{
name: "mixed tolerations should only inherit allowed ones",
deploymentTolerations: mixedTolerations,
expectedTolerations: []corev1api.Toleration{
windowsToleration,
{
Key: "CriticalAddonsOnly",
Operator: "Exists",
Effect: "NoSchedule",
},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a new scheme and add necessary API types
localScheme := runtime.NewScheme()
err := velerov1api.AddToScheme(localScheme)
require.NoError(t, err)
err = appsv1api.AddToScheme(localScheme)
require.NoError(t, err)
err = batchv1api.AddToScheme(localScheme)
require.NoError(t, err)
// Create a deployment with the specified tolerations
deployment := &appsv1api.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "velero",
Namespace: "velero",
},
Spec: appsv1api.DeploymentSpec{
Template: corev1api.PodTemplateSpec{
Spec: corev1api.PodSpec{
Containers: []corev1api.Container{
{
Name: "velero",
Image: "velero/velero:latest",
},
},
Tolerations: tc.deploymentTolerations,
},
},
},
}
// Create a backup repository
repo := &velerov1api.BackupRepository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
Namespace: "velero",
},
Spec: velerov1api.BackupRepositorySpec{
VolumeNamespace: "velero",
BackupStorageLocation: "default",
},
}
// Create fake client and add the deployment
client := fake.NewClientBuilder().WithScheme(localScheme).WithObjects(deployment).Build()
// Create minimal job configs and resources
jobConfig := &velerotypes.JobConfigs{}
logLevel := logrus.InfoLevel
logFormat := logging.NewFormatFlag()
logFormat.Set("text")
// Call buildJob
job, err := buildJob(client, t.Context(), repo, "default", jobConfig, logLevel, logFormat, logrus.New())
require.NoError(t, err)
// Verify the tolerations are set correctly
assert.Equal(t, tc.expectedTolerations, job.Spec.Template.Spec.Tolerations)
})
}
}

View File

@@ -56,6 +56,9 @@ type BackupPVC struct {
// SPCNoRelabeling sets Spec.SecurityContext.SELinux.Type to "spc_t" for the pod mounting the backupPVC
// ignored if ReadOnly is false
SPCNoRelabeling bool `json:"spcNoRelabeling,omitempty"`
// Annotations permits setting annotations for the backupPVC
Annotations map[string]string `json:"annotations,omitempty"`
}
type RestorePVC struct {
@@ -81,4 +84,7 @@ type NodeAgentConfigs struct {
// PriorityClassName is the priority class name for data mover pods created by the node agent
PriorityClassName string `json:"priorityClassName,omitempty"`
// PrivilegedFsBackup determines whether to create fs-backup pods as privileged pods
PrivilegedFsBackup bool `json:"privilegedFsBackup,omitempty"`
}

View File

@@ -121,6 +121,7 @@ func (p *Progress) UploadStarted() {}
// CachedFile records the total bytes that have been cached so far
func (p *Progress) CachedFile(fname string, numBytes int64) {
atomic.AddInt64(&p.cachedBytes, numBytes)
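// Cached bytes also count toward processedBytes so bytesDone stays accurate for incremental
// backups that satisfy reads from the cache.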
atomic.AddInt64(&p.processedBytes, numBytes)
p.UpdateProgress()
}

View File

@@ -16,6 +16,7 @@ limitations under the License.
package kube
import (
"context"
"math"
"sync"
"time"
@@ -182,13 +183,13 @@ func (es *eventSink) Create(event *corev1api.Event) (*corev1api.Event, error) {
return event, nil
}
- return es.sink.CreateWithEventNamespace(event)
+ return es.sink.CreateWithEventNamespaceWithContext(context.Background(), event)
}
func (es *eventSink) Update(event *corev1api.Event) (*corev1api.Event, error) {
- return es.sink.UpdateWithEventNamespace(event)
+ return es.sink.UpdateWithEventNamespaceWithContext(context.Background(), event)
}
func (es *eventSink) Patch(event *corev1api.Event, data []byte) (*corev1api.Event, error) {
- return es.sink.PatchWithEventNamespace(event, data)
+ return es.sink.PatchWithEventNamespaceWithContext(context.Background(), event, data)
}

View File

@@ -554,3 +554,19 @@ func GetPVAttachedNode(ctx context.Context, pv string, storageClient storagev1.S
return "", nil
}
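// GetPVAttachedNodes returns the names of all nodes that currently have a VolumeAttachment
// referencing the given PV.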
func GetPVAttachedNodes(ctx context.Context, pv string, storageClient storagev1.StorageV1Interface) ([]string, error) {
vaList, err := storageClient.VolumeAttachments().List(ctx, metav1.ListOptions{})
if err != nil {
return nil, errors.Wrapf(err, "error listing volumeattachment")
}
nodes := []string{}
for _, va := range vaList.Items {
if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pv {
nodes = append(nodes, va.Spec.NodeName)
}
}
return nodes, nil
}

View File

@@ -371,15 +371,16 @@ func VerifyJSONConfigs(ctx context.Context, namespace string, crClient client.Cl
return errors.Errorf("data is not available in ConfigMap %s", configName)
}
+ // Verify all the keys in ConfigMap's data.
jsonString := ""
for _, v := range cm.Data {
jsonString = v
- }
- configs := configType
- err = json.Unmarshal([]byte(jsonString), configs)
- if err != nil {
- return errors.Wrapf(err, "error to unmarshall data from ConfigMap %s", configName)
+ configs := configType
+ err = json.Unmarshal([]byte(jsonString), configs)
+ if err != nil {
+ return errors.Wrapf(err, "error to unmarshall data from ConfigMap %s", configName)
+ }
}
return nil

View File

@@ -28,3 +28,7 @@ var ThirdPartyTolerations = []string{
"kubernetes.azure.com/scalesetpriority",
"CriticalAddonsOnly",
}
const (
VSphereCNSFastCloneAnno = "csi.vsphere.volume/fast-provisioning"
)

View File

@@ -37,6 +37,9 @@ default the source PVC's storage class will be used.
Users can specify the ConfigMap name during Velero installation via the CLI:
`velero install --node-agent-configmap=<ConfigMap-Name>`
- `annotations`: permits setting annotations on the backupPVC itself. This is typically useful for CSI providers that cannot mount a VolumeSnapshot without a custom annotation.
A sample of `backupPVC` config as part of the ConfigMap would look like:
```json
{
@@ -49,8 +52,11 @@ A sample of `backupPVC` config as part of the ConfigMap would look like:
"storageClass": "backupPVC-storage-class"
},
"storage-class-3": {
"readOnly": true
}
"readOnly": true,
"annotations": {
"some-csi.provider.io/readOnlyClone": true
}
},
"storage-class-4": {
"readOnly": true,
"spcNoRelabeling": true

View File

@@ -23,6 +23,8 @@ By default, `velero install` does not install Velero's [File System Backup][3].
If you've already run `velero install` without the `--use-node-agent` flag, you can run the same command again, including the `--use-node-agent` flag, to add the file system backup to your existing install.
Note that for some use cases (including installation on OpenShift clusters) the fs-backup pods must run in a privileged security context. This is configured by setting `privilegedFsBackup` to `true` in the node-agent configmap (see below).
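A minimal sketch of the node-agent ConfigMap data enabling this (the ConfigMap name is whatever you pass to `velero install --node-agent-configmap=<ConfigMap-Name>`) would look like:

```json
{
    "privilegedFsBackup": true
}
```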
## CSI Snapshot Data Movement
Velero node-agent is required by [CSI Snapshot Data Movement][12] when the Velero built-in data mover is used. By default, `velero install` does not install Velero's node-agent. To enable it, specify the `--use-node-agent` flag.

View File

@@ -184,7 +184,7 @@ ginkgo: ${GOBIN}/ginkgo
# This target does not run if ginkgo is already in $GOBIN
${GOBIN}/ginkgo:
- GOBIN=${GOBIN} go install github.com/onsi/ginkgo/v2/ginkgo@v2.19.0
+ GOBIN=${GOBIN} go install github.com/onsi/ginkgo/v2/ginkgo@v2.22.0
.PHONY: run-e2e
run-e2e: ginkgo

View File

@@ -312,9 +312,9 @@ func installVeleroServer(
// TODO: need to consider align options.UseNodeAgentWindows usage
// with options.UseNodeAgent
- // Only version after v1.16.0 support windows node agent.
+ // Only versions after v1.16.0 or release-1.16-dev support the Windows node agent.
if options.WorkerOS == common.WorkerOSWindows &&
- (semver.Compare(version, "v1.16") >= 0 || version == "main") {
+ (semver.Compare(version, "v1.16") >= 0 || version == "main" || strings.Compare(version, "release-1.16-dev") >= 0) {
fmt.Println("Install node-agent-windows. The Velero version is ", version)
args = append(args, "--use-node-agent-windows")
}