Compare commits

..

58 Commits

Author SHA1 Message Date
Xun Jiang/Bruce Jiang
7013a4097f Merge pull request #9479 from blackpiglet/add_role_rolebinding_in_resotre_sequence_1.17
[cherry-pick][release-1.17] Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.
2026-01-09 11:17:35 +08:00
Xun Jiang
b188701862 Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.
Ensure the RBAC resources are restored before pods.
The change helps avoid pod start-up errors when a pod depends on RBAC resources.
For example, the Prometheus operator checks whether it has enough permissions before
launching its controllers; if the operator pod starts before the RBAC resources are
created, it will not launch the controllers, and it will not retry.
f7f07bcdfb/cmd/operator/main.go (L392-L400)
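
As an illustration of the intended ordering (a sketch only; the slice name and surrounding entries are assumptions, not the exact Velero source), the RBAC kinds would sit ahead of pods in the high-priority restore list:

```go
// Hypothetical high-priority restore ordering (illustrative only): listing the
// RBAC kinds before "pods" ensures they already exist when dependent pods start.
var restoreHighPriorities = []string{
	"customresourcedefinitions",
	"namespaces",
	"clusterroles",        // assumed insertion point for the RBAC kinds
	"clusterrolebindings",
	"roles",
	"rolebindings",
	"serviceaccounts",
	"secrets",
	"configmaps",
	"persistentvolumes",
	"persistentvolumeclaims",
	"pods",
}
```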

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-01-08 15:23:19 +08:00
lyndon-li
9d79e483b2 Merge pull request #9458 from Lyndon-Li/release-1.17
1.17.2 changelog
2025-12-26 14:41:11 +08:00
lyndon-li
1e350c02c4 Merge branch 'release-1.17' into release-1.17 2025-12-26 13:46:30 +08:00
Wenkai Yin(尹文开)
339dee02af Merge pull request #9459 from blackpiglet/bump_golang_and_ubuntu
Bump Golang to v1.24.11 and go/x/crypto to v0.45.0 to fix CVEs.
2025-12-26 12:46:59 +08:00
Xun Jiang
77b68121ae Replace golang.org/x/net/context with context package to fix linter issues.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-12-24 14:49:10 +08:00
Xun Jiang
8e35a190c2 Bump Golang to v1.24.11 and go/x/crypto to v0.45.0 to fix CVEs.
Bump paketobuildpacks/run-jammy-tiny to 0.2.90

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-12-24 13:11:11 +08:00
Lyndon-Li
69f2965cc4 1.17.2 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-12-24 11:17:57 +08:00
Shubham Pampattiwar
df05057ba9 Fix managed fields patch for resources using GenerateName (#9408)
* Fix managed fields patch for resources using GenerateName

When restoring resources with GenerateName (where name is empty and K8s
assigns the actual name), the managed fields patch was failing with error
"name is required" because it was using obj.GetName() which returns empty
for GenerateName resources.

The fix uses createdObj.GetName() instead, which contains the actual name
assigned by Kubernetes after resource creation.

This affects any resource using GenerateName for restore, including:
- PersistentVolumeClaims restored by kubevirt-velero-plugin
- Secrets and ConfigMaps created with generateName
- Any custom resources using generateName

Changes:
- Line 1707: Use createdObj.GetName() instead of obj.GetName() in Patch call
- Lines 1702, 1709, 1713, 1716: Use createdObj in error/info messages for accuracy

This is a backwards-compatible fix since:
- For resources WITHOUT generateName: obj.GetName() == createdObj.GetName()
- For resources WITH generateName: createdObj.GetName() has the actual name

The managed fields patch was already correctly using createdObj (lines 1698-1700);
only the Patch() call was incorrectly using obj.

Fixes restore status showing FinalizingPartiallyFailed with "name is required"
error when restoring resources with GenerateName.
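
A minimal sketch of the name selection described above (the helper name and the `metav1.Object` parameters are illustrative, not the exact Velero code):

```go
// managedFieldsPatchName picks the name to target when patching managed fields
// after creation. For generateName resources the backed-up object has an empty
// name, so the name assigned by the API server must come from the created object.
// Assumes metav1 = "k8s.io/apimachinery/pkg/apis/meta/v1".
func managedFieldsPatchName(obj, createdObj metav1.Object) string {
	if name := createdObj.GetName(); name != "" {
		return name // actual name assigned by Kubernetes
	}
	return obj.GetName() // without generateName, both names are identical
}
```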

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 898fa13ed7)

* Add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-11-12 15:33:25 -05:00
lyndon-li
cad0169717 Merge pull request #9409 from shubham-pampattiwar/fix-volume-info-generatename-1.17
Fix volume info generatename 1.17
2025-11-12 17:11:41 +08:00
Shubham Pampattiwar
ba2ed54dc6 add changelog file
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-11-11 12:00:50 -08:00
Shubham Pampattiwar
fe7782788c Fix tests: populate createdName for all created resources
Update test expectations to include createdName field for resources
with action 'created'. Also ensure namespaces track their created
names when created via EnsureNamespaceExistsAndIsReady.

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit c2840f1c74)
2025-11-11 11:56:31 -08:00
Shubham Pampattiwar
d40bb466ff Track actual resource names for GenerateName in restore status
When restoring resources with GenerateName, Kubernetes assigns the actual name
after creation, but Velero only tracked the original name from the backup in
itemKey. This caused volume information collection to fail when trying to fetch
PVCs using the original name instead of the actual created name.

Example:
- Original PVC name from backup: "test-vm-disk-1"
- Actual created PVC name: "test-vm-backup-2025-10-27-test-vm-disk-1-mdjkd"
- Volume info tried to fetch: "test-vm-disk-1" → Failed with "not found"

This affects any plugin or workflow using GenerateName during restore:
- kubevirt-velero-plugin (VMFR use case with PVC collision avoidance)
- Custom restore item actions using generateName
- Secrets/ConfigMaps restored with generateName

Changes:
1. Add createdName field to restoredItemStatus struct (pkg/restore/request.go)
2. Capture actual name from createdObj.GetName() (pkg/restore/restore.go:1520)
3. Use createdName in RestoredResourceList() when available (pkg/restore/request.go:93-95)

This fix is backwards compatible:
- createdName defaults to empty string
- When empty, falls back to itemKey.name (original behavior)
- Only populated for GenerateName resources where needed

Fixes volume information collection errors like:
"Failed to get PVC" error="persistentvolumeclaims \"<original-name>\" not found"

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 07f30d06b9)
2025-11-11 11:55:29 -08:00
Scott Seago
b6202639eb don't copy securitycontext from first container if configmap found (#9394)
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-11-07 14:12:47 -05:00
Wenkai Yin(尹文开)
94f64639ce Merge pull request #9385 from Lyndon-Li/release-1.17
1.17.1 changelog
2025-11-04 14:53:14 +08:00
Lyndon-Li
bf0f30dc59 1.17.1 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-11-04 13:23:19 +08:00
Daniel Jiang
d89ab43153 Merge pull request #9378 from vmware-tanzu/1.17_e2e_fix
Add Windows support for release dev branch.
2025-11-03 15:05:56 +08:00
Xun Jiang
8704b4d7f8 Add Windows support for release dev branch.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-31 11:45:21 +08:00
lyndon-li
4ce4a4803d Merge pull request #9376 from Lyndon-Li/release-1.17
issue 9365: prevent multiple update of PVR
2025-10-29 15:51:18 +08:00
Lyndon-Li
ec7fe10816 issue 9365: prevent multiple update of PVR
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-10-29 15:01:33 +08:00
Wenkai Yin(尹文开)
3ae7183473 Merge pull request #9371 from blackpiglet/1.17.1_bump
Bump base image and Golang version for v1.17.1
2025-10-28 17:58:42 +08:00
Xun Jiang
bd4c53d13e Bump base image and Golang version for v1.17.1
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-28 15:36:01 +08:00
lyndon-li
988bfa55d4 Merge pull request #9341 from Lyndon-Li/release-1.17
[1.17] issue 9332: make bytesDone correct for incremental backup
2025-10-17 11:08:40 +08:00
Lyndon-Li
71ad893618 issue 9332: make bytesDone correct for incremental backup
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-10-17 10:45:41 +08:00
Xun Jiang/Bruce Jiang
1f32333aaa VerifyJSONConfigs verify every elements in Data. (#9303)
Add an error message to the velero install CLI output if VerifyJSONConfigs fails.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-10-04 23:45:26 -04:00
lyndon-li
8ad7827f05 Merge pull request #9300 from sseago/privileged-fs-backup-pods-1.17
[release-1.17] Privileged fs backup pods 1.17
2025-09-28 11:37:06 +08:00
lyndon-li
d0c176077b Merge branch 'release-1.17' into privileged-fs-backup-pods-1.17 2025-09-28 10:54:14 +08:00
Scott Seago
1ca4c54c60 Add option for privileged fs-backup pod
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-26 13:40:39 -04:00
Shubham Pampattiwar
d2eafe63ed Fix maintenance jobs toleration inheritance from Velero deployment (#9299)
fix codespell and add changelog file


(cherry picked from commit 5ba00dfb09)

update changelog filename



update changelog

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-26 11:14:23 -04:00
lyndon-li
14e2e25801 Merge pull request #9297 from Lyndon-Li/release-1.17
[1.17] backupPVC to different node
2025-09-25 13:33:28 +08:00
Lyndon-Li
cf9e7c5fcb backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-25 11:21:18 +08:00
Lyndon-Li
2e00746550 backupPVC to different node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-09-25 11:18:34 +08:00
Shubham Pampattiwar
99b2c57511 Merge pull request #9292 from Lyndon-Li/release-1.17
[1.17] Issue #9247: Protect VolumeSnapshot field from race condition
2025-09-23 06:45:44 -07:00
0xLeo258
d1f7f152b7 Add built-in mutex for SynchronizedVSList && Update unit tests
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:56:16 +08:00
0xLeo258
d82af8d8b5 add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:51:32 +08:00
0xLeo258
a46b86fa29 fix9247: Protect VolumeSnapshot field
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:51:21 +08:00
lyndon-li
96d5fb7210 Merge pull request #9290 from Lyndon-Li/release-1.17
[1.17] Issue #9234: Fix plugin reentry with safe VolumeSnapshotterCache
2025-09-23 13:50:22 +08:00
0xLeo258
c71e065863 add changelog
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:19:36 +08:00
0xLeo258
60338d9740 fix 9234: Add safe VolumeSnapshotterCache
Signed-off-by: 0xLeo258 <noixe0312@gmail.com>
2025-09-23 13:15:19 +08:00
lyndon-li
afa71e9e03 Merge pull request #9277 from shubham-pampattiwar/fix-backup-q-accum-cp
Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
2025-09-19 11:59:11 +08:00
lyndon-li
fc877dd2dc Merge branch 'release-1.17' into fix-backup-q-accum-cp 2025-09-19 11:30:11 +08:00
lyndon-li
bb147b972b Merge pull request #9285 from priyansh17/release-1.17
Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
2025-09-19 11:26:17 +08:00
lyndon-li
079394cd4f Merge branch 'release-1.17' into release-1.17 2025-09-19 10:54:48 +08:00
lyndon-li
fc4394964f Merge pull request #9282 from kaovilai/bitnamiminio-1.17
1.17: Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
2025-09-19 10:54:15 +08:00
Priyansh Choudhary
85f2f23076 Added changelog
Signed-off-by: Priyansh Choudhary im1706@gmail.com
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:20:14 +05:30
Priyansh Choudhary
aa71b53490 Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244)
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2025-09-19 03:20:13 +05:30
Tiger Kaovilai
30cf11a6b1 Fix E2E tests: Build MinIO from Bitnami Dockerfile to replace deprecated image
The Bitnami MinIO image bitnami/minio:2021.6.17-debian-10-r7 is no longer
available on Docker Hub, causing E2E tests to fail.

This change implements a solution to build the MinIO image locally from
Bitnami's public Dockerfile and cache it for subsequent runs:
- Fetches the latest commit hash of the Bitnami MinIO Dockerfile
- Uses GitHub Actions cache to store/retrieve built images
- Only rebuilds when the upstream Dockerfile changes
- Maintains compatibility with existing environment variables

Fixes #9279

🤖 Generated with [Claude Code](https://claude.ai/code)

Update .github/workflows/e2e-test-kind.yaml

Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-09-18 08:53:28 -04:00
Shubham Pampattiwar
f404ff207d Fix Schedule Backup Queue Accumulation During Extended Blocking Scenarios
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 59289fba76)

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-09-17 09:44:44 -07:00
lyndon-li
690b074891 Merge pull request #9266 from sseago/iba-perf-1.17
[release-1.17] Get pod list once per namespace in pvc IBA
2025-09-17 11:15:16 +08:00
Scott Seago
c188c454d7 Get pod list once per namespace in pvc IBA
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-09-16 17:39:30 -04:00
Wenkai Yin(尹文开)
18a690d69e Merge pull request #9220 from kaovilai/9173-release-1.17
release-1.17: feat: Permit specifying annotations for the BackupPVC #9173
2025-09-15 17:05:10 +08:00
lyndon-li
6986cde4d3 Merge branch 'release-1.17' into 9173-release-1.17 2025-09-15 15:46:44 +08:00
lyndon-li
3172d9f99c Merge pull request #9228 from vmware-tanzu/bump_k8s_lib_to_1.33
Bump k8s library to v1.33.
2025-09-09 17:33:34 +08:00
Xun Jiang
c34865a5fc Bump k8s library to v1.33.
Replace deprecated EventExpansion method with WithContext methods.
Modify UTs.
Align the E2E ginkgo CLI version with go.mod

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-09-08 20:37:51 +08:00
Clément Nussbaumer
344b09a582 test: fix backuppvc annotations test case
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 08:57:12 -05:00
Clément Nussbaumer
53c46b01c7 feat: Permit specifying annotations for the BackupPVC
Signed-off-by: Clément Nussbaumer <clement.nussbaumer@postfinance.ch>
2025-08-29 08:57:12 -05:00
lyndon-li
a10f413cab Merge pull request #9216 from Lyndon-Li/release-1.17
Pin version of golang and base image
2025-08-28 15:34:43 +08:00
Lyndon-Li
de1cd6dcbf pin version of golang and base image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-08-28 15:04:51 +08:00
335 changed files with 3763 additions and 17599 deletions

View File

@@ -8,26 +8,18 @@ on:
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
needs: get-go-version
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version-file: 'go.mod'
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cli-cache
@@ -105,20 +97,17 @@ jobs:
needs:
- build
- setup-test-matrix
- get-go-version
runs-on: ubuntu-latest
strategy:
matrix: ${{fromJson(needs.setup-test-matrix.outputs.matrix)}}
fail-fast: false
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version-file: 'go.mod'
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
@@ -185,7 +174,7 @@ jobs:
timeout-minutes: 30
- name: Upload debug bundle
if: ${{ failure() }}
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v4
with:
name: DebugBundle-k8s-${{ matrix.k8s }}-job-${{ strategy.job-index }}
name: DebugBundle
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*

View File

@@ -1,33 +0,0 @@
on:
workflow_call:
inputs:
ref:
description: "The target branch's ref"
required: true
type: string
outputs:
version:
description: "The expected Go version"
value: ${{ jobs.extract.outputs.version }}
jobs:
extract:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.pick-version.outputs.version }}
steps:
- name: Check out the code
uses: actions/checkout@v6
- id: pick-version
run: |
if [ "${{ inputs.ref }}" == "main" ]; then
version=$(grep '^go ' go.mod | awk '{print $2}' | cut -d. -f1-2)
else
goDirectiveVersion=$(grep '^go ' go.mod | awk '{print $2}')
toolChainVersion=$(grep '^toolchain ' go.mod | awk '{print $2}')
version=$(printf "%s\n%s\n" "$goDirectiveVersion" "$toolChainVersion" | sort -V | tail -n1)
fi
echo "version=$version"
echo "version=$version" >> $GITHUB_OUTPUT

View File

@@ -19,7 +19,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master

View File

@@ -12,7 +12,7 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'kind/changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}

View File

@@ -1,26 +1,18 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run CI
needs: get-go-version
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version-file: 'go.mod'
- name: Make ci
run: make ci
- name: Upload test coverage

View File

@@ -8,7 +8,7 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Codespell
uses: codespell-project/actions-codespell@master

View File

@@ -13,7 +13,7 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
name: Checkout
- name: Set up QEMU

View File

@@ -14,7 +14,7 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
name: Checkout
- name: Verify .goreleaser.yml and try a dryrun release.

View File

@@ -7,26 +7,18 @@ on:
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run Linter Check
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version-file: 'go.mod'
- name: Linter check
uses: golangci/golangci-lint-action@v9
uses: golangci/golangci-lint-action@v8
with:
version: v2.5.0
version: v2.1.1
args: --verbose

View File

@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
# The default value is "1" which fetches only a single commit. If we merge PR without squash or rebase,
# there are at least two commits: the first one is the merge commit and the second one is the real commit

View File

@@ -9,24 +9,17 @@ on:
- '*'
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.ref_name }}
build:
name: Build
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
uses: actions/checkout@v5
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version-file: 'go.mod'
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3

View File

@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Automatic Rebase

View File

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v10.1.1
- uses: actions/stale@v9.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.24.11-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.24.11-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.90
LABEL maintainer="Xun Jiang <jxun@vmware.com>"

View File

@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.24.11-bookworm AS velero-builder
ARG GOPROXY
ARG BIN

View File

@@ -42,7 +42,7 @@ The following is a list of the supported Kubernetes versions for each Velero ver
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, and 1.33.1 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25 as tilt-helper
FROM golang:1.24.11 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -1,3 +1,54 @@
## v1.17.2
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.17.2
### Container Image
`velero/velero:v1.17.2`
### Documentation
https://velero.io/docs/v1.17/
### Upgrading
https://velero.io/docs/v1.17/upgrade-to-1.17/
### All Changes
* Track actual resource names for GenerateName in restore status (#9409, @shubham-pampattiwar)
* Fix managed fields patch for resources using GenerateName (#9408, @shubham-pampattiwar)
* don't copy securitycontext from first container if configmap found (#9394, @sseago)
* Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence. (#9479, @blackpiglet)
## v1.17.1
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.17.1
### Container Image
`velero/velero:v1.17.1`
### Documentation
https://velero.io/docs/v1.17/
### Upgrading
https://velero.io/docs/v1.17/upgrade-to-1.17/
### All Changes
* Fix issue #9365, prevent fake completion notification due to multiple update of single PVR (#9376, @Lyndon-Li)
* Fix issue #9332, add bytesDone for cache files (#9341, @Lyndon-Li)
* VerifyJSONConfigs verify every elements in Data. (#9303, @blackpiglet)
* Add option for privileged fs-backup pod (#9300, @sseago)
* Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment (#9299, @shubham-pampattiwar)
* Fix issue #9229, don't attach backupPVC to the source node (#9297, @Lyndon-Li)
* Protect VolumeSnapshot field from race condition during multi-thread backup (#9292, @0xLeo258)
* Implement concurrency control for cache of native VolumeSnapshotter plugin. (#9290, @0xLeo258)
* Backport to 1.17 (PR#9244 Update AzureAD Microsoft Authentication Library to v1.5.0) (#9285, @priyansh17)
* Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases (#9277, @shubham-pampattiwar)
* Get pod list once per namespace in pvc IBA (#9266, @sseago)
* Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244, @priyansh17)
* feat: Permit specifying annotations for the BackupPVC (#9173, @clementnuss)
## v1.17
### Download

View File

@@ -1 +0,0 @@
Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs

View File

@@ -1 +0,0 @@
feat: Enhance BackupStorageLocation with Secret-based CA certificate support

View File

@@ -1 +0,0 @@
Fix issue #7725, add design for backup repo cache configuration

View File

@@ -1 +0,0 @@
Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs

View File

@@ -1 +0,0 @@
feat: Permit specifying annotations for the BackupPVC

View File

@@ -1 +0,0 @@
Remove labels associated with previous backups

View File

@@ -1 +0,0 @@
Get pod list once per namespace in pvc IBA

View File

@@ -1 +0,0 @@
Fix issue #9229, don't attach backupPVC to the source node

View File

@@ -1 +0,0 @@
Update AzureAD Microsoft Authentication Library to v1.5.0

View File

@@ -1 +0,0 @@
Protect VolumeSnapshot field from race condition during multi-thread backup

View File

@@ -1,10 +0,0 @@
Implement wildcard namespace pattern expansion for backup namespace includes/excludes.
This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations.
When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.
Key features:
- Wildcard patterns in namespace includes/excludes are automatically detected and expanded
- Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
- Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
- Exact namespace names and "*" continue to work as before (no expansion needed)

View File

@@ -1 +0,0 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment

View File

@@ -1 +0,0 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases

View File

@@ -1 +0,0 @@
Fix issue #7904, remove the code and doc for PVC node selection

View File

@@ -1 +0,0 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.

View File

@@ -1 +0,0 @@
Fix issue #9193, don't connect repo in repo controller

View File

@@ -1 +0,0 @@
Add option for privileged fs-backup pod

View File

@@ -1 +0,0 @@
Fix issue #9267, add events to data mover prepare diagnostic

View File

@@ -1 +0,0 @@
VerifyJSONConfigs verify every elements in Data.

View File

@@ -1 +0,0 @@
Concurrent backup processing

View File

@@ -1 +0,0 @@
Sanitize Azure HTTP responses in BSL status messages

View File

@@ -1 +0,0 @@
Fix typos in documentation

View File

@@ -1 +0,0 @@
Fix issue #9332, add bytesDone for cache files

View File

@@ -1 +0,0 @@
Add cache configuration to VGDP

View File

@@ -1 +0,0 @@
Fix the Job build error when the BackupRepository name is longer than 63 characters.

View File

@@ -1 +0,0 @@
Add cache dir configuration for udmrepo

View File

@@ -1 +0,0 @@
Add snapshotSize for DataDownload, PodVolumeRestore

View File

@@ -1 +0,0 @@
Add incrementalSize to DU/PVB for reporting new/changed size

View File

@@ -1 +0,0 @@
Support cache volume for generic restore exposer and pod volume exposer

View File

@@ -1 +0,0 @@
Use hookIndex for recording multiple restore exec hooks.

View File

@@ -1 +0,0 @@
Fix managed fields patch for resources using GenerateName

View File

@@ -1 +0,0 @@
Track actual resource names for GenerateName in restore status

View File

@@ -1 +0,0 @@
Add cache volume configuration

View File

@@ -1 +0,0 @@
Fix issue #9365, prevent fake completion notification due to multiple update of single PVR

View File

@@ -1 +0,0 @@
Refactor repo provider interface for static configuration

View File

@@ -1 +0,0 @@
don't copy securitycontext from first container if configmap found

View File

@@ -1 +0,0 @@
Cache volume support for DataDownload

View File

@@ -1 +0,0 @@
Cache volume for PVR

View File

@@ -1 +0,0 @@
Fix issue #9400, connect repo first time after creation so that init params could be written

View File

@@ -1 +0,0 @@
Add Prometheus metrics for maintenance jobs

View File

@@ -1 +0,0 @@
Fix issue #9276, add doc for cache volume support

View File

@@ -1 +0,0 @@
Apply volume policies to VolumeGroupSnapshot PVC filtering

View File

@@ -1 +0,0 @@
Fix issue #9194, add doc for GOMAXPROCS behavior change

View File

@@ -1 +0,0 @@
Remove VolumeSnapshotClass from CSI B/R process.

View File

@@ -1 +0,0 @@
Add PVC-to-Pod cache to improve volume policy performance

View File

@@ -1 +0,0 @@
Fix plugin init container names exceeding DNS-1123 limit

View File

@@ -1 +0,0 @@
Add maintenance job and data mover pod's labels and annotations setting.

View File

@@ -1 +0,0 @@
Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.

View File

@@ -1 +0,0 @@
Fix issue #9478, add diagnose info on expose peek fails

View File

@@ -594,8 +594,6 @@ spec:
description: Phase is the current state of the Backup.
enum:
- New
- Queued
- ReadyToStart
- FailedValidation
- InProgress
- WaitingForPluginOperations
@@ -627,11 +625,6 @@ spec:
filters that happen as items are processed.
type: integer
type: object
queuePosition:
description: |-
QueuePosition is the position of the backup in the queue.
Only relevant when Phase is "Queued"
type: integer
startTimestamp:
description: |-
StartTimestamp records the time a backup was started.

View File

@@ -113,38 +113,10 @@ spec:
description: Bucket is the bucket to use for object storage.
type: string
caCert:
description: |-
CACert defines a CA bundle to use when verifying TLS connections to the provider.
Deprecated: Use CACertRef instead.
description: CACert defines a CA bundle to use when verifying
TLS connections to the provider.
format: byte
type: string
caCertRef:
description: |-
CACertRef is a reference to a Secret containing the CA certificate bundle to use
when verifying TLS connections to the provider. The Secret must be in the same
namespace as the BackupStorageLocation.
properties:
key:
description: The key of the secret to select from. Must be
a valid secret key.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the Secret or its key must be
defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
prefix:
description: Prefix is the path inside a bucket to use for Velero
storage. Optional.

View File

@@ -33,12 +33,6 @@ spec:
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Incremental bytes
format: int64
jsonPath: .status.incrementalBytes
name: Incremental Bytes
priority: 10
type: integer
- description: Name of the Backup Storage Location where this backup should be
stored
jsonPath: .spec.backupStorageLocation
@@ -195,11 +189,6 @@ spec:
format: date-time
nullable: true
type: string
incrementalBytes:
description: IncrementalBytes holds the number of bytes new or changed
since the last backup
format: int64
type: integer
message:
description: Message is a message about the pod volume backup's status.
type: string

View File

@@ -133,10 +133,6 @@ spec:
snapshotID:
description: SnapshotID is the ID of the volume snapshot to be restored.
type: string
snapshotSize:
description: SnapshotSize is the logical size in Bytes of the snapshot.
format: int64
type: integer
sourceNamespace:
description: SourceNamespace is the original namespace for namespace
mapping.

File diff suppressed because one or more lines are too long

View File

@@ -108,10 +108,6 @@ spec:
description: SnapshotID is the ID of the Velero backup snapshot to
be restored from.
type: string
snapshotSize:
description: SnapshotSize is the logical size in Bytes of the snapshot.
format: int64
type: integer
sourceNamespace:
description: |-
SourceNamespace is the original namespace where the volume is backed up from.

View File

@@ -33,12 +33,6 @@ spec:
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Incremental bytes
format: int64
jsonPath: .status.incrementalBytes
name: Incremental Bytes
priority: 10
type: integer
- description: Name of the Backup Storage Location where this backup should be
stored
jsonPath: .spec.backupStorageLocation
@@ -179,11 +173,6 @@ spec:
as a result of the DataUpload.
nullable: true
type: object
incrementalBytes:
description: IncrementalBytes holds the number of bytes new or changed
since the last backup
format: int64
type: integer
message:
description: Message is a message about the DataUpload's status.
type: string

File diff suppressed because one or more lines are too long

View File

@@ -1,70 +0,0 @@
# Apply flag for install command
## Abstract
Add an `--apply` flag to the install command that enables applying existing resources rather than creating them. This can be useful as part of the upgrade process for existing installations.
## Background
The current Velero install command creates resources but doesn't provide a direct way to apply updates to an existing installation.
Users attempting to run the install command on an existing installation receive "already exists" messages.
Upgrade steps for existing installs typically involve a three (or more) step process to apply updated CRDs (using `--dry-run` and piping to `kubectl apply`) and then updating/setting images on the Velero deployment and node-agent.
## Goals
- Provide a simple flag to enable applying resources on an existing Velero installation.
- Use server-side apply to update existing resources rather than attempting to create them.
- Maintain consistency with the regular install flow.
## Non Goals
- Implement special logic for specific version-to-version upgrades (i.e. resource deletion, etc).
- Add complex upgrade validation or pre/post-upgrade hooks.
- Provide rollback capabilities.
## High-Level Design
The `--apply` flag will be added to the Velero install command.
When this flag is set, the installation process will use server-side apply to update existing resources instead of using create on new resources.
This flag can be used as _part_ of the upgrade process, but will not always fully handle an upgrade.
## Detailed Design
The implementation adds a new boolean flag `--apply` to the install command.
This flag will be passed through to the underlying install functions where the resource creation logic resides.
When the flag is set to true:
- The `createOrApplyResource` function will use server-side apply with field manager "velero-cli" and `force=true` to update resources.
- Resources will be applied in the same order as they would be created during installation.
- Custom Resource Definitions will still be processed first, and the system will wait for them to be established before continuing.
The server-side apply approach with `force=true` ensures that resources are updated even if there are conflicts with the last applied state.
This provides a best-effort mechanism to apply resources that follows the same flow as installation but updates resources instead of creating them.
No special handling is added for specific versions or resource structures, making this a general-purpose mechanism for applying resources.
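A minimal sketch of this flow, assuming a controller-runtime client (`createOrApplyResource` is named in this design; the signature and package are illustrative):
```go
package install

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createOrApplyResource either creates the resource (default install path) or,
// when apply is true, server-side applies it with field manager "velero-cli"
// and force=true so conflicting fields are taken over.
func createOrApplyResource(ctx context.Context, c client.Client, obj *unstructured.Unstructured, apply bool) error {
	if apply {
		return c.Patch(ctx, obj, client.Apply, client.FieldOwner("velero-cli"), client.ForceOwnership)
	}
	return c.Create(ctx, obj)
}
```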
## Alternatives Considered
1. Creating a separate `upgrade` command that would duplicate much of the install command logic.
- Rejected due to code duplication and maintenance overhead.
2. Implementing version-specific upgrade logic to handle breaking changes between versions.
- Rejected as overly complex and difficult to maintain across multiple version paths.
- This could be considered again in the future, but is not in the scope of the current design.
3. Adding automatic detection of existing resources and switching to apply mode.
- Rejected as it could lead to unexpected behavior and confusion if users unintentionally apply changes to existing resources.
## Security Considerations
The apply flag maintains the same security profile as the install command.
No additional permissions are required beyond what is needed for resource creation.
The use of `force=true` with server-side apply could potentially override manual changes made to resources, but this is a necessary trade-off to ensure apply is successful.
## Compatibility
This enhancement is compatible with all existing Velero installations as it is a new opt-in flag.
It does not change any resource formats or API contracts.
The apply process is best-effort and does not guarantee compatibility between arbitrary versions of Velero.
Users should still consult release notes for any breaking changes that may require manual intervention.
This flag could be adopted by the helm chart, specifically for CRD updates, to simplify the CRD update job.
## Implementation
The implementation involves:
1. Adding support for `Apply` to the existing Kubernetes client code.
1. Adding the `--apply` flag to the install command options.
1. Changing `createResource` to `createOrApplyResource` and updating it to use server-side apply when the `apply` boolean is set.
The implementation is straightforward and follows existing code patterns.
No migration of state or special handling of specific resources is required.

View File

@@ -1,231 +0,0 @@
# Backup Repository Cache Volume Design
## Glossary & Abbreviation
**Backup Storage**: The storage to store the backup data. Check [Unified Repository design][1] for details.
**Backup Repository**: Backup repository is layered between BR data movers and Backup Storage to provide BR related features that is introduced in [Unified Repository design][1].
**Velero Generic Data Path (VGDP)**: VGDP is the collective of modules that is introduced in [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Data Mover Pods**: Intermediate pods which hold VGDP and complete the data transfer. See [VGDP Micro Service for Volume Snapshot Data Movement][2] and [VGDP Micro Service For fs-backup][3] for details.
**Repository Maintenance Pods**: Pods for [Repository Maintenance Jobs][4], which holds VGDP to run repository maintenance.
## Background
According to the [Unified Repository design][1], Velero uses selectable backup repositories for various backup/restore methods, i.e., fs-backup, volume snapshot data movement, etc. Some backup repositories may need to cache data on the client side for various repository operations, so as to accelerate execution.
In the existing [Backup Repository Configuration][5], we allow users to configure the cache data size (`cacheLimitMB`). However, the cache data is still stored in the root file system of the data mover pods/repository maintenance pods, and therefore in the root file system of the node. This is not good enough, for the following reasons:
- In many distributions, the node's system disk size is predefined, non-configurable, and limited; e.g., the system disk size may be 20G or less
- Velero supports concurrent data movements on each node. The cache in each of the concurrent data mover pods could quickly exhaust the system disk and cause problems like pod eviction, failure of pod creation, degradation of Kubernetes QoS, etc.
We need to allow users to prepare a dedicated location, e.g., a dedicated volume, for the cache.
Not all backup repositories or repository operations require a cache, so we need to define when and how the cache is used.
## Goals
- Create a mechanism for users to configure cache volumes for various pods running VGDP
- Design the workflow to assign the cache volume pod path to backup repositories
- Describe when and how the cache volume is used
## Non-Goals
- The solution is based on [Unified Repository design][1], [VGDP Micro Service for Volume Snapshot Data Movement][2] and [VGDP Micro Service For fs-backup][3], legacy data paths are not supported. E.g., when a pod volume restore (PVR) runs with legacy Restic path, if any data is cached, the cache still resides in the root file system.
## Solution
### Cache Data
Depending on the backup repository, cache data may include payload data or repository metadata, e.g., indexes to the payload data chunks.
Payload data is highly related to the backup data, and normally takes the majority of the repository data as well as the cache data.
Repository metadata is related to the backup repository's chunking algorithm, data chunk mapping method, etc., so its size is not proportional to the backup data size.
On the other hand, for some backup repositories the repository metadata may be significantly large in extreme cases. E.g., Kopia's indexes are per chunk; if there is a huge number of small files in the repository, Kopia's index data may be on the same level as, or even larger than, the payload data.
However, in cases where repository metadata becomes the majority, other bottlenecks may emerge and the concurrency of data movers may be significantly constrained, so the requirement for cache volumes may go away.
Therefore, for now we only consider the cache volume requirement for payload data, and leave the consideration for metadata as a future enhancement.
### Scenarios
Backup repository caching varies by backup repository and by the repository operation performed during a VGDP run. Below are the scenarios in which VGDP runs:
- Data Upload for Backup: this is the process to upload/write the backup data into the backup repository, e.g., DataUpload or PodVolumeBackup. The pieces of data are written almost directly to the repository, with at most a small group staying locally for a short time. That is to say, no large-scale data should be cached for this scenario, so we don't prepare a dedicated cache for it.
- Repository Maintenance: Repository maintenance most often visits the backup repository's metadata, and sometimes it needs to visit the file system directories from the backed-up data. On the other hand, it is not practical to run concurrent maintenance jobs on one node. So the cache data is neither large nor does it affect the root file system too much. Therefore, we don't need to prepare a dedicated cache for this scenario.
- Data Download for Restore: this is the process to download/read the backup data from the backup repository during restore, e.g., DataDownload or PodVolumeRestore. For backup repositories whose data is stored in remote backup storages (e.g., Kopia repositories store data in remote object stores), large amounts of data are cached locally to accelerate the restore. Therefore, we need dedicated cache volumes for this scenario.
- Backup Deletion: During this scenario, the backup repository is connected and metadata is enumerated to find the repository snapshot representing the backup data. That is to say, only metadata is cached, if anything. Therefore, dedicated cache volumes are not required in this scenario.
The above analyses are based on the common behavior of backup repositories and do not consider the case where backup repository metadata takes a majority or a significant proportion of the cache data.
As a conclusion of the analyses, we will create dedicated cache volumes for restore scenarios.
For other scenarios, cache volumes can be added later according to future changes/requirements. The mechanism to expose and connect the cache volumes should work for all scenarios. E.g., if we need to consider the backup repository metadata case, we may need cache volumes for backup and repository maintenance as well; then we can just reuse the same cache volume provisioning and connection mechanism for the backup and repository maintenance scenarios.
### Cache Data and Lifecycle
If available, one cache volume is exclusively assigned to one data mover pod. That is, the cached data is destroyed when the data mover pod completes, at which point the backup repository instance also closes.
Cache data is fully managed by the specific backup repository, so the backup repository may also have its own way to GC the cache data.
That is to say, cache data GC may be launched by the backup repository instance while the data mover pod is running; the remaining data is automatically destroyed when the data mover pod and the cache PVC are destroyed (the cache PVC's `reclaimPolicy` is always `Delete`, so once the cache PVC is destroyed, the volume is destroyed as well). So no special logic is needed for cache data GC.
### Data Size
Cache volumes take storage space and cluster resources (PVC, PV); therefore, cache volumes should be created only when necessary, and the volumes should have a reasonable size based on the cache data size:
- It is not a good bargain to have cache volumes for small backups; small backups will use the resident cache location (the cache location in the root file system)
- The cache data size has a limit; the existing `cacheLimitMB` is used for this purpose. E.g., it could be set to 1024 for a 1TB backup, which means 1GB of data is cached and old cache data exceeding this size will be cleared. Therefore, it is pointless to set the cache volume size much larger than `cacheLimitMB`
### Cache Volume Size
The cache volume size is calculated from the factors below (for restore scenarios):
- **Limit**: The limit on the cache data, represented by `cacheLimitMB`; the default value is 5GB
- **backupSize**: The size of the backup, used as a reference to evaluate whether to create a cache volume. It doesn't mean the backup data always determines the cache data; it is just a reference to evaluate the scale of the backup, and small-scale backups may need little cache data. Sometimes backupSize is irrelevant to the size of the cache data; in this case, ResidentThreshold should not be set and Limit is used directly. It is unlikely that backupSize is unavailable, but once that happens, ResidentThreshold is ignored and Limit is used directly.
- **ResidentThreshold**: The minimum backup size for which a cache volume is created
- **InflationPercentage**: Considering the overhead of the file system and the possible delay of cache cleanup, the final volume size should be inflated relative to the logical size; otherwise the cache volume may overrun. This inflation percentage is hardcoded, e.g., 20%.
A formula is as below:
```
cacheVolumeSize = ((backupSize != 0 ? (backupSize > residentThreshold ? limit : 0) : limit) * (100 + inflationPercentage)) / 100
```
Finally, the `cacheVolumeSize` will be rounded up to GiB considering the UX friendliness, storage friendliness and management friendliness.
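A small Go sketch of the same calculation (the function name and byte-based units are assumptions for illustration):
```go
// cacheVolumeSizeBytes mirrors the formula above and rounds the result up to
// whole GiB. All inputs are in bytes; the inflation percentage is hardcoded.
func cacheVolumeSizeBytes(backupSize, residentThreshold, limit int64) int64 {
	const inflationPercentage = 20
	const gib = int64(1) << 30

	size := limit
	if backupSize != 0 && backupSize <= residentThreshold {
		size = 0 // small backups keep using the resident cache location
	}
	size = size * (100 + inflationPercentage) / 100
	return (size + gib - 1) / gib * gib // round up to GiB
}
```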
### PVC/PV
The PVC for a cache volume is created in the Velero namespace, and a storage class is required for the cache PVC. The PVC's accessMode is `ReadWriteOnce` and volumeMode is `Filesystem`, so the provided storage class should support this specification. Otherwise, if the storage class doesn't support either of the specifications, the data mover pod may hang in the `Pending` state until the data movement's timeout (e.g. `prepareTimeout`) expires, and the data movement will finally fail.
It is not expected that the cache volume is retained after the data mover pod is deleted, so the `reclaimPolicy` for the storage class must be `Delete`.
To detect problems in the storage class and fail earlier, validation is applied to the storage class; if the validation fails, the cache configuration is ignored, and the data mover pod is created without a cache volume.
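As a sketch, the cache PVC could be constructed roughly as follows (illustrative only; assumes a recent `k8s.io/api` where the PVC resources field is `VolumeResourceRequirements`, plus the usual `corev1`, `metav1`, and `resource` imports):
```go
// newCachePVC builds a cache PVC in the Velero namespace with the access mode,
// volume mode, and storage class requirements described above.
func newCachePVC(name, namespace, storageClass string, sizeBytes int64) *corev1.PersistentVolumeClaim {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &fsMode,
			StorageClassName: &storageClass,
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: *resource.NewQuantity(sizeBytes, resource.BinarySI),
				},
			},
		},
	}
}
```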
### Cache Volume Configurations
Below configurations are introduced:
- **residentThresholdMB**: the minimum data size (in MB) to be processed (if available) for a cache volume to be created
- **cacheStorageClass**: the name of the storage class to provision the cache PVC
Unlike `cacheLimitMB`, which is set on and affects the backup repository, the above two configurations are actually data mover configurations describing how to create cache volumes for data mover pods; and the two configurations don't need to be per backup repository. So we add them to the node-agent configuration.
### Sample
Below are some examples of the node-agent configMap with the configurations:
Sample-1:
```json
{
"cacheVolume": {
"storageClass": "sc-1",
"residentThresholdMB": 1024
}
}
```
Sample-2:
```json
{
"cacheVolume": {
"storageClass": "sc-1",
}
}
```
Sample-3:
```json
{
"cacheVolume": {
"residentThresholdMB": 1024
}
}
```
**sample-1**: This is a valid configuration. Restores with backup data size larger than 1G will be assigned a cache volume using storage class `sc-1`.
**sample-2**: This is a valid configuration. Data mover pods are always assigned a cache volume using storage class `sc-1`.
**sample-3**: This is not a valid configuration because the storage class is absent. Velero gives up creating a cache volume.
To create the configMap, users need to save something like the above sample to a json file and then run the command below:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
The cache volume configurations are read by the node-agent server, so users also need to specify `--node-agent-configmap` in the `velero node-agent` parameters.
## Detailed Design
### Backup and Restore
The restore needs to know the backup size in order to calculate the cache volume size, so new fields are added to the DataDownload and PodVolumeRestore CRDs.
A `snapshotSize` field is added to the DataDownload and PodVolumeRestore `spec`:
```yaml
spec:
snapshotID:
description: SnapshotID is the ID of the Velero backup snapshot to
be restored from.
type: string
snapshotSize:
description: SnapshotSize is the logical size of the snapshot.
format: int64
type: integer
```
`snapshotSize` represents the total size of the backup; during restore, the value is transferred from DataUpload/PodVolumeBackup's `Status.Progress.TotalBytes` to DataDownload/PodVolumeRestore.
It is unlikely that `Status.Progress.TotalBytes` from DataUpload/PodVolumeBackup is unavailable, but if that happens, then according to the above formula `residentThresholdMB` is ignored and the cache volume size is calculated directly from the cache limit for the corresponding backup repository.
### Exposer
Cache volume configurations are retrieved by node-agent and passed through DataDownload/PodVolumeRestore to GenericRestore exposer/PodVolume exposer.
The exposers are responsible for calculating the cache volume size, creating cache PVCs, and mounting them to the restorePods.
If the calculated cache volume size is 0, or any of the critical parameters is missing (e.g., the cache volume storage class), the exposers ignore the cache volume configuration and continue creating restorePods without cache volumes, so there is no impact on the result of the restore.
Exposers mount the cache volume to a predefined directory and pass the directory to the data mover pods through the `cache-volume-path` parameter.
Below data structure is added to the exposers' expose parameters:
```go
type GenericRestoreExposeParam struct {
// RestoreSize specifies the data size for the volume to be restored
RestoreSize int64
// CacheVolume specifies the info for cache volumes
CacheVolume *CacheVolumeInfo
}
type PodVolumeExposeParam struct {
// RestoreSize specifies the data size for the volume to be restored
RestoreSize int64
// CacheVolume specifies the info for cache volumes
CacheVolume *repocache.CacheConfigs
}
type CacheConfigs struct {
// StorageClass specifies the storage class for cache volumes
StorageClass string
// Limit specifies the maximum size of the cache data
Limit int64
// ResidentThreshold specifies the minimum size of the cache data to create a cache volume
ResidentThreshold int64
}
```
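For concreteness, below is a minimal sketch of the sizing decision under the rules above; the function name, the omitted byte/MB conversions, and the exact formula are illustrative assumptions (the authoritative formula is defined earlier in this design):
```go
// decideCacheVolumeSize is an illustrative sketch, not the actual exposer code.
// A return value of 0 means no cache PVC is created and the data mover falls
// back to the resident cache location. Unit conversions are omitted for brevity.
func decideCacheVolumeSize(snapshotSize int64, cfg *CacheConfigs) int64 {
	if cfg == nil || cfg.StorageClass == "" {
		// A critical parameter is missing: skip the cache volume.
		return 0
	}
	if snapshotSize > 0 && snapshotSize < cfg.ResidentThreshold {
		// The restore is small enough that the resident location suffices.
		return 0
	}
	// Otherwise size the cache PVC from the repository's cache limit; when
	// snapshotSize is unknown (0), the resident threshold is ignored.
	return cfg.Limit
}
```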
### Data Mover Pods
Data mover pods retrieve the cache volume directory from the `cache-volume-path` parameter and pass it to the Unified Repository.
If the directory is empty, Unified Repository uses the resident location for data cache, that is, the root file system.
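As a rough illustration of that plumbing (the `cache-volume-path` flag name comes from this design; the surrounding wiring is an assumption, not the actual data mover entry point):
```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Flag name per this design; everything else here is illustrative.
	cacheVolumePath := flag.String("cache-volume-path", "",
		"directory of the mounted cache volume; empty means use the resident location")
	flag.Parse()

	if *cacheVolumePath == "" {
		fmt.Println("no cache volume mounted; Unified Repository uses the resident cache location")
		return
	}
	fmt.Println("passing cache directory to Unified Repository:", *cacheVolumePath)
}
```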
### Kopia Repository
Kopia repository supports cache directory configuration for both metadata and data. The existing `SetupConnectOptions` is modified to customize the `CacheDirectory`:
```go
func SetupConnectOptions(ctx context.Context, repoOptions udmrepo.RepoOptions) repo.ConnectOptions {
...
return repo.ConnectOptions{
CachingOptions: content.CachingOptions{
CacheDirectory: cacheDir,
...
},
...
}
}
```
[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: Implemented/vgdp-micro-service/vgdp-micro-service.md
[3]: Implemented/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md
[4]: Implemented/repo_maintenance_job_config.md
[5]: Implemented/backup-repo-config.md


@@ -1,417 +0,0 @@
# Design for BSL Certificate Support Enhancement
## Abstract
This design document describes the enhancement of BackupStorageLocation (BSL) certificate management in Velero, introducing a Secret-based certificate reference mechanism (`caCertRef`) alongside the existing inline certificate field (`caCert`). This enhancement provides a more secure, Kubernetes-native approach to certificate management while enabling future CLI improvements for automatic certificate discovery.
## Background
Currently, Velero supports TLS certificate verification for object storage providers through an inline `caCert` field in the BSL specification. While functional, this approach has several limitations:
- **Security**: Certificates are stored directly in the BSL YAML, potentially exposing sensitive data
- **Management**: Certificate rotation requires updating the BSL resource itself
- **CLI Usability**: Users must manually specify certificates when using CLI commands
- **Size Limitations**: Large certificate bundles can make BSL resources unwieldy
Issue #9097 and PR #8557 highlight the need for improved certificate management that addresses these concerns while maintaining backward compatibility.
## Goals
- Provide a secure, Secret-based certificate storage mechanism
- Maintain full backward compatibility with existing BSL configurations
- Enable future CLI enhancements for automatic certificate discovery
- Simplify certificate rotation and management
- Provide clear migration path for existing users
## Non-Goals
- Removing support for inline certificates immediately
- Changing the behavior of existing BSL configurations
- Implementing client-side certificate validation
- Supporting certificates from ConfigMaps or other resource types
## High-Level Design
### API Changes
#### New Field: CACertRef
```go
type ObjectStorageLocation struct {
// Existing field (now deprecated)
// +optional
// +kubebuilder:deprecatedversion:warning="caCert is deprecated, use caCertRef instead"
CACert []byte `json:"caCert,omitempty"`
// New field for Secret reference
// +optional
CACertRef *corev1api.SecretKeySelector `json:"caCertRef,omitempty"`
}
```
The `SecretKeySelector` follows standard Kubernetes patterns:
```go
type SecretKeySelector struct {
// Name of the Secret
Name string `json:"name"`
// Key within the Secret
Key string `json:"key"`
}
```
### Certificate Resolution Logic
The system follows a priority-based resolution:
1. If `caCertRef` is specified, retrieve certificate from the referenced Secret
2. If `caCert` is specified (and `caCertRef` is not), use the inline certificate
3. If neither is specified, no custom CA certificate is used
### Validation
BSL validation ensures mutual exclusivity:
```go
func (bsl *BackupStorageLocation) Validate() error {
if bsl.Spec.ObjectStorage != nil &&
bsl.Spec.ObjectStorage.CACert != nil &&
bsl.Spec.ObjectStorage.CACertRef != nil {
return errors.New("cannot specify both caCert and caCertRef in objectStorage")
}
return nil
}
```
## Detailed Design
### BSL Controller Changes
The BSL controller incorporates validation during reconciliation:
```go
func (r *backupStorageLocationReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
// ... existing code ...
// Validate BSL configuration
if err := location.Validate(); err != nil {
r.logger.WithError(err).Error("BSL validation failed")
return ctrl.Result{}, err
}
// ... continue reconciliation ...
}
```
### Repository Provider Integration
All repository providers implement consistent certificate handling:
```go
func configureCACert(bsl *velerov1api.BackupStorageLocation, credGetter *credentials.CredentialGetter) ([]byte, error) {
if bsl.Spec.ObjectStorage == nil {
return nil, nil
}
// Prefer caCertRef (new method)
if bsl.Spec.ObjectStorage.CACertRef != nil {
certString, err := credGetter.FromSecret.Get(bsl.Spec.ObjectStorage.CACertRef)
if err != nil {
return nil, errors.Wrap(err, "error getting CA certificate from secret")
}
return []byte(certString), nil
}
// Fall back to caCert (deprecated)
if bsl.Spec.ObjectStorage.CACert != nil {
return bsl.Spec.ObjectStorage.CACert, nil
}
return nil, nil
}
```
### CLI Certificate Discovery Integration
#### Background: PR #8557 Implementation
PR #8557 ("CLI automatically discovers and uses cacert from BSL") was merged in August 2025, introducing automatic CA certificate discovery from BackupStorageLocation for Velero CLI download operations. This eliminated the need for users to manually specify the `--cacert` flag when performing operations like `backup describe`, `backup download`, `backup logs`, and `restore logs`.
#### Current Implementation (Post PR #8557)
The CLI now automatically discovers certificates from BSL through the `pkg/cmd/util/cacert/bsl_cacert.go` module:
```go
// Current implementation only supports inline caCert
func GetCACertFromBSL(ctx context.Context, client kbclient.Client, namespace, bslName string) (string, error) {
// ... fetch BSL ...
if bsl.Spec.ObjectStorage != nil && len(bsl.Spec.ObjectStorage.CACert) > 0 {
return string(bsl.Spec.ObjectStorage.CACert), nil
}
return "", nil
}
```
#### Enhancement with caCertRef Support
This design extends the existing CLI certificate discovery to support the new `caCertRef` field:
```go
// Enhanced implementation supporting both caCert and caCertRef
func GetCACertFromBSL(ctx context.Context, client kbclient.Client, namespace, bslName string) (string, error) {
// ... fetch BSL ...
// Prefer caCertRef over inline caCert
if bsl.Spec.ObjectStorage.CACertRef != nil {
secret := &corev1api.Secret{}
key := types.NamespacedName{
Name: bsl.Spec.ObjectStorage.CACertRef.Name,
Namespace: namespace,
}
if err := client.Get(ctx, key, secret); err != nil {
return "", errors.Wrap(err, "error getting certificate secret")
}
certData, ok := secret.Data[bsl.Spec.ObjectStorage.CACertRef.Key]
if !ok {
return "", errors.Errorf("key %s not found in secret",
bsl.Spec.ObjectStorage.CACertRef.Key)
}
return string(certData), nil
}
// Fall back to inline caCert (deprecated)
if bsl.Spec.ObjectStorage.CACert != nil {
return string(bsl.Spec.ObjectStorage.CACert), nil
}
return "", nil
}
```
#### Certificate Resolution Priority
The CLI follows this priority order for certificate resolution:
1. **`--cacert` flag** - Manual override, highest priority
2. **`caCertRef`** - Secret-based certificate (recommended)
3. **`caCert`** - Inline certificate (deprecated)
4. **System certificate pool** - Default fallback
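A compressed sketch of that ordering (the helper and variable names here are illustrative, not the actual CLI code):
```go
// resolveCACert applies the priority order above. caCertFlag holds the value of
// the --cacert flag; bslCACert holds whatever GetCACertFromBSL returned (from
// caCertRef or the deprecated inline caCert). An empty result means the system
// certificate pool is used.
func resolveCACert(caCertFlag, bslCACert string) string {
	if caCertFlag != "" {
		return caCertFlag
	}
	return bslCACert
}
```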
#### User Experience Improvements
With both PR #8557 and this enhancement:
```bash
# Automatic discovery - works with both caCert and caCertRef
velero backup describe my-backup
velero backup download my-backup
velero backup logs my-backup
velero restore logs my-restore
# Manual override still available
velero backup describe my-backup --cacert /custom/ca.crt
# Debug output shows certificate source
velero backup download my-backup --log-level=debug
# [DEBUG] Resolved CA certificate from BSL 'default' Secret 'storage-ca-cert' key 'ca-bundle.crt'
```
#### RBAC Considerations for CLI
CLI users need read access to Secrets when using `caCertRef`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: velero-cli-user
  namespace: velero
rules:
- apiGroups: ["velero.io"]
  resources: ["backups", "restores", "backupstoragelocations"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
  # Limited to secrets referenced by BSLs
```
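A Role by itself grants nothing until it is bound; a minimal RoleBinding sketch (the subject below is a placeholder) might look like:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: velero-cli-user
  namespace: velero
subjects:
- kind: User
  name: cli-user@example.com   # placeholder subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: velero-cli-user
  apiGroup: rbac.authorization.k8s.io
```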
### Migration Strategy
#### Phase 1: Introduction (Current)
- Add `caCertRef` field
- Mark `caCert` as deprecated
- Both fields supported, mutual exclusivity enforced
#### Phase 2: Migration Period
- Documentation and tools to help users migrate
- Warning messages for `caCert` usage
- CLI enhancements to leverage `caCertRef`
#### Phase 3: Future Removal
- Remove `caCert` field in major version update
- Provide migration tool for automatic conversion
## User Experience
### Creating a BSL with Certificate Reference
1. Create a Secret containing the CA certificate:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: storage-ca-cert
  namespace: velero
type: Opaque
data:
  ca-bundle.crt: <base64-encoded-certificate>
```
2. Reference the Secret in BSL:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: my-bucket
    caCertRef:
      name: storage-ca-cert
      key: ca-bundle.crt
```
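For step 1, the Secret can also be created directly from the certificate file, avoiding manual base64 encoding:
```bash
kubectl create secret generic storage-ca-cert \
  --from-file=ca-bundle.crt=ca.crt \
  -n velero
```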
### Certificate Rotation
With Secret-based certificates:
```bash
# Update the Secret with new certificate
kubectl create secret generic storage-ca-cert \
--from-file=ca-bundle.crt=new-ca.crt \
--dry-run=client -o yaml | kubectl apply -f -
# No BSL update required - changes take effect on next use
```
### CLI Usage Examples
#### Immediate Benefits
- No change required for existing workflows
- Certificate validation errors include helpful context
#### Future CLI Enhancements
```bash
# Automatic certificate discovery
velero backup download my-backup
# Manual override still available
velero backup download my-backup --cacert /custom/ca.crt
# Debug certificate resolution
velero backup download my-backup --log-level=debug
# [DEBUG] Resolved CA certificate from BSL 'default' Secret 'storage-ca-cert'
```
## Security Considerations
### Advantages of Secret-based Storage
1. **Encryption at Rest**: Secrets are encrypted in etcd
2. **RBAC Control**: Fine-grained access control via Kubernetes RBAC
3. **Audit Trail**: Secret access is auditable
4. **Separation of Concerns**: Certificates separate from configuration
### Required Permissions
The Velero server requires additional RBAC permissions:
```yaml
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
# Scoped to secrets referenced by BSLs
```
## Compatibility
### Backward Compatibility
- Existing BSLs with `caCert` continue to function unchanged
- No breaking changes to API
- Gradual migration path
### Forward Compatibility
- Design allows for future enhancements:
  - Multiple certificate support
  - Certificate chain validation
  - Automatic certificate discovery from cloud providers
## Implementation Phases
### Phase 1: Core Implementation ✓ (Current PR)
- API changes with new `caCertRef` field
- Controller validation
- Repository provider updates
- Basic testing
### Phase 2: CLI Enhancement (Future)
- Automatic certificate discovery in CLI
- Enhanced error messages
- Debug logging for certificate resolution
### Phase 3: Migration Tools (Future)
- Automated migration scripts
- Validation tools
- Documentation updates
## Testing
### Unit Tests
- BSL validation logic
- Certificate resolution in providers
- Controller behavior
### Integration Tests
- End-to-end backup/restore with `caCertRef`
- Certificate rotation scenarios
- Migration from `caCert` to `caCertRef`
### Manual Testing Scenarios
1. Create BSL with `caCertRef`
2. Perform backup/restore operations
3. Rotate certificate in Secret
4. Verify continued operation
## Documentation
### User Documentation
- Migration guide from `caCert` to `caCertRef`
- Examples for common cloud providers
- Troubleshooting guide
### API Documentation
- Updated API reference
- Deprecation notices
- Field descriptions
## Alternatives Considered
### ConfigMap-based Storage
- Pros: Similar to Secrets, simpler API
- Cons: Not designed for sensitive data, no encryption at rest
- Decision: Secrets are the Kubernetes-standard for sensitive data
### External Certificate Management
- Pros: Integration with cert-manager, etc.
- Cons: Additional complexity, dependencies
- Decision: Keep it simple, allow users to manage certificates as needed
### Immediate Removal of Inline Certificates
- Pros: Cleaner API, forces best practices
- Cons: Breaking change, migration burden
- Decision: Gradual deprecation respects existing users
## Conclusion
This design provides a secure, Kubernetes-native approach to certificate management in Velero while maintaining backward compatibility. It establishes the foundation for enhanced CLI functionality and improved user experience, addressing the concerns raised in issue #9097 and enabling the features proposed in PR #8557.
The phased approach ensures smooth migration for existing users while delivering immediate security benefits for new deployments.


@@ -1,257 +0,0 @@
# Concurrent Backup Processing
This enhancement will enable Velero to process multiple backups at the same time. This is largely a usability enhancement rather than a performance enhancement, since the overall backup throughput may not be significantly improved over the current implementation, since we are already processing individual backup items in parallel. It is a significant usability improvement, though, as with the current design, a user who submits a small backup may have to wait significantly longer than expected if the backup is submitted immediately after a large backup.
## Background
With the current implementation, only one backup may be `InProgress` at a time. A second backup created will not start processing until the first backup moves on to `WaitingForPluginOperations` or `Finalizing`. This is a usability concern, especially in clusters where multiple users are initiating backups. With this enhancement, we intend to allow multiple backups to be processed concurrently. This will allow backups to start processing immediately, even if a large backup was just submitted by another user. This enhancement will build on top of the prior parallel item processing feature by creating a dedicated ItemBlock worker pool for each running backup. The pool will be created at the beginning of the backup reconcile, and the input channel will be passed to the Kubernetes backupper just like it is in the current release.
The primary challenge is to make sure that the same workload in multiple backups is not backed up concurrently. If that were to happen, we would risk data corruption, especially around the processing of pod hooks and volume backup. For this first release we will take a conservative, high-level approach to overlap detection. Two backups will not run concurrently if there is any overlap in included namespaces. For example, if a backup that includes `ns1` and `ns2` is running, then a second backup for `ns2` and `ns3` will not be started. If a backup which does not filter namespaces is running (either a whole cluster backup or a non-namespace-limited backup with a label selector) then no other backups will be started, since a backup across all namespaces overlaps with any other backup. Calculating item-level overlap for queued backups is problematic since we don't know which items are included in a backup until backup processing has begun. A future release may add ItemBlock overlap detection, where at the item block worker level, the same item will not be processed by two different workers at the same time. This works together with workload conflict detection to further detect conflicts at a more granular level for shared resources between backups. Eventually, with a more complete understanding of individual workloads (either via ItemBlocks or some higher level model), the namespace-level overlap detection may be relaxed in future versions.
## Goals
- Process multiple backups concurrently
- Detect namespace overlap to avoid conflicts
- For queued backups (not yet runnable due to concurrency limits or overlap), indicate the queue position in status
## Non Goals
- Handling NFS PVs when more than one PV point to the same underlying NFS share
- Handling VGDP cancellation for failed backups on restart
- Mounting a PVC for scenarios in which /tmp is too small for the number of concurrent backups
- Providing a mechanism to identify high priority backups which get preferential treatment in terms of ItemBlock worker availability
- Item-level overlap detection (future feature)
- Providing the ability to disable namespace-level overlap detection once Item-level overlap detection is in place (although this may be supported in a future version).
## High-Level Design
### Backup CRD changes
Two new backup phases will be added: `Queued` and `ReadyToStart`. In the Backup workflow, new backups will be moved to the Queued phase when they are added to the backup queue. When a backup is removed from the queue because it is now able to run, it will be moved to the `ReadyToStart` phase, which will allow the backup controller to start processing it.
In addition, a new Status field, `QueuePosition`, will be added to track the backup's current position in the queue.
### New Controller: `backupQueueReconciler`
A new reconciler will be added, `backupQueueReconciler`, which will use the current `backupReconciler` logic for reconciling `New` backups but instead of running the backup, it will move the Backup to the `Queued` phase and set `QueuePosition`.
In addition, this reconciler will periodically reconcile all queued backups (on some configurable time interval) and if there is a runnable backup, remove it from the queue, update `QueuePosition` for any queued backups behind it, and update its phase to `ReadyToStart`.
Queued backups will be reconciled in order based on `QueuePosition`, so the first runnable backup found will be processed. A backup is runnable if both of the following conditions are true:
1) The total number of backups either `InProgress` or `ReadyToStart` is less than the configured number of concurrent backups.
2) The backup has no overlap with any backups currently `InProgress` or `ReadyToStart` or with any `Queued` backups with a higher (i.e. closer to 1) queue position than this backup.
### Updates to Backup controller
The current `backupReconciler` will change its reconciling rules. Instead of watching and reconciling New backups, it will reconcile `ReadyToStart` backups. In addition, it will be configured to run in parallel by setting `MaxConcurrentReconciles` based on the `concurrent-backups` server arg.
The startup (and shutdown) of the ItemBlock worker pool will be moved from reconciler startup to the backup reconcile, which will give each running backup its own dedicated worker pool. The per-backup worker pool will use the existing `--item-block-worker-count` installer/server arg. This means that the maximum number of ItemBlock workers for the entire Velero pod will be the ItemBlock worker count multiplied by concurrentBackups. For example, if concurrentBackups is 5, and itemBlockWorkerCount is 6, then there will be, at most, 30 worker threads active, 6 dedicated to each InProgress backup, but this maximum will only be achieved when the maximum number of backups are InProgress. This also means that each InProgress backup will have a dedicated ItemBlock input channel with the same fixed buffer size.
## Detailed Design
### New Install/Server configuration args
A new install/server arg, `concurrent-backups`, will be added. This will be an int-valued field specifying the number of backups which may be processed concurrently (with phase `InProgress`). If not specified, the default value of 1 will be used.
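For illustration, the two knobs could be combined on the server command line as follows (the `--concurrent-backups` flag name follows this design; the exact install wiring is not final):
```bash
# Proposed usage: 6 ItemBlock workers per backup, up to 3 concurrent backups,
# i.e. at most 18 ItemBlock worker threads across the Velero pod.
velero server --item-block-worker-count=6 --concurrent-backups=3
```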
### Consideration of backup overlap and concurrent backup processing
The primary consideration for running additional backups concurrently is the configured `concurrent-backups` parameter. If the total number of `InProgress` and `ReadyToStart` backups is equal to `concurrent-backups` then any `Queued` backups will remain in the queue.
The second consideration is backup overlap. In order to prevent interaction between running backups (particularly around volume backup and pod hooks), we cannot allow two overlapping backups to run at the same time. For now, we will define overlap broadly -- requiring that two concurrent backups don't include any of the same namespaces. A backup for `ns1` can run concurrently with a backup for `ns2`, but a backup for `[ns1,ns2]` cannot run concurrently with a backup for `ns1`. One consequence of this approach is that a backup which includes all namespaces (even if further filtered by resource or label) cannot run concurrently with *any other backup*.
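A minimal sketch of this overlap check (the helper name and the convention that an empty list or a `*` entry means "all namespaces" are assumptions for illustration):
```go
// namespacesOverlap reports whether two backups may not run concurrently under
// the namespace-level rule described above. An empty list or a "*" entry is
// treated as "all namespaces" and therefore overlaps with everything.
func namespacesOverlap(a, b []string) bool {
	all := func(nss []string) bool {
		if len(nss) == 0 {
			return true
		}
		for _, ns := range nss {
			if ns == "*" {
				return true
			}
		}
		return false
	}
	if all(a) || all(b) {
		return true
	}
	set := make(map[string]struct{}, len(a))
	for _, ns := range a {
		set[ns] = struct{}{}
	}
	for _, ns := range b {
		if _, ok := set[ns]; ok {
			return true
		}
	}
	return false
}
```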
When determining which queued backup to run next, velero will look for the next queued backup which has no overlap with any InProgress backup or any Queued backup ahead of it. The reason we need to consider queued as well as running backups for overlap detection is as follows.
Consider the following scenario. These are the current not-completed backups (ordered from oldest to newest)
1. backup1, includedNamespaces: [ns1, ns2], phase: InProgress
2. backup2, includedNamespaces: [ns2, ns3, ns5], phase: Queued, QueuePosition: 1
3. backup3, includedNamespaces: [ns4, ns3], phase: Queued, QueuePosition: 2
4. backup4, includedNamespaces: [ns5, ns6], phase: Queued, QueuePosition: 3
5. backup5, includedNamespaces: [ns8, ns9], phase: Queued, QueuePosition: 4
Assuming `concurrent-backups` is 2, on the next reconcile, Velero will be able to start a second backup if there is one with no overlap. `backup2` cannot run, since `ns2` overlaps between it and the running `backup1`. If we only considered running overlap (and not queued overlap), then `backup3` could run now. It conflicts with the queued `backup2` on `ns3` but it does not conflict with the running backup. However, if it runs now, then when `backup1` completes, `backup2` still can't run (since it now overlaps with running `backup3` on `ns3`), so `backup4` starts instead. Now when `backup3` completes, `backup2` still can't run (since it now conflicts with `backup4` on `ns5`). This means that even though it was the second backup created, it's the fourth to run -- providing worse time to completion than without parallel backups. If a queued backup has a large number of namespaces (a full-cluster backup for example), it would never run as long as new single-namespace backups keep being added to the queue.
To resolve this problem we consider both running backups as well as backups ahead in the queue when resolving overlap conflicts. In the above scenario, `backup2` can't run yet since it overlaps with the running backup on `ns2`. In addition, `backup3` and `backup4` also can't run yet since they overlap with queued `backup2`. Therefore, `backup5` will run now. Once `backup1` completes, `backup2` will be free to run.
### Backup CRD changes
New Backup phases:
```go
const (
// BackupPhaseQueued means the backup has been added to the
// queue by the BackupQueueReconciler.
BackupPhaseQueued BackupPhase = "Queued"
// BackupPhaseReadyToStart means the backup has been removed from the
// queue by the BackupQueueReconciler and is ready to start.
BackupPhaseReadyToStart BackupPhase = "ReadyToStart"
)
```
In addition, a new Status field, `queuePosition`, will be added to track the backup's current position in the queue.
```go
// QueuePosition is the position held by the backup in the queue.
// QueuePosition=1 means this backup is the next to be considered.
// Only relevant when Phase is "Queued"
// +optional
QueuePosition int `json:"queuePosition,omitempty"`
```
### New Controller: `backupQueueReconciler`
A new reconciler will be added, `backupQueueReconciler`, which will reconcile backups under these conditions:
1) Watching Create/Update for backups in `New` (or empty) phase
2) Watching for Backup phase transition from `InProgress` to something else to reconcile all `Queued` backups
3) Watching for Backup phase transition from `New` (or empty) to `Queued` to reconcile all `Queued` backups
4) Periodic reconcile of `Queued` backups to handle backups queued at server startup as well as to make sure we never have a situation where backups are queued indefinitely because of a race condition or were otherwise missed in the reconcile on prior backup completion.
The reconciler will be set up as follows -- note that New backups are reconciled on Create/Update, while Queued backups are reconciled when an InProgress backup moves on to another state or when a new backup moves to the Queued state. We also reconcile Queued backups periodically to handle the case of a Velero pod restart with Queued backups, as well as to handle possible edge cases where a queued backup doesn't get moved out of the queue at the point of backup completion or an error occurs during a prior Queued backup reconcile.
```go
func (c *backupQueueReconciler) SetupWithManager(mgr ctrl.Manager) error {
// only consider Queued backups, order by QueuePosition
gp := kube.NewGenericEventPredicate(func(object client.Object) bool {
backup := object.(*velerov1api.Backup)
return (backup.Status.Phase == velerov1api.BackupPhaseQueued)
})
s := kube.NewPeriodicalEnqueueSource(c.logger.WithField("controller", constant.ControllerBackupQueue), mgr.GetClient(), &velerov1api.BackupList{}, c.frequency, kube.PeriodicalEnqueueSourceOption{
Predicates: []predicate.Predicate{gp},
OrderFunc: queuePositionOrderFunc,
})
return ctrl.NewControllerManagedBy(mgr).
For(&velerov1api.Backup{}, builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(ue event.UpdateEvent) bool {
backup := ue.ObjectNew.(*velerov1api.Backup)
return backup.Status.Phase == "" || backup.Status.Phase == velerov1api.BackupPhaseNew
},
CreateFunc: func(ce event.CreateEvent) bool {
backup := ce.Object.(*velerov1api.Backup)
return backup.Status.Phase == "" || backup.Status.Phase == velerov1api.BackupPhaseNew
},
DeleteFunc: func(de event.DeleteEvent) bool {
return false
},
GenericFunc: func(ge event.GenericEvent) bool {
return false
},
})).
Watch(
&source.Kind{Type: &velerov1api.Backup{}},
&handler.EnqueueRequestsFromMapFunc{
ToRequests: handler.ToRequestsFunc(func(a handler.MapObject) []reconcile.Request {
backupList := velerov1api.BackupList{}
if err := c.List(ctx, &backupList); err != nil {
c.logger.WithError(err).Error("error listing backups")
return nil
}
requests := []reconcile.Request{}
// filter backup list by Phase=Queued
// sort backup list by queuePosition
return requests
}),
},
builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(ue event.UpdateEvent) bool {
oldBackup := ue.ObjectOld.(*velerov1api.Backup)
newBackup := ue.ObjectNew.(*velerov1api.Backup)
return oldBackup.Status.Phase == velerov1api.BackupPhaseInProgress &&
newBackup.Status.Phase != velerov1api.BackupPhaseInProgress ||
oldBackup.Status.Phase != velerov1api.BackupPhaseQueued &&
newBackup.Status.Phase == velerov1api.BackupPhaseQueued
},
CreateFunc: func(event.CreateEvent) bool {
return false
},
DeleteFunc: func(de event.DeleteEvent) bool {
return false
},
GenericFunc: func(ge event.GenericEvent) bool {
return false
},
}).
WatchesRawSource(s).
Named(constant.ControllerBackupQueue).
Complete(c)
}
```
New backups will be queued: Phase will be set to `Queued`, and `QueuePosition` will be set to an int value incremented from the highest current `QueuePosition` value among Queued backups.
Queued backups will be removed from the queue if runnable:
1) If the total number of backups either InProgress or ReadyToStart is greater than or equal to the concurrency limit, then exit without removing from the queue.
2) If the current backup overlaps with any InProgress, ReadyToStart, or Queued backup with `QueuePosition < currentBackup.QueuePosition` then exit without removing from the queue.
3) If we get here, the backup is runnable. To resolve a potential race condition where an InProgress backup completes between reconciling the backup with QueuePosition `n-1` and reconciling the current backup with QueuePosition `n`, we also check to see whether there are any runnable backups in the queue ahead of this one. The only time this will happen is if a backup completes immediately before reconcile starts which either frees up a concurrency slot or removes a namespace conflict. In this case, we don't want to run the current backup since the one ahead of this one in the queue (which was recently passed over before the InProgress backup completed) must run first. In this case, exit without removing from the queue.
4) If we get here, remove the backup from the queue by setting Phase to `ReadyToStart` and `QueuePosition` to zero. Decrement the `QueuePosition` of any other Queued backups with a `QueuePosition` higher than the current backup's queue position prior to dequeuing. At this point, the backup reconciler will start the backup.
```
switch original.Status.Phase {
case "", velerov1api.BackupPhaseNew:
	// enqueue backup -- set phase=Queued, set queuePosition=maxCurrentQueuePosition+1
case velerov1api.BackupPhaseQueued:
	// We should only ever get these events when added in order by the periodical enqueue source,
	// so as long as the current backup has no conflicts ahead of it or running, we should be good
	// to dequeue.
	// list backups, filter on Queued, ReadyToStart, and InProgress
	// if len(inProgressBackups)+len(readyToStartBackups) >= concurrentBackups, exit
	// generate list of all namespaces included in InProgress, ReadyToStart, and Queued backups with
	// queuePosition < backup.Status.QueuePosition
	// if overlap found, exit
	// check backups ahead of this one in the queue for runnability. If any are runnable, exit
	// dequeue backup: set Phase to ReadyToStart, QueuePosition to 0, and decrement QueuePosition
	// for all Queued backups behind this one in the queue
}
```
The queue controller will run as a single reconciler thread, so we will not need to deal with concurrency issues when moving backups from New to Queued or from Queued to ReadyToStart, and all of the updates to QueuePosition will be from a single thread.
### Updates to Backup controller
The Reconcile logic will be updated to respond to ReadyToStart backups instead of New backups:
```
@@ -234,8 +234,8 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
// InProgress, we still need this check so we can return nil to indicate we've finished processing
// this key (even though it was a no-op).
switch original.Status.Phase {
- case "", velerov1api.BackupPhaseNew:
- // only process new backups
+ case velerov1api.BackupPhaseReadyToStart:
+ // only process ReadyToStart backups
default:
b.logger.WithFields(logrus.Fields{
"backup": kubeutil.NamespaceAndName(original),
```
In addition, it will be configured to run in parallel by setting `MaxConcurrentReconciles` based on the `concurrent-backups` server arg.
```
@@ -149,6 +149,9 @@ func NewBackupReconciler(
func (b *backupReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&velerov1api.Backup{}).
+ WithOptions(controller.Options{
+ MaxConcurrentReconciles: concurrentBackups,
+ }).
Named(constant.ControllerBackup).
Complete(b)
}
```
The controller-runtime core reconciler logic already prevents the same resource from being reconciled by two different reconciler threads, so we don't need to worry about concurrency issues at the controller level.
The workerPool reference will be moved from the backupReconciler to the backupRequest, since this will now be backup-specific, and the initialization code for the worker pool will be moved from the reconciler init into the backup reconcile. This worker pool will be shut down upon exiting the Reconcile method.
### Resilience to restart of velero pod
The new backup phases (`Queued` and `ReadyToStart`) will be resilient to velero pod restarts. If the velero pod crashes or is restarted, only backups in the `InProgress` phase will be failed, so there is no change to current behavior. Queued backups will retain their queue position on restart, and ReadyToStart backups will move to InProgress when reconciled.
### Observability
#### Logging
When a backup is dequeued, an info log message will also include the wait time, calculated as `now - creationTimestamp`. When a backup is passed over due to overlap, an info log message will indicate which namespaces were in conflict.
#### Velero CLI
The `velero backup describe` output will include the current queue position for queued backups.


@@ -1,115 +0,0 @@
# Wildcard Namespace Support
## Abstract
Velero currently treats namespace patterns with glob characters as literal strings. This design adds wildcard expansion to support flexible namespace selection using patterns like `app-*` or `test-{dev,staging}`.
## Background
Requested in [#1874](https://github.com/vmware-tanzu/velero/issues/1874) for more flexible namespace selection.
## Goals
- Support glob pattern expansion in namespace includes/excludes
- Maintain backward compatibility with existing `*` behavior
## Non-Goals
- Complex regex patterns beyond basic globs
## High-Level Design
Wildcard expansion occurs early in both backup and restore flows, converting patterns to literal namespace lists before normal processing.
### Backup Flow
Expansion happens in `getResourceItems()` before namespace collection:
1. Check if wildcards exist using `ShouldExpandWildcards()`
2. Expand patterns against active cluster namespaces
3. Replace includes/excludes with expanded literal namespaces
4. Continue with normal backup processing
### Restore Flow
Expansion occurs in `execute()` after parsing backup contents:
1. Extract available namespaces from backup tar
2. Expand patterns against backup namespaces (not cluster namespaces)
3. Update restore context with expanded namespaces
4. Continue with normal restore processing
This ensures restore wildcards match actual backup contents, not current cluster state.
## Detailed Design
### Status Fields
Add wildcard expansion tracking to backup and restore CRDs:
```go
type WildcardNamespaceStatus struct {
// IncludeWildcardMatches records namespaces that matched include patterns
// +optional
IncludeWildcardMatches []string `json:"includeWildcardMatches,omitempty"`
// ExcludeWildcardMatches records namespaces that matched exclude patterns
// +optional
ExcludeWildcardMatches []string `json:"excludeWildcardMatches,omitempty"`
// WildcardResult records final namespaces after wildcard processing
// +optional
WildcardResult []string `json:"wildcardResult,omitempty"`
}
// Added to both BackupStatus and RestoreStatus
type BackupStatus struct {
// WildcardNamespaces contains wildcard expansion results
// +optional
WildcardNamespaces *WildcardNamespaceStatus `json:"wildcardNamespaces,omitempty"`
}
```
### Wildcard Expansion Package
New `pkg/util/wildcard/expand.go` package provides:
- `ShouldExpandWildcards()` - Skip expansion for simple "*" case
- `ExpandWildcards()` - Main expansion function using `github.com/gobwas/glob`
- Pattern validation rejecting unsupported regex symbols
**Supported patterns**: `*`, `?`, `[abc]`, `{a,b,c}`
**Unsupported**: `|()`, `**`
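A minimal sketch of pattern expansion with `github.com/gobwas/glob` (the helper name is illustrative; the real `ExpandWildcards()` also handles excludes and pattern validation):
```go
package wildcard

import "github.com/gobwas/glob"

// expandIncludes matches each include pattern against the candidate namespace
// list and returns the literal namespace names that matched.
func expandIncludes(namespaces, patterns []string) ([]string, error) {
	matched := map[string]struct{}{}
	for _, p := range patterns {
		g, err := glob.Compile(p)
		if err != nil {
			return nil, err
		}
		for _, ns := range namespaces {
			if g.Match(ns) {
				matched[ns] = struct{}{}
			}
		}
	}
	result := make([]string, 0, len(matched))
	for ns := range matched {
		result = append(result, ns)
	}
	return result, nil
}
```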
### Implementation Details
#### Backup Integration (`pkg/backup/item_collector.go`)
Expansion in `getResourceItems()`:
- Call `wildcard.ExpandWildcards()` with cluster namespaces
- Update `NamespaceIncludesExcludes` with expanded results
- Populate status fields with expansion results
#### Restore Integration (`pkg/restore/restore.go`)
Expansion in `execute()`:
```go
if wildcard.ShouldExpandWildcards(includes, excludes) {
availableNamespaces := extractNamespacesFromBackup(backupResources)
expandedIncludes, expandedExcludes, err := wildcard.ExpandWildcards(
availableNamespaces, includes, excludes)
// Update context and status
}
```
## Alternatives Considered
1. **Client-side expansion**: Rejected because it wouldn't work for scheduled backups
2. **Expansion in `collectNamespaces`**: Rejected because these functions expect literal namespaces
## Compatibility
Maintains full backward compatibility - existing "*" behavior unchanged.
## Implementation
Target: Velero 1.18

go.mod

@@ -1,14 +1,16 @@
module github.com/vmware-tanzu/velero
go 1.25.0
go 1.24.0
toolchain go1.24.11
require (
cloud.google.com/go/storage v1.57.2
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1
cloud.google.com/go/storage v1.55.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1
github.com/aws/aws-sdk-go-v2 v1.24.1
github.com/aws/aws-sdk-go-v2/config v1.26.3
github.com/aws/aws-sdk-go-v2/credentials v1.16.14
@@ -31,22 +33,22 @@ require (
github.com/onsi/gomega v1.36.1
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/client_golang v1.22.0
github.com/prometheus/client_model v0.6.2
github.com/robfig/cron/v3 v3.0.1
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.10.0
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.11.1
github.com/stretchr/testify v1.10.0
github.com/vmware-tanzu/crash-diagnostics v0.3.7
go.uber.org/zap v1.27.1
golang.org/x/mod v0.30.0
golang.org/x/oauth2 v0.33.0
go.uber.org/zap v1.27.0
golang.org/x/mod v0.29.0
golang.org/x/oauth2 v0.30.0
golang.org/x/text v0.31.0
google.golang.org/api v0.256.0
google.golang.org/grpc v1.77.0
google.golang.org/protobuf v1.36.10
google.golang.org/api v0.241.0
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.33.3
k8s.io/apiextensions-apiserver v0.33.3
@@ -63,19 +65,19 @@ require (
)
require (
cel.dev/expr v0.24.0 // indirect
cloud.google.com/go v0.121.6 // indirect
cloud.google.com/go/auth v0.17.0 // indirect
cel.dev/expr v0.23.0 // indirect
cloud.google.com/go v0.121.1 // indirect
cloud.google.com/go/auth v0.16.2 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cloud.google.com/go/compute/metadata v0.7.0 // indirect
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
@@ -93,18 +95,18 @@ require (
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/edsrzf/mmap-go v1.2.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
@@ -112,36 +114,36 @@ require (
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gofrs/flock v0.13.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.14.2 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/cronexpr v1.1.3 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/klauspost/reedsolomon v1.12.6 // indirect
github.com/klauspost/reedsolomon v1.12.4 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/minio/crc64nvme v1.1.0 // indirect
github.com/minio/crc64nvme v1.0.1 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.97 // indirect
github.com/minio/minio-go/v7 v7.0.94 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
@@ -153,44 +155,44 @@ require (
github.com/natefinch/atomic v1.0.1 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/oklog/run v1.0.0 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect
github.com/pierrec/lz4 v2.6.1+incompatible // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/common v0.67.4 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/prometheus/common v0.65.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/tinylib/msgp v1.3.0 // indirect
github.com/vladimirvivien/gexe v0.1.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/blake3 v0.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.opentelemetry.io/otel v1.37.0 // indirect
go.opentelemetry.io/otel/metric v1.37.0 // indirect
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
@@ -198,4 +200,4 @@ require (
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777

go.sum

@@ -1,7 +1,7 @@
al.essio.dev/pkg/shellescape v1.5.1 h1:86HrALUujYS/h+GtqoB26SBEdkWfmMI6FubjXlsXyho=
al.essio.dev/pkg/shellescape v1.5.1/go.mod h1:6sIqp7X2P6mThCQ7twERpZTuigpr6KbZWtls1U8I890=
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.23.0 h1:wUb94w6OYQS4uXraxo9U+wUAs9jT47Xvl4iPgAwM2ss=
cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -24,10 +24,10 @@ cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPT
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c=
cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI=
cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
cloud.google.com/go v0.121.1 h1:S3kTQSydxmu1JfLRLpKtxRPA7rSrYPRPEUmL/PavVUw=
cloud.google.com/go v0.121.1/go.mod h1:nRFlrHq39MNVWu+zESP2PosMWA0ryJw8KUBZ2iZpxbw=
cloud.google.com/go/auth v0.16.2 h1:QvBAGFPLrDeoiNjyfVunhQ10HKNYuOwZ5noee0M5df4=
cloud.google.com/go/auth v0.16.2/go.mod h1:sRBas2Y1fB1vZTdurouM0AzuYQBMZinrUYL8EufhtEA=
cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
@@ -36,8 +36,8 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
@@ -45,8 +45,8 @@ cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/longrunning v0.6.7 h1:IGtfDWHhQCgCjwQjV9iiLnUta9LBCo8R9QmAFsS/PrE=
cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY=
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
@@ -59,19 +59,19 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
cloud.google.com/go/storage v1.55.0 h1:NESjdAToN9u1tmhVqhXCaCwYBuvEhZLLv0gBr+2znf0=
cloud.google.com/go/storage v1.55.0/go.mod h1:ztSmTTwzsdXe5syLVS0YsbFxXuvEmEyZj7v7zChEmuY=
cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1 h1:Hk5QBxZQC1jb2Fwj6mpzme37xbCDdNTxU7O9eb5+LB4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1/go.mod h1:IYus9qsFobWIc2YVwe/WPjcnyCkPKtnHAqUYeebc8z0=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1 h1:Wc1ml6QlJs2BHQ/9Bqu1jiyggbsSjramq2oUmp5WeIo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0 h1:ui3YNbxfW7J3tTFIZMH6LIGRjCngp+J+nIFlnizfNTE=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0/go.mod h1:gZmgV+qBqygoznvqo2J9oKZAFziqhLZ2xE/WVUmzkHA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v2 v2.0.0 h1:PTFGRSlMKCQelWwxUyYVEUqseBJVemLyqWJjvMyt0do=
@@ -80,10 +80,10 @@ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v3 v3.1.0 h1:2qsI
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v3 v3.1.0/go.mod h1:AW8VEadnhw9xox+VaVd9sP7NjzOAnaZBLRH6Tq3cJ38=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0 h1:Dd+RhdJn0OTtVGaeDLZpcumkIVCtA/3/Fo42+eoYvVM=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0/go.mod h1:5kakwfW5CjC9KK+Q4wjXAg+ShuIm2mBMua0ZFj2C8PE=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0 h1:LR0kAX9ykz8G4YgLCaRDVJ3+n43R8MneB5dTy2konZo=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
@@ -95,20 +95,20 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 h1:XRzhVemXdgvJqCH0sFfrBUTnUJSBrBf7++ypk+twtRs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5 h1:IEjq88XO4PuBDcvmjQJcQGg+w+UaafSy8G5Kcb5tBhI=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5/go.mod h1:exZ0C/1emQJAw5tHOaUDyY1ycttqBAPcxuzf7QbY6ec=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 h1:sBEjpZlNHzK1voKq9695PJSX2o5NEXl7/OL3coiIY0c=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0/go.mod h1:P4WPRUkOhJC13W//jWpyfJNDAIpvRbAUIYLX/4jtlE0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 h1:ErKg/3iS1AKcTkf3yixlZ54f9U1rljCkQyEXWUnIUxc=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0/go.mod h1:yAZHSGnqScoU556rBOVkwLze6WP5N+U11RHuWaGVxwY=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 h1:fYE9p3esPxA/C0rQ0AHhP0drtPXDRhaWiwg1DPqO7IU=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0/go.mod h1:BnBReJLvVYx2CS/UHOgVz2BXKXD9wsQPxZug20nZhd0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0 h1:OqVGm6Ei3x5+yZmSJG1Mh2NwHvpVmZ08CB5qJhT9Nuk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0/go.mod h1:SZiPHWGOOk3bl8tkevxkoiwPgsIl6CwrWcbwjfHZpdM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 h1:6/0iUd0xrnX7qt+mLNRwg5c0PGv8wpE8K90ryANQwMI=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0/go.mod h1:otE2jQekW/PqXk1Awf5lmfokJx4uwuqcj1ab5SpGeW0=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -189,8 +189,8 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y+bSQAYZnetRJ70VMVKm5CKI0=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f h1:C5bqEmzEPLsHm9Mv73lSE9e9bKV23aB1vxOsmZrkl3k=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -211,6 +211,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
@@ -227,10 +229,10 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329 h1:K+fnvUM0VZ7ZFJf0n4L/BRlnsb9pL/GuDG6FqaH+PwM=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329/go.mod h1:Alz8LEClvR7xKsrq3qzoc4N0guvVNSS8KmSChGYr9hs=
github.com/envoyproxy/go-control-plane/envoy v1.35.0 h1:ixjkELDE+ru6idPxcHLj8LBVc2bFP7iBytj353BoHUo=
github.com/envoyproxy/go-control-plane/envoy v1.35.0/go.mod h1:09qwbGVuSWWAyN5t/b3iyVfz5+z8QWGrzkoqm/8SbEs=
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
@@ -264,8 +266,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-jose/go-jose/v4 v4.0.5 h1:M6T8+mKZl/+fNNuFHvGIzDz7BTLQPIounk/b9dw3AaE=
github.com/go-jose/go-jose/v4 v4.0.5/go.mod h1:s3P1lRrkT8igV8D9OjyL4WRyHvjB6a4JSllnOrmmBOA=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
@@ -299,19 +301,21 @@ github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1v
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.13.0 h1:95JolYOvGMqeH31+FC7D2+uULf6mG61mEZ/A8dRYMzw=
github.com/gofrs/flock v0.13.0/go.mod h1:jxeyy9R1auM5S6JYDBhDt+E2TCo7DkratH4Pgi8P+Z0=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -399,12 +403,12 @@ github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/googleapis/gax-go/v2 v2.14.2 h1:eBLnkZ9635krYIPD+ag1USrOAI0Nr0QYF3+/3GqO0k0=
github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
@@ -420,12 +424,12 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmg
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hanwen/go-fuse/v2 v2.9.0 h1:0AOGUkHtbOVeyGLr0tXupiid1Vg7QB7M6YUcdmVdC58=
github.com/hanwen/go-fuse/v2 v2.9.0/go.mod h1:yE6D2PqWwm3CbYRxFXV9xUd8Md5d6NG0WBs5spCswmI=
github.com/hanwen/go-fuse/v2 v2.8.0 h1:wV8rG7rmCz8XHSOwBZhG5YcVqcYjkzivjmbaMafPlAs=
github.com/hanwen/go-fuse/v2 v2.8.0/go.mod h1:yE6D2PqWwm3CbYRxFXV9xUd8Md5d6NG0WBs5spCswmI=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/cronexpr v1.1.3 h1:rl5IkxXN2m681EfivTlccqIryzYJSXRGRNa0xeG7NA4=
github.com/hashicorp/cronexpr v1.1.3/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/cronexpr v1.1.2 h1:wG/ZYIKT+RT3QkOdgYc+xsKWVRgnxJ1OJtjjy84fJ9A=
github.com/hashicorp/cronexpr v1.1.2/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.14.1 h1:nQcJDQwIAGnmoUWp8ubocEX40cCml/17YkF6csQLReU=
@@ -482,20 +486,18 @@ github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXw
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/klauspost/reedsolomon v1.12.6 h1:8pqE9aECQG/ZFitiUD1xK/E83zwosBAZtE3UbuZM8TQ=
github.com/klauspost/reedsolomon v1.12.6/go.mod h1:ggJT9lc71Vu+cSOPBlxGvBN6TfAS77qB4fp8vJ05NSA=
github.com/klauspost/reedsolomon v1.12.4 h1:5aDr3ZGoJbgu/8+j45KtUJxzYm8k08JGtB9Wx1VQ4OA=
github.com/klauspost/reedsolomon v1.12.4/go.mod h1:d3CzOMOt0JXGIFZm1StgkyF14EYr3xneR2rNWo7NcMU=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kopia/htmluibuild v0.0.1-0.20251125011029-7f1c3f84f29d h1:U3VB/cDMsPW4zB4JRFbVRDzIpPytt889rJUKAG40NPA=
github.com/kopia/htmluibuild v0.0.1-0.20251125011029-7f1c3f84f29d/go.mod h1:h53A5JM3t2qiwxqxusBe+PFgGcgZdS+DWCQvG5PTlto=
github.com/kopia/htmluibuild v0.0.1-0.20250607181534-77e0f3f9f557 h1:je1C/xnmKxnaJsIgj45me5qA51TgtK9uMwTxgDw+9H0=
github.com/kopia/htmluibuild v0.0.1-0.20250607181534-77e0f3f9f557/go.mod h1:h53A5JM3t2qiwxqxusBe+PFgGcgZdS+DWCQvG5PTlto=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -533,12 +535,12 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/minio/crc64nvme v1.1.0 h1:e/tAguZ+4cw32D+IO/8GSf5UVr9y+3eJcxZI2WOO/7Q=
github.com/minio/crc64nvme v1.1.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/crc64nvme v1.0.1 h1:DHQPrYPdqK7jQG/Ls5CTBZWeex/2FMS3G5XGkycuFrY=
github.com/minio/crc64nvme v1.0.1/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.97 h1:lqhREPyfgHTB/ciX8k2r8k0D93WaFqxbJX36UZq5occ=
github.com/minio/minio-go/v7 v7.0.97/go.mod h1:re5VXuo0pwEtoNLsNuSr0RrLfT/MBtohwdaSmPPSRSk=
github.com/minio/minio-go/v7 v7.0.94 h1:1ZoksIKPyaSt64AVOyaQvhDOgVC3MfZsWM6mZXRUGtM=
github.com/minio/minio-go/v7 v7.0.94/go.mod h1:71t2CqDt3ThzESgZUlU1rBN54mksGGlkLcFgguDnnAc=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -597,8 +599,8 @@ github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCko
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9 h1:1/WtZae0yGtPq+TI6+Tv1WTxkukpXeMlviSxvL7SRgk=
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9/go.mod h1:x3N5drFsm2uilKKuuYo6LdyD8vZAW55sH/9w+pbo1sw=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c h1:dAMKvw0MlJT1GshSTtih8C2gDs04w8dReiOGXrGLNoY=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pierrec/lz4 v2.6.1+incompatible h1:9UY3+iC23yxF0UfGaYrGplQ+79Rg+h/q9FV9ix19jjM=
github.com/pierrec/lz4 v2.6.1+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
@@ -615,12 +617,12 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197 h1:iGkfuELGvFCqW+zcrhf2GsOwNH1nWYBsC69IOc57KJk=
github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197/go.mod h1:RL4KehCNKEIDNltN7oruSa3ldwBNVPmQbwmN3Schbjc=
github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777 h1:T7t+u+mnF33qFTDq7bIMSMB51BEA8zkD7aU6tFQNZ6E=
github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777/go.mod h1:qlSnPHrsV8eEeU4l4zqEw8mJ5CUeXr7PDiJNI4r4Bus=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -628,20 +630,22 @@ github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNw
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.67.4 h1:yR3NqWO1/UyO1w2PhUvXlGQs/PtFmoveVO0KZ4+Lvsc=
github.com/prometheus/common v0.67.4/go.mod h1:gP0fq6YjjNCLssJCQp0yk4M8W6ikLURwkdd/YKtTbyI=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/redis/go-redis/v9 v9.8.0 h1:q3nRvjrlge/6UD7eTu/DSg2uYiU2mCL0G/uzBWqhicI=
github.com/redis/go-redis/v9 v9.8.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -679,8 +683,8 @@ github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
github.com/spiffe/go-spiffe/v2 v2.6.0 h1:l+DolpxNWYgruGQVV0xsfeya3CsC7m8iBzDnMpsbLuo=
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE=
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -698,8 +702,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tg123/go-htpasswd v1.2.4 h1:HgH8KKCjdmo7jjXWN9k1nefPBd7Be3tFCTjc2jPraPU=
github.com/tg123/go-htpasswd v1.2.4/go.mod h1:EKThQok9xHkun6NBMynNv6Jmu24A33XdZzzl4Q7H1+0=
@@ -727,6 +731,8 @@ github.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY=
github.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/blake3 v0.2.4 h1:KYQPkhpRtcqh0ssGYcKLG1JYvddkEA8QwCM/yBqhaZI=
github.com/zeebo/blake3 v0.2.4/go.mod h1:7eeQ6d2iXWRGF6npfaxl2CU+xy2Fjo2gxeyZGCRUjcE=
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
github.com/zeebo/pcg v1.0.1 h1:lyqfGeWiv4ahac6ttHs+I5hwtH/+1mrhlCtVNQM2kHo=
github.com/zeebo/pcg v1.0.1/go.mod h1:09F0S9iiKrwn9rlI5yjLkmrug154/YRW6KnnXVDM/l4=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
@@ -740,26 +746,26 @@ go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 h1:ZoYbqX7OaA/TAikspPl3ozPI6iY6LiIY9I8cUfm+pJs=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0/go.mod h1:SU+iU7nu5ud4oCb3LQOhIZ3nRLj6FNVrKgtflbaf2ts=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 h1:F7q2tNlCaHY9nMKHR6XH9/qkp8FktLnIcy6jJNyOCQw=
go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.starlark.net v0.0.0-20201006213952-227f4aabceb5/go.mod h1:f0znQkUKRrkk36XxWbGjMqQM8wGv/xHBVE2qc3B5oFU=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
@@ -774,10 +780,8 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
@@ -829,8 +833,8 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -891,8 +895,8 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -992,8 +996,8 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1055,8 +1059,6 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -1079,8 +1081,8 @@ google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjR
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.44.0/go.mod h1:EBOGZqzyhtvMDoxwS97ctnh0zUmYY6CxqXsc1AvkYD8=
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
google.golang.org/api v0.241.0 h1:QKwqWQlkc6O895LchPEDUSYr22Xp3NCxpQRiWTB6avE=
google.golang.org/api v0.241.0/go.mod h1:cOVEm2TpdAGHL2z+UwyS+kmlGr3bVWQQ6sYEqkKje50=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1132,12 +1134,12 @@ google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 h1:mepRgnBZa07I4TRuomDE4sTIYieg/osKmzIf4USdWS4=
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8/go.mod h1:fDMmzKV90WSg1NbozdqrE64fkuTv6mlq2zxo9ad+3yo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 h1:1tXaIXCracvtsRxSBsYDiSBN0cuJvM7QYW+MrpIRY78=
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:49MsLSx0oWMOZqcpB3uL8ZOkAh1+TndpJ8ONoCBWiZk=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -1159,8 +1161,8 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1174,8 +1176,8 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$TARGETPLATFORM golang:1.25-bookworm
FROM --platform=$TARGETPLATFORM golang:1.24.11-bookworm
ARG GOPROXY
@@ -94,7 +94,7 @@ RUN ARCH=$(go env GOARCH) && \
chmod +x /usr/bin/goreleaser
# get golangci-lint
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.5.0
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.1.1
# install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/$(go env GOARCH)/kubectl


@@ -1,5 +1,5 @@
diff --git a/go.mod b/go.mod
index 5f939c481..6ae17f4a1 100644
index 5f939c481..f6205aa3c 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,31 @@ require (
@@ -14,13 +14,13 @@ index 5f939c481..6ae17f4a1 100644
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.36.0
+ golang.org/x/net v0.38.0
+ golang.org/x/crypto v0.45.0
+ golang.org/x/net v0.47.0
+ golang.org/x/oauth2 v0.28.0
+ golang.org/x/sync v0.12.0
+ golang.org/x/sys v0.31.0
+ golang.org/x/term v0.30.0
+ golang.org/x/text v0.23.0
+ golang.org/x/sync v0.18.0
+ golang.org/x/sys v0.38.0
+ golang.org/x/term v0.37.0
+ golang.org/x/text v0.31.0
+ google.golang.org/api v0.114.0
)
@@ -64,11 +64,11 @@ index 5f939c481..6ae17f4a1 100644
)
-go 1.18
+go 1.23.0
+go 1.24.0
+
+toolchain go1.23.7
+toolchain go1.24.11
diff --git a/go.sum b/go.sum
index 026e1d2fa..805792055 100644
index 026e1d2fa..4a37e7ac7 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,24 @@
@@ -170,8 +170,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
+golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -181,8 +181,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
+golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -194,8 +194,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
+golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -205,21 +205,21 @@ index 026e1d2fa..805792055 100644
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
+golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
+golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
+golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=


@@ -103,14 +103,6 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
snapCont.ResourceVersion = ""
if snapCont.Spec.VolumeSnapshotClassName != nil {
// Delete VolumeSnapshotClass from the VolumeSnapshotContent.
// This is necessary to make the deletion independent of the VolumeSnapshotClass.
snapCont.Spec.VolumeSnapshotClassName = nil
p.log.Debugf("Deleted VolumeSnapshotClassName from VolumeSnapshotContent %s to make deletion independent of VolumeSnapshotClass",
snapCont.Name)
}
if err := p.crClient.Create(context.TODO(), &snapCont); err != nil {
return errors.Wrapf(err, "fail to create VolumeSnapshotContent %s", snapCont.Name)
}
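
The hunk above revolves around re-creating a VolumeSnapshotContent after stripping the fields that would either be rejected on creation (ResourceVersion) or couple the object to a VolumeSnapshotClass. A minimal, self-contained sketch of that recreate pattern follows; the helper name, import versions, and client wiring are assumptions for illustration, not Velero's actual API.

package sketch

import (
	"context"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
	"github.com/pkg/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// recreateVSC is a hypothetical helper showing the pattern in the hunk above:
// clear the fields that block or constrain re-creation, then Create the object.
func recreateVSC(ctx context.Context, c client.Client, snapCont snapshotv1.VolumeSnapshotContent) error {
	// A stale ResourceVersion would cause the API server to reject the Create.
	snapCont.ResourceVersion = ""
	// Dropping the class reference makes deletion of the re-created object
	// independent of the VolumeSnapshotClass, as the comment above explains.
	snapCont.Spec.VolumeSnapshotClassName = nil
	if err := c.Create(ctx, &snapCont); err != nil {
		return errors.Wrapf(err, "fail to create VolumeSnapshotContent %s", snapCont.Name)
	}
	return nil
}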


@@ -70,7 +70,7 @@ func TestVSCExecute(t *testing.T) {
},
{
name: "Normal case, VolumeSnapshot should be deleted",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).VolumeSnapshotClassName("volumesnapshotclass").Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: false,
function: func(
@@ -82,7 +82,7 @@ func TestVSCExecute(t *testing.T) {
},
},
{
name: "Error case, deletion fails",
name: "Normal case, VolumeSnapshot should be deleted",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: true,


@@ -169,7 +169,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
hookLog.Error(err)
errors = append(errors, err)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), hook.hookIndex, true, err)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, true, err)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -195,7 +195,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
hookFailed = true
}
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), hook.hookIndex, hookFailed, hookErr)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, hookFailed, hookErr)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -239,7 +239,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
// containers to become ready.
// Each unexecuted hook is logged as an error and this error will be returned from this function.
for _, hooks := range byContainer {
for _, hook := range hooks {
for i, hook := range hooks {
if hook.executed {
continue
}
@@ -252,7 +252,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
},
)
errTracker := multiHookTracker.Record(restoreName, pod.Namespace, pod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), hook.hookIndex, true, err)
errTracker := multiHookTracker.Record(restoreName, pod.Namespace, pod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, true, err)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
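
The alternating lines in the hunk above differ only in whether the hook tracker is keyed by the loop index i or by a per-hook stored index. Below is a tiny standalone sketch of why that matters when a container's hooks carry non-sequential original indices (0, 2, 4, as in the bug #9359 test case further down); the struct and field names are illustrative assumptions, not the real handler types.

package main

import "fmt"

// restoreHook is a stand-in for the real hook type; hookIndex mirrors the
// per-hook index stored when the hook was first grouped by container.
type restoreHook struct {
	name      string
	hookIndex int
}

func main() {
	// Hooks for one container whose original indices are non-sequential,
	// matching the 0/2/4 layout of the test case below.
	hooks := []restoreHook{
		{name: "first-hook", hookIndex: 0},
		{name: "second-hook", hookIndex: 2},
		{name: "third-hook", hookIndex: 4},
	}
	for i, h := range hooks {
		// The loop index (0, 1, 2) drifts away from the stored index, so a
		// tracker keyed by i would record entries under the wrong hook.
		fmt.Printf("%s: loop index %d, stored index %d\n", h.name, i, h.hookIndex)
	}
}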


@@ -706,130 +706,6 @@ func TestWaitExecHandleHooks(t *testing.T) {
},
},
},
{
name: "Multiple hooks with non-sequential indices (bug #9359)",
initialPod: builder.ForPod("default", "my-pod").
Containers(&corev1api.Container{
Name: "container1",
}).
ContainerStatuses(&corev1api.ContainerStatus{
Name: "container1",
State: corev1api.ContainerState{
Running: &corev1api.ContainerStateRunning{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "first-hook",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
hookIndex: 0,
},
{
HookName: "second-hook",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/bar"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
hookIndex: 2,
},
{
HookName: "third-hook",
HookSource: HookSourceAnnotation,
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/third"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{Duration: time.Second},
WaitTimeout: metav1.Duration{Duration: time.Minute},
},
hookIndex: 4,
},
},
},
expectedExecutions: []expectedExecution{
{
name: "first-hook",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{Duration: time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
Containers(&corev1api.Container{
Name: "container1",
}).
ContainerStatuses(&corev1api.ContainerStatus{
Name: "container1",
State: corev1api.ContainerState{
Running: &corev1api.ContainerStateRunning{},
},
}).
Result(),
},
{
name: "second-hook",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/bar"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{Duration: time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
Containers(&corev1api.Container{
Name: "container1",
}).
ContainerStatuses(&corev1api.ContainerStatus{
Name: "container1",
State: corev1api.ContainerState{
Running: &corev1api.ContainerStateRunning{},
},
}).
Result(),
},
{
name: "third-hook",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/third"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{Duration: time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
Containers(&corev1api.Container{
Name: "container1",
}).
ContainerStatuses(&corev1api.ContainerStatus{
Name: "container1",
State: corev1api.ContainerState{
Running: &corev1api.ContainerStateRunning{},
},
}).
Result(),
},
},
expectedErrors: nil,
},
}
for _, test := range tests {


@@ -146,9 +146,6 @@ func (p *Policies) BuildPolicy(resPolicies *ResourcePolicies) error {
if len(con.PVCLabels) > 0 {
volP.conditions = append(volP.conditions, &pvcLabelsCondition{labels: con.PVCLabels})
}
if len(con.PVCPhase) > 0 {
volP.conditions = append(volP.conditions, &pvcPhaseCondition{phases: con.PVCPhase})
}
p.volumePolicies = append(p.volumePolicies, volP)
}
@@ -194,9 +191,6 @@ func (p *Policies) GetMatchAction(res any) (*Action, error) {
if data.PVC != nil {
volume.parsePVC(data.PVC)
}
case data.PVC != nil:
// Handle PVC-only scenarios (e.g., unbound PVCs)
volume.parsePVC(data.PVC)
default:
return nil, errors.New("failed to convert object")
}


@@ -983,69 +983,6 @@ volumePolicies:
},
skip: false,
},
{
name: "PVC phase matching - Pending phase should skip",
yamlData: `version: v1
volumePolicies:
- conditions:
pvcPhase: ["Pending"]
action:
type: skip`,
vol: nil,
podVol: nil,
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "pvc-pending",
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimPending,
},
},
skip: true,
},
{
name: "PVC phase matching - Bound phase should not skip",
yamlData: `version: v1
volumePolicies:
- conditions:
pvcPhase: ["Pending"]
action:
type: skip`,
vol: nil,
podVol: nil,
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "pvc-bound",
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimBound,
},
},
skip: false,
},
{
name: "PVC phase matching - Multiple phases (Pending, Lost)",
yamlData: `version: v1
volumePolicies:
- conditions:
pvcPhase: ["Pending", "Lost"]
action:
type: skip`,
vol: nil,
podVol: nil,
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "pvc-lost",
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimLost,
},
},
skip: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
@@ -1122,53 +1059,32 @@ func TestParsePVC(t *testing.T) {
name string
pvc *corev1api.PersistentVolumeClaim
expectedLabels map[string]string
expectedPhase string
expectErr bool
}{
{
name: "valid PVC with labels and Pending phase",
name: "valid PVC with labels",
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"env": "prod"},
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimPending,
},
},
expectedLabels: map[string]string{"env": "prod"},
expectedPhase: "Pending",
expectErr: false,
},
{
name: "valid PVC with Bound phase",
name: "valid PVC with empty labels",
pvc: &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{},
},
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimBound,
},
},
expectedLabels: nil,
expectedPhase: "Bound",
expectErr: false,
},
{
name: "valid PVC with Lost phase",
pvc: &corev1api.PersistentVolumeClaim{
Status: corev1api.PersistentVolumeClaimStatus{
Phase: corev1api.ClaimLost,
},
},
expectedLabels: nil,
expectedPhase: "Lost",
expectErr: false,
},
{
name: "nil PVC pointer",
pvc: (*corev1api.PersistentVolumeClaim)(nil),
expectedLabels: nil,
expectedPhase: "",
expectErr: false,
},
}
@@ -1179,66 +1095,6 @@ func TestParsePVC(t *testing.T) {
s.parsePVC(tc.pvc)
assert.Equal(t, tc.expectedLabels, s.pvcLabels)
assert.Equal(t, tc.expectedPhase, s.pvcPhase)
})
}
}
func TestPVCPhaseMatch(t *testing.T) {
tests := []struct {
name string
condition *pvcPhaseCondition
volume *structuredVolume
expectedMatch bool
}{
{
name: "match Pending phase",
condition: &pvcPhaseCondition{phases: []string{"Pending"}},
volume: &structuredVolume{pvcPhase: "Pending"},
expectedMatch: true,
},
{
name: "match multiple phases - Pending matches",
condition: &pvcPhaseCondition{phases: []string{"Pending", "Bound"}},
volume: &structuredVolume{pvcPhase: "Pending"},
expectedMatch: true,
},
{
name: "match multiple phases - Bound matches",
condition: &pvcPhaseCondition{phases: []string{"Pending", "Bound"}},
volume: &structuredVolume{pvcPhase: "Bound"},
expectedMatch: true,
},
{
name: "no match for different phase",
condition: &pvcPhaseCondition{phases: []string{"Pending"}},
volume: &structuredVolume{pvcPhase: "Bound"},
expectedMatch: false,
},
{
name: "no match for empty phase",
condition: &pvcPhaseCondition{phases: []string{"Pending"}},
volume: &structuredVolume{pvcPhase: ""},
expectedMatch: false,
},
{
name: "match with empty phases list (always match)",
condition: &pvcPhaseCondition{phases: []string{}},
volume: &structuredVolume{pvcPhase: "Pending"},
expectedMatch: true,
},
{
name: "match with nil phases list (always match)",
condition: &pvcPhaseCondition{phases: nil},
volume: &structuredVolume{pvcPhase: "Pending"},
expectedMatch: true,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := tc.condition.match(tc.volume)
assert.Equal(t, tc.expectedMatch, result)
})
}
}

View File

@@ -51,7 +51,6 @@ type structuredVolume struct {
csi *csiVolumeSource
volumeType SupportedVolume
pvcLabels map[string]string
pvcPhase string
}
func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
@@ -71,11 +70,8 @@ func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
}
func (s *structuredVolume) parsePVC(pvc *corev1api.PersistentVolumeClaim) {
if pvc != nil {
if len(pvc.GetLabels()) > 0 {
s.pvcLabels = pvc.Labels
}
s.pvcPhase = string(pvc.Status.Phase)
if pvc != nil && len(pvc.GetLabels()) > 0 {
s.pvcLabels = pvc.Labels
}
}
@@ -114,31 +110,6 @@ func (c *pvcLabelsCondition) validate() error {
return nil
}
// pvcPhaseCondition defines a condition that matches if the PVC's phase matches any of the provided phases.
type pvcPhaseCondition struct {
phases []string
}
func (c *pvcPhaseCondition) match(v *structuredVolume) bool {
// No phases specified: always match.
if len(c.phases) == 0 {
return true
}
if v.pvcPhase == "" {
return false
}
for _, phase := range c.phases {
if v.pvcPhase == phase {
return true
}
}
return false
}
func (c *pvcPhaseCondition) validate() error {
return nil
}
type capacityCondition struct {
capacity capacity
}

View File

@@ -46,7 +46,6 @@ type volumeConditions struct {
CSI *csiVolumeSource `yaml:"csi,omitempty"`
VolumeTypes []SupportedVolume `yaml:"volumeTypes,omitempty"`
PVCLabels map[string]string `yaml:"pvcLabels,omitempty"`
PVCPhase []string `yaml:"pvcPhase,omitempty"`
}
func (c *capacityCondition) validate() error {

View File

@@ -170,9 +170,6 @@ type SnapshotDataMovementInfo struct {
// Moved snapshot data size.
Size int64 `json:"size"`
// Moved snapshot incremental size.
IncrementalSize int64 `json:"incrementalSize,omitempty"`
// The DataUpload's Status.Phase value
Phase velerov2alpha1.DataUploadPhase
}
@@ -220,9 +217,6 @@ type PodVolumeInfo struct {
// The snapshot corresponding volume size.
Size int64 `json:"size,omitempty"`
// The incremental snapshot size.
IncrementalSize int64 `json:"incrementalSize,omitempty"`
// The type of the uploader that uploads the data. The valid values are `kopia` and `restic`.
UploaderType string `json:"uploaderType"`
@@ -246,15 +240,14 @@ type PodVolumeInfo struct {
func newPodVolumeInfoFromPVB(pvb *velerov1api.PodVolumeBackup) *PodVolumeInfo {
return &PodVolumeInfo{
SnapshotHandle: pvb.Status.SnapshotID,
Size: pvb.Status.Progress.TotalBytes,
IncrementalSize: pvb.Status.IncrementalBytes,
UploaderType: pvb.Spec.UploaderType,
VolumeName: pvb.Spec.Volume,
PodName: pvb.Spec.Pod.Name,
PodNamespace: pvb.Spec.Pod.Namespace,
NodeName: pvb.Spec.Node,
Phase: pvb.Status.Phase,
SnapshotHandle: pvb.Status.SnapshotID,
Size: pvb.Status.Progress.TotalBytes,
UploaderType: pvb.Spec.UploaderType,
VolumeName: pvb.Spec.Volume,
PodName: pvb.Spec.Pod.Name,
PodNamespace: pvb.Spec.Pod.Namespace,
NodeName: pvb.Spec.Node,
Phase: pvb.Status.Phase,
}
}

View File

@@ -1,11 +1,9 @@
package volumehelper
import (
"context"
"fmt"
"strings"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -13,7 +11,6 @@ import (
crclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
@@ -36,16 +33,8 @@ type volumeHelperImpl struct {
// to the volume policy check, but fs-backup is based on the pod resource,
// the resource filter on PVC and PV doesn't work on this scenario.
backupExcludePVC bool
// pvcPodCache provides cached PVC to Pod mappings for improved performance.
// When there are many PVCs and pods, using this cache avoids O(N*M) lookups.
pvcPodCache *podvolumeutil.PVCPodCache
}
// NewVolumeHelperImpl creates a VolumeHelper without PVC-to-Pod caching.
//
// Deprecated: Use NewVolumeHelperImplWithNamespaces or NewVolumeHelperImplWithCache instead
// for better performance. These functions provide PVC-to-Pod caching which avoids O(N*M)
// complexity when there are many PVCs and pods. See issue #9179 for details.
func NewVolumeHelperImpl(
volumePolicy *resourcepolicies.Policies,
snapshotVolumes *bool,
@@ -54,43 +43,6 @@ func NewVolumeHelperImpl(
defaultVolumesToFSBackup bool,
backupExcludePVC bool,
) VolumeHelper {
// Pass nil namespaces - no cache will be built, so this never fails.
// This is used by plugins that don't need the cache optimization.
vh, _ := NewVolumeHelperImplWithNamespaces(
volumePolicy,
snapshotVolumes,
logger,
client,
defaultVolumesToFSBackup,
backupExcludePVC,
nil,
)
return vh
}
// NewVolumeHelperImplWithNamespaces creates a VolumeHelper with a PVC-to-Pod cache for improved performance.
// The cache is built internally from the provided namespaces list.
// This avoids O(N*M) complexity when there are many PVCs and pods.
// See issue #9179 for details.
// Returns an error if cache building fails - callers should not proceed with backup in this case.
func NewVolumeHelperImplWithNamespaces(
volumePolicy *resourcepolicies.Policies,
snapshotVolumes *bool,
logger logrus.FieldLogger,
client crclient.Client,
defaultVolumesToFSBackup bool,
backupExcludePVC bool,
namespaces []string,
) (VolumeHelper, error) {
var pvcPodCache *podvolumeutil.PVCPodCache
if len(namespaces) > 0 {
pvcPodCache = podvolumeutil.NewPVCPodCache()
if err := pvcPodCache.BuildCacheForNamespaces(context.Background(), namespaces, client); err != nil {
return nil, err
}
logger.Infof("Built PVC-to-Pod cache for %d namespaces", len(namespaces))
}
return &volumeHelperImpl{
volumePolicy: volumePolicy,
snapshotVolumes: snapshotVolumes,
@@ -98,33 +50,7 @@ func NewVolumeHelperImplWithNamespaces(
client: client,
defaultVolumesToFSBackup: defaultVolumesToFSBackup,
backupExcludePVC: backupExcludePVC,
pvcPodCache: pvcPodCache,
}, nil
}
// NewVolumeHelperImplWithCache creates a VolumeHelper using an externally managed PVC-to-Pod cache.
// This is used by plugins that build the cache lazily per-namespace (following the pattern from PR #9226).
// The cache can be nil, in which case PVC-to-Pod lookups will fall back to direct API calls.
func NewVolumeHelperImplWithCache(
backup velerov1api.Backup,
client crclient.Client,
logger logrus.FieldLogger,
pvcPodCache *podvolumeutil.PVCPodCache,
) (VolumeHelper, error) {
resourcePolicies, err := resourcepolicies.GetResourcePoliciesFromBackup(backup, client, logger)
if err != nil {
return nil, errors.Wrap(err, "failed to get volume policies from backup")
}
return &volumeHelperImpl{
volumePolicy: resourcePolicies,
snapshotVolumes: backup.Spec.SnapshotVolumes,
logger: logger,
client: client,
defaultVolumesToFSBackup: boolptr.IsSetToTrue(backup.Spec.DefaultVolumesToFsBackup),
backupExcludePVC: boolptr.IsSetToTrue(backup.Spec.SnapshotMoveData),
pvcPodCache: pvcPodCache,
}, nil
}
func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error) {
@@ -179,12 +105,10 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
// Use cached lookup if available for better performance with many PVCs/pods
pods, err := podvolumeutil.GetPodsUsingPVCWithCache(
pods, err := podvolumeutil.GetPodsUsingPVC(
pv.Spec.ClaimRef.Namespace,
pv.Spec.ClaimRef.Name,
v.client,
v.pvcPodCache,
)
if err != nil {
v.logger.WithError(err).Errorf("fail to get pod for PV %s", pv.Name)
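The removed comments in this file describe the motivation for the PVC-to-Pod cache: with many PVCs and pods, re-listing pods for every PVC is an O(N*M) pattern (issue #9179). A minimal sketch of that caching idea is below; the real podvolumeutil.PVCPodCache API is not shown in this diff, so the type and method names are illustrative assumptions.

// Hedged sketch: index pods by the PVCs they mount, once per namespace, so
// later "which pods use this PVC?" lookups are map hits instead of a fresh
// pod-list scan per PVC. Not the actual podvolumeutil.PVCPodCache implementation.
package main

import "fmt"

type pod struct {
	Namespace, Name string
	ClaimNames      []string // PVCs mounted by the pod
}

type pvcPodCache struct {
	// key: "<namespace>/<pvcName>" -> pods mounting that PVC
	byPVC map[string][]pod
}

func newPVCPodCache() *pvcPodCache {
	return &pvcPodCache{byPVC: map[string][]pod{}}
}

// buildForNamespace indexes every pod in the namespace once
// (one List call in a real implementation).
func (c *pvcPodCache) buildForNamespace(namespace string, pods []pod) {
	for _, p := range pods {
		for _, claim := range p.ClaimNames {
			key := namespace + "/" + claim
			c.byPVC[key] = append(c.byPVC[key], p)
		}
	}
}

// podsUsingPVC is a constant-time lookup once the namespace has been indexed.
func (c *pvcPodCache) podsUsingPVC(namespace, pvcName string) []pod {
	return c.byPVC[namespace+"/"+pvcName]
}

func main() {
	cache := newPVCPodCache()
	cache.buildForNamespace("ns", []pod{
		{Namespace: "ns", Name: "pod-1", ClaimNames: []string{"pvc-1"}},
		{Namespace: "ns", Name: "pod-2", ClaimNames: []string{"pvc-2"}},
	})
	fmt.Println(len(cache.podsUsingPVC("ns", "pvc-1"))) // 1
}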

View File

@@ -34,7 +34,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
podvolumeutil "github.com/vmware-tanzu/velero/pkg/util/podvolume"
)
func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
@@ -739,498 +738,3 @@ func TestGetVolumeFromResource(t *testing.T) {
assert.ErrorContains(t, err, "resource is not a PersistentVolume or Volume")
})
}
func TestVolumeHelperImplWithCache_ShouldPerformSnapshot(t *testing.T) {
testCases := []struct {
name string
inputObj runtime.Object
groupResource schema.GroupResource
pod *corev1api.Pod
resourcePolicies *resourcepolicies.ResourcePolicies
snapshotVolumesFlag *bool
defaultVolumesToFSBackup bool
buildCache bool
shouldSnapshot bool
expectedErr bool
}{
{
name: "VolumePolicy match with cache, returns true",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
buildCache: true,
shouldSnapshot: true,
expectedErr: false,
},
{
name: "VolumePolicy not match, fs-backup via opt-out with cache, skips snapshot",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").Volumes(
&corev1api.Volume{
Name: "volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
},
).Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: true,
buildCache: true,
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Cache not built, falls back to direct lookup",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
buildCache: false,
shouldSnapshot: true,
expectedErr: false,
},
{
name: "No volume policy, defaultVolumesToFSBackup with cache, skips snapshot",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").Volumes(
&corev1api.Volume{
Name: "volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
},
).Result(),
resourcePolicies: nil,
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: true,
buildCache: true,
shouldSnapshot: false,
expectedErr: false,
},
}
objs := []runtime.Object{
&corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns",
Name: "pvc-1",
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
if tc.pod != nil {
require.NoError(t, fakeClient.Create(t.Context(), tc.pod))
}
var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}
var namespaces []string
if tc.buildCache {
namespaces = []string{"ns"}
}
vh, err := NewVolumeHelperImplWithNamespaces(
p,
tc.snapshotVolumesFlag,
logrus.StandardLogger(),
fakeClient,
tc.defaultVolumesToFSBackup,
false,
namespaces,
)
require.NoError(t, err)
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputObj)
require.NoError(t, err)
actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, tc.groupResource)
if tc.expectedErr {
require.Error(t, actualError)
return
}
require.NoError(t, actualError)
require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
})
}
}
func TestVolumeHelperImplWithCache_ShouldPerformFSBackup(t *testing.T) {
testCases := []struct {
name string
pod *corev1api.Pod
resources []runtime.Object
resourcePolicies *resourcepolicies.ResourcePolicies
snapshotVolumesFlag *bool
defaultVolumesToFSBackup bool
buildCache bool
shouldFSBackup bool
expectedErr bool
}{
{
name: "VolumePolicy match with cache, return true",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-1",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1api.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.FSBackup,
},
},
},
},
buildCache: true,
shouldFSBackup: true,
expectedErr: false,
},
{
name: "VolumePolicy match with cache, action is snapshot, return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-1",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1api.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
buildCache: true,
shouldFSBackup: false,
expectedErr: false,
},
{
name: "Cache not built, falls back to direct lookup, opt-in annotation",
pod: builder.ForPod("ns", "pod-1").
ObjectMeta(builder.WithAnnotations(velerov1api.VolumesToBackupAnnotation, "vol-1")).
Volumes(
&corev1api.Volume{
Name: "vol-1",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1api.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
buildCache: false,
defaultVolumesToFSBackup: false,
shouldFSBackup: true,
expectedErr: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.resources...)
if tc.pod != nil {
require.NoError(t, fakeClient.Create(t.Context(), tc.pod))
}
var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}
var namespaces []string
if tc.buildCache {
namespaces = []string{"ns"}
}
vh, err := NewVolumeHelperImplWithNamespaces(
p,
tc.snapshotVolumesFlag,
logrus.StandardLogger(),
fakeClient,
tc.defaultVolumesToFSBackup,
false,
namespaces,
)
require.NoError(t, err)
actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
if tc.expectedErr {
require.Error(t, actualError)
return
}
require.NoError(t, actualError)
require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
})
}
}
// TestNewVolumeHelperImplWithCache tests the NewVolumeHelperImplWithCache constructor
// which is used by plugins that build the cache lazily per-namespace.
func TestNewVolumeHelperImplWithCache(t *testing.T) {
testCases := []struct {
name string
backup velerov1api.Backup
resourcePolicyConfigMap *corev1api.ConfigMap
pvcPodCache bool // whether to pass a cache
expectError bool
}{
{
name: "creates VolumeHelper with nil cache",
backup: velerov1api.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-backup",
Namespace: "velero",
},
Spec: velerov1api.BackupSpec{
SnapshotVolumes: ptr.To(true),
DefaultVolumesToFsBackup: ptr.To(false),
},
},
pvcPodCache: false,
expectError: false,
},
{
name: "creates VolumeHelper with non-nil cache",
backup: velerov1api.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-backup",
Namespace: "velero",
},
Spec: velerov1api.BackupSpec{
SnapshotVolumes: ptr.To(true),
DefaultVolumesToFsBackup: ptr.To(true),
SnapshotMoveData: ptr.To(true),
},
},
pvcPodCache: true,
expectError: false,
},
{
name: "creates VolumeHelper with resource policies",
backup: velerov1api.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-backup",
Namespace: "velero",
},
Spec: velerov1api.BackupSpec{
SnapshotVolumes: ptr.To(true),
ResourcePolicy: &corev1api.TypedLocalObjectReference{
Kind: "ConfigMap",
Name: "resource-policy",
},
},
},
resourcePolicyConfigMap: &corev1api.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "resource-policy",
Namespace: "velero",
},
Data: map[string]string{
"policy": `version: v1
volumePolicies:
- conditions:
storageClass:
- gp2-csi
action:
type: snapshot`,
},
},
pvcPodCache: true,
expectError: false,
},
{
name: "fails when resource policy ConfigMap not found",
backup: velerov1api.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-backup",
Namespace: "velero",
},
Spec: velerov1api.BackupSpec{
ResourcePolicy: &corev1api.TypedLocalObjectReference{
Kind: "ConfigMap",
Name: "non-existent-policy",
},
},
},
pvcPodCache: false,
expectError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
var objs []runtime.Object
if tc.resourcePolicyConfigMap != nil {
objs = append(objs, tc.resourcePolicyConfigMap)
}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
var cache *podvolumeutil.PVCPodCache
if tc.pvcPodCache {
cache = podvolumeutil.NewPVCPodCache()
}
vh, err := NewVolumeHelperImplWithCache(
tc.backup,
fakeClient,
logrus.StandardLogger(),
cache,
)
if tc.expectError {
require.Error(t, err)
require.Nil(t, vh)
} else {
require.NoError(t, err)
require.NotNil(t, vh)
}
})
}
}
// TestNewVolumeHelperImplWithCache_UsesCache verifies that the VolumeHelper created
// via NewVolumeHelperImplWithCache actually uses the provided cache for lookups.
func TestNewVolumeHelperImplWithCache_UsesCache(t *testing.T) {
// Create a pod that uses a PVC via opt-out (defaultVolumesToFsBackup=true)
pod := builder.ForPod("ns", "pod-1").Volumes(
&corev1api.Volume{
Name: "volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
},
).Result()
pvc := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns",
Name: "pvc-1",
},
}
pv := builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result()
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, pvc, pv, pod)
// Build cache for the namespace
cache := podvolumeutil.NewPVCPodCache()
err := cache.BuildCacheForNamespace(t.Context(), "ns", fakeClient)
require.NoError(t, err)
backup := velerov1api.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-backup",
Namespace: "velero",
},
Spec: velerov1api.BackupSpec{
SnapshotVolumes: ptr.To(true),
DefaultVolumesToFsBackup: ptr.To(true), // opt-out mode
},
}
vh, err := NewVolumeHelperImplWithCache(backup, fakeClient, logrus.StandardLogger(), cache)
require.NoError(t, err)
// Convert PV to unstructured
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(pv)
require.NoError(t, err)
// ShouldPerformSnapshot should return false because the volume is selected for fs-backup
// This relies on the cache to find the pod using the PVC
shouldSnapshot, err := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, kuberesource.PersistentVolumes)
require.NoError(t, err)
require.False(t, shouldSnapshot, "Expected snapshot to be skipped due to fs-backup selection via cache")
}

View File

@@ -288,7 +288,7 @@ const (
// BackupPhase is a string representation of the lifecycle phase
// of a Velero backup.
// +kubebuilder:validation:Enum=New;Queued;ReadyToStart;FailedValidation;InProgress;WaitingForPluginOperations;WaitingForPluginOperationsPartiallyFailed;Finalizing;FinalizingPartiallyFailed;Completed;PartiallyFailed;Failed;Deleting
// +kubebuilder:validation:Enum=New;FailedValidation;InProgress;WaitingForPluginOperations;WaitingForPluginOperationsPartiallyFailed;Finalizing;FinalizingPartiallyFailed;Completed;PartiallyFailed;Failed;Deleting
type BackupPhase string
const (
@@ -296,12 +296,6 @@ const (
// yet processed by the BackupController.
BackupPhaseNew BackupPhase = "New"
// BackupPhaseQueued means the backup has been added to the queue and is waiting for the Queue to move it out of the queue.
BackupPhaseQueued BackupPhase = "Queued"
// BackupPhaseReadyToStart means the backup has been pulled from the queue and is ready to start.
BackupPhaseReadyToStart BackupPhase = "ReadyToStart"
// BackupPhaseFailedValidation means the backup has failed
// the controller's validations and therefore will not run.
BackupPhaseFailedValidation BackupPhase = "FailedValidation"
@@ -377,11 +371,6 @@ type BackupStatus struct {
// +optional
Phase BackupPhase `json:"phase,omitempty"`
// QueuePosition is the position of the backup in the queue.
// Only relevant when Phase is "Queued"
// +optional
QueuePosition int `json:"queuePosition,omitempty"`
// ValidationErrors is a slice of all validation errors (if
// applicable).
// +optional

View File

@@ -17,8 +17,6 @@ limitations under the License.
package v1
import (
"errors"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -148,15 +146,8 @@ type ObjectStorageLocation struct {
Prefix string `json:"prefix,omitempty"`
// CACert defines a CA bundle to use when verifying TLS connections to the provider.
// Deprecated: Use CACertRef instead.
// +optional
CACert []byte `json:"caCert,omitempty"`
// CACertRef is a reference to a Secret containing the CA certificate bundle to use
// when verifying TLS connections to the provider. The Secret must be in the same
// namespace as the BackupStorageLocation.
// +optional
CACertRef *corev1api.SecretKeySelector `json:"caCertRef,omitempty"`
}
// BackupStorageLocationPhase is the lifecycle phase of a Velero BackupStorageLocation.
@@ -186,13 +177,3 @@ const (
// TODO(2.0): remove the AccessMode field from BackupStorageLocationStatus.
// TODO(2.0): remove the LastSyncedRevision field from BackupStorageLocationStatus.
// Validate validates the BackupStorageLocation to ensure that only one of CACert or CACertRef is set.
func (bsl *BackupStorageLocation) Validate() error {
if bsl.Spec.ObjectStorage != nil &&
bsl.Spec.ObjectStorage.CACert != nil &&
bsl.Spec.ObjectStorage.CACertRef != nil {
return errors.New("cannot specify both caCert and caCertRef in objectStorage")
}
return nil
}

View File

@@ -1,121 +0,0 @@
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
"testing"
corev1api "k8s.io/api/core/v1"
)
func TestBackupStorageLocationValidate(t *testing.T) {
tests := []struct {
name string
bsl *BackupStorageLocation
expectError bool
}{
{
name: "valid - neither CACert nor CACertRef set",
bsl: &BackupStorageLocation{
Spec: BackupStorageLocationSpec{
StorageType: StorageType{
ObjectStorage: &ObjectStorageLocation{
Bucket: "test-bucket",
},
},
},
},
expectError: false,
},
{
name: "valid - only CACert set",
bsl: &BackupStorageLocation{
Spec: BackupStorageLocationSpec{
StorageType: StorageType{
ObjectStorage: &ObjectStorageLocation{
Bucket: "test-bucket",
CACert: []byte("test-cert"),
},
},
},
},
expectError: false,
},
{
name: "valid - only CACertRef set",
bsl: &BackupStorageLocation{
Spec: BackupStorageLocationSpec{
StorageType: StorageType{
ObjectStorage: &ObjectStorageLocation{
Bucket: "test-bucket",
CACertRef: &corev1api.SecretKeySelector{
LocalObjectReference: corev1api.LocalObjectReference{
Name: "ca-cert-secret",
},
Key: "ca.crt",
},
},
},
},
},
expectError: false,
},
{
name: "invalid - both CACert and CACertRef set",
bsl: &BackupStorageLocation{
Spec: BackupStorageLocationSpec{
StorageType: StorageType{
ObjectStorage: &ObjectStorageLocation{
Bucket: "test-bucket",
CACert: []byte("test-cert"),
CACertRef: &corev1api.SecretKeySelector{
LocalObjectReference: corev1api.LocalObjectReference{
Name: "ca-cert-secret",
},
Key: "ca.crt",
},
},
},
},
},
expectError: true,
},
{
name: "valid - no ObjectStorage",
bsl: &BackupStorageLocation{
Spec: BackupStorageLocationSpec{
StorageType: StorageType{
ObjectStorage: nil,
},
},
},
expectError: false,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
err := test.bsl.Validate()
if test.expectError && err == nil {
t.Errorf("expected error but got none")
}
if !test.expectError && err != nil {
t.Errorf("expected no error but got: %v", err)
}
})
}
}

View File

@@ -118,10 +118,6 @@ type PodVolumeBackupStatus struct {
// +optional
Progress shared.DataMoveOperationProgress `json:"progress,omitempty"`
// IncrementalBytes holds the number of bytes new or changed since the last backup
// +optional
IncrementalBytes int64 `json:"incrementalBytes,omitempty"`
// AcceptedTimestamp records the time the pod volume backup is to be prepared.
// The server's time is used for AcceptedTimestamp
// +optional
@@ -138,7 +134,6 @@ type PodVolumeBackupStatus struct {
// +kubebuilder:printcolumn:name="Started",type="date",JSONPath=".status.startTimestamp",description="Time duration since this PodVolumeBackup was started"
// +kubebuilder:printcolumn:name="Bytes Done",type="integer",format="int64",JSONPath=".status.progress.bytesDone",description="Completed bytes"
// +kubebuilder:printcolumn:name="Total Bytes",type="integer",format="int64",JSONPath=".status.progress.totalBytes",description="Total bytes"
// +kubebuilder:printcolumn:name="Incremental Bytes",type="integer",format="int64",JSONPath=".status.incrementalBytes",description="Incremental bytes",priority=10
// +kubebuilder:printcolumn:name="Storage Location",type="string",JSONPath=".spec.backupStorageLocation",description="Name of the Backup Storage Location where this backup should be stored"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Time duration since this PodVolumeBackup was created"
// +kubebuilder:printcolumn:name="Node",type="string",JSONPath=".status.node",description="Name of the node where the PodVolumeBackup is processed"

View File

@@ -58,10 +58,6 @@ type PodVolumeRestoreSpec struct {
// Cancel indicates request to cancel the ongoing PodVolumeRestore. It can be set
// when the PodVolumeRestore is in InProgress phase
Cancel bool `json:"cancel,omitempty"`
// SnapshotSize is the logical size in Bytes of the snapshot.
// +optional
SnapshotSize int64 `json:"snapshotSize,omitempty"`
}
// PodVolumeRestorePhase represents the lifecycle phase of a PodVolumeRestore.

Some files were not shown because too many files have changed in this diff.