Compare commits


96 Commits

Author SHA1 Message Date
Priyansh Choudhary
4c91959f23 Backport PR #9693 and #9700 to Release-1.18 (#9731)
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m28s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 2m4s
Main CI / Build (push) Has been skipped
* fix: backup deletion silently succeeds when tarball download fails (#9693)

* Enhance backup deletion logic to handle tarball download failures and clean up associated CSI VolumeSnapshotContents
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* added changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Refactor error handling in backup deletion
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Refactor backup deletion logic to skip CSI snapshot cleanup on tarball download failure
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* prevent backup deletion when errors occur
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* added logger
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Add delay to avoid race conditions during VolumeSnapshotContent deletion (#9700)

* Add delay to avoid race conditions during VolumeSnapshotContent deletion
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* updated changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Updated Changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

* Updated changelog
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>

---------

Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
2026-04-16 13:35:12 -04:00
Tiger Kaovilai
77945d9176 [release-1.18] Add CI check for invalid characters in file paths (#9691)
* Add CI check for invalid characters in file paths

Go's module zip rejects filenames containing certain characters (shell
special chars like " ' * < > ? ` |, path separators : \, and non-letter
Unicode such as control/format characters). This caused a build failure
when a changelog file contained an invisible U+200E LEFT-TO-RIGHT MARK
(see PR #9552).

Add a GitHub Actions workflow that validates all tracked file paths on
every PR to catch these issues before they reach downstream consumers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
(cherry picked from commit 6d18d9b303)

* Fix changelog filenames containing invisible U+200E characters

Remove LEFT-TO-RIGHT MARK unicode characters from changelog filenames
that would cause Go module zip failures.

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Happy <yesreply@happy.engineering>
Co-authored-by: Scott Seago <sseago@redhat.com>
2026-04-13 15:33:08 -04:00
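The actual workflow added by the commit above isn't shown in this log; as a rough illustration, a stdlib-only Go sketch of the character check it describes (shell-special characters plus control/format Unicode such as U+200E) could look like this. The function name and exact character set are assumptions, not the workflow's real implementation.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// invalidPathChars is a hypothetical sketch of the check described above:
// it flags characters in a file path that Go's module zip rejects.
func invalidPathChars(path string) []rune {
	var bad []rune
	for _, r := range path {
		switch {
		case strings.ContainsRune("\"'*<>?`|:\\", r):
			// Shell-special characters and path separators.
			bad = append(bad, r)
		case unicode.IsControl(r) || unicode.Is(unicode.Cf, r):
			// Cf (format) covers invisible characters such as
			// U+200E LEFT-TO-RIGHT MARK, the culprit in PR #9552.
			bad = append(bad, r)
		}
	}
	return bad
}

func main() {
	fmt.Printf("%U\n", invalidPathChars("changelogs/unreleased/9552-fix\u200e.md"))
}
```

Running a check like this over `git ls-files` output on every PR would catch such paths before they break downstream `go get` consumers.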
Xun Jiang/Bruce Jiang
4b1d236b2b Merge pull request #9708 from adam-jian-zhang/fix-csi-pvc-backup-plugin-scoping-1.18
Fix DataUpload list scope in CSI PVC backup plugin
2026-04-13 15:02:09 +08:00
Adam Zhang
c17d6a0a04 Fix DataUpload list scope in CSI PVC backup plugin
The `getDataUpload` function in the CSI PVC backup plugin was
previously making a cluster-scoped list query to retrieve DataUpload
CRs. In environments with strict minimum-privilege RBAC, this would
fail with forbidden errors.
This explicitly passes the backup namespace into the `ListOptions`
when calling `crClient.List`, correctly scoping the queries to the
backup's namespace. Unit tests have also been updated to ensure
cross-namespace queries are rejected appropriately.

Signed-off-by: Adam Zhang <adam.zhang@broadcom.com>
2026-04-13 14:52:33 +08:00
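The real fix passes the backup namespace into controller-runtime `ListOptions` when calling `crClient.List`; the snippet below is a dependency-free sketch of that scoping rule, with a fake client standing in for the RBAC-enforcing API server (all type and function names here are illustrative, not Velero's).

```go
package main

import (
	"errors"
	"fmt"
)

// listOptions models the namespace scope passed to a List call.
type listOptions struct{ namespace string }

// fakeClient mimics an API server under minimum-privilege RBAC: only lists
// scoped to the allowed namespace succeed.
type fakeClient struct{ allowedNamespace string }

func (c fakeClient) listDataUploads(opts listOptions) error {
	if opts.namespace == "" {
		// Cluster-scoped list: forbidden without cluster-wide permissions.
		return errors.New("forbidden: cannot list datauploads at cluster scope")
	}
	if opts.namespace != c.allowedNamespace {
		return fmt.Errorf("forbidden: cannot list datauploads in %q", opts.namespace)
	}
	return nil
}

// getDataUpload scopes the query to the backup's namespace, mirroring the
// fix described in the commit message above.
func getDataUpload(c fakeClient, backupNamespace string) error {
	return c.listDataUploads(listOptions{namespace: backupNamespace})
}

func main() {
	c := fakeClient{allowedNamespace: "velero"}
	fmt.Println(getDataUpload(c, "velero"))
}
```

Before the fix, the equivalent of `listOptions{}` (no namespace) was used, which is exactly the cluster-scoped call the fake client rejects.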
lyndon-li
a6488a92f6 Merge pull request #9706 from shubham-pampattiwar/cherry-pick-vgs-v1beta2-release-1.18
[release-1.18] Bump external-snapshotter to v8.4.0 for VGS v1beta2 support
2026-04-13 13:35:58 +08:00
Shubham Pampattiwar
3336861cd6 Update changelog filename for cherry-pick PR #9706
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-04-10 17:19:20 -07:00
Shubham Pampattiwar
af22d4419c Add changelog for PR #9695
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-04-10 17:17:05 -07:00
Shubham Pampattiwar
124824a478 Bump external-snapshotter to v8.4.0 for VGS v1beta2 support
Kubernetes 1.34 introduced VolumeGroupSnapshot v1beta2 API and
deprecated v1beta1. Distributions running K8s 1.34+ (e.g. OpenShift
4.21+) have removed v1beta1 VGS CRDs entirely, breaking Velero's
VGS functionality on those clusters.

This change bumps external-snapshotter/client/v8 from v8.2.0 to
v8.4.0 and migrates all VGS API usage from v1beta1 to v1beta2.

The v1beta2 API is structurally compatible - the Spec-level types
(GroupSnapshotHandles, VolumeGroupSnapshotContentSource) are
unchanged. The Status-level change (VolumeSnapshotHandlePairList
replaced by VolumeSnapshotInfoList) does not affect Velero as it
does not directly consume that type.

Fixes #9694

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-04-10 17:16:58 -07:00
Xun Jiang/Bruce Jiang
fd99ed4dd6 Merge pull request #9696 from adam-jian-zhang/fix-restore-pvr-scope-1.18
Fix PodVolumeBackup list scope during restore
2026-04-10 16:01:59 +08:00
Adam Zhang
0291c53e9d Fix PodVolumeBackup list scope during restore
Restrict the listing of PodVolumeBackup resources to the specific
restore namespace in both the core restore controller and the pod
volume restore action plugin. This prevents "Forbidden" errors when
Velero is configured with namespace-scoped minimum privileges,
avoiding the need for cluster-scoped list permissions for
PodVolumeBackups.

Fixes: #9681

Signed-off-by: Adam Zhang <adam.zhang@broadcom.com>
2026-04-09 09:57:04 +08:00
Shubham Pampattiwar
5ad4e604b8 [release-1.18] Fix VolumeGroupSnapshot restore failure with Ceph RBD CSI driver (#9687)
* Fix VolumeGroupSnapshot restore failure with Ceph RBD CSI driver (#9516)

* Fix VolumeGroupSnapshot restore on Ceph RBD

This PR fixes two related issues affecting CSI snapshot restore on Ceph RBD:

1. VolumeGroupSnapshot restore fails because Ceph RBD populates
   volumeGroupSnapshotHandle on pre-provisioned VSCs, but Velero doesn't
   create the required VGSC during restore.

2. CSI snapshot restore fails because VolumeSnapshotClassName is removed
   from restored VSCs, preventing the CSI controller from getting
   credentials for snapshot verification.

Changes:
- Capture volumeGroupSnapshotHandle during backup as VS annotation
- Create stub VGSC during restore with matching handle in status
- Look up VolumeSnapshotClass by driver and set on restored VSC

Fixes #9512
Fixes #9515

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Add changelog for VGS restore fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix gofmt import order

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Add changelog for VGS restore fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix import alias corev1 to corev1api per lint config

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix: Add snapshot handles to existing stub VGSC and add unit tests

When multiple VolumeSnapshots from the same VolumeGroupSnapshot are
restored, they share the same VolumeGroupSnapshotHandle but have
different individual snapshot handles. This commit:

1. Fixes incomplete logic where existing VGSC wasn't updated with
   new snapshot handles (addresses review feedback)

2. Fixes race condition where Create returning AlreadyExists would
   skip adding the snapshot handle

3. Adds comprehensive unit tests for ensureStubVGSCExists (5 cases)
   and addSnapshotHandleToVGSC (4 cases) functions

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Clean up stub VolumeGroupSnapshotContents during restore finalization

Add cleanup logic for stub VGSCs created during VolumeGroupSnapshot restore.
The stub VGSCs are temporary objects needed to satisfy CSI controller
validation during VSC reconciliation. Once all related VSCs become
ReadyToUse, the stub VGSCs are no longer needed and should be removed.

The cleanup runs in the restore finalizer controller's execute() phase.
Before deleting each VGSC, it polls until all related VolumeSnapshotContents
(correlated by snapshot handle) are ReadyToUse, with a timeout fallback.
Deletion failures and CRD-not-installed scenarios are treated as warnings
rather than errors to avoid failing the restore.

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix lint: remove unused nolint directive and simplify cleanupStubVGSC return

The cleanupStubVGSC function only produces warnings (not errors), so
simplify its return signature. Also remove the now-unused nolint:unparam
directive on execute() since warnings are no longer always nil.

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Rename changelog file to match cherry-pick PR number

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-04-08 12:45:02 -07:00
lyndon-li
f854a0653a Merge pull request #9678 from sseago/custom-volume-policy-1.18
[release-1.18] Add custom action type to volume policies
2026-04-08 13:42:36 +08:00
lyndon-li
cce0f20168 Merge branch 'release-1.18' into custom-volume-policy-1.18 2026-04-08 11:08:42 +08:00
Xun Jiang/Bruce Jiang
2b6b3091c2 Merge pull request #9670 from blackpiglet/xj014661/1.18/go-jose-cve
[1.18][cherry-pick] Bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4
2026-04-08 10:21:04 +08:00
dependabot[bot]
4cb9c7b9a2 Bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4
Bumps [github.com/go-jose/go-jose/v4](https://github.com/go-jose/go-jose) from 4.1.3 to 4.1.4.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Commits](https://github.com/go-jose/go-jose/compare/v4.1.3...v4.1.4)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v4
  dependency-version: 4.1.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-08 10:11:48 +08:00
Xun Jiang/Bruce Jiang
a33e5a3f8f Merge pull request #9587 from blackpiglet/xj014661/1.18/9180_fix
Remove wildcard check from getNamespacesToList.
2026-04-08 09:33:59 +08:00
Xun Jiang/Bruce Jiang
f89b55269c Update pkg/util/podvolume/pod_volume_test.go
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
2026-04-08 08:12:21 +08:00
Xun Jiang
8ac8f49b5c Remove wildcard check from getNamespacesToList.
Expand the wildcard in the namespace filter only for the backup scenario.
Restore no longer needs it, because restore relies on the IncludeEverything
function to decide whether cluster-scoped resources should be restored, and
expanding the wildcard would break that logic.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-04-08 08:12:21 +08:00
Scott Seago
5dd9d5242b Add custom action type to volume policies (#9540)
* Add custom action type to volume policies

Signed-off-by: Scott Seago <sseago@redhat.com>

* Update internal/resourcepolicies/resource_policies.go

Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Scott Seago <sseago@redhat.com>

* added "custom" to validation list

Signed-off-by: Scott Seago <sseago@redhat.com>

* responding to review comments

Signed-off-by: Scott Seago <sseago@redhat.com>

---------

Signed-off-by: Scott Seago <sseago@redhat.com>
Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Scott Seago <sseago@redhat.com>
2026-04-07 14:57:15 -04:00
lyndon-li
bf9e1f8fd7 Merge pull request #9671 from adam-jian-zhang/fix-node-agent-detection-1.18
fix node-agent node detection logic
2026-04-03 15:59:23 +08:00
lyndon-li
c9b5429a7a Merge branch 'release-1.18' into fix-node-agent-detection-1.18 2026-04-03 15:34:39 +08:00
lyndon-li
536e43719b Merge pull request #9672 from Lyndon-Li/release-1.18
[1.18] Issue 9659: fix crash on cancel without loading data path #9663
2026-04-03 15:27:15 +08:00
Lyndon-Li
ffede3ca6e issue 9659: fix crash on cancel without loading data path
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-04-03 14:46:16 +08:00
Lyndon-Li
ed2daeedf6 issue 9659: fix crash on cancel without loading data path
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-04-03 14:41:40 +08:00
Wenkai Yin(尹文开)
16f9e4f303 Merge pull request #9669 from Lyndon-Li/release-1.18
[1.18] Issue 9626: let go for uninitialized repo under readonly mode
2026-04-03 14:39:08 +08:00
Adam Zhang
ea057e42fa fix node-agent node detection logic
Add the namespace to ListOptions so the node-agent correctly detects nodes
in its deployed namespace.

Signed-off-by: Adam Zhang <adam.zhang@broadcom.com>
2026-04-03 14:31:13 +08:00
Lyndon-Li
a6e579cb93 issue 9626: let go for uninitialized repo under readonly mode
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-04-03 13:21:58 +08:00
Xun Jiang/Bruce Jiang
856f1296fc Merge pull request #9661 from blackpiglet/xj014661/1.18/tarball_extraction_check
[1.18][cherry-pick] Add more checks for file extraction from tarball.
2026-04-02 17:20:33 +08:00
Xun Jiang
c7fa4bfe35 Add more checks for file extraction from tarball.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-04-02 16:57:00 +08:00
Xun Jiang/Bruce Jiang
e9bc0eca53 Merge pull request #9665 from blackpiglet/xj014661/1.18/controller-runtime-tag
[1.18][cherry-pick] Pin the sigs.k8s.io/controller-runtime to v0.23.2
2026-04-02 16:56:36 +08:00
Xun Jiang
c857dff5a4 Pin the sigs.k8s.io/controller-runtime to v0.23.2
The dependency previously tracked the latest tag. Because the latest tag,
v0.23.3, already uses Go 1.26 while Velero main is still on 1.25, the build
failed. To fix this, pin controller-runtime to v0.23.2.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-04-02 15:55:21 +08:00
Xun Jiang/Bruce Jiang
1644a2c738 Merge pull request #9637 from adam-jian-zhang/fix-install-options
[1.18] fix configmap lookup in non-default namespaces
2026-04-02 15:41:29 +08:00
Adam Zhang
09795245e7 switch the call order of validate/complete
Switch the call order of validate/complete, which achieves the same effect.

Signed-off-by: Adam Zhang <adam.zhang@broadcom.com>
2026-03-24 11:12:42 +08:00
Adam Zhang
cd7c9cba3e fix configmap lookup in non-default namespaces
o.Namespace is empty when Validate runs (Complete hasn't been called yet),
causing VerifyJSONConfigs to query the default namespace instead of the
intended one. Replace o.Namespace with f.Namespace() in all three ConfigMap
validation calls so the factory's already-resolved namespace is used.

Signed-off-by: Adam Zhang <adam.zhang@broadcom.com>
2026-03-23 14:55:06 +08:00
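The bug described above — Validate running before Complete, so `o.Namespace` is still empty and lookups fall back to the default namespace — can be modeled with a minimal, dependency-free sketch. The types and the boolean switch are illustrative; Velero's actual install options differ.

```go
package main

import "fmt"

// factory holds the already-resolved namespace, like Velero's client factory.
type factory struct{ resolved string }

func (f factory) Namespace() string { return f.resolved }

type installOptions struct{ Namespace string }

// Complete copies the factory's namespace into the options.
func (o *installOptions) Complete(f factory) { o.Namespace = f.Namespace() }

// lookupNamespace shows both behaviors: the buggy path reads o.Namespace
// (empty until Complete runs, so it falls back to "default"); the fixed
// path always asks the factory, as in the commit above.
func (o *installOptions) lookupNamespace(f factory, useFactory bool) string {
	if useFactory {
		return f.Namespace() // the fix: use the resolved namespace
	}
	if o.Namespace == "" {
		return "default" // the bug: Complete hasn't been called yet
	}
	return o.Namespace
}

func main() {
	f := factory{resolved: "velero-system"}
	o := &installOptions{}
	fmt.Println(o.lookupNamespace(f, false), o.lookupNamespace(f, true))
}
```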
Xun Jiang/Bruce Jiang
33b1fde8e1 Merge pull request #9629 from sseago/windows-polling-1.18
refactor: Optimize VSC handle readiness polling for VSS backups
2026-03-20 10:55:26 +08:00
Scott Seago
525036bc69 Merge branch 'release-1.18' into windows-polling-1.18 2026-03-19 08:44:32 -04:00
Xun Jiang/Bruce Jiang
974c465d0a Merge pull request #9631 from blackpiglet/xj014661/1.18/RepoMaintenance_e2e_fix
[1.18] Fix Repository Maintenance Job Configuration's global part E2E case.
2026-03-19 18:03:33 +08:00
Xun Jiang/Bruce Jiang
7da042a053 Merge branch 'release-1.18' into xj014661/1.18/RepoMaintenance_e2e_fix 2026-03-19 17:54:28 +08:00
lyndon-li
ca628ccc44 Merge pull request #9632 from blackpiglet/xj014661/1.18/grpc-1.79.3
[1.18][cherry-pick] Bump google.golang.org/grpc from 1.77.0 to 1.79.3
2026-03-19 17:24:19 +08:00
dependabot[bot]
6055bd5478 Bump google.golang.org/grpc from 1.77.0 to 1.79.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.77.0 to 1.79.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.77.0...v1.79.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.79.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-19 16:33:59 +08:00
Xun Jiang
f7890d3c59 Fix Repository Maintenance Job Configuration's global part E2E case.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-19 16:09:50 +08:00
Scott Seago
a83ab21a9a feat: Implement early frequent polling for CSI snapshots
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
Signed-off-by: Scott Seago <sseago@redhat.com>
2026-03-18 18:18:17 -04:00
Scott Seago
79f0e72fde refactor: Optimize VSC handle readiness polling for VSS backups
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
Signed-off-by: Scott Seago <sseago@redhat.com>
2026-03-18 09:43:06 -04:00
lyndon-li
c5bca75f17 Merge pull request #9621 from Lyndon-Li/release-1.18
[1.18] Fix compile error for Windows
2026-03-16 14:53:19 +08:00
Lyndon-Li
fcdbc7cfa8 fix compile error for Windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-16 13:54:57 +08:00
Xun Jiang/Bruce Jiang
2b87a2306e Merge pull request #9612 from vmware-tanzu/xj014661/1.18/fix_NodeAgentConfig_e2e_error
[E2E][1.18] Compare affinity by string instead of an exact comparison.
2026-03-16 10:47:03 +08:00
Xun Jiang
c239b27bf2 Compare affinity by string instead of an exact comparison.
Since 1.18.1, Velero adds default affinity to the backup/restore pod, so we
can't compare the whole affinity directly; instead, we verify that the
expected affinity is contained in the pod affinity.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-13 18:01:07 +08:00
lyndon-li
6ba0f86586 Merge pull request #9610 from Lyndon-Li/release-1.18
[1.18] Issue 9460: Uploader flush buffer
2026-03-12 15:37:42 +08:00
lyndon-li
6dfd8c96d0 Merge branch 'release-1.18' into release-1.18 2026-03-12 15:13:08 +08:00
Shubham Pampattiwar
336e8c4b56 Merge pull request #9604 from shubham-pampattiwar/cherry-pick-fix-dbr-stuck-release-1.18
[release-1.18] Fix DBR stuck when CSI snapshot no longer exists in cloud provider
2026-03-11 22:23:47 -07:00
Shubham Pampattiwar
883befcdde Remove cherry-picked changelog file from upstream PR
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-03-11 20:41:06 -07:00
Shubham Pampattiwar
7cfd4af733 Add changelog for PR #9604
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-03-11 20:41:06 -07:00
Shubham Pampattiwar
4cc1779fec Fix DBR stuck when CSI snapshot no longer exists in cloud provider (#9581)
* Fix DBR stuck when CSI snapshot no longer exists in cloud provider

During backup deletion, VolumeSnapshotContentDeleteItemAction creates a
new VSC with the snapshot handle from the backup and polls for readiness.
If the underlying snapshot no longer exists (e.g., deleted externally),
the CSI driver reports Status.Error but checkVSCReadiness() only checks
ReadyToUse, causing it to poll for the full 10-minute timeout instead of
failing fast. Additionally, the newly created VSC is never cleaned up on
failure, leaving orphaned resources in the cluster.

This commit:
- Adds Status.Error detection in checkVSCReadiness() to fail immediately
  on permanent CSI driver errors (e.g., InvalidSnapshot.NotFound)
- Cleans up the dangling VSC when readiness polling fails

Fixes #9579

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Add changelog for PR #9581

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

* Fix typo in pod_volume_test.go: colume -> volume

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

---------

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2026-03-11 20:41:06 -07:00
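The fail-fast readiness check described above can be sketched without the CSI snapshot API: report completion when the content is ReadyToUse, but surface a hard error as soon as the driver records Status.Error, so callers stop polling instead of waiting out the 10-minute timeout. The types below are stand-ins, not the real VolumeSnapshotContent status.

```go
package main

import "fmt"

// vscStatus models the two fields of interest on a VolumeSnapshotContent's
// status: readiness, and the error message a CSI driver sets on permanent
// failures (e.g. InvalidSnapshot.NotFound).
type vscStatus struct {
	readyToUse bool
	errMessage string
}

// checkVSCReadiness returns (done, err). Per the commit above, a recorded
// driver error fails immediately rather than polling until timeout.
func checkVSCReadiness(s vscStatus) (bool, error) {
	if s.errMessage != "" {
		return false, fmt.Errorf("snapshot content failed: %s", s.errMessage)
	}
	return s.readyToUse, nil
}

func main() {
	done, err := checkVSCReadiness(vscStatus{errMessage: "InvalidSnapshot.NotFound"})
	fmt.Println(done, err)
}
```

On the error path the caller can also delete the VSC it just created, which is the second half of the fix (no dangling resources after a failed poll).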
Lyndon-Li
ce2b4c191f issue 9460: flush buffer when uploader completes
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-12 11:26:30 +08:00
Lyndon-Li
1e6f02dc24 flush volume after restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-12 11:19:48 +08:00
Lyndon-Li
e2bbace03b uploader flush buffer for restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-12 11:19:25 +08:00
lyndon-li
341597f542 Merge pull request #9609 from Lyndon-Li/release-1.18
[1.18] Issue 9475: Selected node to node selector
2026-03-12 11:18:32 +08:00
Lyndon-Li
ea97ef8279 node-selector for selected node
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-12 10:47:22 +08:00
Lyndon-Li
384a492aa2 replace nodeName with node selector
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-12 10:45:11 +08:00
lyndon-li
c3237addfe Merge pull request #9606 from Lyndon-Li/release-1.18
[1.18] Issue 9496: support customized host os
2026-03-11 22:25:57 +08:00
lyndon-li
e4774b32f3 Merge branch 'release-1.18' into release-1.18 2026-03-11 20:14:40 +08:00
Xun Jiang/Bruce Jiang
ac73e8f29d Merge pull request #9596 from blackpiglet/xj014661/1.18/ephemeral_storage_config
[cherry-pick][1.18] Add ephemeral storage limit and request support for data mover and maintenance job
2026-03-11 18:24:43 +08:00
Xun Jiang/Bruce Jiang
ea2c4f4e5c Merge branch 'release-1.18' into xj014661/1.18/ephemeral_storage_config 2026-03-11 18:14:54 +08:00
Lyndon-Li
2c0fddc498 support custom os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-11 18:02:34 +08:00
Lyndon-Li
eac69375c9 support custom os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-11 17:56:53 +08:00
Lyndon-Li
733b2eb6f5 support customized host os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-11 17:56:14 +08:00
Lyndon-Li
01bd153968 support customized host os - use affinity for host os selection
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-11 17:56:08 +08:00
Lyndon-Li
57892169a9 support customized host os
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-11 16:09:12 +08:00
lyndon-li
072dc4c610 Merge pull request #9594 from Lyndon-Li/release-1.18
[1.18] Issue 9343: include PV topology to data mover pod affinities
2026-03-11 15:37:58 +08:00
lyndon-li
77c60589d6 Merge branch 'release-1.18' into release-1.18 2026-03-11 15:15:05 +08:00
Xun Jiang/Bruce Jiang
d0cea53676 Merge pull request #9597 from blackpiglet/xj014661/1.18/add_bia_skip_resource_logic
[cherry-pick][1.18] Add BIA skip resource logic
2026-03-11 09:45:32 +08:00
Xun Jiang
9a39cbfbf5 Remove the skipped item from the resource list when it's skipped by BIA.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-10 17:51:20 +08:00
Xun Jiang
62a24ece50 If BIA returns updateObj with SkipFromBackupAnnotation, treat it as skipping the resource from the backup.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-10 17:49:59 +08:00
Xun Jiang
b85a8f6784 Add ephemeral storage limit and request support for data mover and maintenance job.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-10 17:42:06 +08:00
Lyndon-Li
d39285be32 issue 9343: include PV topology to data mover pod affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-10 15:56:01 +08:00
Lyndon-Li
c30164c355 issue 9343: include PV topology to data mover pod affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-10 15:53:39 +08:00
Lyndon-Li
ce0888ee44 issue 9343: include PV topology to data mover pod affinities
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-10 15:38:27 +08:00
Xun Jiang/Bruce Jiang
8682cdd36e Merge pull request #9591 from Lyndon-Li/release-1.18
Remove unnecessary changelogs for 1.18.0
2026-03-09 18:15:46 +08:00
Lyndon-Li
c87e8acbf4 remove unnecessary changelogs for 1.18.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-03-09 17:47:12 +08:00
Xun Jiang/Bruce Jiang
6adcf06b5b Merge pull request #9572 from blackpiglet/xj014661/v1.18/CVE-2026-24051
Xj014661/v1.18/CVE 2026 24051
2026-03-02 17:42:25 +08:00
Xun Jiang
ffa65605a6 Bump go.opentelemetry.io/otel/sdk to 1.40.0 to fix CVE-2026-24051.
Bump Linux base image to paketobuildpacks/run-jammy-tiny:0.2.104

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-03-02 17:09:23 +08:00
Xun Jiang/Bruce Jiang
bd8dfe9ee2 Merge branch 'vmware-tanzu:release-1.18' into release-1.18 2026-03-02 17:04:43 +08:00
lyndon-li
54783fbe28 Merge pull request #9546 from Lyndon-Li/release-1.18
Update base image for 1.18.0
2026-02-13 17:11:09 +08:00
Lyndon-Li
cb5f56265a update base image for 1.18.0
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-13 16:43:12 +08:00
Tiger Kaovilai
0c7b89a44e Fix VolumePolicy PVC phase condition filter for unbound PVCs
Use typed error approach: Make GetPVForPVC return ErrPVNotFoundForPVC
when PV is not expected to be found (unbound PVC), then use errors.Is
to check for this error type. When a matching policy exists (e.g.,
pvcPhase: [Pending, Lost] with action: skip), apply the action without
error. When no policy matches, return the original error to preserve
default behavior.

Changes:
- Add ErrPVNotFoundForPVC sentinel error to pvc_pv.go
- Update ShouldPerformSnapshot to handle unbound PVCs with policies
- Update ShouldPerformFSBackup to handle unbound PVCs with policies
- Update item_backupper.go to handle Lost PVCs in tracking functions
- Remove checkPVCOnlySkip helper (no longer needed)
- Update tests to reflect new behavior

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-12 16:18:56 +08:00
Xun Jiang/Bruce Jiang
aa89713559 Merge pull request #9537 from kaovilai/1.18-9508
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m33s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 29s
Main CI / Build (push) Has been skipped
release-1.18: Fix VolumePolicy PVC phase condition filter for unbound PVCs
2026-02-12 11:41:25 +08:00
Xun Jiang/Bruce Jiang
5db4c65a92 Merge branch 'release-1.18' into 1.18-9508 2026-02-12 11:32:06 +08:00
Xun Jiang/Bruce Jiang
87db850f66 Merge pull request #9539 from Joeavaikath/1.18-9502
release-1.18: Support all glob wildcard characters in namespace validation (#9502)
2026-02-12 10:50:55 +08:00
Xun Jiang/Bruce Jiang
c7631fc4a4 Merge branch 'release-1.18' into 1.18-9502 2026-02-12 10:41:26 +08:00
Wenkai Yin(尹文开)
9a37478cc2 Merge pull request #9538 from vmware-tanzu/topic/xj014661/update_migration_test_cases
Update the migration and upgrade test cases.
2026-02-12 10:19:43 +08:00
Tiger Kaovilai
5b54ccd2e0 Fix VolumePolicy PVC phase condition filter for unbound PVCs
Use typed error approach: Make GetPVForPVC return ErrPVNotFoundForPVC
when PV is not expected to be found (unbound PVC), then use errors.Is
to check for this error type. When a matching policy exists (e.g.,
pvcPhase: [Pending, Lost] with action: skip), apply the action without
error. When no policy matches, return the original error to preserve
default behavior.

Changes:
- Add ErrPVNotFoundForPVC sentinel error to pvc_pv.go
- Update ShouldPerformSnapshot to handle unbound PVCs with policies
- Update ShouldPerformFSBackup to handle unbound PVCs with policies
- Update item_backupper.go to handle Lost PVCs in tracking functions
- Remove checkPVCOnlySkip helper (no longer needed)
- Update tests to reflect new behavior

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-11 15:29:14 -05:00
Joseph Antony Vaikath
43b926a58b Support all glob wildcard characters in namespace validation (#9502)
* Support all glob wildcard characters in namespace validation

Expand namespace validation to allow all valid glob pattern characters
(*, ?, {}, [], ,) by replacing them with valid characters during RFC 1123
validation. The actual glob pattern validation is handled separately by
the wildcard package.

Also add validation to reject unsupported characters (|, (), !) that are
not valid in glob patterns, and update terminology from "regex" to "glob"
for clarity since this implementation uses glob patterns, not regex.

Changes:
- Replace all glob wildcard characters in validateNamespaceName
- Add test coverage for valid glob patterns in includes/excludes
- Add test coverage for unsupported characters
- Reject exclamation mark (!) in wildcard patterns
- Clarify comments and error messages about glob vs regex

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Changelog

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add documentation: glob patterns are now accepted

Signed-off-by: Joseph <jvaikath@redhat.com>

* Error message fix

Signed-off-by: Joseph <jvaikath@redhat.com>

* Remove negation glob char test

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add bracket pattern validation for namespace glob patterns

Extends wildcard validation to support square bracket patterns [] used in glob character classes. Validates bracket syntax including empty brackets, unclosed brackets, and unmatched brackets. Extracts ValidateNamespaceName as a public function to enable reuse in namespace validation logic.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Reduce scope to *, ?, [ and ]

Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix tests

Signed-off-by: Joseph <jvaikath@redhat.com>

* Add namespace glob patterns documentation page

Adds dedicated documentation explaining supported glob patterns
for namespace include/exclude filtering to help users understand
the wildcard syntax.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Fix build-image Dockerfile envtest download

Replace inaccessible go.kubebuilder.io URL with setup-envtest and update envtest version to 1.33.0 to match Kubernetes v0.33.3 dependencies.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* kubebuilder binaries mv

Signed-off-by: Joseph <jvaikath@redhat.com>

* Reject brace patterns and update documentation

Add {, }, and , to unsupported characters list to explicitly reject
brace expansion patterns. Remove { from wildcard detection since these
patterns are not supported in the 1.18 release.

Update all documentation to show supported patterns inline (*, ?, [abc])
with clickable links to the detailed namespace-glob-patterns page.
Simplify YAML comments by removing non-clickable URLs.

Update tests to expect errors when brace patterns are used.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Document brace expansion as unsupported

Add {} and , to the unsupported patterns section to clarify that
brace expansion patterns like {a,b,c} are not supported.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

* Update tests to expect brace pattern rejection

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>

---------

Signed-off-by: Joseph <jvaikath@redhat.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-11 15:14:06 -05:00
Xun Jiang
9bfc78e769 Update the migration and upgrade test cases.
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m34s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Modify Dockerfile to fix GitHub CI action error.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-11 14:56:55 +08:00
Xun Jiang/Bruce Jiang
c9e26256fa Bump Golang version to v1.25.7 (#9536)
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m26s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 16s
Main CI / Build (push) Has been skipped
Fix test case issue and add UT.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2026-02-10 15:50:23 -05:00
Wenkai Yin(尹文开)
6e315c32e2 Merge pull request #9530 from Lyndon-Li/release-1.18
Some checks failed
Run the E2E test on kind / get-go-version (push) Failing after 1m55s
Run the E2E test on kind / build (push) Has been skipped
Run the E2E test on kind / setup-test-matrix (push) Successful in 3s
Run the E2E test on kind / run-e2e-test (push) Has been skipped
Main CI / get-go-version (push) Failing after 12s
Main CI / Build (push) Has been skipped
[1.18] Move implemented design for 1.18
2026-02-09 11:18:40 +08:00
Lyndon-Li
91cbc40956 move implemented design for 1.18
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2026-02-06 18:56:05 +08:00
133 changed files with 3077 additions and 3103 deletions

View File

@@ -22,7 +22,7 @@ jobs:
uses: actions/checkout@v6
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1
uses: aquasecurity/trivy-action@master
with:
image-ref: 'docker.io/velero/${{ matrix.images }}:${{ matrix.versions }}'
severity: 'CRITICAL,HIGH,MEDIUM'

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-trixie AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -48,11 +48,38 @@ RUN mkdir -p /output/usr/bin && \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG RESTIC_VERSION
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT}
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.104
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
COPY --from=velero-builder /output /
COPY --from=restic-builder /output /
USER cnb:cnb

View File

@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-trixie AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
ARG GOPROXY
ARG BIN

View File

@@ -105,6 +105,8 @@ see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-v
endef
# comma cannot be escaped and can only be used in Make function arguments by putting into variable
comma=,
# The version of restic binary to be downloaded
RESTIC_VERSION ?= 0.15.0
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le linux-s390x
BUILD_OUTPUT_TYPE ?= docker
@@ -258,6 +260,7 @@ container-linux:
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE) .

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25 as tilt-helper
FROM golang:1.25.7 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
@@ -103,6 +103,11 @@ local_resource(
deps = ["internal", "pkg/cmd"],
)
local_resource(
"restic_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=linux GOARCH=amd64 GOARM="" RESTIC_VERSION=0.13.1 OUTPUT_DIR=_tiltbuild/restic ./hack/build-restic.sh',
)
# Note: we need a distro with a bash shell to exec into the Velero container
tilt_dockerfile_header = """
FROM ubuntu:22.04 as tilt
@@ -113,6 +118,7 @@ WORKDIR /
COPY --from=tilt-helper /start.sh .
COPY --from=tilt-helper /restart.sh .
COPY velero .
COPY restic/restic /usr/bin/restic
"""
dockerfile_contents = "\n".join([

View File

@@ -1 +0,0 @@
Include InitContainer configured as Sidecars when validating the existence of the target containers configured for the Backup Hooks

View File

@@ -1 +0,0 @@
Support all glob wildcard characters in namespace validation

View File

@@ -1 +0,0 @@
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)

View File

@@ -1 +0,0 @@
Add block data mover design for block level incremental backup by integrating with Kubernetes CBT

View File

@@ -1 +0,0 @@
Issue #9544: Add test coverage for S3 bucket name in MRAP ARN notation and fix bucket validation to accept ARN format

View File

@@ -1 +0,0 @@
Add schedule_expected_interval_seconds metric for dynamic backup alerting thresholds (#9559)

View File

@@ -0,0 +1 @@
Remove wildcard check from getNamespacesToList.

View File

@@ -1 +0,0 @@
Implement original VolumeSnapshotContent deletion for legacy backups

View File

@@ -0,0 +1 @@
Optimize VSC handle readiness polling for VSS backups

View File

@@ -1 +0,0 @@
Fix issue #9641, Remove redundant ReadyToUse polling in CSI VolumeSnapshotContent delete plugin

View File

@@ -1 +0,0 @@
Fix service restore with null healthCheckNodePort in last-applied-configuration label

View File

@@ -1 +0,0 @@
Fix issue #9470, remove restic from repository

View File

@@ -1 +0,0 @@
Fix issue #9469, remove restic for uploader

View File

@@ -1 +0,0 @@
Fix issue #9428, increase repo maintenance history queue length from 3 to 25

View File

@@ -1 +0,0 @@
Enhance backup deletion logic to handle tarball download failures

View File

@@ -1 +0,0 @@
Fix issue #9699, add a 2-second gap between temporary CSI VolumeSnapshotContent create and delete operations

View File

@@ -1 +0,0 @@
Update Debian base image from bookworm to trixie

View File

@@ -1 +0,0 @@
perf: better string concatenation

View File

@@ -1 +0,0 @@
Fix issue #9723, extend Unified Repo Interface to support block uploader

View File

@@ -1 +0,0 @@
Remove Restic build from Dockerfile, Makefile and Tiltfile.

View File

@@ -0,0 +1,3 @@
Backporting PR #9700 and #9693
Fix issue #9699, add a 2-second gap between temporary CSI VolumeSnapshotContent create and delete operations
Enhance backup deletion logic to handle tarball download failures

View File

@@ -69,7 +69,9 @@ spec:
- ""
type: string
resticIdentifier:
description: Deprecated
description: |-
ResticIdentifier is the full restic-compatible string for identifying
this repository. This field is only used when RepositoryType is "restic".
type: string
volumeNamespace:
description: |-

File diff suppressed because one or more lines are too long

Binary file not shown (before: image, 498 KiB)

View File

@@ -1,551 +0,0 @@
# Block Data Mover Design
## Glossary & Abbreviation
**Backup Storage**: The storage where backup data is stored. Check the [Unified Repository design][1] for details.
**Backup Repository**: The backup repository is layered between BR data movers and Backup Storage to provide BR-related features, as introduced in the [Unified Repository design][1].
**Velero Generic Data Path (VGDP)**: VGDP is the collection of modules introduced in the [Unified Repository design][1]. Velero uses these modules to perform data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include the uploaders and the backup repository.
**Velero Built-in Data Mover (VBDM)**: VBDM, introduced in the [Volume Snapshot Data Movement design][2] and the [Unified Repository design][1], is the built-in data mover shipped along with Velero; it includes the Velero data mover controllers and VGDP.
**Data Mover Pods**: Intermediate pods which host VGDP and complete the data transfer. See [VGDP Micro Service for Volume Snapshot Data Movement][3] for details.
**Change Block Tracking (CBT)**: CBT is the mechanism for tracking changed blocks, so that backups can copy only the changed data. CBT is usually provided by the computing/storage platform.
**TCO**: Total Cost of Ownership. This is a general criterion for products/solutions, but it matters especially for BR solutions. For example, it covers what kind of backup storage (and at what cost) a solution requires, the retention policy of backup copies, the ways backup data redundancy is removed, etc.
**PodVolume Backup**: This is the Velero backup method which accesses the data from the live file system; see the [Kopia Integration design][1] for how it works.
**CAOS and CABS**: Content-Addressable Object Storage and Content-Addressable Block Storage; these are components of the Kopia repository, see [Kopia Architecture][5].
## Background
Kubernetes supports two volume modes, `FileSystem` and `Block`, for persistent volumes. Under the hood, block storage can provision either `FileSystem` mode or `Block` mode volumes, while file storage can provision only `FileSystem` mode volumes.
Volumes provisioned by block storage can be backed up/restored at the block level, regardless of the volume mode of the persistent volume.
On the other hand, as long as the data can be accessed from the file system, a backup/restore can be conducted at the file system level. That is to say, `FileSystem` mode volumes can be backed up/restored at the file system level, regardless of the backend storage type.
Consequently, if a `FileSystem` mode volume is provisioned by block storage, it can be backed up/restored at either the file system level or the block level.
For Velero, [CSI Snapshot Data Movement][2], which is implemented by VBDM, ships a file system uploader, so backup/restore is done at the file system level only.
When possible, block level backup/restore is preferable to file system level backup/restore:
- Block level backup can leverage CBT to process a minimal amount of data, which significantly reduces the overhead on the network, the backup repository and the backup storage. As a result, TCO is significantly reduced.
- Block level backup/restore is performant in throughput and resource consumption, because it doesn't need to handle the complexity of the file system, especially when the file system contains a huge number of small files.
- Block level backup/restore is less OS-dependent, because the uploader doesn't need the OS to understand the file system in the volume.
At present, the [Kubernetes CBT API][4] is mature and close to the Beta stage. Many platforms/storage providers already support it or plan to.
Therefore, it is very important for Velero to deliver block level backup/restore and recommend it over the file system data mover whenever:
- The volume is backed by block storage so block level access is possible
- The platform supports CBT
Meanwhile, file system level backup/restore remains valuable in the following scenarios:
- The volume is backed by file storage, e.g., AWS EFS, Azure File, CephFS, VKS File Volume, etc.
- The volume is backed by block storage but CBT is not available
- The volume doesn't support CSI snapshot, so Velero PodVolume Backup method is used
Rich features are already delivered with VGDP, VBDM and the [VGDP micro service][3]; to reuse these features, the block data mover should be built on top of these modules.
Velero VBDM supports Linux and Windows nodes; however, Windows containers don't support block mode volumes, so backing up/restoring from Windows nodes is not supported until Windows containers remove this limitation. As a result, if a cluster contains both Linux and Windows nodes, the block data mover can only run on Linux nodes.
Both the Kubernetes CBT service and Velero work within the boundary of a cluster; even though the backend storage may be shared by multiple clusters, Velero can only protect workloads in the cluster where it is running.
## Goals
Add a block data mover to VBDM and support block level backup/restore for [CSI Snapshot Data Movement][2], which includes:
- Support block level full backup for both `FileSystem` and `Block` mode volumes
- Support block level incremental backup for both `FileSystem` and `Block` mode volumes
- Support block level restore from full/incremental backup for both `FileSystem` and `Block` mode volumes
- Support block level backup/restore for both linux and Windows workloads from linux cluster nodes
- Support all existing features, e.g., load concurrency, node selection, cache volume, deduplication, compression, encryption, etc., for the block data mover
- Support volumes processed at the file system level and at the block level in the same backup/restore
## Non-Goals
- PodVolume Backup works at the file system level only, so block level backup/restore is not supported for it
- Volumes backed by file storage can only be backed up/restored at the file system level, so block level backup/restore is not supported for them
- Backing up/restoring from Windows nodes is not supported
- Block level incremental backup requires special capabilities of the backup repository, and the Velero [Unified Repository][1] supports multiple kinds of backup repositories. The current design focuses on the Kopia repository only; block level incremental backup support for other repositories will be considered when the specific backup repository is integrated into the [Velero Unified Repository][1]
## Architecture
### Data Path
Below is the architecture of VGDP when integrating with the Unified Repository (implemented by the Kopia repository).
A new block data mover will be added alongside the existing file system data mover; both data movers read/write data from/to the same backup repository through the Unified Repo interface.
The Unified Repo interface and the backup repository need to be enhanced to support incremental backups.
![Data path overview](data-path-overview.png)
For more details of VGDP architecture, see [Unified Repository design][1], [Volume Snapshot Data Movement design][2] and [VGDP Micro Service for Volume Snapshot Data Movement][3].
### Backup
Below is the architecture for block data mover backup, which builds on the existing VBDM:
![Backup architecture](backup-architecture.png)
The existing VBDM is reused; the major changes are:
**Exposer**: The exposer needs to create a block mode backupPVC all the time, regardless of the sourcePVC's mode.
**CBT**: This is a new layer to retrieve, transform and store the changed blocks; it interacts with the CSI SnapshotMetadataService through gRPC.
**Uploader**: A new block uploader is added. It interacts with the CBT layer, contains special logic for performant data reads from block devices, and contains special logic for writing incremental data to the Unified Repository.
**Extended Kopia repo**: A new Incremental Aware Object Extension is added to Kopia's CAOS to support incremental data writes. Other parts of the Kopia repository, including the existing CAOS and CABS, are not changed.
### Restore
Below is the architecture for block data mover restore, which builds on the existing VBDM:
![Restore architecture](restore-architecture.png)
The existing VBDM is reused; the major changes are:
**Exposer**: Because the restorePV is in block mode, the exposer needs to rebind the restorePV to a targetPVC in either file system mode or block mode.
**Uploader**: The same block uploader contains special logic for performant data writes to block devices and for reading data from the backup chain in the Unified Repository.
For more details of VBDM, see [Volume Snapshot Data Movement design][2].
## Detailed Design
### Selectable Data Mover Type
#### Per Backup Selection
At present, a backup accepts a `DataMover` parameter; when its value is empty or `velero`, VBDM is used.
After the block data mover is introduced, VBDM will contain two data movers: the Velero file system data mover and the Velero block data mover.
A new type string `velero-block` is introduced for the Velero block data mover; that is, when `DataMover` is set to `velero-block`, the Velero block data mover is used.
Another new value, `velero-fs`, is introduced for the Velero file system data mover; that is, when `DataMover` is set to `velero-fs`, the Velero file system data mover is used.
For backwards compatibility, `velero` remains a valid value; it refers to the default data mover, which may change between releases. At present, the Velero file system data mover is the default; it could change to the Velero block data mover in a future release.
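The value resolution described above can be sketched in Go (a minimal sketch: `resolveDataMover` and `defaultDataMover` are illustrative names, not actual Velero identifiers):

```go
package main

import "fmt"

const (
	dataMoverFS    = "velero-fs"    // Velero file system data mover
	dataMoverBlock = "velero-block" // Velero block data mover
)

// defaultDataMover is what the generic "velero" value resolves to; the design
// notes this default may change between releases (currently the fs data mover).
const defaultDataMover = dataMoverFS

// resolveDataMover maps a backup's DataMover parameter to a concrete mover.
// An empty value or "velero" selects the release default for backwards
// compatibility; explicit values pass through unchanged.
func resolveDataMover(requested string) string {
	switch requested {
	case "", "velero":
		return defaultDataMover
	default:
		return requested
	}
}

func main() {
	for _, v := range []string{"", "velero", "velero-fs", "velero-block"} {
		fmt.Printf("%q -> %s\n", v, resolveDataMover(v))
	}
}
```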
#### Volume Policy
It is a valid case that a single backup contains multiple volumes, where users want the Velero file system data mover for some volumes and the Velero block data mover for others.
To meet this requirement, a combined solution of Per Backup Selection and Volume Policy is used.
Here are the data structs for VolumePolicy:
```go
type volPolicy struct {
action Action
conditions []volumeCondition
}
type volumeCondition interface {
match(v *structuredVolume) bool
validate() error
}
type structuredVolume struct {
capacity resource.Quantity
storageClass string
nfs *nFSVolumeSource
csi *csiVolumeSource
volumeType SupportedVolume
pvcLabels map[string]string
pvcPhase string
}
type Action struct {
Type VolumeActionType `yaml:"type"`
Parameters map[string]any `yaml:"parameters,omitempty"`
}
const (
ConfigmapRefType string = "configmap"
Skip VolumeActionType = "skip"
FSBackup VolumeActionType = "fs-backup"
Snapshot VolumeActionType = "snapshot"
)
```
`action.parameters` is used to provide extra information for the action. This is an ideal place to differentiate the Velero file system data mover and the Velero block data mover.
Therefore, the Velero built-in data mover will support a `dataMover` key in `parameters`, with the value either `velero-fs` or `velero-block`; these values carry the same meaning as in Per Backup Selection.
As an example, here is how a user might use both `velero-block` and `velero-fs` in a single backup:
- Users set the backup's `DataMover` parameter to `velero-block`
- Users add a record to the Volume Policy, with `conditions` filtering the volumes they want to back up through the Velero file system data mover, `action.type` set to `snapshot`, and a `dataMover: velero-fs` record inserted into `action.parameters`
In this way, all volumes matched by `conditions` are backed up with the Velero file system data mover, while the others fall back to the per-backup choice, the Velero block data mover.
Vice versa, users could set the per-backup method to the file system data mover and select volumes for the block data mover.
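The combination above can be sketched as a volume policy entry (a hedged illustration: the schema follows the general shape of Velero's existing volume policies, and the `storageClass` condition is just one possible filter):

```yaml
# The backup itself is created with DataMover: velero-block,
# so block is the per-backup default.
version: v1
volumePolicies:
  # Volumes matching these conditions use the file system data mover instead.
  - conditions:
      storageClass:
        - nfs-sc   # e.g. volumes backed by file storage
    action:
      type: snapshot
      parameters:
        dataMover: velero-fs
```

All other volumes fall through to the per-backup `velero-block` selection.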
The selected data mover for each volume should be recorded to `volumeInfo.json`.
### Controllers
The Backup controller and Restore controller are kept as is; async operations are still used to interact with VBDM for the block data mover.
The DataUpload controller and DataDownload controller are almost kept as is, with some minor changes to handle the data mover type and backup type appropriately and convey them to the exposers. With the [VGDP Micro Service][3], the controllers are almost isolated from VGDP, so no major changes are required.
### Exposer
#### CSI Snapshot Exposer
The existing CSI Snapshot Exposer is reused, with some changes to decide the backupPVC volume mode from the data access mode. Specifically, for the Velero block data mover, the data access mode is always block, so the backupPVC volume mode is always `Block`.
Once the backupPVC is created with the correct volume mode, the existing code can create the backupPod and mount the backupPVC appropriately.
#### Generic Restore Exposer
The existing Generic Restore Exposer is reused, but the workflow needs some changes.
For the block data mover, the restorePV is in block mode all the time, whereas the targetPVC may be in either file system mode or block mode.
However, Kubernetes doesn't allow binding a PV to a PVC with a mismatched volume mode.
Therefore, the workflow of ***Finish Volume Readiness*** as introduced in the [Volume Snapshot Data Movement design][2] is changed as follows:
- When the restore completes and the restorePV is created, set the restorePV's `deletionPolicy` to `Retain`
- Create another rebindPV, copying the restorePV's `volumeHandle` but with a `volumeMode` matching the targetPVC
- Delete the restorePV
- Set the rebindPV's claim reference (the `claimRef` field) to the targetPVC
- Add the `velero.io/dynamic-pv-restore` label to the rebindPV
In this way, the targetPVC will be immediately bound by Kubernetes to the rebindPV.
These changes work for the file system data mover as well, so the old workflow will be replaced and only the new workflow kept.
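The rebind steps can be sketched as follows, using a simplified local `pv` struct rather than the real `k8s.io/api` PersistentVolume type; `buildRebindPV` and the `-rebind` naming are hypothetical:

```go
package main

import "fmt"

// pv is a simplified stand-in for a Kubernetes PersistentVolume; only the
// fields relevant to the rebind workflow are modeled.
type pv struct {
	Name         string
	VolumeHandle string // CSI volume handle, copied to the rebindPV
	VolumeMode   string // "Filesystem" or "Block"
	ClaimRef     string // namespace/name of the PVC to bind
	Labels       map[string]string
}

// buildRebindPV follows the modified "Finish Volume Readiness" workflow:
// copy the restorePV's volumeHandle, take the volumeMode from the targetPVC,
// point claimRef at the targetPVC, and add the dynamic-pv-restore label.
func buildRebindPV(restorePV pv, targetPVCRef, targetPVCMode string) pv {
	return pv{
		Name:         restorePV.Name + "-rebind", // illustrative naming only
		VolumeHandle: restorePV.VolumeHandle,
		VolumeMode:   targetPVCMode,
		ClaimRef:     targetPVCRef,
		Labels:       map[string]string{"velero.io/dynamic-pv-restore": "true"},
	}
}

func main() {
	restorePV := pv{Name: "restore-pv", VolumeHandle: "vol-123", VolumeMode: "Block"}
	rebind := buildRebindPV(restorePV, "ns1/data-pvc", "Filesystem")
	fmt.Printf("%+v\n", rebind)
}
```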
### VGDP
Below is the VGDP workflow during backup:
![VGDP Backup](vgdp-backup.png)
Below is the VGDP workflow during restore:
![VGDP Restore](vgdp-restore.png)
#### Unified Repo
For block data mover, one Unified Repo Object is created for each volume, and some metadata is also saved into Unified Repo to describe the volume.
During backup, writes are conducted in a skippable-write manner:
- For data ranges that are not skipped, the object is written with the real data
- For data ranges that are skipped, the data is either filled with ZERO or cloned from the parent object: for a full backup, data is filled with ZERO; for an incremental backup, data is cloned from the parent object
To support incremental backup, the `ObjectWriter` interface needs to be extended to support `io.WriterAt`, so that the uploader can write in a skippable manner:
```go
type ObjectWriter interface {
io.WriteCloser
io.WriterAt
// Seeker is used in the cases that the object is not written sequentially
io.Seeker
// Checkpoint is periodically called to preserve the state of data written to the repo so far.
// Checkpoint returns a unified identifier that represent the current state.
// An empty ID could be returned on success if the backup repository doesn't support this.
Checkpoint() (ID, error)
// Result waits for the completion of the object write.
// Result returns the object's unified identifier after the write completes.
Result() (ID, error)
}
```
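To make the extended interface concrete, here is a minimal in-memory sketch of an `ObjectWriter` (assumptions: `ID` is declared locally as a stand-in for the Unified Repo type, and a real writer would persist to the backup repository instead of a byte slice):

```go
package main

import (
	"fmt"
	"io"
)

type ID string // local stand-in for the Unified Repo identifier type

// memObjectWriter is a toy ObjectWriter backed by a growable byte slice.
// WriteAt lets the uploader skip ranges: untouched bytes stay ZERO, which
// matches the full-backup behavior described above.
type memObjectWriter struct {
	buf []byte
	off int64
}

func (w *memObjectWriter) grow(n int64) {
	if n > int64(len(w.buf)) {
		w.buf = append(w.buf, make([]byte, n-int64(len(w.buf)))...)
	}
}

func (w *memObjectWriter) Write(p []byte) (int, error) {
	n, err := w.WriteAt(p, w.off)
	w.off += int64(n)
	return n, err
}

func (w *memObjectWriter) WriteAt(p []byte, off int64) (int, error) {
	w.grow(off + int64(len(p)))
	copy(w.buf[off:], p)
	return len(p), nil
}

func (w *memObjectWriter) Seek(offset int64, whence int) (int64, error) {
	switch whence {
	case io.SeekStart:
		w.off = offset
	case io.SeekCurrent:
		w.off += offset
	case io.SeekEnd:
		w.off = int64(len(w.buf)) + offset
	}
	return w.off, nil
}

func (w *memObjectWriter) Close() error { return nil }

// Checkpoint and Result simply fingerprint the current content in this toy.
func (w *memObjectWriter) Checkpoint() (ID, error) {
	return ID(fmt.Sprintf("ckpt-%d", len(w.buf))), nil
}

func (w *memObjectWriter) Result() (ID, error) {
	return ID(fmt.Sprintf("obj-%d", len(w.buf))), nil
}

func main() {
	w := &memObjectWriter{}
	// Write only one changed 4-byte range at offset 8; bytes 0-7 stay ZERO.
	w.WriteAt([]byte{1, 2, 3, 4}, 8)
	id, _ := w.Result()
	fmt.Println(id, w.buf)
}
```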
To clone data from parent object, the caller needs to specify the parent object. To support this, `ObjectWriteOptions` is extended with `ParentObject`.
The existing `AccessMode` could be used to indicate the data access type, either file system or block:
```go
// ObjectWriteOptions defines the options when creating an object for write
type ObjectWriteOptions struct {
FullPath string // Full logical path of the object
DataType int // OBJECT_DATA_TYPE_*
Description string // A description of the object, could be empty
Prefix ID // A prefix of the name used to save the object
AccessMode int // OBJECT_DATA_ACCESS_*
BackupMode int // OBJECT_DATA_BACKUP_*
AsyncWrites int // Num of async writes for the object, 0 means no async write
ParentObject ID // the parent object based on which incremental write will be done
}
```
To allow non-Kopia uploaders to save snapshots to the Unified Repo, snapshot-related methods will be added to the `BackupRepo` interface:
```go
// SaveSnapshot saves a repo snapshot
SaveSnapshot(ctx context.Context, snapshot Snapshot) (ID, error)
// GetSnapshot returns a repo snapshot from snapshot ID
GetSnapshot(ctx context.Context, id ID) (Snapshot, error)
// DeleteSnapshot deletes a repo snapshot
DeleteSnapshot(ctx context.Context, id ID) error
// ListSnapshot lists all snapshots in repo for the given source (if specified)
ListSnapshot(ctx context.Context, source string) ([]Snapshot, error)
```
To allow non-Kopia uploaders to save metadata, which describes the backed up objects, metadata-related methods will be added to the `BackupRepo` interface:
```go
// WriteMetadata writes metadata to the repo, metadata is used to describe data, e.g., file system
// dirs are saved as metadata
WriteMetadata(ctx context.Context, meta *Metadata, opt ObjectWriteOptions) (ID, error)
// ReadMetadata reads a metadata from repo by the metadata's object ID
ReadMetadata(ctx context.Context, id ID) (*Metadata, error)
```
kopia-lib for Unified Repo will implement these interfaces by calling the corresponding Kopia repository functions.
### Kopia Repository
The Kopia repository's CAOS (Content-Addressable Object Store) implements the Unified Repo's objects. However, CAOS supports only full, sequential writes.
To support skippable writes, a new Incremental Aware Object Extension is created on top of the existing CAOS.
#### Block Address Table
Kopia CAOS uses a Block Address Table (BAT) to track objects. It will be reused for both full backups and incremental backups.
![Incremental Aware Object Extension](caos-extension.png)
For the Incremental Aware Object Extension, one object represents one volume.
For a full backup, the skipped areas are written as all-ZERO data by the Incremental Aware Object Extension, since the Kopia repository's interface doesn't support skippable writes. This is acceptable: the ZERO data is deduplicated by the Kopia repository, so nothing is actually written to the backup storage.
For an incremental backup, the Incremental Aware Object Extension clones the table entries from the parent object for the skipped areas; for the written areas, it writes the data to the Kopia repository and generates new entries. Finally, it produces a new block address table for the incremental object that covers its entire logical space.
The Incremental Aware Object Extension is automatically activated for block-mode data access, as set by the `AccessMode` field of `ObjectWriteOptions`.
#### Deduplication
The Incremental Aware Object Extension uses a fixed-size splitter for deduplication. This is good enough for block-level backup, for the following reasons:
- Unlike a file, a disk write never inserts data into the middle of the disk; it only does in-place updates or appends. So the data never shifts between two disks or between two backups of the same disk
- File system IO to disk is generally aligned to a specific size, e.g., 4KB for NTFS and ext4. As long as the chunk size is a multiple of this size, it effectively reduces the cases where one IO spans two deduplication chunks
- For use cases where the disk is used as a raw block device without a file system, IO is still conducted aligned to a specific boundary
The chunk size is intentionally chosen as 1MB, for the following reasons:
- 1MB is a multiple of 4KB, the common block size for file systems and raw block device usages
- 1MB is the start boundary of partitions for modern operating systems, for both MBR and GPT, so partition metadata can be isolated into a separate chunk
- The more chunks there are, the more index entries in the repository; 1MB is a moderate value with regard to the index overhead of the Kopia repository
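To make the alignment argument concrete, a minimal sketch (illustrative code, not part of the design) of which 1MB deduplication chunks a write touches:

```go
package main

import "fmt"

const chunkSize = 1 << 20 // 1MB fixed-size deduplication chunk

// chunksTouched returns the indices of the 1MB chunks overlapped by a
// write of `length` bytes at `offset`.
func chunksTouched(offset, length int64) []int64 {
	if length <= 0 {
		return nil
	}
	first := offset / chunkSize
	last := (offset + length - 1) / chunkSize
	var idx []int64
	for i := first; i <= last; i++ {
		idx = append(idx, i)
	}
	return idx
}

func main() {
	// A 4KB write aligned to 4KB never straddles a 1MB chunk boundary...
	fmt.Println(len(chunksTouched(8192, 4096))) // 1
	// ...while an unaligned write of the same size can touch two chunks.
	fmt.Println(len(chunksTouched(chunkSize-100, 4096))) // 2
}
```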
#### Benefits
Since the existing block address table (BAT) of CAOS is reused and kept as is, this brings the following benefits:
- All the entries are still managed by Kopia CAOS, so Velero doesn't need to keep any extra data
- The objects written by the Velero block uploader are still recognizable by Kopia, for both full and incremental backups
- The existing data management in the Kopia repository still works for objects generated by the Velero block uploader, e.g., snapshot GC, repository maintenance, etc.
Most importantly, this solution is highly performant:
- During an incremental write, it doesn't copy any data from the parent object; it only clones object block address entries
- During backup deletion, it doesn't need to move any data; it only deletes the BAT for the object
#### Uploader behavior
The block uploader's skippable writes must also be aligned to this 1MB boundary, because the Incremental Aware Object Extension needs to clone the skipped entries from the parent object.
The file system uploader still uses variable-sized deduplication. It is fine to keep data from the two uploaders in the same Kopia repository, though normally they won't be mutually deduplicated.
Volumes can be resized, and a volume size may not be aligned to the 1MB boundary. The uploader needs to handle resizes appropriately, since the Incremental Aware Object Extension cannot copy a BAT entry partially.
#### CBT Layer
CBT provides the following functionalities:
1. For a full backup, it provides the allocated data ranges. E.g., for a 1TB volume there may be only 1MB of files; with this functionality, the uploader can skip the ranges without real data
2. For an incremental backup, it provides the changed data ranges based on the provided parent snapshot. In this way, the uploader can skip the unchanged data and achieve an incremental backup
For case 1, the uploader calls the Unified Repo object's `WriteAt` method with the offset of the allocated data; ranges ahead of the offset are filled with ZERO by the Unified Repository.
For case 2, the uploader calls the Unified Repo object's `WriteAt` method with the offset of the changed data; ranges ahead of the offset are cloned from the parent object by the Unified Repository.
A changeId is stored with each backup; the next backup retrieves the parent snapshot's changeId and uses it to retrieve the CBT.
The CBT retrieved from the Kubernetes API is a list of `BlockMetadata`, where each range may have a fixed or variable size.
The block uploader needs to maintain its own granularity that is friendly to its backup repository and uploader, as mentioned above.
From the Kubernetes API, `GetMetadataAllocated` or `GetMetadataDelta` is called in a loop until all `BlockMetadata` entries are retrieved.
On the other hand, considering the complexity inside the uploader, e.g., multiple streams between read and write, the workflow should be driven by the uploader instead of the CBT iterator. Therefore, in practice, all the allocated/changed blocks should be retrieved and preserved before being passed to the uploader.
Moreover, directly saving the `BlockMetadata` list would be memory consuming.
With all the above considerations, a `Bitmap` data structure, called the CBT Bitmap, is used to save the allocated/changed blocks.
The CBT Bitmap chunk size could be 1MB or a multiple of it, but a larger chunk size would amplify the backup size, so 1MB will be used.
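A minimal sketch of building the CBT Bitmap from the retrieved ranges; the `blockMetadata` shape and field names here are illustrative stand-ins for the Kubernetes `BlockMetadata` entries:

```go
package main

import "fmt"

const cbtChunk = 1 << 20 // 1MB bitmap granularity

// blockMetadata is an illustrative stand-in for the Kubernetes
// BlockMetadata entry (an offset/size range).
type blockMetadata struct {
	Offset int64
	Size   int64
}

// toBitmap marks every 1MB chunk that overlaps an allocated/changed range.
func toBitmap(volumeSize int64, ranges []blockMetadata) []bool {
	nChunks := (volumeSize + cbtChunk - 1) / cbtChunk
	bits := make([]bool, nChunks)
	for _, r := range ranges {
		if r.Size <= 0 {
			continue
		}
		first := r.Offset / cbtChunk
		last := (r.Offset + r.Size - 1) / cbtChunk
		for i := first; i <= last && i < nChunks; i++ {
			bits[i] = true
		}
	}
	return bits
}

func main() {
	// A 4MB volume with one 4KB change inside the third megabyte:
	bits := toBitmap(4*cbtChunk, []blockMetadata{{Offset: 2*cbtChunk + 4096, Size: 4096}})
	fmt.Println(bits[2]) // true; all other bits stay false
}
```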
Finally, the interactions among the CSI Snapshot Metadata Service, the CBT Layer and the Uploader are as below:
![CBT Layer](cbt.png)
In this way, the CBT layer and the uploader are decoupled, and the CBT bitmap acts as a north-bound parameter of the uploader.
#### Block Uploader
The block uploader consists of a reader and a writer which run asynchronously.
During backup, the reader reads data from the block device, consulting the CBT Bitmap for allocated/changed blocks; the writer writes the data to the Unified Repo.
During restore, the reader reads data from the Unified Repo; the writer writes the data to the block device.
The reader and writer are connected by a ring buffer: the reader pushes block data into the ring buffer, and the writer takes data from the ring buffer and writes it to the target.
To improve performance, the block device is opened with direct IO, so that no data goes through the system cache unnecessarily.
During restore, to optimize write throughput and storage usage, zero blocks should be either skipped (for restoring to a new volume) or unmapped (for restoring to an existing volume). To cover both cases in a unified way, the SCSI command `WRITE_SAME` is used. The logic is as follows:
- Detect whether a block read from the backup contains all-zero data
- If true, the uploader sends the `WRITE_SAME` SCSI command by calling the `BLKZEROOUT` ioctl
- If the call fails, the uploader falls back to the conservative way of writing the zero bytes to the disk
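The zero-block detection step can be sketched as below. The actual `BLKZEROOUT` ioctl requires a real block device (on Linux it would be issued through `golang.org/x/sys/unix`), so it is only described in the comment rather than executed:

```go
package main

import "fmt"

// isZeroBlock reports whether the block contains only zero bytes. On a
// true result, the restore path would issue WRITE_SAME via the
// BLKZEROOUT ioctl on the target block device instead of writing the
// bytes, and would fall back to a plain write if the ioctl fails.
func isZeroBlock(b []byte) bool {
	for _, c := range b {
		if c != 0 {
			return false
		}
	}
	return true
}

func main() {
	blk := make([]byte, 1<<20)
	fmt.Println(isZeroBlock(blk)) // true
	blk[12345] = 1
	fmt.Println(isZeroBlock(blk)) // false
}
```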
The uploader implementation is OS dependent, but since Windows containers don't support block volumes, the current implementation is for Linux only.
#### ChangeId
The ChangeId identifies the base that the CBT is generated from; it must strictly map to the parent snapshot in the repository. Otherwise, there would be data corruption in the incremental backup.
Therefore, the ChangeId is saved together with the repository snapshot.
The data mover always queries the parent snapshot from the Unified Repo together with the ChangeId. In this way, no mismatch can happen.
Inside the uploader, the upper layer (the DataUpload controller) could also provide the ChangeId as a double-confirmation mechanism. The received ChangeId is re-evaluated against the one in the provided snapshot.
In the Kubernetes API, the changeId is represented by `BaseSnapshotId`.
ChangeId retrieval is storage specific. Generally, it is retrieved from the `SnapshotHandle` of the VolumeSnapshotContent object; however, storages may also refer to other places to retrieve the changeId.
That is, `SnapshotHandle` and the changeId may be two different values; in that case, both values need to be preserved.
#### Volume Snapshot Retention
Storages/CSI drivers may support the changeId differently based on the storage's capabilities:
1. Some storages don't require the parent snapshot itself when calculating changes, so the parent snapshot can be deleted immediately after the parent backup completes.
2. In order to calculate the changes, other storages require that the parent snapshot mapped to the changeId still exists when `GetMetadataDelta` is called, so the parent snapshot can NOT be deleted as long as there are incremental backups based on it.
The existing exposer works perfectly with Case 1, that is, the snapshot is always deleted when the backup completes.
However, for Case 2, since the snapshot must be retained, the exposer needs the changes below:
- At the end of each backup, keep the `deletionPolicy` of the current VolumeSnapshot's VolumeSnapshotContent as `Retain`, so that when the VolumeSnapshot is deleted at the end of the backup, the current snapshot is retained in the storage
- `GetMetadataDelta` is called with `BaseSnapshotId` set to the preserved changeId
- When deleting a backup, a VolumeSnapshot-VolumeSnapshotContent pair is rebuilt with `deletionPolicy` as `Delete` and `snapshotHandle` as the preserved one
- The rebuilt VolumeSnapshot is then deleted so that the volume snapshot is deleted from the storage
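For illustration, the rebuilt pair could look like the following pre-provisioned VolumeSnapshot/VolumeSnapshotContent binding. The object names and the driver are hypothetical; `snapshotHandle` is the value preserved at backup time:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: velero-rebuilt-vsc           # hypothetical name
spec:
  deletionPolicy: Delete             # deleting the pair now removes the storage snapshot
  driver: csi.example.com            # the volume's CSI driver (hypothetical)
  source:
    snapshotHandle: snap-handle-preserved-at-backup-time
  volumeSnapshotRef:
    name: velero-rebuilt-vs
    namespace: velero
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: velero-rebuilt-vs            # hypothetical name
  namespace: velero
spec:
  source:
    volumeSnapshotContentName: velero-rebuilt-vsc
```

Deleting `velero-rebuilt-vs` then propagates to the content object and, because of the `Delete` policy, to the snapshot in the storage.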
There is no way to automatically detect which case a specific volume supports, so an interface is exposed to users to set the volume snapshot retention method.
The interface could be added to the `Action.Parameters` of the Volume Policy. By default, the Velero block data mover takes Case 1, so the volume snapshot is never retained; if users specify the `RetainSnapshot` parameter, Case 2 is taken.
```go
type Action struct {
Type VolumeActionType `yaml:"type"`
Parameters map[string]any `yaml:"parameters,omitempty"`
}
```
In this way, users could specify that, for storage class "xxx" or CSI driver "yyy", backups go through CSI snapshots with the Velero block data mover and the snapshot is retained.
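A sketch of such a volume policy, assuming the existing volume policy schema shown above; the storage class name and the `RetainSnapshot` parameter value are illustrative:

```yaml
version: v1
volumePolicies:
  - conditions:
      storageClass:
        - xxx
    action:
      type: snapshot
      parameters:
        RetainSnapshot: "true"   # opt the matched volumes into Case 2 (snapshot retention)
```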
#### Incremental Size
By the end of the backup, the incremental size is also returned by the uploader, the same as with the Velero file system uploader. The size indicates how much data is unique and thus processed by the uploader, based on the provided CBT.
### Fallback to Full Backup
There are some occasions where an incremental backup can't continue, so the data mover falls back to a full backup:
- `GetMetadataAllocated` or `GetMetadataDelta` returns an error
- The ChangeId is missing
- The parent snapshot is missing
When the fallback happens, the volume is fully backed up at block level, but because of the data deduplication in the backup repository, the unallocated/unchanged data will most likely be deduplicated.
During restore, the volume is also fully restored. The zero-block handling mentioned above still applies, so write IO for unallocated data is mostly eliminated.
Fallback handles the exceptional cases; for most backups/restores, fallback is never expected.
### Irregular Volume Size
As mentioned above, during incremental backup, the block uploader's IO is restricted to be aligned to the deduplication chunk size (1MB); on the other hand, there is no hard requirement that users' volume sizes be aligned.
To support volumes with irregular sizes, the measures below are taken:
- Volume objects in the repository are always aligned to 1MB
- If the volume size is irregular, zero bytes are padded to the tail of the volume object
- The real size is recorded in the repository snapshot
- During restore, the real size of data is restored
The padding must always be zero bytes.
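The padding rule can be sketched as follows (illustrative only):

```go
package main

import "fmt"

const alignment = 1 << 20 // repository volume objects are aligned to 1MB

// padTo1M returns the data padded with zero bytes up to the next 1MB
// boundary; realSize is the original length, which is what gets
// recorded in the repository snapshot and restored later.
func padTo1M(data []byte) (padded []byte, realSize int64) {
	realSize = int64(len(data))
	rem := len(data) % alignment
	if rem == 0 {
		return data, realSize
	}
	padded = append(data, make([]byte, alignment-rem)...)
	return padded, realSize
}

func main() {
	p, real := padTo1M(make([]byte, alignment+123))
	fmt.Println(len(p), real) // 2097152 1048699
}
```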
### Volume Size Change
Incremental backup can continue when a volume is resized.
The block uploader supports writing disks of arbitrary size.
The volume resize cases don't need to be handled one by one.
Instead, when a volume resize happens, the block uploader handles it appropriately in the ways below:
- Loop with the CBT
- Read the data between RoundDownTo1M(newSize) and newSize to get the tail data
- If there is no tail data, which means the volume size is aligned to 1MB, call `WriteAt(newSize, nil)`
- Otherwise, call `WriteAt(RoundDownTo1M(newSize), taildata)`, where `taildata` is also padded to 1MB
That is to say:
- If the CBT covers the tail of the volume, looping with the CBT is enough for both the shrink and the expand case
- Otherwise, if the volume is expanded, `WriteAt` guarantees to clone the appropriate object entries from the parent object and append zero data for the expanded areas. In particular, if the parent volume is not a regular size, its zero padding bytes are also reused; therefore the parent object's padding bytes must be zero
- If the volume is shrunk, writing the tail data makes sure zero bytes are padded to the new volume object instead of inheriting non-zero data from the parent object
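The tail-handling decision above can be sketched as follows (illustrative only; `RoundDownTo1M` and the `WriteAt` call shapes follow the rules in the text):

```go
package main

import "fmt"

const mib = 1 << 20

// roundDownTo1M rounds a size down to the nearest 1MB boundary.
func roundDownTo1M(n int64) int64 { return n / mib * mib }

// tailWrite decides the final WriteAt call after looping with the CBT:
// when the new size is 1MB-aligned there is no tail data, so the call
// would be WriteAt(newSize, nil); otherwise the tail starts at the
// rounded-down offset and is padded to 1MB by the repository layer.
func tailWrite(newSize int64) (offset int64, hasTail bool) {
	offset = roundDownTo1M(newSize)
	return offset, offset != newSize
}

func main() {
	fmt.Println(tailWrite(5 * mib))      // 5242880 false (aligned: WriteAt(newSize, nil))
	fmt.Println(tailWrite(5*mib + 4096)) // 5242880 true  (tail written from the 5MB boundary)
}
```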
### Cancellation
The existing cancellation mechanism is reused, so there is no change outside of the block uploader.
Inside the uploader, cancellation checkpoints are embedded in the uploader's reader and writer, so that execution can quit in a reasonable time once cancellation happens.
### Parallelism
Parallelism among data movers reuses the existing mechanism, load concurrency.
Inside the data mover, the uploader's reader and writer always run in parallel. The number of readers and writers is always 1.
Sequential read/write of the volume is already optimized, and there is no proof that multiple readers/writers are beneficial.
### Progress Report
Progress report outside of the data mover reuses the existing mechanism.
Inside the data mover, progress updates are embedded in the uploader's writer.
The progress struct is kept as is; the Velero block data mover still supports `TotalBytes` and `BytesDone`:
```go
type Progress struct {
TotalBytes int64 `json:"totalBytes,omitempty"`
BytesDone int64 `json:"doneBytes,omitempty"`
}
```
By the end of the backup, the block data mover's progress provides the same `GetIncrementalSize`, which reports the incremental size of the backup, so the incremental size is reported to users in the same way as with the file system data mover.
### Selectable Backup Type
For several reasons, a periodic full backup is required:
- From a user experience perspective, a periodic full backup, e.g., every week or month, is required to ensure data integrity among the incremental backups
Therefore, the backup type (full/incremental) should be supported in Velero's manual backups and backup schedules.
The backup type will also be added to `volumeInfo.json` for observability purposes.
Backup TTL is still used for users to specify a backup's retention time. By default, both full and incremental backups have a 30-day retention, even though this is not ideal for full backups. This could be enhanced when Velero supports a sophisticated retention policy.
As a workaround, users could create two schedules for the same backup scope: one for full backups, with a lower frequency and a longer backup TTL; the other for incremental backups, with the normal frequency and a shorter backup TTL.
#### File System Data Mover
At present, the Velero file system data mover doesn't support a selectable backup type; instead, incremental backups are always conducted whenever possible.
From a user experience perspective this is not reasonable.
Therefore, to solve this problem and align with the Velero block data mover, the Velero file system data mover will support the backup type as well.
The data path for the Velero file system data mover already supports it; we only need to expose this functionality to users.
### Backup Describe
The backup type should be added to the backup description, in two places:
- The `backupType` in the Backup CR. This is the backup type selected by users
- The backup type recorded in `volumeInfo.json`, which is the actual type taken by the backup
With these two values, users can know the actual backup type and also whether a fallback happened.
The `DataMover` item in the existing backup description should be updated to reflect the actual data mover that completed the backup; this information can be retrieved from `volumeInfo.json`.
### Backup Sync
No additional data is required for sync, so Backup Sync is kept as is.
### Backup Deletion
As mentioned above, no data is moved when deleting a repo snapshot for the Velero block data mover, so Backup Deletion is kept as is with regard to repo snapshots. For the volume snapshot retention case, the backup deletion logic will be modified accordingly to delete the retained snapshots.
### Restarts
The restart mechanism is reused without any change.
### Logging
Logging mechanism is not changed.
### Backup CRD
A `backupType` field is added to the Backup CRD; two values are supported: `full` and `incremental`.
`full` instructs the data mover to take a full backup.
`incremental`, the default value, instructs the data mover to take an incremental backup.
```yaml
spec:
description: BackupSpec defines the specification for a Velero backup.
properties:
backupType:
description: BackupType indicates the type of the backup
enum:
- full
- incremental
type: string
```
### DataUpload CRD
A `parentSnapshot` field is added to the DataUpload CRD; the values below are supported:
- `""`: falls back to `auto`
- `auto`: the data mover finds the most recent snapshot of the same volume in the Unified Repository and uses it as the parent
- `none`: the data mover is not assigned a parent snapshot, so it runs a full backup
- a specific snapshotID: the data mover uses the given snapshotID to find the parent snapshot; if it cannot be found, the data mover falls back to a full backup
The last option is for backup planning; it will not be used for now and may become useful when Velero supports a sophisticated retention policy. For now, Velero always picks the most recent backup as the parent.
When the `backupType` of the Backup is `full`, the data mover controller sets `parentSnapshot` of the DataUpload to `none`.
When the `backupType` of the Backup is `incremental`, the data mover controller sets `parentSnapshot` of the DataUpload to `auto`; `""` is kept only for backwards compatibility.
```yaml
spec:
description: DataUploadSpec is the specification for a DataUpload.
properties:
parentSnapshot:
description: |-
ParentSnapshot specifies the parent snapshot that current backup is based on.
If its value is "" or "auto", the data mover finds the recent backup of the same volume as parent.
If its value is "none", the data mover will do a full backup
If its value is a specific snapshotID, the data mover finds the specific snapshot as parent.
type: string
```
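For illustration, the mapping from `backupType` to `parentSnapshot`, and how the data mover interprets the field, can be sketched as below (illustrative only; function names are not real Velero identifiers):

```go
package main

import "fmt"

// parentForBackupType sketches how the data mover controller fills
// DataUpload.spec.parentSnapshot from Backup.spec.backupType.
func parentForBackupType(backupType string) string {
	if backupType == "full" {
		return "none"
	}
	return "auto" // "incremental", the default
}

// resolveParent sketches how the data mover interprets the field.
func resolveParent(parentSnapshot string) string {
	switch parentSnapshot {
	case "", "auto": // "" is kept for backwards compatibility
		return "use most recent snapshot of the volume"
	case "none":
		return "full backup"
	default:
		return "use snapshot " + parentSnapshot + ", else fall back to full"
	}
}

func main() {
	fmt.Println(parentForBackupType("full")) // none
	fmt.Println(resolveParent(""))           // use most recent snapshot of the volume
}
```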
### DataDownload CRD
No change is required to DataDownload CRD.
## Plugin Data Movers
The current design doesn't break anything for plugin data movers.
The enhancement in VolumePolicy can also be used by plugin data movers. That is, users can select a plugin data mover through VolumePolicy in the same way as the Velero built-in data movers.
## Installation
No change to Installation.
## Upgrade
No impacts to Upgrade. The new fields in the CRDs are all optional fields and have backwards compatible values.
## CLI
A backup type parameter is added to the Velero CLI as below:
```
velero backup create --full
velero schedule create --full
```
When the parameter is not specified, by default, Velero goes with incremental backups.
[1]: ../Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: ../Implemented/volume-snapshot-data-movement/volume-snapshot-data-movement.md
[3]: ../Implemented/vgdp-micro-service/vgdp-micro-service.md
[4]: https://kubernetes.io/blog/2025/09/25/csi-changed-block-tracking/
[5]: https://kopia.io/docs/advanced/architecture/

@@ -1,93 +0,0 @@
# Add custom volume policy action
## Abstract
Currently, velero supports 3 different volume policy actions:
snapshot, fs-backup, and skip, which tell Velero how to handle backing
up PVC contents. Any other policy action is not allowed. This prevents
third party BackupItemAction (BIA) plugins which might want to perform
different actions on PVC via defined volume policies.
## Background
An external BIA plugin that wants to back up volumes via some custom
means (i.e. not CSI snapshots or fs-backup with kopia) is not able to
make use of the existing volume policy API. While the plugin could use
something like PVC annotations instead, this won't integrate with
existing volume policies, which is desirable in case the user wants to
specify some PVCs to use the custom plugin while leaving others using
CSI snapshots or fs-backup.
## Goals
- Add a fourth valid volume policy action "custom"
- Make use of the existing action parameters field to distinguish between multiple custom actions.
## Non Goals
- Implementing custom action logic in velero repo
## High-Level Design
A new VolumeActionType with the value "custom" will be added to
`internal/resourcepolicies`. When the action is "custom", velero will
not perform a snapshot or use fs-backup on the PVC. If there is no
registered plugin which implements the desired custom action, then it
will be equivalent to the "skip" action. Since there could be
different plugins that implement custom actions, when making the API
call (defined below) the plugin should also pass in a partial action
parameters map containing the portion of the map that identifies the
custom plugin as belonging to a particular external
implementation. For example, there might be a custom BIA that's
looking for a `custom` volume policy action with the parameter
`myCustomAction=true`. The volume policy action would be defined like
this:
```yaml
action:
type: custom
parameters:
myCustomAction: true
```
In `internal/volumehelper/volume_policy_helper.go` a new interface
method will be added, similar to `ShouldPerformSnapshot` but it takes
a partial parameter map as an additional input, since for custom
actions to match, the action type must be `custom`, but also there may
be some parameter that needs to match (to distinguish between
different custom actions). We also want a way for the plugin to get
the parameter map for the action. This should probably just return the
map rather than the `Action` struct, since the `Action` struct is under `internal`.
```go
type VolumeHelper interface {
ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error)
ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error)
	ShouldPerformCustomAction(obj runtime.Unstructured, groupResource schema.GroupResource, parameters map[string]any) (bool, error)
GetActionParameters(obj runtime.Unstructured, groupResource schema.GroupResource) (map[string]any, error)
}
```
In addition, since the VolumeHelper interface is expected to be called by external plugins, the interface (but not the implementation) should be moved from `internal/volumehelper` to `pkg/util/volumehelper`.
In `pkg/plugin/utils/volumehelper/volume_policy_helper.go`, a new helper func will be added, which delegates to the internal `volumehelper.NewVolumeHelperImplWithNamespaces`:
```go
func NewVolumeHelper(
volumePolicy *resourcepolicies.Policies,
snapshotVolumes *bool,
logger logrus.FieldLogger,
client crclient.Client,
defaultVolumesToFSBackup bool,
backupExcludePVC bool,
namespaces []string,
) (VolumeHelper, error) {
```
## Alternatives Considered
An alternate approach was to create a new server arg to allow
user-defined parameters. That was rejected in favor of this approach,
as the explicitly-supported "custom" option integrates more easily
into a supportable plugin-callable API.

go.mod
@@ -1,6 +1,6 @@
module github.com/vmware-tanzu/velero
go 1.25.0
go 1.25.7
require (
cloud.google.com/go/storage v1.57.2
@@ -105,7 +105,7 @@ require (
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-jose/go-jose/v4 v4.1.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
@@ -144,7 +144,7 @@ require (
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.97 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.5.1 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect

go.sum
@@ -264,8 +264,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-jose/go-jose/v4 v4.1.4 h1:moDMcTHmvE6Groj34emNPLs/qtYXRVcd6S7NHbHz3kA=
github.com/go-jose/go-jose/v4 v4.1.4/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
@@ -550,8 +550,8 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/spdystream v0.5.1 h1:9sNYeYZUcci9R6/w7KDaFWEWeV4LStVG78Mpyq/Zm/Y=
github.com/moby/spdystream v0.5.1/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$TARGETPLATFORM golang:1.25-trixie
FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm
ARG GOPROXY

hack/build-restic.sh Executable file
@@ -0,0 +1,56 @@
#!/bin/bash
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
# Use /output/usr/bin/ as the default output directory as this
# is the path expected by the Velero Dockerfile.
output_dir=${OUTPUT_DIR:-/output/usr/bin}
restic_bin=${output_dir}/restic
build_path=$(dirname "$PWD")
if [[ -z "${BIN}" ]]; then
echo "BIN must be set"
exit 1
fi
if [[ "${BIN}" != "velero" ]]; then
echo "${BIN} does not need the restic binary"
exit 0
fi
if [[ -z "${GOOS}" ]]; then
echo "GOOS must be set"
exit 1
fi
if [[ -z "${GOARCH}" ]]; then
echo "GOARCH must be set"
exit 1
fi
if [[ -z "${RESTIC_VERSION}" ]]; then
echo "RESTIC_VERSION must be set"
exit 1
fi
mkdir ${build_path}/restic
git clone -b v${RESTIC_VERSION} https://github.com/restic/restic.git ${build_path}/restic
pushd ${build_path}/restic
git apply /go/src/github.com/vmware-tanzu/velero/hack/fix_restic_cve.txt
go run build.go --goos "${GOOS}" --goarch "${GOARCH}" --goarm "${GOARM}" -o ${restic_bin}
chmod +x ${restic_bin}
popd

hack/fix_restic_cve.txt Normal file
@@ -0,0 +1,274 @@
diff --git a/go.mod b/go.mod
index 5f939c481..f6205aa3c 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,31 @@ require (
github.com/restic/chunker v0.4.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
- golang.org/x/crypto v0.5.0
- golang.org/x/net v0.5.0
- golang.org/x/oauth2 v0.4.0
- golang.org/x/sync v0.1.0
- golang.org/x/sys v0.4.0
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.45.0
+ golang.org/x/net v0.47.0
+ golang.org/x/oauth2 v0.28.0
+ golang.org/x/sync v0.18.0
+ golang.org/x/sys v0.38.0
+ golang.org/x/term v0.37.0
+ golang.org/x/text v0.31.0
+ google.golang.org/api v0.114.0
)
require (
- cloud.google.com/go v0.108.0 // indirect
- cloud.google.com/go/compute v1.15.1 // indirect
- cloud.google.com/go/compute/metadata v0.2.3 // indirect
- cloud.google.com/go/iam v0.10.0 // indirect
+ cloud.google.com/go v0.110.0 // indirect
+ cloud.google.com/go/compute/metadata v0.3.0 // indirect
+ cloud.google.com/go/iam v0.13.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/dnaeon/go-vcr v1.2.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/felixge/fgprof v0.9.3 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
- github.com/golang/protobuf v1.5.2 // indirect
+ github.com/golang/protobuf v1.5.3 // indirect
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b // indirect
github.com/google/uuid v1.3.0 // indirect
- github.com/googleapis/enterprise-certificate-proxy v0.2.1 // indirect
- github.com/googleapis/gax-go/v2 v2.7.0 // indirect
+ github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect
+ github.com/googleapis/gax-go/v2 v2.7.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.3 // indirect
@@ -63,11 +62,13 @@ require (
go.opencensus.io v0.24.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/appengine v1.6.7 // indirect
- google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect
- google.golang.org/grpc v1.52.0 // indirect
- google.golang.org/protobuf v1.28.1 // indirect
+ google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
+ google.golang.org/grpc v1.56.3 // indirect
+ google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
-go 1.18
+go 1.24.0
+
+toolchain go1.24.11
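For context on the final go.mod hunk: since Go 1.21, the `go` directive states the minimum language version required to build the module, while the separate `toolchain` directive (new in 1.21) pins the exact toolchain the `go` command should use when the locally installed one is older. After this change the tail of go.mod would read:

```
go 1.24.0

toolchain go1.24.11
```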
diff --git a/go.sum b/go.sum
index 026e1d2fa..4a37e7ac7 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,24 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.108.0 h1:xntQwnfn8oHGX0crLVinvHM+AhXvi3QHQIEcX/2hiWk=
-cloud.google.com/go v0.108.0/go.mod h1:lNUfQqusBJp0bgAg6qrHgYFYbTB+dOiob1itwnlD33Q=
-cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE=
-cloud.google.com/go/compute v1.15.1/go.mod h1:bjjoF/NtFUrkD/urWfdHaKuOPDR5nWIs63rR+SXhcpA=
-cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
-cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
-cloud.google.com/go/iam v0.10.0 h1:fpP/gByFs6US1ma53v7VxhvbJpO2Aapng6wabJ99MuI=
-cloud.google.com/go/iam v0.10.0/go.mod h1:nXAECrMt2qHpF6RZUZseteD6QyanL68reN4OXPw0UWM=
-cloud.google.com/go/longrunning v0.3.0 h1:NjljC+FYPV3uh5/OwWT6pVU+doBqMg2x/rZlE+CamDs=
+cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys=
+cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY=
+cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
+cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
+cloud.google.com/go/iam v0.13.0 h1:+CmB+K0J/33d0zSQ9SlFWUeCCEn5XJA0ZMZ3pHE9u8k=
+cloud.google.com/go/iam v0.13.0/go.mod h1:ljOg+rcNfzZ5d6f1nAUJ8ZIxOaZUVoS14bKCtaLZ/D0=
+cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM=
+cloud.google.com/go/longrunning v0.4.1/go.mod h1:4iWDqhBZ70CvZ6BfETbvam3T8FMvLK+eFj0E6AaRQTo=
cloud.google.com/go/storage v1.28.1 h1:F5QDG5ChchaAVQhINh24U99OWHURqrW8OmQcGKXcbgI=
cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.0 h1:VuHAcMq8pU1IWNT/m5yRaGqbK0BiQKHT8X4DTp9CHdI=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.0/go.mod h1:tZoQYdDZNOiIjdSn0dVWVfl0NEPGOJqVLzSrcFk4Is0=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0/go.mod h1:bhXu1AjYL+wutSL/kpSq6s7733q2Rb0yuot9Zgfqa/0=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 h1:+5VZ72z0Qan5Bog5C+ZkgSqUbeVUd9wgtHOrIKuc5b8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1 h1:BMTdr+ib5ljLa9MxTJK8x/Ds0MbBb4MfuW5BL0zMJnI=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1 h1:BWe8a+f/t+7KY7zH2mqygeUD0t8hNFXe08p1Pb3/jKE=
+github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1/go.mod h1:Vt9sXTKwMyGcOxSmLDMnGPgqsUg7m8pe215qMLrDXw4=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Julusian/godocdown v0.0.0-20170816220326-6d19f8ff2df8/go.mod h1:INZr5t32rG59/5xeltqoCJoNY7e5x/3xoY9WSWVWg74=
github.com/anacrolix/fuse v0.2.0 h1:pc+To78kI2d/WUjIyrsdqeJQAesuwpGxlI3h1nAv3Do=
@@ -54,6 +55,7 @@ github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNu
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
+github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -70,8 +72,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
+github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -82,17 +84,18 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
+github.com/google/martian/v3 v3.3.2 h1:IqNFLAmvJOgVlpdEBiQbDc2EwKW77amAycfTuWKdfvw=
+github.com/google/martian/v3 v3.3.2/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b h1:8htHrh2bw9c7Idkb7YNac+ZpTqLMjRpI+FWu51ltaQc=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/enterprise-certificate-proxy v0.2.1 h1:RY7tHKZcRlk788d5WSo/e83gOyyy742E8GSs771ySpg=
-github.com/googleapis/enterprise-certificate-proxy v0.2.1/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
-github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1YupQSSQ=
-github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8=
+github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k=
+github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
+github.com/googleapis/gax-go/v2 v2.7.1 h1:gF4c0zjUP2H/s/hEGyLA3I0fA2ZWjzYiONAD6cvPr8A=
+github.com/googleapis/gax-go/v2 v2.7.1/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38/qKbhSAKP6QI=
github.com/hashicorp/golang-lru/v2 v2.0.1 h1:5pv5N1lT1fjLg2VQ5KWc7kmucp2x/kvFOnxuVTqZ6x4=
github.com/hashicorp/golang-lru/v2 v2.0.1/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
@@ -114,6 +117,7 @@ github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kurin/blazer v0.5.4-0.20211030221322-ba894c124ac6 h1:nz7i1au+nDzgExfqW5Zl6q85XNTvYoGnM5DHiQC0yYs=
github.com/kurin/blazer v0.5.4-0.20211030221322-ba894c124ac6/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.46 h1:Vo3tNmNXuj7ME5qrvN4iadO7b4mzu/RSFdUkUhaPldk=
@@ -129,6 +133,7 @@ github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3P
github.com/ncw/swift/v2 v2.0.1 h1:q1IN8hNViXEv8Zvg3Xdis4a3c4IlIGezkYz09zQL5J0=
github.com/ncw/swift/v2 v2.0.1/go.mod h1:z0A9RVdYPjNjXVo2pDOPxZ4eu3oarO1P91fTItcb+Kg=
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 h1:Qj1ukM4GlMWXNdMBuXcXfz/Kw9s1qm0CLY32QxuSImI=
+github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
@@ -172,8 +177,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -189,17 +194,17 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
+golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -214,17 +219,17 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -237,8 +242,8 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
-google.golang.org/api v0.106.0 h1:ffmW0faWCwKkpbbtvlY/K/8fUl+JKvNS5CVzRoyfCv8=
-google.golang.org/api v0.106.0/go.mod h1:2Ts0XTHNVWxypznxWOYUeI4g3WdP9Pk2Qk58+a/O9MY=
+google.golang.org/api v0.114.0 h1:1xQPji6cO2E2vLiI+C/XiFAnsn1WV3mjaEwGLhi3grE=
+google.golang.org/api v0.114.0/go.mod h1:ifYI2ZsFK6/uGddGfAD5BMxlnkBqCmqHSDUVi45N5Yg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
@@ -246,15 +251,15 @@ google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
-google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f h1:BWUVssLB0HVOSY78gIdvk1dTVYtT1y8SBWtPYuTJ/6w=
-google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.52.0 h1:kd48UiU7EHsV4rnLyOJRuP/Il/UHE7gdDAQ+SZI7nZk=
-google.golang.org/grpc v1.52.0/go.mod h1:pu6fVzoFb+NBYNAvQL08ic+lvB2IojljRYuun5vorUY=
+google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc=
+google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -266,14 +271,15 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
-google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
+google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -32,6 +32,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/client"
plugincommon "github.com/vmware-tanzu/velero/pkg/plugin/framework/common"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
)
@@ -72,7 +73,7 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
// So skip deleting VolumeSnapshotContent that does not have the backup name
// in its labels.
if !kubeutil.HasBackupLabel(&snapCont.ObjectMeta, input.Backup.Name) {
p.log.Infof(
p.log.Info(
"VolumeSnapshotContent %s was not taken by backup %s, skipping deletion",
snapCont.Name,
input.Backup.Name,
@@ -82,17 +83,6 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
p.log.Infof("Deleting VolumeSnapshotContent %s", snapCont.Name)
// Try to delete the original VSC from the cluster first.
// This handles legacy (pre-1.15) backups where the original VSC
// with DeletionPolicy=Retain still exists in the cluster.
originalVSCName := snapCont.Name
if cleaned := p.tryDeleteOriginalVSC(context.TODO(), originalVSCName); cleaned {
p.log.Infof("Successfully deleted original VolumeSnapshotContent %s from cluster, skipping temp VSC creation", originalVSCName)
return nil
}
// create a temp VSC to trigger cloud snapshot deletion
// (for backups where the original VSC no longer exists in cluster)
uuid, err := uuid.NewRandom()
if err != nil {
p.log.WithError(err).Errorf("Fail to generate the UUID to create VSC %s", snapCont.Name)
@@ -126,7 +116,6 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
if err := p.crClient.Create(context.TODO(), &snapCont); err != nil {
return errors.Wrapf(err, "fail to create VolumeSnapshotContent %s", snapCont.Name)
}
p.log.Infof("Created temp VolumeSnapshotContent %s with DeletionPolicy=Delete to trigger cloud snapshot cleanup", snapCont.Name)
// Add a small delay before delete to avoid create/delete race conditions in CSI controllers.
sleepBetweenTempVSCCreateAndDelete(tempVSCCreateDeleteGap)
@@ -137,54 +126,37 @@ func (p *volumeSnapshotContentDeleteItemAction) Execute(
context.TODO(),
&snapCont,
); err != nil && !apierrors.IsNotFound(err) {
p.log.WithError(err).Errorf("Failed to delete temp VolumeSnapshotContent %s", snapCont.Name)
p.log.Infof("VolumeSnapshotContent %s not found", snapCont.Name)
return err
}
p.log.Infof("Successfully triggered deletion of VolumeSnapshotContent %s and its cloud snapshot", snapCont.Name)
return nil
}
// tryDeleteOriginalVSC attempts to find and delete the original VSC from
// the cluster (legacy pre-1.15 backups). It patches the DeletionPolicy to
// Delete so the CSI driver also removes the cloud snapshot, then deletes
// the VSC object itself.
// Returns true if the original VSC was found and deletion was initiated.
func (p *volumeSnapshotContentDeleteItemAction) tryDeleteOriginalVSC(
var checkVSCReadiness = func(
ctx context.Context,
vscName string,
) bool {
existing := new(snapshotv1api.VolumeSnapshotContent)
if err := p.crClient.Get(ctx, crclient.ObjectKey{Name: vscName}, existing); err != nil {
if apierrors.IsNotFound(err) {
p.log.Debugf("Original VolumeSnapshotContent %s not found in cluster, will use temp VSC flow", vscName)
} else {
p.log.WithError(err).Warnf("Error looking up original VolumeSnapshotContent %s, will use temp VSC flow", vscName)
}
return false
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
tmpVSC := new(snapshotv1api.VolumeSnapshotContent)
if err := client.Get(ctx, crclient.ObjectKeyFromObject(vsc), tmpVSC); err != nil {
return false, errors.Wrapf(
err, "failed to get VolumeSnapshotContent %s", vsc.Name,
)
}
p.log.Debugf("Found original VolumeSnapshotContent %s in cluster (legacy backup), cleaning up directly", vscName)
// Patch DeletionPolicy to Delete so the CSI driver removes the cloud snapshot
if existing.Spec.DeletionPolicy != snapshotv1api.VolumeSnapshotContentDelete {
original := existing.DeepCopy()
existing.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentDelete
if err := p.crClient.Patch(ctx, existing, crclient.MergeFrom(original)); err != nil {
p.log.WithError(err).Warnf("Failed to patch DeletionPolicy on original VSC %s, will use temp VSC flow", vscName)
return false
}
p.log.Debugf("Patched DeletionPolicy to Delete on original VolumeSnapshotContent %s", vscName)
if tmpVSC.Status != nil && boolptr.IsSetToTrue(tmpVSC.Status.ReadyToUse) {
return true, nil
}
// Delete the original VSC — the CSI driver will clean up the cloud snapshot
if err := p.crClient.Delete(ctx, existing); err != nil && !apierrors.IsNotFound(err) {
p.log.WithError(err).Warnf("Failed to delete original VolumeSnapshotContent %s, will use temp VSC flow", vscName)
return false
// Fail fast on permanent CSI driver errors (e.g., InvalidSnapshot.NotFound)
if tmpVSC.Status != nil && tmpVSC.Status.Error != nil && tmpVSC.Status.Error.Message != nil {
return false, errors.Errorf(
"VolumeSnapshotContent %s has error: %s", vsc.Name, *tmpVSC.Status.Error.Message,
)
}
p.log.Infof("Deleted original VolumeSnapshotContent %s with DeletionPolicy=Delete, CSI driver will remove cloud snapshot", vscName)
return true
return false, nil
}
func NewVolumeSnapshotContentDeleteItemAction(


@@ -23,10 +23,9 @@ import (
"time"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
corev1api "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -86,12 +85,16 @@ func (c *fakeClientWithErrors) Delete(ctx context.Context, obj crclient.Object,
func TestVSCExecute(t *testing.T) {
snapshotHandleStr := "test"
tests := []struct {
name string
item runtime.Unstructured
vsc *snapshotv1api.VolumeSnapshotContent
backup *velerov1api.Backup
preExistingVSC *snapshotv1api.VolumeSnapshotContent
expectErr bool
name string
item runtime.Unstructured
vsc *snapshotv1api.VolumeSnapshotContent
backup *velerov1api.Backup
function func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error)
expectErr bool
}{
{
name: "VolumeSnapshotContent doesn't have backup label",
@@ -113,22 +116,40 @@ func TestVSCExecute(t *testing.T) {
{
name: "Normal case, VolumeSnapshot should be deleted",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).VolumeSnapshotClassName("volumesnapshotclass").Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: false,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return true, nil
},
},
{
name: "Original VSC exists in cluster, cleaned up directly",
name: "Error case, deletion fails",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").Result(),
expectErr: false,
preExistingVSC: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "bar"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-123")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-1", Namespace: "default"},
},
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: true,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return false, errors.Errorf("test error case")
},
},
{
name: "Error case with CSI error, dangling VSC should be cleaned up",
vsc: builder.ForVolumeSnapshotContent("bar").ObjectMeta(builder.WithLabelsMap(map[string]string{velerov1api.BackupNameLabel: "backup"})).Status(&snapshotv1api.VolumeSnapshotContentStatus{SnapshotHandle: &snapshotHandleStr}).Result(),
backup: builder.ForBackup("velero", "backup").ObjectMeta(builder.WithAnnotationsMap(map[string]string{velerov1api.ResourceTimeoutAnnotation: "5s"})).Result(),
expectErr: true,
function: func(
ctx context.Context,
vsc *snapshotv1api.VolumeSnapshotContent,
client crclient.Client,
) (bool, error) {
return false, errors.Errorf("VolumeSnapshotContent %s has error: InvalidSnapshot.NotFound", vsc.Name)
},
},
}
@@ -137,10 +158,7 @@ func TestVSCExecute(t *testing.T) {
t.Run(test.name, func(t *testing.T) {
crClient := velerotest.NewFakeControllerRuntimeClient(t)
logger := logrus.StandardLogger()
if test.preExistingVSC != nil {
require.NoError(t, crClient.Create(t.Context(), test.preExistingVSC))
}
checkVSCReadiness = test.function
p := volumeSnapshotContentDeleteItemAction{log: logger, crClient: crClient}
@@ -198,147 +216,72 @@ func TestNewVolumeSnapshotContentDeleteItemAction(t *testing.T) {
require.NoError(t, err1)
}
func TestTryDeleteOriginalVSC(t *testing.T) {
func TestCheckVSCReadiness(t *testing.T) {
tests := []struct {
name string
vscName string
existing *snapshotv1api.VolumeSnapshotContent
createIt bool
expectRet bool
vsc *snapshotv1api.VolumeSnapshotContent
createVSC bool
expectErr bool
ready bool
}{
{
name: "VSC not found in cluster, returns false",
vscName: "not-found",
expectRet: false,
name: "VSC not exist",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
},
createVSC: false,
expectErr: true,
ready: false,
},
{
name: "VSC found with Retain policy, patches and deletes",
vscName: "legacy-vsc",
existing: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "legacy-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{
SnapshotHandle: stringPtr("snap-123"),
},
VolumeSnapshotRef: corev1api.ObjectReference{
Name: "vs-1",
Namespace: "default",
name: "VSC not ready",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
},
createVSC: true,
expectErr: false,
ready: false,
},
{
name: "VSC has error from CSI driver",
vsc: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "vsc-1",
Namespace: "velero",
},
Status: &snapshotv1api.VolumeSnapshotContentStatus{
ReadyToUse: boolPtr(false),
Error: &snapshotv1api.VolumeSnapshotError{
Message: stringPtr("InvalidSnapshot.NotFound: The snapshot 'snap-0abc123' does not exist."),
},
},
},
createIt: true,
expectRet: true,
},
{
name: "VSC found with Delete policy already, just deletes",
vscName: "already-delete-vsc",
existing: &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "already-delete-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{
SnapshotHandle: stringPtr("snap-456"),
},
VolumeSnapshotRef: corev1api.ObjectReference{
Name: "vs-2",
Namespace: "default",
},
},
},
createIt: true,
expectRet: true,
createVSC: true,
expectErr: true,
ready: false,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
crClient := velerotest.NewFakeControllerRuntimeClient(t)
logger := logrus.StandardLogger()
p := &volumeSnapshotContentDeleteItemAction{
log: logger,
crClient: crClient,
if test.createVSC {
require.NoError(t, crClient.Create(t.Context(), test.vsc))
}
if test.createIt && test.existing != nil {
require.NoError(t, crClient.Create(t.Context(), test.existing))
}
result := p.tryDeleteOriginalVSC(t.Context(), test.vscName)
require.Equal(t, test.expectRet, result)
// If cleanup succeeded, verify the VSC is gone
if test.expectRet {
err := crClient.Get(t.Context(), crclient.ObjectKey{Name: test.vscName},
&snapshotv1api.VolumeSnapshotContent{})
require.True(t, apierrors.IsNotFound(err),
"VSC should have been deleted from cluster")
ready, err := checkVSCReadiness(t.Context(), test.vsc, crClient)
require.Equal(t, test.ready, ready)
if test.expectErr {
require.Error(t, err)
}
})
}
// Error injection tests for tryDeleteOriginalVSC
t.Run("Get returns non-NotFound error, returns false", func(t *testing.T) {
errClient := &fakeClientWithErrors{
Client: velerotest.NewFakeControllerRuntimeClient(t),
getError: fmt.Errorf("connection refused"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "some-vsc"))
})
t.Run("Patch fails, returns false", func(t *testing.T) {
realClient := velerotest.NewFakeControllerRuntimeClient(t)
vsc := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "patch-fail-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-789")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-3", Namespace: "default"},
},
}
require.NoError(t, realClient.Create(t.Context(), vsc))
errClient := &fakeClientWithErrors{
Client: realClient,
patchError: fmt.Errorf("patch forbidden"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "patch-fail-vsc"))
})
t.Run("Delete fails, returns false", func(t *testing.T) {
realClient := velerotest.NewFakeControllerRuntimeClient(t)
vsc := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{Name: "delete-fail-vsc"},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: "disk.csi.azure.com",
Source: snapshotv1api.VolumeSnapshotContentSource{SnapshotHandle: stringPtr("snap-999")},
VolumeSnapshotRef: corev1api.ObjectReference{Name: "vs-4", Namespace: "default"},
},
}
require.NoError(t, realClient.Create(t.Context(), vsc))
errClient := &fakeClientWithErrors{
Client: realClient,
deleteError: fmt.Errorf("delete forbidden"),
}
p := &volumeSnapshotContentDeleteItemAction{
log: logrus.StandardLogger(),
crClient: errClient,
}
require.False(t, p.tryDeleteOriginalVSC(t.Context(), "delete-fail-vsc"))
})
}
func TestVSCExecute_CreateSleepDeleteOrder(t *testing.T) {

View File

@@ -35,7 +35,8 @@ type BackupRepositorySpec struct {
// +optional
RepositoryType string `json:"repositoryType"`
// Deprecated
// ResticIdentifier is the full restic-compatible string for identifying
// this repository. This field is only used when RepositoryType is "restic".
// +optional
ResticIdentifier string `json:"resticIdentifier,omitempty"`
@@ -57,7 +58,8 @@ const (
BackupRepositoryPhaseReady BackupRepositoryPhase = "Ready"
BackupRepositoryPhaseNotReady BackupRepositoryPhase = "NotReady"
BackupRepositoryTypeKopia string = "kopia"
BackupRepositoryTypeRestic string = "restic"
BackupRepositoryTypeKopia string = "kopia"
)
// BackupRepositoryStatus is the current status of a BackupRepository.

View File

@@ -187,7 +187,7 @@ func getNamespaceIncludesExcludesAndArgoCDNamespaces(backup *velerov1api.Backup,
Excludes(backup.Spec.ExcludedNamespaces...)
// Expand wildcards if needed
if err := includesExcludes.ExpandIncludesExcludes(); err != nil {
if err := includesExcludes.ExpandIncludesExcludes(true); err != nil {
return nil, []string{}, err
}
@@ -286,7 +286,7 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
expandedExcludes := backupRequest.NamespaceIncludesExcludes.GetExcludes()
// Get the final namespace list after wildcard expansion
wildcardResult, err := backupRequest.NamespaceIncludesExcludes.ResolveNamespaceList()
wildcardResult, err := backupRequest.NamespaceIncludesExcludes.ResolveNamespaceList(true)
if err != nil {
log.WithError(err).Errorf("error resolving namespace list")
return err
@@ -410,7 +410,7 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
// Resolve namespaces for PVC-to-Pod cache building in volumehelper.
// See issue #9179 for details.
namespaces, err := backupRequest.NamespaceIncludesExcludes.ResolveNamespaceList()
namespaces, err := backupRequest.NamespaceIncludesExcludes.ResolveNamespaceList(true)
if err != nil {
log.WithError(err).Error("Failed to resolve namespace list for PVC-to-Pod cache")
return err

View File

@@ -36,6 +36,7 @@ import (
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
corev1api "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -281,8 +282,8 @@ func TestBackupOldResourceFiltering(t *testing.T) {
Result(),
apiResources: []*test.APIResource{
test.Pods(
builder.ForPod("foo", "bar").Result(),
builder.ForPod("zoo", "raz").Result(),
builder.ForPod("foo", "bar").Phase(corev1api.PodRunning).Result(),
builder.ForPod("zoo", "raz").Phase(corev1api.PodRunning).Result(),
),
test.Deployments(
builder.ForDeployment("foo", "bar").Result(),
@@ -980,28 +981,6 @@ func TestCRDInclusion(t *testing.T) {
"resources/volumesnapshotlocations.velero.io/v1-preferredversion/namespaces/foo/vsl-1.json",
},
},
{
name: "include cluster resources=auto includes CRDs with CRs when backing up selected namespaces",
backup: defaultBackup().
IncludedNamespaces("foo").
Result(),
apiResources: []*test.APIResource{
test.CRDs(
builder.ForCustomResourceDefinitionV1Beta1("backups.velero.io").Result(),
builder.ForCustomResourceDefinitionV1Beta1("volumesnapshotlocations.velero.io").Result(),
builder.ForCustomResourceDefinitionV1Beta1("test.velero.io").Result(),
),
test.VSLs(
builder.ForVolumeSnapshotLocation("foo", "vsl-1").Result(),
),
},
want: []string{
"resources/customresourcedefinitions.apiextensions.k8s.io/cluster/volumesnapshotlocations.velero.io.json",
"resources/volumesnapshotlocations.velero.io/namespaces/foo/vsl-1.json",
"resources/customresourcedefinitions.apiextensions.k8s.io/v1beta1-preferredversion/cluster/volumesnapshotlocations.velero.io.json",
"resources/volumesnapshotlocations.velero.io/v1-preferredversion/namespaces/foo/vsl-1.json",
},
},
{
name: "include-cluster-resources=false excludes all CRDs when backing up selected namespaces",
backup: defaultBackup().
@@ -4296,6 +4275,12 @@ func (h *harness) addItems(t *testing.T, resource *test.APIResource) {
unstructuredObj := &unstructured.Unstructured{Object: obj}
if resource.Namespaced {
namespace := &corev1api.Namespace{ObjectMeta: metav1.ObjectMeta{Name: item.GetNamespace()}}
err = h.backupper.kbClient.Create(t.Context(), namespace)
if err != nil && !apierrors.IsAlreadyExists(err) {
require.NoError(t, err)
}
_, err = h.DynamicClient.Resource(resource.GVR()).Namespace(item.GetNamespace()).Create(t.Context(), unstructuredObj, metav1.CreateOptions{})
} else {
_, err = h.DynamicClient.Resource(resource.GVR()).Create(t.Context(), unstructuredObj, metav1.CreateOptions{})
@@ -4346,7 +4331,7 @@ func newSnapshotLocation(ns, name, provider string) *velerov1.VolumeSnapshotLoca
}
func defaultBackup() *builder.BackupBuilder {
return builder.ForBackup(velerov1.DefaultNamespace, "backup-1").DefaultVolumesToFsBackup(false)
return builder.ForBackup(velerov1.DefaultNamespace, "backup-1").DefaultVolumesToFsBackup(false).IncludedNamespaces("*")
}
func toUnstructuredOrFail(t *testing.T, obj any) map[string]any {
@@ -5422,8 +5407,6 @@ func TestBackupNamespaces(t *testing.T) {
want: []string{
"resources/namespaces/cluster/ns-1.json",
"resources/namespaces/v1-preferredversion/cluster/ns-1.json",
"resources/namespaces/cluster/ns-3.json",
"resources/namespaces/v1-preferredversion/cluster/ns-3.json",
},
},
{
@@ -5457,10 +5440,6 @@ func TestBackupNamespaces(t *testing.T) {
want: []string{
"resources/namespaces/cluster/ns-1.json",
"resources/namespaces/v1-preferredversion/cluster/ns-1.json",
"resources/namespaces/cluster/ns-2.json",
"resources/namespaces/v1-preferredversion/cluster/ns-2.json",
"resources/namespaces/cluster/ns-3.json",
"resources/namespaces/v1-preferredversion/cluster/ns-3.json",
"resources/deployments.apps/namespaces/ns-1/deploy-1.json",
"resources/deployments.apps/v1-preferredversion/namespaces/ns-1/deploy-1.json",
},

View File

@@ -633,22 +633,19 @@ func coreGroupResourcePriority(resource string) int {
}
// getNamespacesToList examines ie and resolves the includes and excludes to a full list of
// namespaces to list. If ie is nil or it includes *, the result is just "" (list across all
// namespaces). Otherwise, the result is a list of every included namespace minus all excluded ones.
// namespaces to list. If ie is nil, the result is just "" (list across all namespaces).
// Otherwise, the result is a list of every included namespace minus all excluded ones.
// Because the namespace IE filter has been expanded upstream since 1.18,
// wildcard characters no longer need to be handled here.
func getNamespacesToList(ie *collections.NamespaceIncludesExcludes) []string {
if ie == nil {
return []string{""}
}
if ie.ShouldInclude("*") {
// "" means all namespaces
return []string{""}
}
var list []string
for _, i := range ie.GetIncludes() {
if ie.ShouldInclude(i) {
list = append(list, i)
for _, n := range ie.GetIncludes() {
if ie.ShouldInclude(n) {
list = append(list, n)
}
}
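With wildcard expansion now done upstream, the helper reduces to plain set subtraction over already-resolved names. A standalone sketch of the same shape — a hypothetical `namespacesToList` over string slices, not Velero's `NamespaceIncludesExcludes` type:

```go
package main

import "fmt"

// namespacesToList resolves an include list against an exclude set.
// An empty include list means "list across all namespaces", which the
// caller signals with a single empty string.
func namespacesToList(includes, excludes []string) []string {
	if len(includes) == 0 {
		return []string{""}
	}
	excluded := make(map[string]bool, len(excludes))
	for _, e := range excludes {
		excluded[e] = true
	}
	var list []string
	for _, n := range includes {
		if !excluded[n] {
			list = append(list, n)
		}
	}
	return list
}

func main() {
	fmt.Println(namespacesToList([]string{"app", "db", "tmp"}, []string{"tmp"}))
}
```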

View File

@@ -81,6 +81,7 @@ type Options struct {
DefaultVolumesToFsBackup bool
UploaderType string
DefaultSnapshotMoveData bool
CSISnapshotEarlyFrequentPolling bool
DisableInformerCache bool
ScheduleSkipImmediately bool
PodResources kubeutil.PodResources
@@ -141,6 +142,7 @@ func (o *Options) BindFlags(flags *pflag.FlagSet) {
flags.BoolVar(&o.DefaultVolumesToFsBackup, "default-volumes-to-fs-backup", o.DefaultVolumesToFsBackup, "Bool flag to configure Velero server to use pod volume file system backup by default for all volumes on all backups. Optional.")
flags.StringVar(&o.UploaderType, "uploader-type", o.UploaderType, fmt.Sprintf("The type of uploader to transfer the data of pod volumes, supported value: '%s'", uploader.KopiaType))
flags.BoolVar(&o.DefaultSnapshotMoveData, "default-snapshot-move-data", o.DefaultSnapshotMoveData, "Bool flag to configure Velero server to move data by default for all snapshots supporting data movement. Optional.")
flags.BoolVar(&o.CSISnapshotEarlyFrequentPolling, "csi-snapshot-early-frequent-polling", o.CSISnapshotEarlyFrequentPolling, "Bool flag to configure Velero server to use early frequent polling by default for all CSI snapshots. Optional.")
flags.BoolVar(&o.DisableInformerCache, "disable-informer-cache", o.DisableInformerCache, "Disable informer cache for Get calls on restore. With this enabled, it will speed up restore in cases where there are backup resources which already exist in the cluster, but for very large clusters this will increase velero memory usage. Default is false (don't disable). Optional.")
flags.BoolVar(&o.ScheduleSkipImmediately, "schedule-skip-immediately", o.ScheduleSkipImmediately, "Skip the first scheduled backup immediately after creating a schedule. Default is false (don't skip).")
flags.BoolVar(&o.NodeAgentDisableHostPath, "node-agent-disable-host-path", o.NodeAgentDisableHostPath, "Don't mount the pod volume host path to node-agent. Optional. Pod volume host path mount is required by fs-backup but could be disabled for other backup methods.")
@@ -238,16 +240,17 @@ func NewInstallOptions() *Options {
NodeAgentPodCPULimit: install.DefaultNodeAgentPodCPULimit,
NodeAgentPodMemLimit: install.DefaultNodeAgentPodMemLimit,
// Default to creating a VSL unless we're told otherwise
UseVolumeSnapshots: true,
NoDefaultBackupLocation: false,
CRDsOnly: false,
DefaultVolumesToFsBackup: false,
UploaderType: uploader.KopiaType,
DefaultSnapshotMoveData: false,
DisableInformerCache: false,
ScheduleSkipImmediately: false,
kubeletRootDir: install.DefaultKubeletRootDir,
NodeAgentDisableHostPath: false,
UseVolumeSnapshots: true,
NoDefaultBackupLocation: false,
CRDsOnly: false,
DefaultVolumesToFsBackup: false,
UploaderType: uploader.KopiaType,
DefaultSnapshotMoveData: false,
CSISnapshotEarlyFrequentPolling: false,
DisableInformerCache: false,
ScheduleSkipImmediately: false,
kubeletRootDir: install.DefaultKubeletRootDir,
NodeAgentDisableHostPath: false,
}
}
@@ -324,6 +327,7 @@ func (o *Options) AsVeleroOptions() (*install.VeleroOptions, error) {
DefaultVolumesToFsBackup: o.DefaultVolumesToFsBackup,
UploaderType: o.UploaderType,
DefaultSnapshotMoveData: o.DefaultSnapshotMoveData,
CSISnapshotEarlyFrequentPolling: o.CSISnapshotEarlyFrequentPolling,
DisableInformerCache: o.DisableInformerCache,
ScheduleSkipImmediately: o.ScheduleSkipImmediately,
PodResources: o.PodResources,

View File

@@ -204,9 +204,9 @@ func Test_newServer(t *testing.T) {
}, logger)
require.Error(t, err)
// invalid clientQPS Kopia uploader
// invalid clientQPS Restic uploader
_, err = newServer(factory, &config.Config{
UploaderType: uploader.KopiaType,
UploaderType: uploader.ResticType,
ClientQPS: -1,
}, logger)
require.Error(t, err)

View File

@@ -575,6 +575,14 @@ func (b *backupReconciler) prepareBackupRequest(ctx context.Context, backup *vel
request.Status.ValidationErrors = append(request.Status.ValidationErrors, fmt.Sprintf("Invalid included/excluded namespace lists: %v", err))
}
// If the included namespaces list is empty, default to the wildcard to include all namespaces.
// This is useful for the later wildcard expansion logic.
// It also aligns the behavior between backup creation from the CLI and from the API,
// as the CLI defaults to the wildcard when included namespaces are not specified.
if request.Spec.IncludedNamespaces == nil {
request.Spec.IncludedNamespaces = []string{"*"}
}
// validate that only one exists orLabelSelector or just labelSelector (singular)
if request.Spec.OrLabelSelectors != nil && request.Spec.LabelSelector != nil {
request.Status.ValidationErrors = append(request.Status.ValidationErrors, "encountered labelSelector as well as orLabelSelectors in backup spec, only one can be specified")
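The defaulting added in `prepareBackupRequest` is a one-line normalization. As a standalone sketch — a hypothetical `defaultIncludedNamespaces` free function, not the reconciler method itself:

```go
package main

import "fmt"

// defaultIncludedNamespaces mirrors the server-side defaulting: a nil
// include list becomes the wildcard, matching what the CLI sends when no
// namespaces are specified. Note the check is on nil specifically; an
// explicitly empty (non-nil) slice is returned unchanged.
func defaultIncludedNamespaces(included []string) []string {
	if included == nil {
		return []string{"*"}
	}
	return included
}

func main() {
	fmt.Println(defaultIncludedNamespaces(nil))
	fmt.Println(defaultIncludedNamespaces([]string{"prod"}))
}
```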

View File

@@ -713,6 +713,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -752,6 +753,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -795,6 +797,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -835,6 +838,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -875,6 +879,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -916,6 +921,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -957,6 +963,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -998,6 +1005,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1039,6 +1047,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1081,6 +1090,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFailed,
@@ -1123,6 +1133,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFailed,
@@ -1165,6 +1176,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.True(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1208,6 +1220,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1251,6 +1264,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1294,6 +1308,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.True(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1338,6 +1353,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.False(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1381,6 +1397,7 @@ func TestProcessBackupCompletions(t *testing.T) {
SnapshotMoveData: boolptr.True(),
ExcludedClusterScopedResources: autoExcludeClusterScopedResources,
ExcludedNamespaceScopedResources: autoExcludeNamespaceScopedResources,
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1430,6 +1447,7 @@ func TestProcessBackupCompletions(t *testing.T) {
ExcludedClusterScopedResources: append([]string{"clusterroles"}, autoExcludeClusterScopedResources...),
IncludedNamespaceScopedResources: []string{"pods"},
ExcludedNamespaceScopedResources: append([]string{"secrets"}, autoExcludeNamespaceScopedResources...),
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1479,6 +1497,7 @@ func TestProcessBackupCompletions(t *testing.T) {
ExcludedClusterScopedResources: append([]string{"clusterroles"}, autoExcludeClusterScopedResources...),
IncludedNamespaceScopedResources: []string{"pods"},
ExcludedNamespaceScopedResources: append([]string{"secrets"}, autoExcludeNamespaceScopedResources...),
IncludedNamespaces: []string{"*"},
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,

View File

@@ -889,12 +889,12 @@ func TestGetSnapshotsInBackup(t *testing.T) {
{
VolumeNamespace: "ns-1",
SnapshotID: "snap-3",
RepositoryType: "kopia",
RepositoryType: "restic",
},
{
VolumeNamespace: "ns-1",
SnapshotID: "snap-4",
RepositoryType: "kopia",
RepositoryType: "restic",
},
},
},
@@ -944,7 +944,7 @@ func TestGetSnapshotsInBackup(t *testing.T) {
{
VolumeNamespace: "ns-1",
SnapshotID: "snap-3",
RepositoryType: "kopia",
RepositoryType: "restic",
},
},
},

View File

@@ -200,7 +200,7 @@ func (r *backupQueueReconciler) checkForEarlierRunnableBackups(backup *velerov1a
func namespacesForBackup(backup *velerov1api.Backup, clusterNamespaces []string) []string {
// Ignore error here. If a backup has invalid namespace wildcards, the backup controller
// will validate and fail it. Consider the ns list empty for conflict detection purposes.
nsList, err := collections.NewNamespaceIncludesExcludes().Includes(backup.Spec.IncludedNamespaces...).Excludes(backup.Spec.ExcludedNamespaces...).ActiveNamespaces(clusterNamespaces).ResolveNamespaceList()
nsList, err := collections.NewNamespaceIncludesExcludes().Includes(backup.Spec.IncludedNamespaces...).Excludes(backup.Spec.ExcludedNamespaces...).ActiveNamespaces(clusterNamespaces).ResolveNamespaceList(true)
if err != nil {
return []string{}
}

View File

@@ -43,6 +43,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/constant"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/metrics"
repoconfig "github.com/vmware-tanzu/velero/pkg/repository/config"
"github.com/vmware-tanzu/velero/pkg/repository/maintenance"
repomanager "github.com/vmware-tanzu/velero/pkg/repository/manager"
"github.com/vmware-tanzu/velero/pkg/util/kube"
@@ -52,11 +53,9 @@ import (
const (
repoSyncPeriod = 5 * time.Minute
defaultMaintainFrequency = 7 * 24 * time.Hour
defaultMaintenanceStatusQueueLength = 25
defaultMaintenanceStatusQueueLength = 3
)
var maintenanceStatusQueueLength = defaultMaintenanceStatusQueueLength
type BackupRepoReconciler struct {
client.Client
namespace string
@@ -239,10 +238,6 @@ func (r *BackupRepoReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return ctrl.Result{}, err
}
if backupRepo.Spec.RepositoryType != velerov1api.BackupRepositoryTypeKopia {
return ctrl.Result{}, nil
}
bsl, bslErr := r.getBSL(ctx, backupRepo)
if bslErr != nil {
log.WithError(bslErr).Error("Fail to get BSL for BackupRepository. Skip reconciling.")
@@ -250,7 +245,7 @@ func (r *BackupRepoReconciler) Reconcile(ctx context.Context, req ctrl.Request)
}
if backupRepo.Status.Phase == "" || backupRepo.Status.Phase == velerov1api.BackupRepositoryPhaseNew {
if err := r.initializeRepo(ctx, backupRepo, log); err != nil {
if err := r.initializeRepo(ctx, backupRepo, bsl, log); err != nil {
log.WithError(err).Error("error initialize repository")
return ctrl.Result{}, errors.WithStack(err)
}
@@ -268,7 +263,7 @@ func (r *BackupRepoReconciler) Reconcile(ctx context.Context, req ctrl.Request)
switch backupRepo.Status.Phase {
case velerov1api.BackupRepositoryPhaseNotReady:
ready, err := r.checkNotReadyRepo(ctx, backupRepo, log)
ready, err := r.checkNotReadyRepo(ctx, backupRepo, bsl, log)
if err != nil {
return ctrl.Result{}, err
} else if !ready {
@@ -316,9 +311,35 @@ func (r *BackupRepoReconciler) getBSL(ctx context.Context, req *velerov1api.Back
return loc, nil
}
func (r *BackupRepoReconciler) initializeRepo(ctx context.Context, req *velerov1api.BackupRepository, log logrus.FieldLogger) error {
func (r *BackupRepoReconciler) getIdentifierByBSL(bsl *velerov1api.BackupStorageLocation, req *velerov1api.BackupRepository) (string, error) {
repoIdentifier, err := repoconfig.GetRepoIdentifier(bsl, req.Spec.VolumeNamespace)
if err != nil {
return "", errors.Wrapf(err, "error to get identifier for repo %s", req.Name)
}
return repoIdentifier, nil
}
func (r *BackupRepoReconciler) initializeRepo(ctx context.Context, req *velerov1api.BackupRepository, bsl *velerov1api.BackupStorageLocation, log logrus.FieldLogger) error {
log.WithField("repoConfig", r.backupRepoConfig).Info("Initializing backup repository")
var repoIdentifier string
// Only get restic identifier for restic repositories
if req.Spec.RepositoryType == "" || req.Spec.RepositoryType == velerov1api.BackupRepositoryTypeRestic {
var err error
repoIdentifier, err = r.getIdentifierByBSL(bsl, req)
if err != nil {
return r.patchBackupRepository(ctx, req, func(rr *velerov1api.BackupRepository) {
rr.Status.Message = err.Error()
rr.Status.Phase = velerov1api.BackupRepositoryPhaseNotReady
if rr.Spec.MaintenanceFrequency.Duration <= 0 {
rr.Spec.MaintenanceFrequency = metav1.Duration{Duration: r.getRepositoryMaintenanceFrequency(req)}
}
})
})
}
}
config, err := getBackupRepositoryConfig(ctx, r, r.backupRepoConfig, r.namespace, req.Name, req.Spec.RepositoryType, log)
if err != nil {
log.WithError(err).Warn("Failed to get repo config, repo config is ignored")
@@ -328,6 +349,11 @@ func (r *BackupRepoReconciler) initializeRepo(ctx context.Context, req *velerov1
// defaulting - if the patch fails, return an error so the item is returned to the queue
if err := r.patchBackupRepository(ctx, req, func(rr *velerov1api.BackupRepository) {
// Only set ResticIdentifier for restic repositories
if rr.Spec.RepositoryType == "" || rr.Spec.RepositoryType == velerov1api.BackupRepositoryTypeRestic {
rr.Spec.ResticIdentifier = repoIdentifier
}
if rr.Spec.MaintenanceFrequency.Duration <= 0 {
rr.Spec.MaintenanceFrequency = metav1.Duration{Duration: r.getRepositoryMaintenanceFrequency(req)}
}
@@ -371,7 +397,7 @@ func ensureRepo(repo *velerov1api.BackupRepository, repoManager repomanager.Mana
}
func (r *BackupRepoReconciler) recallMaintenance(ctx context.Context, req *velerov1api.BackupRepository, log logrus.FieldLogger) error {
history, err := maintenance.WaitAllJobsComplete(ctx, r.Client, req, maintenanceStatusQueueLength, log)
history, err := maintenance.WaitAllJobsComplete(ctx, r.Client, req, defaultMaintenanceStatusQueueLength, log)
if err != nil {
return errors.Wrapf(err, "error waiting incomplete repo maintenance job for repo %s", req.Name)
}
@@ -429,7 +455,7 @@ func consolidateHistory(coming, cur []velerov1api.BackupRepositoryMaintenanceSta
truncated := []velerov1api.BackupRepositoryMaintenanceStatus{}
for consolidator.Len() > 0 {
if len(truncated) == maintenanceStatusQueueLength {
if len(truncated) == defaultMaintenanceStatusQueueLength {
break
}
@@ -539,8 +565,8 @@ func updateRepoMaintenanceHistory(repo *velerov1api.BackupRepository, result vel
}
startingPos := 0
if len(repo.Status.RecentMaintenance) >= maintenanceStatusQueueLength {
startingPos = len(repo.Status.RecentMaintenance) - maintenanceStatusQueueLength + 1
if len(repo.Status.RecentMaintenance) >= defaultMaintenanceStatusQueueLength {
startingPos = len(repo.Status.RecentMaintenance) - defaultMaintenanceStatusQueueLength + 1
}
repo.Status.RecentMaintenance = append(repo.Status.RecentMaintenance[startingPos:], latest)
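The `startingPos` arithmetic keeps the maintenance history bounded: when the list is already at or over the queue length, drop the oldest entries so that appending the latest yields at most the queue length. A sketch of the same windowing, using a hypothetical `truncateHistory` helper over strings:

```go
package main

import "fmt"

// truncateHistory keeps room for one incoming entry in a bounded history:
// if the list already holds n or more items, drop the oldest so that
// appending one more yields at most n. Mirrors the reconciler's
// startingPos arithmetic.
func truncateHistory(history []string, n int) []string {
	start := 0
	if len(history) >= n {
		start = len(history) - n + 1
	}
	return history[start:]
}

func main() {
	h := []string{"m1", "m2", "m3"}
	h = append(truncateHistory(h, 3), "m4")
	fmt.Println(h)
}
```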
@@ -550,9 +576,25 @@ func dueForMaintenance(req *velerov1api.BackupRepository, now time.Time) bool {
return req.Status.LastMaintenanceTime == nil || req.Status.LastMaintenanceTime.Add(req.Spec.MaintenanceFrequency.Duration).Before(now)
}
func (r *BackupRepoReconciler) checkNotReadyRepo(ctx context.Context, req *velerov1api.BackupRepository, log logrus.FieldLogger) (bool, error) {
func (r *BackupRepoReconciler) checkNotReadyRepo(ctx context.Context, req *velerov1api.BackupRepository, bsl *velerov1api.BackupStorageLocation, log logrus.FieldLogger) (bool, error) {
log.Info("Checking backup repository for readiness")
// Only check and update restic identifier for restic repositories
if req.Spec.RepositoryType == "" || req.Spec.RepositoryType == velerov1api.BackupRepositoryTypeRestic {
repoIdentifier, err := r.getIdentifierByBSL(bsl, req)
if err != nil {
return false, r.patchBackupRepository(ctx, req, repoNotReady(err.Error()))
}
if repoIdentifier != req.Spec.ResticIdentifier {
if err := r.patchBackupRepository(ctx, req, func(rr *velerov1api.BackupRepository) {
rr.Spec.ResticIdentifier = repoIdentifier
}); err != nil {
return false, err
}
}
}
// we need to ensure it (first check, if check fails, attempt to init)
// because we don't know if it's been successfully initialized yet.
if err := ensureRepo(req, r.repositoryManager); err != nil {

View File

@@ -98,6 +98,32 @@ func TestPatchBackupRepository(t *testing.T) {
}
func TestCheckNotReadyRepo(t *testing.T) {
// Test for restic repository
t.Run("restic repository", func(t *testing.T) {
rr := mockBackupRepositoryCR()
rr.Spec.BackupStorageLocation = "default"
rr.Spec.ResticIdentifier = "fake-identifier"
rr.Spec.VolumeNamespace = "volume-ns-1"
rr.Spec.RepositoryType = velerov1api.BackupRepositoryTypeRestic
reconciler := mockBackupRepoReconciler(t, "PrepareRepo", rr, nil)
err := reconciler.Client.Create(t.Context(), rr)
require.NoError(t, err)
location := velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Config: map[string]string{"resticRepoPrefix": "s3:test.amazonaws.com/bucket/restic"},
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: rr.Spec.BackupStorageLocation,
},
}
_, err = reconciler.checkNotReadyRepo(t.Context(), rr, &location, reconciler.logger)
require.NoError(t, err)
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
assert.Equal(t, "s3:test.amazonaws.com/bucket/restic/volume-ns-1", rr.Spec.ResticIdentifier)
})
// Test for kopia repository
t.Run("kopia repository", func(t *testing.T) {
rr := mockBackupRepositoryCR()
@@ -107,13 +133,48 @@ func TestCheckNotReadyRepo(t *testing.T) {
reconciler := mockBackupRepoReconciler(t, "PrepareRepo", rr, nil)
err := reconciler.Client.Create(t.Context(), rr)
require.NoError(t, err)
location := velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Config: map[string]string{"resticRepoPrefix": "s3:test.amazonaws.com/bucket/restic"},
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: rr.Spec.BackupStorageLocation,
},
}
_, err = reconciler.checkNotReadyRepo(t.Context(), rr, reconciler.logger)
_, err = reconciler.checkNotReadyRepo(t.Context(), rr, &location, reconciler.logger)
require.NoError(t, err)
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
// ResticIdentifier should remain empty for kopia
assert.Empty(t, rr.Spec.ResticIdentifier)
})
// Test for empty repository type (defaults to restic)
t.Run("empty repository type", func(t *testing.T) {
rr := mockBackupRepositoryCR()
rr.Spec.BackupStorageLocation = "default"
rr.Spec.ResticIdentifier = "fake-identifier"
rr.Spec.VolumeNamespace = "volume-ns-1"
// Deliberately leave RepositoryType empty
reconciler := mockBackupRepoReconciler(t, "PrepareRepo", rr, nil)
err := reconciler.Client.Create(t.Context(), rr)
require.NoError(t, err)
location := velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Config: map[string]string{"resticRepoPrefix": "s3:test.amazonaws.com/bucket/restic"},
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: rr.Spec.BackupStorageLocation,
},
}
_, err = reconciler.checkNotReadyRepo(t.Context(), rr, &location, reconciler.logger)
require.NoError(t, err)
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
assert.Equal(t, "s3:test.amazonaws.com/bucket/restic/volume-ns-1", rr.Spec.ResticIdentifier)
})
}
func startMaintenanceJobFail(client.Client, context.Context, *velerov1api.BackupRepository, string, logrus.Level, *logging.FormatFlag, logrus.FieldLogger) (string, error) {
@@ -402,8 +463,17 @@ func TestInitializeRepo(t *testing.T) {
reconciler := mockBackupRepoReconciler(t, "PrepareRepo", rr, nil)
err := reconciler.Client.Create(t.Context(), rr)
require.NoError(t, err)
location := velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Config: map[string]string{"resticRepoPrefix": "s3:test.amazonaws.com/bucket/restic"},
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: rr.Spec.BackupStorageLocation,
},
}
err = reconciler.initializeRepo(t.Context(), rr, reconciler.logger)
err = reconciler.initializeRepo(t.Context(), rr, &location, reconciler.logger)
require.NoError(t, err)
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
}
@@ -929,8 +999,6 @@ func TestUpdateRepoMaintenanceHistory(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
maintenanceStatusQueueLength = 3
updateRepoMaintenanceHistory(test.backupRepo, test.result, &metav1.Time{Time: standardTime}, &metav1.Time{Time: standardTime.Add(time.Hour)}, "fake-message-0")
for at := range test.backupRepo.Status.RecentMaintenance {
@@ -1426,7 +1494,7 @@ func TestDeleteOldMaintenanceJobWithConfigMap(t *testing.T) {
MaintenanceFrequency: metav1.Duration{Duration: testMaintenanceFrequency},
BackupStorageLocation: "default",
VolumeNamespace: "test-ns",
RepositoryType: "kopia",
RepositoryType: "restic",
},
Status: velerov1api.BackupRepositoryStatus{
Phase: velerov1api.BackupRepositoryPhaseReady,
@@ -1463,7 +1531,7 @@ func TestDeleteOldMaintenanceJobWithConfigMap(t *testing.T) {
MaintenanceFrequency: metav1.Duration{Duration: testMaintenanceFrequency},
BackupStorageLocation: "default",
VolumeNamespace: "test-ns",
RepositoryType: "kopia",
RepositoryType: "restic",
},
Status: velerov1api.BackupRepositoryStatus{
Phase: velerov1api.BackupRepositoryPhaseReady,
@@ -1482,8 +1550,8 @@ func TestDeleteOldMaintenanceJobWithConfigMap(t *testing.T) {
Name: "repo-maintenance-job-config",
},
Data: map[string]string{
"global": `{"keepLatestMaintenanceJobs": 5}`,
"test-ns-default-kopia": `{"keepLatestMaintenanceJobs": 2}`,
"global": `{"keepLatestMaintenanceJobs": 5}`,
"test-ns-default-restic": `{"keepLatestMaintenanceJobs": 2}`,
},
},
},
@@ -1537,6 +1605,58 @@ func TestInitializeRepoWithRepositoryTypes(t *testing.T) {
corev1api.AddToScheme(scheme)
velerov1api.AddToScheme(scheme)
// Test for restic repository
t.Run("restic repository", func(t *testing.T) {
rr := mockBackupRepositoryCR()
rr.Spec.BackupStorageLocation = "default"
rr.Spec.VolumeNamespace = "volume-ns-1"
rr.Spec.RepositoryType = velerov1api.BackupRepositoryTypeRestic
location := &velerov1api.BackupStorageLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: "default",
},
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "test-bucket",
Prefix: "test-prefix",
},
},
Config: map[string]string{
"region": "us-east-1",
},
},
}
fakeClient := clientFake.NewClientBuilder().WithScheme(scheme).WithRuntimeObjects(rr, location).Build()
mgr := &repomokes.Manager{}
mgr.On("PrepareRepo", rr).Return(nil)
reconciler := NewBackupRepoReconciler(
velerov1api.DefaultNamespace,
velerotest.NewLogger(),
fakeClient,
mgr,
testMaintenanceFrequency,
"",
"",
logrus.InfoLevel,
nil,
nil,
)
err := reconciler.initializeRepo(t.Context(), rr, location, reconciler.logger)
require.NoError(t, err)
// Verify ResticIdentifier is set for restic
assert.NotEmpty(t, rr.Spec.ResticIdentifier)
assert.Contains(t, rr.Spec.ResticIdentifier, "volume-ns-1")
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
})
// Test for kopia repository
t.Run("kopia repository", func(t *testing.T) {
rr := mockBackupRepositoryCR()
@@ -1580,13 +1700,65 @@ func TestInitializeRepoWithRepositoryTypes(t *testing.T) {
nil,
)
err := reconciler.initializeRepo(t.Context(), rr, reconciler.logger)
err := reconciler.initializeRepo(t.Context(), rr, location, reconciler.logger)
require.NoError(t, err)
// Verify ResticIdentifier is NOT set for kopia
assert.Empty(t, rr.Spec.ResticIdentifier)
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
})
// Test for empty repository type (defaults to restic)
t.Run("empty repository type", func(t *testing.T) {
rr := mockBackupRepositoryCR()
rr.Spec.BackupStorageLocation = "default"
rr.Spec.VolumeNamespace = "volume-ns-1"
// Leave RepositoryType empty
location := &velerov1api.BackupStorageLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: "default",
},
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "test-bucket",
Prefix: "test-prefix",
},
},
Config: map[string]string{
"region": "us-east-1",
},
},
}
fakeClient := clientFake.NewClientBuilder().WithScheme(scheme).WithRuntimeObjects(rr, location).Build()
mgr := &repomokes.Manager{}
mgr.On("PrepareRepo", rr).Return(nil)
reconciler := NewBackupRepoReconciler(
velerov1api.DefaultNamespace,
velerotest.NewLogger(),
fakeClient,
mgr,
testMaintenanceFrequency,
"",
"",
logrus.InfoLevel,
nil,
nil,
)
err := reconciler.initializeRepo(t.Context(), rr, location, reconciler.logger)
require.NoError(t, err)
// Verify ResticIdentifier is set when type is empty (defaults to restic)
assert.NotEmpty(t, rr.Spec.ResticIdentifier)
assert.Contains(t, rr.Spec.ResticIdentifier, "volume-ns-1")
assert.Equal(t, velerov1api.BackupRepositoryPhaseReady, rr.Status.Phase)
})
}
func TestRepoMaintenanceMetricsRecording(t *testing.T) {

View File

@@ -360,5 +360,5 @@ func (c *PodVolumeRestoreReconcilerLegacy) closeDataPath(ctx context.Context, pv
}
func IsLegacyPVR(pvr *velerov1api.PodVolumeRestore) bool {
return pvr.Spec.UploaderType == "restic"
return pvr.Spec.UploaderType == uploader.ResticType
}

View File

@@ -129,13 +129,6 @@ func (c *scheduleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
} else {
schedule.Status.Phase = velerov1.SchedulePhaseEnabled
schedule.Status.ValidationErrors = nil
// Compute expected interval between consecutive scheduled backup runs.
// Only meaningful when the cron expression is valid.
now := c.clock.Now()
nextRun := cronSchedule.Next(now)
nextNextRun := cronSchedule.Next(nextRun)
c.metrics.SetScheduleExpectedIntervalSeconds(schedule.Name, nextNextRun.Sub(nextRun).Seconds())
}
scheduleNeedsPatch := false

View File

@@ -251,9 +251,11 @@ func (fs *fileSystemBR) boostRepoConnect(ctx context.Context, repositoryType str
if err := repoProvider.NewUnifiedRepoProvider(*credentialGetter, repositoryType, fs.log).BoostRepoConnect(ctx, repoProvider.RepoParam{BackupLocation: fs.backupLocation, BackupRepo: fs.backupRepo, CacheDir: cacheDir}); err != nil {
return err
}
return nil
} else {
if err := repoProvider.NewResticRepositoryProvider(*credentialGetter, filesystem.NewFileSystem(), fs.log).BoostRepoConnect(ctx, repoProvider.RepoParam{BackupLocation: fs.backupLocation, BackupRepo: fs.backupRepo}); err != nil {
return err
}
}
return errors.Errorf("error getting provider for repo %s", repositoryType)
return nil
}

View File

@@ -50,6 +50,7 @@ type podTemplateConfig struct {
serviceAccountName string
uploaderType string
defaultSnapshotMoveData bool
csiSnapshotEarlyFrequentPolling bool
privilegedNodeAgent bool
disableInformerCache bool
scheduleSkipImmediately bool
@@ -166,6 +167,12 @@ func WithDefaultSnapshotMoveData(b bool) podTemplateOption {
}
}
func WithCSISnapshotEarlyFrequentPolling(b bool) podTemplateOption {
return func(c *podTemplateConfig) {
c.csiSnapshotEarlyFrequentPolling = b
}
}
func WithDisableInformerCache(b bool) podTemplateOption {
return func(c *podTemplateConfig) {
c.disableInformerCache = b
@@ -488,6 +495,15 @@ func Deployment(namespace string, opts ...podTemplateOption) *appsv1api.Deployme
}...)
}
if c.csiSnapshotEarlyFrequentPolling {
deployment.Spec.Template.Spec.Containers[0].Env = append(deployment.Spec.Template.Spec.Containers[0].Env, []corev1api.EnvVar{
{
Name: "CSI_SNAPSHOT_EARLY_FREQUENT_POLLING",
Value: "true",
},
}...)
}
deployment.Spec.Template.Spec.Containers[0].Env = append(deployment.Spec.Template.Spec.Containers[0].Env, c.envVars...)
if len(c.plugins) > 0 {

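The `WithCSISnapshotEarlyFrequentPolling` plumbing above follows the functional-options pattern used throughout this builder file. A minimal standalone sketch of the pattern, with simplified stand-in names for `podTemplateConfig` and the option type:

```go
package main

import "fmt"

// config is a simplified stand-in for podTemplateConfig.
type config struct {
	csiSnapshotEarlyFrequentPolling bool
}

// option mirrors podTemplateOption: a function that mutates the config.
type option func(*config)

func withCSISnapshotEarlyFrequentPolling(b bool) option {
	return func(c *config) { c.csiSnapshotEarlyFrequentPolling = b }
}

// build mirrors how Deployment applies its opts before rendering the pod template.
func build(opts ...option) *config {
	c := &config{}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := build(withCSISnapshotEarlyFrequentPolling(true))
	fmt.Println(c.csiSnapshotEarlyFrequentPolling) // prints true
}
```

The pattern lets `AllResources` conditionally append options (as in the `if o.CSISnapshotEarlyFrequentPolling` branch below) without the builder needing to know every flag up front.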
View File

@@ -263,6 +263,7 @@ type VeleroOptions struct {
DefaultVolumesToFsBackup bool
UploaderType string
DefaultSnapshotMoveData bool
CSISnapshotEarlyFrequentPolling bool
DisableInformerCache bool
ScheduleSkipImmediately bool
PodResources kube.PodResources
@@ -390,6 +391,10 @@ func AllResources(o *VeleroOptions) *unstructured.UnstructuredList {
deployOpts = append(deployOpts, WithDefaultSnapshotMoveData(true))
}
if o.CSISnapshotEarlyFrequentPolling {
deployOpts = append(deployOpts, WithCSISnapshotEarlyFrequentPolling(true))
}
if o.DisableInformerCache {
deployOpts = append(deployOpts, WithDisableInformerCache(true))
}

View File

@@ -80,9 +80,6 @@ const (
DataDownloadFailureTotal = "data_download_failure_total"
DataDownloadCancelTotal = "data_download_cancel_total"
// schedule metrics
scheduleExpectedIntervalSeconds = "schedule_expected_interval_seconds"
// repo maintenance metrics
repoMaintenanceSuccessTotal = "repo_maintenance_success_total"
repoMaintenanceFailureTotal = "repo_maintenance_failure_total"
@@ -350,14 +347,6 @@ func NewServerMetrics() *ServerMetrics {
},
[]string{scheduleLabel, backupNameLabel},
),
scheduleExpectedIntervalSeconds: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: metricNamespace,
Name: scheduleExpectedIntervalSeconds,
Help: "Expected interval between consecutive scheduled backups, in seconds",
},
[]string{scheduleLabel},
),
repoMaintenanceSuccessTotal: prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: metricNamespace,
@@ -655,9 +644,6 @@ func (m *ServerMetrics) RemoveSchedule(scheduleName string) {
if c, ok := m.metrics[csiSnapshotFailureTotal].(*prometheus.CounterVec); ok {
c.DeleteLabelValues(scheduleName, "")
}
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.DeleteLabelValues(scheduleName)
}
}
// InitMetricsForNode initializes counter metrics for a node.
@@ -772,14 +758,6 @@ func (m *ServerMetrics) SetBackupLastSuccessfulTimestamp(backupSchedule string,
}
}
// SetScheduleExpectedIntervalSeconds records the expected interval, in seconds,
// between consecutive backups for a schedule.
func (m *ServerMetrics) SetScheduleExpectedIntervalSeconds(scheduleName string, seconds float64) {
if g, ok := m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec); ok {
g.WithLabelValues(scheduleName).Set(seconds)
}
}
// SetBackupTotal records the current number of existent backups.
func (m *ServerMetrics) SetBackupTotal(numberOfBackups int64) {
if g, ok := m.metrics[backupTotal].(prometheus.Gauge); ok {

View File

@@ -259,90 +259,6 @@ func TestMultipleAdhocBackupsShareMetrics(t *testing.T) {
assert.Equal(t, float64(1), validationFailureMetric, "All adhoc validation failures should be counted together")
}
// TestSetScheduleExpectedIntervalSeconds verifies that the expected interval metric
// is properly recorded for schedules.
func TestSetScheduleExpectedIntervalSeconds(t *testing.T) {
tests := []struct {
name string
scheduleName string
intervalSeconds float64
description string
}{
{
name: "every 5 minutes schedule",
scheduleName: "frequent-backup",
intervalSeconds: 300,
description: "Expected interval should be 5m in seconds",
},
{
name: "daily schedule",
scheduleName: "daily-backup",
intervalSeconds: 86400,
description: "Expected interval should be 24h in seconds",
},
{
name: "monthly schedule",
scheduleName: "monthly-backup",
intervalSeconds: 2678400, // 31 days in seconds
description: "Expected interval should be 31 days in seconds",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
m := NewServerMetrics()
m.SetScheduleExpectedIntervalSeconds(tc.scheduleName, tc.intervalSeconds)
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), tc.scheduleName)
assert.Equal(t, tc.intervalSeconds, metric, tc.description)
})
}
}
// TestScheduleExpectedIntervalNotInitializedByDefault verifies that the expected
// interval metric is not initialized by InitSchedule, so it only appears for
// schedules with a valid cron expression.
func TestScheduleExpectedIntervalNotInitializedByDefault(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
// The metric should not have any values after InitSchedule
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should not be initialized by InitSchedule")
}
// TestRemoveScheduleCleansUpExpectedInterval verifies that RemoveSchedule
// cleans up the expected interval metric.
func TestRemoveScheduleCleansUpExpectedInterval(t *testing.T) {
m := NewServerMetrics()
m.InitSchedule("test-schedule")
m.SetScheduleExpectedIntervalSeconds("test-schedule", 3600)
// Verify metric exists
metric := getMetricValue(t, m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec), "test-schedule")
assert.Equal(t, float64(3600), metric)
// Remove schedule and verify metric is cleaned up
m.RemoveSchedule("test-schedule")
ch := make(chan prometheus.Metric, 1)
m.metrics[scheduleExpectedIntervalSeconds].(*prometheus.GaugeVec).Collect(ch)
close(ch)
count := 0
for range ch {
count++
}
assert.Equal(t, 0, count, "scheduleExpectedIntervalSeconds should be removed after RemoveSchedule")
}
// TestInitScheduleWithEmptyName verifies that InitSchedule works correctly
// with an empty schedule name (for adhoc backups).
func TestInitScheduleWithEmptyName(t *testing.T) {

View File

@@ -149,8 +149,7 @@ func (b *objectBackupStoreGetter) Get(location *velerov1api.BackupStorageLocatio
// if there are any slashes in the middle of 'bucket', the user
// probably put <bucket>/<prefix> in the bucket field, which we
// don't support.
// Exception: MRAP ARNs (arn:aws:s3::...) legitimately contain slashes.
if strings.Contains(bucket, "/") && !strings.HasPrefix(bucket, "arn:aws:s3:") {
if strings.Contains(bucket, "/") {
return nil, errors.Errorf("backup storage location's bucket name %q must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)", location.Spec.ObjectStorage.Bucket)
}

View File

@@ -943,24 +943,6 @@ func TestNewObjectBackupStoreGetter(t *testing.T) {
wantBucket: "bucket",
wantPrefix: "prefix/",
},
{
name: "when the Bucket field is an MRAP ARN, it should be valid",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
{
name: "when the Bucket field is an MRAP ARN with trailing slash, it should be valid and trimmed",
location: builder.ForBackupStorageLocation("", "").Provider("provider-1").Bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap/").Result(),
objectStoreGetter: objectStoreGetter{
"provider-1": newInMemoryObjectStore("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap"),
},
credFileStore: velerotest.NewFakeCredentialsFileStore("", nil),
wantBucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
},
}
for _, tc := range tests {

View File

@@ -20,7 +20,6 @@ import (
"bytes"
"context"
"net/url"
"slices"
"time"
"github.com/pkg/errors"
@@ -185,22 +184,10 @@ func (e *defaultPodCommandExecutor) ExecutePodCommand(log logrus.FieldLogger, it
}
func ensureContainerExists(pod *corev1api.Pod, container string) error {
existsAsMainContainer := slices.ContainsFunc(pod.Spec.Containers, func(c corev1api.Container) bool {
return c.Name == container
})
if existsAsMainContainer {
return nil
}
existsAsSidecar := slices.ContainsFunc(pod.Spec.InitContainers, func(c corev1api.Container) bool {
return c.RestartPolicy != nil &&
*c.RestartPolicy == corev1api.ContainerRestartPolicyAlways &&
c.Name == container
})
if existsAsSidecar {
return nil
for _, c := range pod.Spec.Containers {
if c.Name == container {
return nil
}
}
return errors.Errorf("no such container: %q", container)

View File

@@ -35,7 +35,6 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/remotecommand"
"k8s.io/utils/ptr"
v1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
@@ -254,15 +253,6 @@ func TestEnsureContainerExists(t *testing.T) {
Name: "foo",
},
},
InitContainers: []corev1api.Container{
{
Name: "baz",
},
{
Name: "qux",
RestartPolicy: ptr.To(corev1api.ContainerRestartPolicyAlways),
},
},
},
}
@@ -270,13 +260,7 @@ func TestEnsureContainerExists(t *testing.T) {
require.EqualError(t, err, `no such container: "bar"`)
err = ensureContainerExists(pod, "foo")
require.NoError(t, err)
err = ensureContainerExists(pod, "baz")
require.EqualError(t, err, `no such container: "baz"`)
err = ensureContainerExists(pod, "qux")
require.NoError(t, err)
assert.NoError(t, err)
}
func TestPodCompeted(t *testing.T) {

View File

@@ -272,7 +272,7 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
return nil, pvcSummary, []error{err}
}
repositoryType := funcGetRepositoryType()
repositoryType := funcGetRepositoryType(b.uploaderType)
if repositoryType == "" {
err := errors.Errorf("empty repository type, uploader %s", b.uploaderType)
skipAllPodVolumes(pod, volumesToBackup, err, pvcSummary, log)
@@ -305,6 +305,11 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
}
}
repoIdentifier := ""
if repositoryType == velerov1api.BackupRepositoryTypeRestic {
repoIdentifier = repo.Spec.ResticIdentifier
}
for _, volumeName := range volumesToBackup {
volume, ok := podVolumes[volumeName]
if !ok {
@@ -361,7 +366,7 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
continue
}
volumeBackup := newPodVolumeBackup(backup, pod, volume, "", b.uploaderType, pvc)
volumeBackup := newPodVolumeBackup(backup, pod, volume, repoIdentifier, b.uploaderType, pvc)
// the PVB must be added into the indexer before creating it in API server otherwise unexpected behavior may happen:
// the PVB may be handled very quickly by the controller and the informer handler will insert the PVB before "b.pvbIndexer.Add(volumeBackup)" runs,
// this causes the PVB inserted by "b.pvbIndexer.Add(volumeBackup)" to override the PVB in the indexer while the PVB inserted by "b.pvbIndexer.Add(volumeBackup)"

View File

@@ -580,7 +580,7 @@ func TestBackupPodVolumes(t *testing.T) {
require.NoError(t, err)
if test.mockGetRepositoryType {
funcGetRepositoryType = func() string { return "" }
funcGetRepositoryType = func(string) string { return "" }
} else {
funcGetRepositoryType = getRepositoryType
}

View File

@@ -166,6 +166,11 @@ func (r *restorer) RestorePodVolumes(data RestoreData, tracker *volume.RestoreVo
podVolumes[podVolume.Name] = podVolume
}
repoIdentifier := ""
if repositoryType == velerov1api.BackupRepositoryTypeRestic {
repoIdentifier = repo.Spec.ResticIdentifier
}
for volume, backupInfo := range volumesToRestore {
volumeObj, ok := podVolumes[volume]
var pvc *corev1api.PersistentVolumeClaim
@@ -180,7 +185,7 @@ func (r *restorer) RestorePodVolumes(data RestoreData, tracker *volume.RestoreVo
}
}
volumeRestore := newPodVolumeRestore(data.Restore, data.Pod, data.BackupLocation, volume, backupInfo.snapshotID, backupInfo.snapshotSize, "", backupInfo.uploaderType, data.SourceNamespace, pvc)
volumeRestore := newPodVolumeRestore(data.Restore, data.Pod, data.BackupLocation, volume, backupInfo.snapshotID, backupInfo.snapshotSize, repoIdentifier, backupInfo.uploaderType, data.SourceNamespace, pvc)
if err := veleroclient.CreateRetryGenerateName(r.crClient, r.ctx, volumeRestore); err != nil {
errs = append(errs, errors.WithStack(err))
continue

View File

@@ -204,6 +204,24 @@ func TestRestorePodVolumes(t *testing.T) {
},
},
},
{
name: "get repository type fail",
pvbs: []*velerov1api.PodVolumeBackup{
createPVBObj(true, true, 1, "restic"),
createPVBObj(true, true, 2, "kopia"),
},
kubeClientObj: []runtime.Object{
createNodeAgentDaemonset(),
},
restoredPod: createPodObj(false, false, false, 2),
sourceNamespace: "fake-ns",
errs: []expectError{
{
err: "multiple repository type in one backup",
prefixOnly: true,
},
},
},
{
name: "ensure repo fail",
pvbs: []*velerov1api.PodVolumeBackup{

View File

@@ -62,12 +62,12 @@ func GetVolumeBackupsForPod(podVolumeBackups []*velerov1api.PodVolumeBackup, pod
// GetPvbRepositoryType returns the repositoryType according to the PVB information
func GetPvbRepositoryType(pvb *velerov1api.PodVolumeBackup) string {
return getRepositoryType()
return getRepositoryType(pvb.Spec.UploaderType)
}
// GetPvrRepositoryType returns the repositoryType according to the PVR information
func GetPvrRepositoryType(pvr *velerov1api.PodVolumeRestore) string {
return getRepositoryType()
return getRepositoryType(pvr.Spec.UploaderType)
}
// getVolumeBackupInfoForPod returns a map, of volume name -> VolumeBackupInfo,
@@ -97,7 +97,7 @@ func getVolumeBackupInfoForPod(podVolumeBackups []*velerov1api.PodVolumeBackup,
snapshotID: pvb.Status.SnapshotID,
snapshotSize: pvb.Status.Progress.TotalBytes,
uploaderType: getUploaderTypeOrDefault(pvb.Spec.UploaderType),
repositoryType: getRepositoryType(),
repositoryType: getRepositoryType(pvb.Spec.UploaderType),
}
}
@@ -111,7 +111,7 @@ func getVolumeBackupInfoForPod(podVolumeBackups []*velerov1api.PodVolumeBackup,
}
for k, v := range fromAnnntation {
volumes[k] = volumeBackupInfo{v, 0, uploader.KopiaType, velerov1api.BackupRepositoryTypeKopia}
volumes[k] = volumeBackupInfo{v, 0, uploader.ResticType, velerov1api.BackupRepositoryTypeRestic}
}
return volumes
@@ -135,7 +135,7 @@ func GetSnapshotIdentifier(podVolumeBackups *velerov1api.PodVolumeBackupList) ma
VolumeNamespace: item.Spec.Pod.Namespace,
BackupStorageLocation: item.Spec.BackupStorageLocation,
SnapshotID: item.Status.SnapshotID,
RepositoryType: getRepositoryType(),
RepositoryType: getRepositoryType(item.Spec.UploaderType),
UploaderType: item.Spec.UploaderType,
Source: item.Status.Path,
RepoIdentifier: item.Spec.RepoIdentifier,
@@ -164,14 +164,27 @@ func getUploaderTypeOrDefault(uploaderType string) string {
if uploaderType != "" {
return uploaderType
}
return uploader.KopiaType
return uploader.ResticType
}
// getRepositoryType returns the hardcode repositoryType.
// TODO: In future, when we have multiple implementations of Unified Repo (besides Kopia), we will add the repositoryType to BSL,
// because by then, we are not able to hardcode the repositoryType to BackupRepositoryTypeKopia for Unified Repo.
func getRepositoryType() string {
return velerov1api.BackupRepositoryTypeKopia
// getRepositoryType returns the hardcoded repositoryType for the different backup methods - Restic or Kopia;
// uploaderType indicates the method.
// For the Restic backup method, it is always hardcoded to BackupRepositoryTypeRestic and never changes.
// For the Kopia backup method, repositoryType is hardcoded to BackupRepositoryTypeKopia because, at present,
// the Kopia backup method uses the Unified Repo. However, this doesn't mean repositoryType can be deduced
// from uploaderType for the Unified Repo in general.
// TODO: post v1.10, refactor this function for Kopia backup method. In future, when we have multiple implementations of
// Unified Repo (besides Kopia), we will add the repositoryType to BSL, because by then, we are not able to hardcode
// the repositoryType to BackupRepositoryTypeKopia for Unified Repo.
func getRepositoryType(uploaderType string) string {
switch uploaderType {
case "", uploader.ResticType:
return velerov1api.BackupRepositoryTypeRestic
case uploader.KopiaType:
return velerov1api.BackupRepositoryTypeKopia
default:
return ""
}
}
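The mapping implemented by `getRepositoryType` above can be exercised in isolation. In this sketch the plain string constants stand in for `uploader.ResticType`, `uploader.KopiaType`, and the `velerov1api` repository-type constants:

```go
package main

import "fmt"

// repositoryTypeFor reproduces the uploaderType -> repositoryType switch:
// empty or "restic" maps to the restic repository, "kopia" to the kopia
// (Unified Repo) repository, and anything else yields "" so callers can
// surface an "empty repository type" error.
func repositoryTypeFor(uploaderType string) string {
	switch uploaderType {
	case "", "restic":
		return "restic"
	case "kopia":
		return "kopia"
	default:
		return ""
	}
}

func main() {
	for _, u := range []string{"", "restic", "kopia", "bogus"} {
		fmt.Printf("%q -> %q\n", u, repositoryTypeFor(u))
	}
}
```

Note the empty-string case: it is what makes an unset `UploaderType` default to restic, matching the "empty repository type (defaults to restic)" test cases earlier in this diff.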
func isPVBMatchPod(pvb *velerov1api.PodVolumeBackup, podName string, namespace string) bool {

View File

@@ -17,7 +17,14 @@ limitations under the License.
package config
import (
"fmt"
"path"
"strings"
"github.com/pkg/errors"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/persistence"
)
type BackendType string
@@ -33,6 +40,56 @@ const (
CredentialsFileKey = "credentialsFile"
)
// this func is assigned to a package-level variable so it can be
// replaced when unit-testing
var getAWSBucketRegion = GetAWSBucketRegion
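Holding the function in a package-level `var` gives the tests a seam to stub out the AWS network call, which is exactly how `TestGetRepoIdentifier` later overrides `getAWSBucketRegion`. A minimal sketch of the override-and-restore pattern (function and variable names here are hypothetical stand-ins):

```go
package main

import (
	"errors"
	"fmt"
)

// Production code calls the dependency through a package-level func variable,
// mirroring `var getAWSBucketRegion = GetAWSBucketRegion`.
var lookupRegion = realLookupRegion

func realLookupRegion(bucket string) (string, error) {
	// Placeholder for the real AWS region lookup.
	return "", errors.New("network call not available in this sketch")
}

func repoHost(bucket string) (string, error) {
	region, err := lookupRegion(bucket)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("s3-%s.amazonaws.com", region), nil
}

func main() {
	// In a test, swap in a stub and restore the original afterwards.
	orig := lookupRegion
	lookupRegion = func(bucket string) (string, error) { return "eu-west-1", nil }
	defer func() { lookupRegion = orig }()

	host, err := repoHost("bucket")
	fmt.Println(host, err) // prints s3-eu-west-1.amazonaws.com <nil>
}
```

Compared to injecting an interface, the func-variable seam keeps the production call site untouched at the cost of mutable package state, so tests must restore the original to avoid leaking the stub across cases.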
// getRepoPrefix returns the prefix of the value of the --repo flag for
// restic commands, i.e. everything except the "/<repo-name>".
func getRepoPrefix(location *velerov1api.BackupStorageLocation) (string, error) {
var bucket, prefix string
if location.Spec.ObjectStorage != nil {
layout := persistence.NewObjectStoreLayout(location.Spec.ObjectStorage.Prefix)
bucket = location.Spec.ObjectStorage.Bucket
prefix = layout.GetResticDir()
}
backendType := GetBackendType(location.Spec.Provider, location.Spec.Config)
if repoPrefix := location.Spec.Config["resticRepoPrefix"]; repoPrefix != "" {
return repoPrefix, nil
}
switch backendType {
case AWSBackend:
var url string
// non-AWS, S3-compatible object store
if s3Url := location.Spec.Config["s3Url"]; s3Url != "" {
url = strings.TrimSuffix(s3Url, "/")
} else {
var err error
region := location.Spec.Config["region"]
if region == "" {
region, err = getAWSBucketRegion(bucket, location.Spec.Config)
}
if err != nil {
return "", errors.Wrapf(err, "failed to detect the region via bucket: %s", bucket)
}
url = fmt.Sprintf("s3-%s.amazonaws.com", region)
}
return fmt.Sprintf("s3:%s/%s", url, path.Join(bucket, prefix)), nil
case AzureBackend:
return fmt.Sprintf("azure:%s:/%s", bucket, prefix), nil
case GCPBackend:
return fmt.Sprintf("gs:%s:/%s", bucket, prefix), nil
}
return "", errors.Errorf("invalid backend type %s, provider %s", backendType, location.Spec.Provider)
}
// GetBackendType returns a backend type that is known by Velero.
// If the provider doesn't indicate a known backend type, but the endpoint is
// specified, Velero regards it as an S3-compatible object store and returns AWSBackend as the type.
@@ -54,3 +111,14 @@ func GetBackendType(provider string, config map[string]string) BackendType {
func IsBackendTypeValid(backendType BackendType) bool {
return (backendType == AWSBackend || backendType == AzureBackend || backendType == GCPBackend || backendType == FSBackend)
}
// GetRepoIdentifier returns the string to be used as the value of the --repo flag in
// restic commands for the given repository.
func GetRepoIdentifier(location *velerov1api.BackupStorageLocation, name string) (string, error) {
prefix, err := getRepoPrefix(location)
if err != nil {
return "", err
}
return fmt.Sprintf("%s/%s", strings.TrimSuffix(prefix, "/"), name), nil
}
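Given the backend-specific prefixes built by `getRepoPrefix`, `GetRepoIdentifier` only has to join prefix and repo name while tolerating a trailing slash. A quick sketch of the expected shapes (the bucket, prefix, and region values are illustrative):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// repoIdentifier mirrors GetRepoIdentifier: join a backend-specific prefix
// with the per-namespace repo name, trimming any trailing slash first.
func repoIdentifier(prefix, name string) string {
	return fmt.Sprintf("%s/%s", strings.TrimSuffix(prefix, "/"), name)
}

func main() {
	// AWS-style prefix: s3:<host>/<bucket>/<prefix>/restic
	aws := fmt.Sprintf("s3:s3-%s.amazonaws.com/%s",
		"eu-west-1", path.Join("bucket", "prefix", "restic"))
	fmt.Println(repoIdentifier(aws, "repo-1"))
	// s3:s3-eu-west-1.amazonaws.com/bucket/prefix/restic/repo-1

	// Azure-style prefix, with a trailing slash that gets trimmed.
	fmt.Println(repoIdentifier("azure:azure-bucket:/azure-prefix/restic/", "azure-repo"))
	// azure:azure-bucket:/azure-prefix/restic/azure-repo
}
```

These are the same identifier shapes the `TestGetRepoIdentifier` cases below assert for the AWS, Azure, and GCP backends.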

View File

@@ -0,0 +1,261 @@
/*
Copyright 2018, 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
func TestGetRepoIdentifier(t *testing.T) {
testCases := []struct {
name string
bsl *velerov1api.BackupStorageLocation
repoName string
getAWSBucketRegion func(s string, config map[string]string) (string, error)
expected string
expectedErr string
}{
{
name: "error is returned if BSL uses unsupported provider and resticRepoPrefix is not set",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "unsupported-provider",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket-2",
Prefix: "prefix-2",
},
},
},
},
repoName: "repo-1",
expectedErr: "invalid backend type velero.io/unsupported-provider, provider unsupported-provider",
},
{
name: "resticRepoPrefix in BSL config is used if set",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "custom-repo-identifier",
Config: map[string]string{
"resticRepoPrefix": "custom:prefix:/restic",
},
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "repo-1",
expected: "custom:prefix:/restic/repo-1",
},
{
name: "s3Url in BSL config is used",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "custom-repo-identifier",
Config: map[string]string{
"s3Url": "s3Url",
},
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "repo-1",
expected: "s3:s3Url/bucket/prefix/restic/repo-1",
},
{
name: "s3.amazonaws.com URL format is used if region cannot be determined for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
},
},
},
},
repoName: "repo-1",
getAWSBucketRegion: func(s string, config map[string]string) (string, error) {
return "", errors.New("no region found")
},
expected: "",
expectedErr: "failed to detect the region via bucket: bucket: no region found",
},
{
name: "s3.s3-<region>.amazonaws.com URL format is used if region can be determined for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
},
},
},
},
repoName: "repo-1",
getAWSBucketRegion: func(string, map[string]string) (string, error) {
return "eu-west-1", nil
},
expected: "s3:s3-eu-west-1.amazonaws.com/bucket/restic/repo-1",
},
{
name: "prefix is included in repo identifier if set for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "repo-1",
getAWSBucketRegion: func(s string, config map[string]string) (string, error) {
return "eu-west-1", nil
},
expected: "s3:s3-eu-west-1.amazonaws.com/bucket/prefix/restic/repo-1",
},
{
name: "s3Url is used in repo identifier if set for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
Config: map[string]string{
"s3Url": "alternate-url",
},
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "repo-1",
getAWSBucketRegion: func(s string, config map[string]string) (string, error) {
return "eu-west-1", nil
},
expected: "s3:alternate-url/bucket/prefix/restic/repo-1",
},
{
name: "region is used in repo identifier if set for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
Config: map[string]string{
"region": "us-west-1",
},
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "aws-repo",
getAWSBucketRegion: func(s string, config map[string]string) (string, error) {
return "eu-west-1", nil
},
expected: "s3:s3-us-west-1.amazonaws.com/bucket/prefix/restic/aws-repo",
},
{
name: "trailing slash in s3Url is not included in repo identifier for AWS BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "aws",
Config: map[string]string{
"s3Url": "alternate-url-with-trailing-slash/",
},
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
Prefix: "prefix",
},
},
},
},
repoName: "aws-repo",
getAWSBucketRegion: func(s string, config map[string]string) (string, error) {
return "eu-west-1", nil
},
expected: "s3:alternate-url-with-trailing-slash/bucket/prefix/restic/aws-repo",
},
{
name: "repo identifier includes bucket and prefix for Azure BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "azure",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "azure-bucket",
Prefix: "azure-prefix",
},
},
},
},
repoName: "azure-repo",
expected: "azure:azure-bucket:/azure-prefix/restic/azure-repo",
},
{
name: "repo identifier includes bucket and prefix for GCP BSL",
bsl: &velerov1api.BackupStorageLocation{
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "gcp",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "gcp-bucket",
Prefix: "gcp-prefix",
},
},
},
},
repoName: "gcp-repo",
expected: "gs:gcp-bucket:/gcp-prefix/restic/gcp-repo",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
getAWSBucketRegion = tc.getAWSBucketRegion
id, err := GetRepoIdentifier(tc.bsl, tc.repoName)
assert.Equal(t, tc.expected, id)
if tc.expectedErr == "" {
assert.NoError(t, err)
} else {
require.EqualError(t, err, tc.expectedErr)
assert.Empty(t, id)
}
})
}
}
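The AWS expectations in the test table above all follow one layout: `s3:<url>/<bucket>[/<prefix>]/restic/<repoName>`, where the URL defaults to `s3-<region>.amazonaws.com` when no `s3Url` is configured, and any trailing slash on `s3Url` is dropped. A minimal standalone sketch of that layout (a hypothetical helper for illustration, not Velero's actual `GetRepoIdentifier`):

```go
package main

import (
	"fmt"
	"strings"
)

// awsRepoIdentifier sketches the identifier layout asserted by the AWS test
// cases above; it is an illustration, not Velero's implementation.
func awsRepoIdentifier(s3URL, region, bucket, prefix, repoName string) string {
	url := s3URL
	if url == "" {
		// default endpoint derived from the detected or configured region
		url = "s3-" + region + ".amazonaws.com"
	}
	// a trailing slash in s3Url must not produce a double slash
	url = strings.TrimSuffix(url, "/")
	parts := []string{url, bucket}
	if prefix != "" {
		parts = append(parts, prefix)
	}
	parts = append(parts, "restic", repoName)
	return "s3:" + strings.Join(parts, "/")
}

func main() {
	fmt.Println(awsRepoIdentifier("", "eu-west-1", "bucket", "prefix", "repo-1"))
	fmt.Println(awsRepoIdentifier("alternate-url-with-trailing-slash/", "", "bucket", "prefix", "aws-repo"))
}
```

Note how the `region` config key (when set) wins over bucket-region detection in the test cases, which is why the detection callback returning `eu-west-1` is ignored when `region: us-west-1` is configured.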

View File

@@ -109,6 +109,10 @@ func NewManager(
log: log,
}
mgr.providers[velerov1api.BackupRepositoryTypeRestic] = provider.NewResticRepositoryProvider(credentials.CredentialGetter{
FromFile: credentialFileStore,
FromSecret: credentialSecretStore,
}, mgr.fileSystem, mgr.log)
mgr.providers[velerov1api.BackupRepositoryTypeKopia] = provider.NewUnifiedRepoProvider(credentials.CredentialGetter{
FromFile: credentialFileStore,
FromSecret: credentialSecretStore,
@@ -271,6 +275,8 @@ func (m *manager) ClientSideCacheLimit(repo *velerov1api.BackupRepository) (int6
func (m *manager) getRepositoryProvider(repo *velerov1api.BackupRepository) (provider.Provider, error) {
switch repo.Spec.RepositoryType {
case "", velerov1api.BackupRepositoryTypeRestic:
return m.providers[velerov1api.BackupRepositoryTypeRestic], nil
case velerov1api.BackupRepositoryTypeKopia:
return m.providers[velerov1api.BackupRepositoryTypeKopia], nil
default:

View File

@@ -32,13 +32,15 @@ func TestGetRepositoryProvider(t *testing.T) {
repo := &velerov1.BackupRepository{}
// empty repository type
_, err := mgr.getRepositoryProvider(repo)
require.Error(t, err)
provider, err := mgr.getRepositoryProvider(repo)
require.NoError(t, err)
assert.NotNil(t, provider)
// invalid repository type
repo.Spec.RepositoryType = "restic"
_, err = mgr.getRepositoryProvider(repo)
require.Error(t, err)
// valid repository type
repo.Spec.RepositoryType = velerov1.BackupRepositoryTypeRestic
provider, err = mgr.getRepositoryProvider(repo)
require.NoError(t, err)
assert.NotNil(t, provider)
// invalid repository type
repo.Spec.RepositoryType = "unknown"
@@ -59,6 +61,6 @@ func TestGetRepositoryConfigProvider(t *testing.T) {
assert.NotNil(t, provider)
// invalid repository type
_, err = mgr.getRepositoryProvider("restic")
_, err = mgr.getRepositoryProvider(velerov1.BackupRepositoryTypeRestic)
require.Error(t, err)
}

View File

@@ -0,0 +1,99 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package provider
import (
"context"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/vmware-tanzu/velero/internal/credentials"
"github.com/vmware-tanzu/velero/pkg/repository/restic"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
func NewResticRepositoryProvider(credGetter credentials.CredentialGetter, fs filesystem.Interface, log logrus.FieldLogger) Provider {
return &resticRepositoryProvider{
svc: restic.NewRepositoryService(credGetter, fs, log),
}
}
type resticRepositoryProvider struct {
svc *restic.RepositoryService
}
func (r *resticRepositoryProvider) InitRepo(ctx context.Context, param RepoParam) error {
return r.svc.InitRepo(param.BackupLocation, param.BackupRepo)
}
func (r *resticRepositoryProvider) ConnectToRepo(ctx context.Context, param RepoParam) error {
return r.svc.ConnectToRepo(param.BackupLocation, param.BackupRepo)
}
func (r *resticRepositoryProvider) PrepareRepo(ctx context.Context, param RepoParam) error {
if err := r.ConnectToRepo(ctx, param); err != nil {
// If the repository has not yet been initialized, the error message will always include
// the following string. This is the only scenario where we should try to initialize it.
// Other errors (e.g. "already locked") should be returned as-is since the repository
// already exists but cannot be connected to.
if strings.Contains(err.Error(), "Is there a repository at the following location?") {
return r.InitRepo(ctx, param)
}
return err
}
return nil
}
func (r *resticRepositoryProvider) BoostRepoConnect(ctx context.Context, param RepoParam) error {
return nil
}
func (r *resticRepositoryProvider) PruneRepo(ctx context.Context, param RepoParam) error {
return r.svc.PruneRepo(param.BackupLocation, param.BackupRepo)
}
func (r *resticRepositoryProvider) EnsureUnlockRepo(ctx context.Context, param RepoParam) error {
return r.svc.UnlockRepo(param.BackupLocation, param.BackupRepo)
}
func (r *resticRepositoryProvider) Forget(ctx context.Context, snapshotID string, param RepoParam) error {
return r.svc.Forget(param.BackupLocation, param.BackupRepo, snapshotID)
}
func (r *resticRepositoryProvider) BatchForget(ctx context.Context, snapshotIDs []string, param RepoParam) []error {
errs := []error{}
for _, snapshot := range snapshotIDs {
err := r.Forget(ctx, snapshot, param)
if err != nil {
errs = append(errs, err)
}
}
return errs
}
func (r *resticRepositoryProvider) DefaultMaintenanceFrequency() time.Duration {
return r.svc.DefaultMaintenanceFrequency()
}
func (r *resticRepositoryProvider) ClientSideCacheLimit(repoOption map[string]string) int64 {
return 0
}

View File

@@ -0,0 +1,147 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package restic
import (
"os"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/vmware-tanzu/velero/internal/credentials"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
repokey "github.com/vmware-tanzu/velero/pkg/repository/keys"
"github.com/vmware-tanzu/velero/pkg/restic"
veleroexec "github.com/vmware-tanzu/velero/pkg/util/exec"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
func NewRepositoryService(credGetter credentials.CredentialGetter, fs filesystem.Interface, log logrus.FieldLogger) *RepositoryService {
return &RepositoryService{
credGetter: credGetter,
fileSystem: fs,
log: log,
}
}
type RepositoryService struct {
credGetter credentials.CredentialGetter
fileSystem filesystem.Interface
log logrus.FieldLogger
}
func (r *RepositoryService) InitRepo(bsl *velerov1api.BackupStorageLocation, repo *velerov1api.BackupRepository) error {
return r.exec(restic.InitCommand(repo.Spec.ResticIdentifier), bsl)
}
func (r *RepositoryService) ConnectToRepo(bsl *velerov1api.BackupStorageLocation, repo *velerov1api.BackupRepository) error {
snapshotsCmd := restic.SnapshotsCommand(repo.Spec.ResticIdentifier)
// use the '--latest=1' flag to minimize the amount of data fetched since
// we're just validating that the repo exists and can be authenticated
// to.
// "--last" was replaced by "--latest=1" in restic v0.12.1
snapshotsCmd.ExtraFlags = append(snapshotsCmd.ExtraFlags, "--latest=1")
return r.exec(snapshotsCmd, bsl)
}
func (r *RepositoryService) PruneRepo(bsl *velerov1api.BackupStorageLocation, repo *velerov1api.BackupRepository) error {
return r.exec(restic.PruneCommand(repo.Spec.ResticIdentifier), bsl)
}
func (r *RepositoryService) UnlockRepo(bsl *velerov1api.BackupStorageLocation, repo *velerov1api.BackupRepository) error {
return r.exec(restic.UnlockCommand(repo.Spec.ResticIdentifier), bsl)
}
func (r *RepositoryService) Forget(bsl *velerov1api.BackupStorageLocation, repo *velerov1api.BackupRepository, snapshotID string) error {
return r.exec(restic.ForgetCommand(repo.Spec.ResticIdentifier, snapshotID), bsl)
}
func (r *RepositoryService) DefaultMaintenanceFrequency() time.Duration {
return restic.DefaultMaintenanceFrequency
}
func (r *RepositoryService) exec(cmd *restic.Command, bsl *velerov1api.BackupStorageLocation) error {
file, err := r.credGetter.FromFile.Path(repokey.RepoKeySelector())
if err != nil {
return err
}
// ignore error since there's nothing we can do and it's a temp file.
defer os.Remove(file)
cmd.PasswordFile = file
// if there's a caCert on the ObjectStorage, write it to disk so that it can be passed to restic
var caCertFile string
if bsl.Spec.ObjectStorage != nil {
var caCertData []byte
// Try CACertRef first (new method), then fall back to CACert (deprecated)
if bsl.Spec.ObjectStorage.CACertRef != nil {
caCertString, err := r.credGetter.FromSecret.Get(bsl.Spec.ObjectStorage.CACertRef)
if err != nil {
return errors.Wrap(err, "error getting CA certificate from secret")
}
caCertData = []byte(caCertString)
} else if bsl.Spec.ObjectStorage.CACert != nil {
caCertData = bsl.Spec.ObjectStorage.CACert
}
if caCertData != nil {
caCertFile, err = restic.TempCACertFile(caCertData, bsl.Name, r.fileSystem)
if err != nil {
return errors.Wrap(err, "error creating temp cacert file")
}
// ignore error since there's nothing we can do and it's a temp file.
defer os.Remove(caCertFile)
}
}
cmd.CACertFile = caCertFile
// CmdEnv uses credGetter.FromFile (not FromSecret) to get cloud provider credentials.
// FromFile materializes the BSL's Credential secret to a file path that cloud SDKs
// can read (e.g., AWS_SHARED_CREDENTIALS_FILE). This is different from caCertRef above,
// which uses FromSecret to read the CA certificate data directly into memory, then
// writes it to a temp file because restic CLI only accepts file paths (--cacert flag).
env, err := restic.CmdEnv(bsl, r.credGetter.FromFile)
if err != nil {
return err
}
cmd.Env = env
// #4820: retrieve insecureSkipTLSVerify from the BSL configuration for the
// AWS plugin. If nothing is returned, insecureSkipTLSVerify is not enabled
// for the restic command.
skipTLSRet := restic.GetInsecureSkipTLSVerifyFromBSL(bsl, r.log)
if len(skipTLSRet) > 0 {
cmd.ExtraFlags = append(cmd.ExtraFlags, skipTLSRet)
}
stdout, stderr, err := veleroexec.RunCommandWithLog(cmd.Cmd(), r.log)
r.log.WithFields(logrus.Fields{
"repository": cmd.RepoName(),
"command": cmd.String(),
"stdout": stdout,
"stderr": stderr,
}).Debugf("Ran restic command")
if err != nil {
return errors.Wrapf(err, "error running command=%s, stdout=%s, stderr=%s", cmd.String(), stdout, stderr)
}
return nil
}
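Taken together, the exec method assembles a small, fixed flag surface for every restic invocation: a password file materialized from the repo key secret, an optional CA certificate file, and an optional insecure-TLS flag pulled from the BSL config. A hedged sketch of that assembly (the flag names are real restic CLI flags; the helper itself is hypothetical and separate from the real `restic.Command` type):

```go
package main

import "fmt"

// resticGlobalFlags sketches the per-invocation flags wired up by exec above.
// passwordFile comes from the repo key secret, caCertFile from the BSL's
// CA cert (if any), and insecureTLS from the BSL config.
func resticGlobalFlags(passwordFile, caCertFile string, insecureTLS bool) []string {
	flags := []string{"--password-file=" + passwordFile}
	if caCertFile != "" {
		flags = append(flags, "--cacert="+caCertFile)
	}
	if insecureTLS {
		flags = append(flags, "--insecure-tls=true")
	}
	return flags
}

func main() {
	fmt.Println(resticGlobalFlags("/tmp/repo-key", "/tmp/ca.crt", true))
	fmt.Println(resticGlobalFlags("/tmp/repo-key", "", false))
}
```

Both temp files are removed via deferred cleanup in exec, so each restic run gets a fresh, short-lived credential and certificate path.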

View File

@@ -388,9 +388,9 @@ func (kr *kopiaRepository) Close(ctx context.Context) error {
return nil
}
func (kr *kopiaRepository) NewObjectWriter(ctx context.Context, opt udmrepo.ObjectWriteOptions) (udmrepo.ObjectWriter, error) {
func (kr *kopiaRepository) NewObjectWriter(ctx context.Context, opt udmrepo.ObjectWriteOptions) udmrepo.ObjectWriter {
if kr.rawWriter == nil {
return nil, errors.New("repo writer is closed or not open")
return nil
}
writer := kr.rawWriter.NewObjectWriter(kopia.SetupKopiaLog(ctx, kr.logger), object.WriterOptions{
@@ -402,22 +402,12 @@ func (kr *kopiaRepository) NewObjectWriter(ctx context.Context, opt udmrepo.Obje
})
if writer == nil {
return nil, errors.Errorf("error creating writer for object %s", opt.Description)
return nil
}
return &kopiaObjectWriter{
rawWriter: writer,
}, nil
}
// TODO add implementation in following PRs
func (kr *kopiaRepository) WriteMetadata(ctx context.Context, meta *udmrepo.Metadata, opt udmrepo.ObjectWriteOptions) (udmrepo.ID, error) {
return "", errors.New("not supported")
}
// TODO add implementation in following PRs
func (kr *kopiaRepository) ReadMetadata(ctx context.Context, id udmrepo.ID) (*udmrepo.Metadata, error) {
return nil, errors.New("not supported")
}
}
func (kr *kopiaRepository) PutManifest(ctx context.Context, manifest udmrepo.RepoManifest) (udmrepo.ID, error) {
@@ -446,21 +436,6 @@ func (kr *kopiaRepository) DeleteManifest(ctx context.Context, id udmrepo.ID) er
return nil
}
// TODO add implementation in following PRs
func (kr *kopiaRepository) SaveSnapshot(ctx context.Context, snap udmrepo.Snapshot) (udmrepo.ID, error) {
return "", errors.New("not supported")
}
// TODO add implementation in following PRs
func (kr *kopiaRepository) GetSnapshot(ctx context.Context, id udmrepo.ID) (udmrepo.Snapshot, error) {
return udmrepo.Snapshot{}, errors.New("not supported")
}
// TODO add implementation in following PRs
func (kr *kopiaRepository) DeleteSnapshot(ctx context.Context, id udmrepo.ID) error {
return errors.New("not supported")
}
func (kr *kopiaRepository) Flush(ctx context.Context) error {
if kr.rawWriter == nil {
return errors.New("repo writer is closed or not open")
@@ -571,9 +546,8 @@ func (kow *kopiaObjectWriter) Write(p []byte) (int, error) {
return kow.rawWriter.Write(p)
}
// TODO add implementation in following PRs
func (kow *kopiaObjectWriter) WriteAt(p []byte, offset int64) (int, error) {
return 0, errors.New("not supported")
func (kow *kopiaObjectWriter) Seek(offset int64, whence int) (int64, error) {
return -1, errors.New("not supported")
}
func (kow *kopiaObjectWriter) Checkpoint() (udmrepo.ID, error) {

View File

@@ -663,16 +663,13 @@ func TestNewObjectWriter(t *testing.T) {
rawWriter *repomocks.MockRepositoryWriter
rawWriterRet object.Writer
expectedRet udmrepo.ObjectWriter
expectedErr string
}{
{
name: "raw writer is nil",
expectedErr: "repo writer is closed or not open",
name: "raw writer is nil",
},
{
name: "new object writer fail",
rawWriter: repomocks.NewMockRepositoryWriter(t),
expectedErr: "error creating writer for object ",
name: "new object writer fail",
rawWriter: repomocks.NewMockRepositoryWriter(t),
},
{
name: "succeed",
@@ -691,14 +688,9 @@ func TestNewObjectWriter(t *testing.T) {
kr.rawWriter = tc.rawWriter
}
ret, err := kr.NewObjectWriter(t.Context(), udmrepo.ObjectWriteOptions{})
ret := kr.NewObjectWriter(t.Context(), udmrepo.ObjectWriteOptions{})
if tc.expectedErr == "" {
require.NoError(t, err)
require.Equal(t, tc.expectedRet, ret)
} else {
require.EqualError(t, err, tc.expectedErr)
}
assert.Equal(t, tc.expectedRet, ret)
})
}
}

File diff suppressed because it is too large

View File

@@ -1,14 +1,147 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
// Code generated by mockery v2.39.1. DO NOT EDIT.
package mocks
import (
mock "github.com/stretchr/testify/mock"
"github.com/vmware-tanzu/velero/pkg/repository/udmrepo"
udmrepo "github.com/vmware-tanzu/velero/pkg/repository/udmrepo"
)
// ObjectWriter is an autogenerated mock type for the ObjectWriter type
type ObjectWriter struct {
mock.Mock
}
// Checkpoint provides a mock function with given fields:
func (_m *ObjectWriter) Checkpoint() (udmrepo.ID, error) {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Checkpoint")
}
var r0 udmrepo.ID
var r1 error
if rf, ok := ret.Get(0).(func() (udmrepo.ID, error)); ok {
return rf()
}
if rf, ok := ret.Get(0).(func() udmrepo.ID); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(udmrepo.ID)
}
if rf, ok := ret.Get(1).(func() error); ok {
r1 = rf()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Close provides a mock function with given fields:
func (_m *ObjectWriter) Close() error {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Close")
}
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// Result provides a mock function with given fields:
func (_m *ObjectWriter) Result() (udmrepo.ID, error) {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Result")
}
var r0 udmrepo.ID
var r1 error
if rf, ok := ret.Get(0).(func() (udmrepo.ID, error)); ok {
return rf()
}
if rf, ok := ret.Get(0).(func() udmrepo.ID); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(udmrepo.ID)
}
if rf, ok := ret.Get(1).(func() error); ok {
r1 = rf()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Seek provides a mock function with given fields: offset, whence
func (_m *ObjectWriter) Seek(offset int64, whence int) (int64, error) {
ret := _m.Called(offset, whence)
if len(ret) == 0 {
panic("no return value specified for Seek")
}
var r0 int64
var r1 error
if rf, ok := ret.Get(0).(func(int64, int) (int64, error)); ok {
return rf(offset, whence)
}
if rf, ok := ret.Get(0).(func(int64, int) int64); ok {
r0 = rf(offset, whence)
} else {
r0 = ret.Get(0).(int64)
}
if rf, ok := ret.Get(1).(func(int64, int) error); ok {
r1 = rf(offset, whence)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Write provides a mock function with given fields: p
func (_m *ObjectWriter) Write(p []byte) (int, error) {
ret := _m.Called(p)
if len(ret) == 0 {
panic("no return value specified for Write")
}
var r0 int
var r1 error
if rf, ok := ret.Get(0).(func([]byte) (int, error)); ok {
return rf(p)
}
if rf, ok := ret.Get(0).(func([]byte) int); ok {
r0 = rf(p)
} else {
r0 = ret.Get(0).(int)
}
if rf, ok := ret.Get(1).(func([]byte) error); ok {
r1 = rf(p)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// NewObjectWriter creates a new instance of ObjectWriter. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
// The first argument is typically a *testing.T value.
func NewObjectWriter(t interface {
@@ -22,292 +155,3 @@ func NewObjectWriter(t interface {
return mock
}
// ObjectWriter is an autogenerated mock type for the ObjectWriter type
type ObjectWriter struct {
mock.Mock
}
type ObjectWriter_Expecter struct {
mock *mock.Mock
}
func (_m *ObjectWriter) EXPECT() *ObjectWriter_Expecter {
return &ObjectWriter_Expecter{mock: &_m.Mock}
}
// Checkpoint provides a mock function for the type ObjectWriter
func (_mock *ObjectWriter) Checkpoint() (udmrepo.ID, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for Checkpoint")
}
var r0 udmrepo.ID
var r1 error
if returnFunc, ok := ret.Get(0).(func() (udmrepo.ID, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() udmrepo.ID); ok {
r0 = returnFunc()
} else {
r0 = ret.Get(0).(udmrepo.ID)
}
if returnFunc, ok := ret.Get(1).(func() error); ok {
r1 = returnFunc()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ObjectWriter_Checkpoint_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Checkpoint'
type ObjectWriter_Checkpoint_Call struct {
*mock.Call
}
// Checkpoint is a helper method to define mock.On call
func (_e *ObjectWriter_Expecter) Checkpoint() *ObjectWriter_Checkpoint_Call {
return &ObjectWriter_Checkpoint_Call{Call: _e.mock.On("Checkpoint")}
}
func (_c *ObjectWriter_Checkpoint_Call) Run(run func()) *ObjectWriter_Checkpoint_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *ObjectWriter_Checkpoint_Call) Return(iD udmrepo.ID, err error) *ObjectWriter_Checkpoint_Call {
_c.Call.Return(iD, err)
return _c
}
func (_c *ObjectWriter_Checkpoint_Call) RunAndReturn(run func() (udmrepo.ID, error)) *ObjectWriter_Checkpoint_Call {
_c.Call.Return(run)
return _c
}
// Close provides a mock function for the type ObjectWriter
func (_mock *ObjectWriter) Close() error {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for Close")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func() error); ok {
r0 = returnFunc()
} else {
r0 = ret.Error(0)
}
return r0
}
// ObjectWriter_Close_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Close'
type ObjectWriter_Close_Call struct {
*mock.Call
}
// Close is a helper method to define mock.On call
func (_e *ObjectWriter_Expecter) Close() *ObjectWriter_Close_Call {
return &ObjectWriter_Close_Call{Call: _e.mock.On("Close")}
}
func (_c *ObjectWriter_Close_Call) Run(run func()) *ObjectWriter_Close_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *ObjectWriter_Close_Call) Return(err error) *ObjectWriter_Close_Call {
_c.Call.Return(err)
return _c
}
func (_c *ObjectWriter_Close_Call) RunAndReturn(run func() error) *ObjectWriter_Close_Call {
_c.Call.Return(run)
return _c
}
// Result provides a mock function for the type ObjectWriter
func (_mock *ObjectWriter) Result() (udmrepo.ID, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for Result")
}
var r0 udmrepo.ID
var r1 error
if returnFunc, ok := ret.Get(0).(func() (udmrepo.ID, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() udmrepo.ID); ok {
r0 = returnFunc()
} else {
r0 = ret.Get(0).(udmrepo.ID)
}
if returnFunc, ok := ret.Get(1).(func() error); ok {
r1 = returnFunc()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ObjectWriter_Result_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Result'
type ObjectWriter_Result_Call struct {
*mock.Call
}
// Result is a helper method to define mock.On call
func (_e *ObjectWriter_Expecter) Result() *ObjectWriter_Result_Call {
return &ObjectWriter_Result_Call{Call: _e.mock.On("Result")}
}
func (_c *ObjectWriter_Result_Call) Run(run func()) *ObjectWriter_Result_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *ObjectWriter_Result_Call) Return(iD udmrepo.ID, err error) *ObjectWriter_Result_Call {
_c.Call.Return(iD, err)
return _c
}
func (_c *ObjectWriter_Result_Call) RunAndReturn(run func() (udmrepo.ID, error)) *ObjectWriter_Result_Call {
_c.Call.Return(run)
return _c
}
// Write provides a mock function for the type ObjectWriter
func (_mock *ObjectWriter) Write(p []byte) (int, error) {
ret := _mock.Called(p)
if len(ret) == 0 {
panic("no return value specified for Write")
}
var r0 int
var r1 error
if returnFunc, ok := ret.Get(0).(func([]byte) (int, error)); ok {
return returnFunc(p)
}
if returnFunc, ok := ret.Get(0).(func([]byte) int); ok {
r0 = returnFunc(p)
} else {
r0 = ret.Get(0).(int)
}
if returnFunc, ok := ret.Get(1).(func([]byte) error); ok {
r1 = returnFunc(p)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ObjectWriter_Write_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Write'
type ObjectWriter_Write_Call struct {
*mock.Call
}
// Write is a helper method to define mock.On call
// - p []byte
func (_e *ObjectWriter_Expecter) Write(p interface{}) *ObjectWriter_Write_Call {
return &ObjectWriter_Write_Call{Call: _e.mock.On("Write", p)}
}
func (_c *ObjectWriter_Write_Call) Run(run func(p []byte)) *ObjectWriter_Write_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 []byte
if args[0] != nil {
arg0 = args[0].([]byte)
}
run(
arg0,
)
})
return _c
}
func (_c *ObjectWriter_Write_Call) Return(n int, err error) *ObjectWriter_Write_Call {
_c.Call.Return(n, err)
return _c
}
func (_c *ObjectWriter_Write_Call) RunAndReturn(run func(p []byte) (int, error)) *ObjectWriter_Write_Call {
_c.Call.Return(run)
return _c
}
// WriteAt provides a mock function for the type ObjectWriter
func (_mock *ObjectWriter) WriteAt(p []byte, off int64) (int, error) {
ret := _mock.Called(p, off)
if len(ret) == 0 {
panic("no return value specified for WriteAt")
}
var r0 int
var r1 error
if returnFunc, ok := ret.Get(0).(func([]byte, int64) (int, error)); ok {
return returnFunc(p, off)
}
if returnFunc, ok := ret.Get(0).(func([]byte, int64) int); ok {
r0 = returnFunc(p, off)
} else {
r0 = ret.Get(0).(int)
}
if returnFunc, ok := ret.Get(1).(func([]byte, int64) error); ok {
r1 = returnFunc(p, off)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ObjectWriter_WriteAt_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'WriteAt'
type ObjectWriter_WriteAt_Call struct {
*mock.Call
}
// WriteAt is a helper method to define mock.On call
// - p []byte
// - off int64
func (_e *ObjectWriter_Expecter) WriteAt(p interface{}, off interface{}) *ObjectWriter_WriteAt_Call {
return &ObjectWriter_WriteAt_Call{Call: _e.mock.On("WriteAt", p, off)}
}
func (_c *ObjectWriter_WriteAt_Call) Run(run func(p []byte, off int64)) *ObjectWriter_WriteAt_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 []byte
if args[0] != nil {
arg0 = args[0].([]byte)
}
var arg1 int64
if args[1] != nil {
arg1 = args[1].(int64)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *ObjectWriter_WriteAt_Call) Return(n int, err error) *ObjectWriter_WriteAt_Call {
_c.Call.Return(n, err)
return _c
}
func (_c *ObjectWriter_WriteAt_Call) RunAndReturn(run func(p []byte, off int64) (int, error)) *ObjectWriter_WriteAt_Call {
_c.Call.Return(run)
return _c
}

View File

@@ -62,41 +62,19 @@ const (
// ObjectWriteOptions defines the options when creating an object for write
type ObjectWriteOptions struct {
FullPath string // Full logical path of the object
DataType int // OBJECT_DATA_TYPE_*
Description string // A description of the object, could be empty
Prefix ID // A prefix of the name used to save the object
AccessMode int // OBJECT_DATA_ACCESS_*
BackupMode int // OBJECT_DATA_BACKUP_*
AsyncWrites int // Num of async writes for the object, 0 means no async write
ParentObject ID // The object in the previous snapshot, for incremental backup
FullPath string // Full logical path of the object
DataType int // OBJECT_DATA_TYPE_*
Description string // A description of the object, could be empty
Prefix ID // A prefix of the name used to save the object
AccessMode int // OBJECT_DATA_ACCESS_*
BackupMode int // OBJECT_DATA_BACKUP_*
AsyncWrites int // Num of async writes for the object, 0 means no async write
}
type AdvancedFeatureInfo struct {
MultiPartBackup bool // if set to true, it means the repo supports multiple-part backup
}
type ObjectMetadata struct {
ID ID
Type int // OBJECT_DATA_TYPE_*
Size int64
}
type Metadata struct {
SubObjects []ObjectMetadata // For dir metadata only, the sub objects in this dir.
ExtraDataLen int // Extra data associated to this metadata.
ExtraData []byte
}
type Snapshot struct {
Source string
Description string
StartTime time.Time
EndTime time.Time
Tags map[string]string
RootObject ID
}
// BackupRepoService is used to initialize, open or maintain a backup repository
type BackupRepoService interface {
// Create creates a new backup repository.
@@ -141,14 +119,7 @@ type BackupRepo interface {
// NewObjectWriter creates a new object and returns the object's writer interface.
// return: the object's writer interface on success.
NewObjectWriter(ctx context.Context, opt ObjectWriteOptions) (ObjectWriter, error)
// WriteMetadata writes metadata to the repo, metadata is used to describe data, e.g., file system
// dirs are saved as metadata
WriteMetadata(ctx context.Context, meta *Metadata, opt ObjectWriteOptions) (ID, error)
// ReadMetadata reads a metadata from repo by the metadata's object ID
ReadMetadata(ctx context.Context, id ID) (*Metadata, error)
NewObjectWriter(ctx context.Context, opt ObjectWriteOptions) ObjectWriter
// PutManifest saves a manifest object into the backup repository.
PutManifest(ctx context.Context, mani RepoManifest) (ID, error)
@@ -168,15 +139,6 @@ type BackupRepo interface {
// Time returns the local time of the backup repository. It may be different from the time of the caller
Time() time.Time
// SaveSnapshot saves a repo snapshot
SaveSnapshot(ctx context.Context, snapshot Snapshot) (ID, error)
// GetSnapshot returns a repo snapshot from snapshot ID
GetSnapshot(ctx context.Context, id ID) (Snapshot, error)
// DeleteSnapshot deletes a repo snapshot
DeleteSnapshot(ctx context.Context, id ID) error
// Close closes the backup repository
Close(ctx context.Context) error
}
@@ -192,8 +154,8 @@ type ObjectReader interface {
type ObjectWriter interface {
io.WriteCloser
// WriterAt is used in cases where the object is not written sequentially
io.WriterAt
// Seeker is used in cases where the object is not written sequentially
io.Seeker
// Checkpoint is periodically called to preserve the state of data written to the repo so far.
// Checkpoint returns a unified identifier that represents the current state.

Some files were not shown because too many files have changed in this diff