Compare commits

...

103 Commits

Author SHA1 Message Date
Xun Jiang/Bruce Jiang
5c4fdfe147 Merge pull request #6993 from allenxu404/release-1.12
Change v1.12.1 changelog
2023-10-20 21:04:47 +08:00
allenxu404
226237bab4 Change v1.12.1 changelog
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-10-20 20:50:12 +08:00
Xun Jiang/Bruce Jiang
bbc9790316 Merge pull request #6991 from Lyndon-Li/release-1.12
[1.12] Issue 6988: udmrepo use region specified in BSL when s3URL is empty
2023-10-20 20:32:29 +08:00
Lyndon-Li
10744ec516 udmrepo use region specified in BSL when s3URL is empty
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-20 20:14:01 +08:00
lyndon
905cd43140 Merge pull request #6986 from blackpiglet/support_windows_build
Add both non-Windows version and Windows version code for PVC block mode logic
2023-10-20 19:37:05 +08:00
Xun Jiang
3034cdb448 Make Windows build skip BlockMode code.
PVC block mode backup and restore introduced some OS specific
system calls. Those calls are not available for Windows, so
add both non Windows version and Windows version code, and
return error for block mode on the Windows platform.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-20 17:44:38 +08:00
Xun Jiang/Bruce Jiang
353ff55e42 Merge pull request #6984 from allenxu404/release-1.12
Add v1.12.1 changelog
2023-10-20 12:00:11 +08:00
allenxu404
468017d7db Add v1.12.1 changelog
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-10-20 11:18:27 +08:00
lyndon
6bf705fd25 Merge pull request #6970 from ywk253100/231018_auth
[cherry-pick]Import auth provider plugins
2023-10-18 16:48:58 +08:00
Sebastian Glab
7a909d8ff5 Import auth provider plugins
Signed-off-by: Sebastian Glab <sglab@catalogicsoftware.com>
2023-10-18 16:06:18 +08:00
lyndon
ef1b9816b2 Merge pull request #6943 from sseago/retry-generateName1.12
[release-1.12]: CP issue #6807: Retry failed create when using generateName
2023-10-16 11:09:21 +08:00
Scott Seago
457fcc6893 issue #6807: Retry failed create when using generateName
When creating resources with generateName, apimachinery
does not guarantee uniqueness when it appends the random
suffix to the generateName stub, so if it fails with
already exists error, we need to retry.

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-13 10:29:46 -04:00
Wenkai Yin(尹文开)
b498847b5b Merge pull request #6951 from blackpiglet/fix_CVE-2023-39325
Fix CVE-2023-39325.
2023-10-13 18:22:45 +08:00
Xun Jiang
af9697814e Fix CVE-2023-39325.
Bump Velero and Restic golang.org/x/net to v0.17.0.
Bump golang version to v1.20.10.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-13 14:22:18 +08:00
lyndon
d92a051795 Merge pull request #6948 from sseago/restore-get-perf1.12
[release-1.12]: CP Perf improvements for existing resource restore
2023-10-13 08:13:50 +08:00
Scott Seago
a3cb39d62e Perf improvements for existing resource restore
Use informer cache with dynamic client for Get calls on restore
When enabled, also make the Get call before create.

Add server and install parameter to allow disabling this feature,
but enable by default

Signed-off-by: Scott Seago <sseago@redhat.com>
2023-10-12 15:07:55 -04:00
Shubham Pampattiwar
c1ace31466 Merge pull request #6940 from Lyndon-Li/release-1.12
[1.12] Issue 6647: add default-snapshot-move-data parameter to Velero install
2023-10-11 08:54:35 -07:00
Lyndon-Li
8bf98e8895 fix issue 6647
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-11 17:43:38 +08:00
Xun Jiang/Bruce Jiang
e53cfdf85e Merge pull request #6887 from allenxu404/release-1.12
Add docs link for new features to release note
2023-10-10 16:55:30 +08:00
allenxu404
d93cc9094a Add doc links for new features to release note
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-10-10 16:05:11 +08:00
lyndon
15dd67e203 Merge pull request #6935 from Lyndon-Li/release-1.12
[1.12] Issue 6734: spread backup pod evenly
2023-10-10 11:41:37 +08:00
Lyndon-Li
877592194b issue 6734: spread backup pod evenly
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-10-10 11:04:04 +08:00
Wenkai Yin(尹文开)
17b495fcfd Merge pull request #6934 from ywk253100/231010_image
[cherry-pick]Replace the base image with paketobuildpacks image
2023-10-10 10:32:14 +08:00
Wenkai Yin(尹文开)
b99a59480d Replace the base image with paketobuildpacks image
Replace the base image with paketobuildpacks image

Fixes #6851

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-10-10 09:38:54 +08:00
qiuming
a789976a03 Merge pull request #6925 from qiuming-best/target-v1.12.1
[Cherry-pick v1.12]Keep the logs info ns/name same with other modules
2023-10-08 11:23:52 +08:00
qiuming
52878de077 Merge branch 'release-1.12' into target-v1.12.1 2023-10-08 11:08:07 +08:00
yanggang
432a5fe566 [Cherry-pick v1.12]Keep the logs info ns/name is the same with other modules.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-10-08 03:07:43 +00:00
Xun Jiang/Bruce Jiang
175047baa9 Merge pull request #6912 from blackpiglet/6750_cherry_pick
[cherry-pick][release-1.12]Code clean for backup cmd client. (#6750)
2023-10-08 11:05:06 +08:00
Yang Gang
0eaf14ed19 Code clean for backup cmd client. (#6750)
Address some code spell check errors.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-10-03 23:55:25 +08:00
David Zaninovic
c415fd4bcc Add support for block volumes (#6680) (#6897)
(cherry picked from commit 8e01d1b9be)

Signed-off-by: David Zaninovic <dzaninovic@catalogicsoftware.com>
2023-09-29 15:28:35 -04:00
Shubham Pampattiwar
554403df5c Merge pull request #6713 from kaovilai/jobs-label-k8s1.27-velero1.12
release-1.12: On restore, delete Kubernetes 1.27 job controller uid label
2023-09-29 11:49:32 -07:00
Tiger Kaovilai
aba64ba151 Remove legacy label version check, to be added back when version is known
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-29 12:27:29 -04:00
Tiger Kaovilai
3a410c9f04 changelog
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-29 12:27:29 -04:00
Tiger Kaovilai
2f92f78be5 Handle 1.27 k8s job label changes
per  0e86fa5115/CHANGELOG/CHANGELOG-1.27.md (L1768)

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-29 12:27:29 -04:00
Shubham Pampattiwar
9d5dd8e09d Merge pull request #6899 from sseago/pvb-start-logs
[release-1.12] cherry-pick Fix some wrong logs and code clean.
2023-09-29 09:27:01 -07:00
yanggang
6103073551 Fix some wrong logs and code clean.
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-09-28 13:27:51 -04:00
Xun Jiang/Bruce Jiang
83f892d81f Add go clean in Dockerfile and action. (#6896)
Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-28 10:20:48 -04:00
Wenkai Yin(尹文开)
2cd15f1e4b Merge pull request #6892 from reasonerjt/cp-6768-1.12
[Cherry-pick v1.12] code clean for repository (#6768)
2023-09-28 17:56:59 +08:00
Yang Gang (成都)
27a89df34d code clean for repository (#6768)
Signed-off-by: Yang Gang (成都) <gang.yang@daocloud.io>
2023-09-28 16:51:22 +08:00
lyndon
e4c2b2b157 Merge pull request #6886 from Lyndon-Li/release-1.12
[1.12] Issue 6880: set ParallelUploadAboveSize as MaxInt64
2023-09-28 15:15:53 +08:00
Lyndon-Li
edefe7a63b issue 6880: set ParallelUploadAboveSize as MaxInt64
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-28 14:26:46 +08:00
Wenkai Yin(尹文开)
a097094bcf Merge pull request #6881 from ywk253100/230927_cherry_pick
Add 'orLabelSelector' for backup, restore command
2023-09-28 07:21:13 +08:00
Wenkai Yin(尹文开)
bc4dc6c0c8 Rename the changelog
Rename the changelog

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-27 20:16:48 +08:00
Nilesh Akhade
343e54f1b8 Add 'orLabelSelector' for backup, restore command
Signed-off-by: Nilesh Akhade <nakhade@catalogicsoftware.com>
2023-09-27 20:13:48 +08:00
lyndon
08d44b02a8 Merge pull request #6877 from Lyndon-Li/release-1.12
[1.12] Issue: 6859 move plugin depending podvolume functions to util pkg
2023-09-27 17:46:48 +08:00
Lyndon-Li
a8c76a4a00 issue: move plugin depending podvolume functions to util pkg
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-27 11:29:02 +08:00
lyndon
0623ac363a Merge pull request #6873 from Lyndon-Li/release-1.12
[1.12] Issue 6786:always delete VSC regardless of the deletion policy
2023-09-26 20:56:50 +08:00
Lyndon-Li
1aea12a80c issue 6786:always delete VSC regardless of the deletion policy
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-26 15:53:05 +08:00
Xun Jiang/Bruce Jiang
7112c62e49 Merge pull request #6842 from allenxu404/release-1.12
Modify changelogs for v1.12
2023-09-19 17:53:28 +08:00
allenxu404
dcb891a307 Modify changelogs for v1.12
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-09-19 17:36:36 +08:00
lyndon
21353f00a8 Merge pull request #6841 from Lyndon-Li/release-1.12
[1.12] Doc for multiple snapshot class
2023-09-19 16:45:26 +08:00
Lyndon-Li
5e7114899b doc for multiple snapshot class
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-19 16:28:28 +08:00
Wenkai Yin(尹文开)
b035680ce6 Set data mover related properties for schedule (#6823)
Set data mover related properties for schedule

Fixes #6820

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-14 18:04:48 +08:00
lyndon
9eb133e635 Merge pull request #6821 from reasonerjt/update-kopia-repo-1.12
[cherrypick-1.12]Switch the kopia repo to new org
2023-09-14 11:53:12 +08:00
Daniel Jiang
6f1262d4c6 Switch the kopia repo to new org
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2023-09-14 11:19:43 +08:00
lyndon
48e3278c6c Merge pull request #6814 from allenxu404/release-1.12
[Cherry-pick]Add doc changes after rc1 to v1.12 docs
2023-09-14 10:25:23 +08:00
Qi Xu
acfc6e474f Add doc changes after rc1 to v1.12 docs (#6812)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-09-13 18:43:00 +08:00
qiuming
993d2c775f Merge pull request #6803 from qiuming-best/finalizer-optimize
Optimize of removing dataupload datadownload finalizer
2023-09-12 15:44:21 +08:00
qiuming
b70b01cde9 Merge branch 'release-1.12' into finalizer-optimize 2023-09-12 15:00:58 +08:00
Anshul Ahuja
8b8a5a2bcc use old namespace in resource modifier (#6724) (#6794)
* use old namespace in resource modifier



* add changelog



* update docs



* updated after review



---------

Signed-off-by: lou <alex1988@outlook.com>
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Guang Jiong Lou <7991675+27149chen@users.noreply.github.com>
Co-authored-by: Daniel Jiang <jiangd@vmware.com>
2023-09-12 14:54:03 +08:00
qiuming
5b36cd7e83 Merge branch 'release-1.12' into finalizer-optimize 2023-09-12 11:57:07 +08:00
Ming Qiu
3240fb196c Optimize of removing finalizer no matter the dataupload datadownload cr is been deleted or not
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-09-12 11:55:44 +08:00
Daniel Jiang
d9859d99ba Merge pull request #6793 from Lyndon-Li/release-1.12
[1.12] Add csi snapshot data mover doc
2023-09-08 18:09:31 +08:00
Lyndon-Li
18d4fe45e8 add csi snapshot data movement doc
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 17:51:49 +08:00
qiuming
60d5bb22f7 Merge pull request #6791 from Lyndon-Li/release-1.12
[1.12] Fix issue 6785
2023-09-08 16:32:39 +08:00
Lyndon-Li
9468b8cfa9 fix issue 6785
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 15:18:02 +08:00
lyndon
420562111b Merge pull request #6789 from Lyndon-Li/release-1.12
[1.12] Fix issue 6748 [2]
2023-09-08 15:15:19 +08:00
lyndon
cf0b2e9139 Merge branch 'release-1.12' into release-1.12 2023-09-08 14:57:03 +08:00
lyndon
506415e60c Merge pull request #6787 from kaovilai/fsb-yaml-example-r1.12
cherry-pick: r1.12: Show yaml example of repository password: file-system-backup.md #6783
2023-09-08 12:53:42 +08:00
Lyndon-Li
3733a40637 fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-08 09:53:00 +08:00
Tiger Kaovilai
fe1ade0226 Show yaml example of repository password: file-system-backup.md
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2023-09-07 11:54:10 -04:00
Xun Jiang/Bruce Jiang
86e1a74937 Merge pull request #6778 from allenxu404/release-1.12
[cherry-pick] add note for backup repository password configuration
2023-09-07 10:25:05 +08:00
Shubham Pampattiwar
6260a44e62 add note for backup repository password configuration
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

address PR feedback

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

reword the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

change FS backups to normal backups in the note

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2023-09-06 17:58:57 +08:00
Xun Jiang/Bruce Jiang
06d9bfae8d Merge pull request #6762 from blackpiglet/6750_fix_1.12
[cherry-pick][release-1.12]Fix #6752: add namespace exclude check.
2023-09-06 15:44:09 +08:00
Xun Jiang
4d1617470f Fix #6752: add namespace exclude check.
Add PSA audit and warn labels.

Signed-off-by: Xun Jiang <jxun@vmware.com>
2023-09-06 15:28:17 +08:00
lyndon
1b2c82c9eb Merge pull request #6772 from Lyndon-Li/release-1.12
[1.12] Fix issue 6748
2023-09-06 11:21:19 +08:00
Lyndon-Li
040060082a fix issue 6748
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-06 10:37:47 +08:00
Wenkai Yin(尹文开)
fc653bdfbe Update restore controller logic for restore deletion (#6761)
1. Skip deleting the restore files from storage if the backup/BSL is not found
2. Allow deleting the restore files from storage even though the BSL is readonly

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2023-09-06 08:38:45 +08:00
lyndon
6790a18814 Merge pull request #6758 from Lyndon-Li/release-1.12
[1.12] Fix issue 6753
2023-09-05 11:06:55 +08:00
Lyndon-Li
93995bfd00 fix issue 6753
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-05 10:44:58 +08:00
lyndon
80572934dc Merge pull request #6743 from Lyndon-Li/release-1.12-1
[1.12] Fix issue 6733
2023-09-01 18:40:11 +08:00
lyndon
41d9b67945 Merge branch 'release-1.12' into release-1.12-1 2023-09-01 18:22:05 +08:00
lyndon
a06107ac70 Merge pull request #6742 from Lyndon-Li/release-1.12
[1.12] Fix issue 6709
2023-09-01 18:18:27 +08:00
lyndon
40a94e39ad Merge branch 'release-1.12' into release-1.12-1 2023-09-01 18:07:13 +08:00
lyndon
7ea0d434d6 Merge branch 'release-1.12' into release-1.12 2023-09-01 17:53:16 +08:00
qiuming
6b884ecc39 Merge pull request #6740 from qiuming-best/release-1.12
Fix kopia snapshot policy not work
2023-09-01 17:43:36 +08:00
Lyndon-Li
183f7ac154 fix issue 6733
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 17:30:52 +08:00
lyndon
75bda412a1 fix issue 6709 (#6741)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-09-01 17:12:31 +08:00
Ming Qiu
a2eb10df8f Fix kopia snapshot policy not work
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2023-09-01 16:34:02 +08:00
Daniel Jiang
90bc1abd21 Merge pull request #6725 from anshulahuja98/cherrypick-6704
[1.12] Cherry Pick - add label selector in Resource Modifiers  (#6704)
2023-08-31 15:59:34 +08:00
Guang Jiong Lou
45165503ba add label selector in Resource Modifiers (#6704)
* add label selector in resource modifier

Signed-off-by: lou <alex1988@outlook.com>

* add ut

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

* update after review

Signed-off-by: lou <alex1988@outlook.com>

---------

Signed-off-by: lou <alex1988@outlook.com>
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
2023-08-31 10:52:03 +05:30
qiuming
53530130a5 Fix velero uninstall bug (#6720)
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-30 17:54:30 +08:00
danfengliu
ed256d74dd Merge pull request #6708 from danfengliu/cherry-pick-fix-nightly-issues
[cherry-pick to v1.12] monitor velero logs and fix E2E issues
2023-08-29 15:40:18 +08:00
Xun Jiang/Bruce Jiang
ab28a09a07 Merge branch 'release-1.12' into cherry-pick-fix-nightly-issues 2023-08-29 10:17:53 +08:00
qiuming
90f4cc5497 Merge pull request #6707 from qiuming-best/mqiu-release-1.12
make Velero uninstall backward compatible
2023-08-29 09:03:37 +08:00
qiuming
f505ed709b Merge branch 'release-1.12' into mqiu-release-1.12 2023-08-28 17:31:48 +08:00
Ming
28074e3f37 make velero uninstall backward compatible
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-28 09:31:29 +00:00
danfengl
240f33c09d monitor velero logs and fix E2E issues
1. Capture Velero pod log and K8S cluster event;
2. Fix wrong path of storageclass yaml file issue caused by pert test;
3. Fix change storageclass test issue that no sc named 'default' in EKS cluster;
4. Support AWS credential as config format;
5. Support more E2E script input parameters like standy cluster plugins and provider.

Signed-off-by: danfengl <danfengl@vmware.com>
2023-08-28 06:04:09 +00:00
lyndon
fd08848471 fix issue 6391 (#6703)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2023-08-25 16:36:45 +08:00
qiuming
5f585be24b Merge pull request #6701 from qiuming-best/mqiu-release-1.12
[CherryPick v1.12] Fix delete dataupload datadownload CR failure
2023-08-25 11:45:33 +08:00
Ming
5480acf0a0 [CherryPick v1.12] Fix delete dataupload datadownload failure when Velero uninstall
Signed-off-by: Ming <mqiu@vmware.com>
2023-08-25 03:24:40 +00:00
Daniel Jiang
e2d3e84bab skip subresource in resource discovery (#6688)
Signed-off-by: lou <alex1988@outlook.com>
Co-authored-by: lou <alex1988@outlook.com>
2023-08-23 15:13:34 +08:00
Qi Xu
0c0ccf949b Pin golang/distroless image to latest version (#6679)
Signed-off-by: allenxu404 <qix2@vmware.com>
2023-08-18 19:17:50 +08:00
161 changed files with 4269 additions and 1194 deletions

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.20.10'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.20.10'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
@@ -72,7 +72,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.20.10'
id: go
- name: Check out the code
uses: actions/checkout@v2

View File

@@ -10,7 +10,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.20.10'
id: go
- name: Check out the code
uses: actions/checkout@v2

View File

@@ -18,7 +18,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.20.10'
id: go
- uses: actions/checkout@v3
@@ -48,7 +48,10 @@ jobs:
version: latest
- name: Build
run: make local
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.20-bullseye as velero-builder
FROM --platform=$BUILDPLATFORM golang:1.20.10-bullseye as velero-builder
ARG GOPROXY
ARG BIN
@@ -43,10 +43,11 @@ RUN mkdir -p /output/usr/bin && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.20-bullseye as restic-builder
FROM --platform=$BUILDPLATFORM golang:1.20.10-bullseye as restic-builder
ARG BIN
ARG TARGETOS
@@ -65,10 +66,11 @@ COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
# Velero image packing section
FROM gcr.io/distroless/base-nossl-debian11:nonroot
FROM paketobuildpacks/run-jammy-tiny:0.2.5
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
@@ -76,5 +78,5 @@ COPY --from=velero-builder /output /
COPY --from=restic-builder /output /
USER nonroot:nonroot
USER cnb:cnb

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.20 as tilt-helper
FROM golang:1.20.10 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -100,7 +100,7 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Enable staticcheck linter. (#5788, @blackpiglet)
* Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best)
* Bump up Restic version to 0.15.0 (#5784, @qiuming-best)
* Add File system backup related matrics to Grafana dashboard
* Add File system backup related metrics to Grafana dashboard
- Add metrics backup_warning_total for record of total warnings
- Add metrics backup_last_status for record of last status of the backup (#5779, @allenxu404)
* Design for Handling backup of volumes by resources filters (#5773, @qiuming-best)

View File

@@ -1,3 +1,48 @@
## v1.12.1
### 2023-10-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.1
### Container Image
`velero/velero:v1.12.1`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### Highlights
#### Data Mover Adds Support for Block Mode Volumes
For PersistentVolumes with `volumeMode` set to `Block`, the volumes are mounted as raw block devices in pods. In 1.12.1, Velero CSI snapshot data movement supports backing up and restoring these volumes on Linux-based Kubernetes clusters.
#### New Parameter in Installation to Enable Data Mover
The `velero install` sub-command now includes a new parameter, `--default-snapshot-move-data`, which configures the Velero server to move data by default for all snapshots that support data movement. This is useful for users who always want to use VBDM for backups instead of plain CSI, as they no longer need to specify the `--snapshot-move-data` flag for each individual backup.
#### Velero Base Image change
The base image previously used by Velero was `distroless`, which contains several CVEs that cannot be addressed quickly. As a result, Velero will now use a `paketobuildpacks` image starting from this new version.
### Limitations/Known issues
* The data mover's support for block mode volumes is currently only applicable to Linux environments.
### All changes
* Import auth provider plugins (#6970, @0x113)
* Perf improvements for existing resource restore (#6948, @sseago)
* Retry failed create when using generateName (#6943, @sseago)
* Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6940, @Lyndon-Li)
* Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6935, @Lyndon-Li)
* Replace the base image with paketobuildpacks image (#6934, @ywk253100)
* Add support for block volumes with Kopia (#6897, @dzaninovic)
* Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6886, @Lyndon-Li)
* The Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid is deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6713, @kaovilai)
* Add `orLabelSelectors` for backup, restore commands (#6881, @nilesh-akhade)
* Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies on unnecessary repository packages like kopia, azure, etc. (#6877, @Lyndon-Li)
* Fix issue #6786, always delete VSC regardless of the deletion policy (#6873, @Lyndon-Li)
* Fix #6988, always get region from BSL if it is not empty (#6991, @Lyndon-Li)
* Add both non-Windows version and Windows version code for PVC block mode logic. (#6986, @blackpiglet)
## v1.12
### 2023-08-18
@@ -23,17 +68,17 @@ CSI Snapshot Data Movement is useful in below scenarios:
* For on-premises users, the storage usually doesn't support durable snapshots, so it is impossible, less efficient, or cost-ineffective to keep volume snapshots in the storage. This feature helps to move the snapshot data to a storage with lower cost and larger scale for long-time preservation.
* For public cloud users, this feature helps to fulfill a multi-cloud strategy. It allows users to back up volume snapshots from one cloud provider and preserve or restore the data to another cloud provider. Users are then free to move their business data across cloud providers based on Velero backup and restore.
CSI Snapshot Data Movement is built according to the Volume Snapshot Data Movement design ([Volume Snapshot Data Movement](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md)). More details can be found in the design.
CSI Snapshot Data Movement is built according to the Volume Snapshot Data Movement design ([Volume Snapshot Data Movement design](https://github.com/vmware-tanzu/velero/blob/main/design/volume-snapshot-data-movement/volume-snapshot-data-movement.md)). Additionally, guidance on how to use the feature can be found in the Volume Snapshot Data Movement doc([Volume Snapshot Data Movement doc](https://velero.io/docs/v1.12/csi-snapshot-data-movement)).
#### Resource Modifiers
In many use cases, customers often need to substitute specific values in Kubernetes resources during the restoration process like changing the namespace, changing the storage class, etc.
To address this need, Resource Modifiers (also known as JSON Substitutions) offer a generic solution in the restore workflow. It allows the user to define filters for specific resources and then specify a JSON patch (operator, path, value) to apply to the resource. This feature simplifies the process of making substitutions without requiring the implementation of a new RestoreItemAction plugin. More details can be found in Volume Snapshot Resource Modifiers design ([Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/json-substitution-action-design.md)).
To address this need, Resource Modifiers (also known as JSON Substitutions) offer a generic solution in the restore workflow. It allows the user to define filters for specific resources and then specify a JSON patch (operator, path, value) to apply to the resource. This feature simplifies the process of making substitutions without requiring the implementation of a new RestoreItemAction plugin. More design details can be found in Resource Modifiers design ([Resource Modifiers design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/json-substitution-action-design.md)). For instructions on how to use the feature, please refer to Resource Modifiers doc([Resource Modifiers doc](https://velero.io/docs/v1.12/restore-resource-modifiers)).
#### Multiple VolumeSnapshotClasses
Prior to version 1.12, the Velero CSI plugin would choose the VolumeSnapshotClass in the cluster based on matching driver names and the presence of the "velero.io/csi-volumesnapshot-class" label. However, this approach proved inadequate for many user scenarios.
With the introduction of version 1.12, Velero now offers support for multiple VolumeSnapshotClasses in the CSI Plugin, enabling users to select a specific class for a particular backup. More details can be found in Multiple VolumeSnapshotClasses design ([Multiple VolumeSnapshotClasses](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/multiple-csi-volumesnapshotclass-support.md)).
With the introduction of version 1.12, Velero now offers support for multiple VolumeSnapshotClasses in the CSI Plugin, enabling users to select a specific class for a particular backup. More design details can be found in Multiple VolumeSnapshotClasses design ([Multiple VolumeSnapshotClasses design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/multiple-csi-volumesnapshotclass-support.md)). For instructions on how to use the feature, please refer to Multiple VolumeSnapshotClasses doc ([Multiple VolumeSnapshotClasses doc](https://velero.io/docs/v1.12/csi/#implementation-choices)).
#### Restore Finalizer
Before v1.12, the restore controller would only delete restore resources but wouldn't delete restore data from the backup storage location when the command `velero restore delete` was executed. The only time Velero deleted restore data from the backup storage location was when the associated backup was deleted.
@@ -51,10 +96,12 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Prior to v1.12, the parameter `uploader-type` for Velero installation had a default value of "restic". However, starting from this version, the default value has been changed to "kopia". This means that Velero will now use Kopia as the default path for file system backup.
* The ways of setting CSI snapshot time have changed in v1.12. First, the sync waiting time for creating a snapshot handle in the CSI plugin is changed from a fixed 10 minutes to `backup.Spec.CSISnapshotTimeout`. Second, the async waiting time for the VolumeSnapshot and VolumeSnapshotContent status to turn `ReadyToUse` in operation uses the operation's timeout, with a default value of 4 hours.
* As of [Velero helm chart v4.0.0](https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-4.0.0), multiple BSLs and VSLs are supported, and the BSL and VSL have changed from a map into a slice; [this breaking change](https://github.com/vmware-tanzu/helm-charts/pull/413) is not backward compatible. It would be best to change the BSL and VSL configuration into slices before the upgrade.
* Prior to v1.12, deleting the Velero namespace would easily remove all the resources within it. However, with the introduction of finalizers attached to the Velero CR including `restore`, `dataupload`, and `datadownload` in this version, directly deleting Velero namespace may get stuck indefinitely because the pods responsible for handling the finalizers might be deleted before the resources attached to the finalizers. To avoid this issue, please use the command `velero uninstall` to delete all the Velero resources or ensure that you handle the finalizer appropriately before deleting the Velero namespace.
### Limitations/Known issues
* The Azure plugin supports the Azure AD Workload Identity method, but it only works for Velero native snapshots. It cannot support filesystem backup and snapshot data mover scenarios.
* File System Backup under the Kopia path and CSI Snapshot Data Movement backups fail to back up files that are larger than 2GiB due to issue https://github.com/vmware-tanzu/velero/issues/6668.
### All Changes
@@ -132,3 +179,10 @@ prior PVC restores with CSI (#6111, @eemcmullan)
* Make GetPluginConfig accessible from other packages. (#6151, @tkaovila)
* Ignore not found error during patching managedFields (#6136, @ywk253100)
* Fix the goreleaser issues and add a new goreleaser action (#6109, @blackpiglet)
* Add CSI snapshot data movement doc (#6793, @Lyndon-Li)
* Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen)
* Fix #6752: add namespace exclude check. (#6762, @blackpiglet)
* Update restore controller logic for restore deletion (#6761, @ywk253100)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6758, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6688, @27149chen)
* This PR made some improvements in Resource Modifiers: 1. add label selector; 2. change the field name from groupKind to groupResource (#6704, @27149chen)

View File

@@ -61,7 +61,7 @@ in progress for 1.9.
* Add rbac and annotation test cases (#4455, @mqiu)
* remove --crds-version in velero install command. (#4446, @jxun)
* Upgrade e2e test vsphere plugin (#4440, @mqiu)
* Fix e2e test failures for the inappropriate optimaze of velero install (#4438, @mqiu)
* Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu)
* Limit backup namespaces on test resource filtering cases (#4437, @mqiu)
* Bump up Go to 1.17 (#4431, @reasonerjt)
* Added `<backup name>`-itemsnapshots.json.gz to the backup format. This file exists


@@ -49,6 +49,9 @@ spec:
- mountPath: /host_pods
mountPropagation: HostToContainer
name: host-pods
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
- mountPath: /scratch
name: scratch
- mountPath: /credentials
@@ -60,6 +63,9 @@ spec:
- hostPath:
path: /var/lib/kubelet/pods
name: host-pods
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
- emptyDir: {}
name: scratch
- name: cloud-credentials


@@ -175,7 +175,7 @@ If there are one or more, download the backup tarball from backup storage, untar
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementers to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.


@@ -26,7 +26,7 @@ Currently velero supports substituting certain values in the K8s resources durin
<!-- ## Background -->
## Goals
- Allow the user to specify a GroupKind, Name(optional), JSON patch for modification.
- Allow the user to specify a GroupResource, Name(optional), JSON patch for modification.
- Allow the user to specify multiple JSON patches.
## Non Goals
@@ -74,7 +74,7 @@ velero restore create --from-backup backup-1 --resource-modifier-configmap resou
### Resource Modifier ConfigMap Structure
- User first needs to provide details on which resources the JSON Substitutions need to be applied.
- For this the user will provide 4 inputs - Namespaces(for NS Scoped resources), GroupKind (kind.group format similar to includeResources field in velero) and Name Regex(optional).
- For this the user will provide 4 inputs - Namespaces(for NS Scoped resources), GroupResource (resource.group format similar to includeResources field in velero) and Name Regex(optional).
- If the user does not provide the Name, the JSON Substitutions will be applied to all the resources of the given Group and Kind under the given namespaces.
- Further, the user will specify the JSON Patch using the structure of kubectl's "JSON Patch" based inputs.
@@ -83,7 +83,7 @@ velero restore create --from-backup backup-1 --resource-modifier-configmap resou
version: v1
resourceModifierRules:
- conditions:
groupKind: persistentvolumeclaims
groupResource: persistentvolumeclaims
resourceNameRegex: "mysql.*"
namespaces:
- bar
@@ -96,6 +96,7 @@ resourceModifierRules:
path: "/metadata/labels/test"
```
- The above configmap will apply the JSON Patch to all the PVCs in the namespaces bar and foo with name starting with mysql. The JSON Patch will replace the storageClassName with "premium" and remove the label "test" from the PVCs.
- Note that the Namespace here is the original namespace of the backed up resource, not the new namespace where the resource is going to be restored.
- The user can specify multiple JSON Patches for a particular resource. The patches will be applied in the order specified in the configmap; if multiple patches target the same path, the last patch overrides the previous ones.
- The user can specify multiple resourceModifierRules in the configmap. The rules will be applied in the order specified in the configmap.
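For instance, the ordering rule above means that in the following configmap sketch (illustrative only, reusing the fields shown earlier), the PVC ends up with `storageClassName: premium2`, because the second patch on the same path is applied last:

```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims
    namespaces:
    - foo
  patches:
  - operation: replace
    path: "/spec/storageClassName"
    value: "premium"
  - operation: replace
    path: "/spec/storageClassName"
    value: "premium2"   # applied last, so this value wins
```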
@@ -119,7 +120,7 @@ kubectl create cm <configmap-name> --from-file <yaml-file> -n velero
version: v1
resourceModifierRules:
- conditions:
groupKind: persistentvolumeclaims.storage.k8s.io
groupResource: persistentvolumeclaims.storage.k8s.io
resourceNameRegex: ".*"
namespaces:
- bar


@@ -67,12 +67,12 @@ The Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has th
metadata:
name: backup-1
annotations:
velero.io/csi-volumesnapshot-class/csi.cloud.disk.driver: csi-diskdriver-snapclass
velero.io/csi-volumesnapshot-class/csi.cloud.file.driver: csi-filedriver-snapclass
velero.io/csi-volumesnapshot-class/<driver name>: csi-snapclass
velero.io/csi-volumesnapshot-class_csi.cloud.disk.driver: csi-diskdriver-snapclass
velero.io/csi-volumesnapshot-class_csi.cloud.file.driver: csi-filedriver-snapclass
velero.io/csi-volumesnapshot-class_<driver name>: csi-snapclass
```
To query the annotations on a backup: "velero.io/csi-volumesnapshot-class/'driver name'" - where driver names comes from the PVC's driver.
To query the annotations on a backup: "velero.io/csi-volumesnapshot-class_'driver name'" - where the driver name comes from the PVC's driver.
2. **Support VolumeSnapshotClass selection at PVC level**
The user can annotate the PVCs with driver and VolumeSnapshotClass name. The CSI plugin will use the VolumeSnapshotClass specified in the annotation. If the annotation is not present, the CSI plugin will use the default VolumeSnapshotClass for the driver. If the VolumeSnapshotClass provided is of a different driver, the CSI plugin will use the default VolumeSnapshotClass for the driver.
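A sketch of the PVC-level selection described above. The exact annotation key is an assumption here (mirroring the backup-level annotation shown earlier), since this excerpt only states that the PVC is annotated with the VolumeSnapshotClass name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  annotations:
    # assumed key; the design above only says the PVC is annotated
    # with the driver's VolumeSnapshotClass name
    velero.io/csi-volumesnapshot-class: csi-diskdriver-snapclass
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```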


@@ -703,33 +703,38 @@ type Provider interface {
In this case, we will extend the default kopia uploader to add the ability, when a given volume is for a block mode and is mapped as a device, we will use the [StreamingFile](https://pkg.go.dev/github.com/kopia/kopia@v0.13.0/fs#StreamingFile) to stream the device and backup to the kopia repository.
```go
func getLocalBlockEntry(kopiaEntry fs.Entry, log logrus.FieldLogger) (fs.Entry, error) {
path := kopiaEntry.LocalFilesystemPath()
fileInfo, err := os.Lstat(path)
func getLocalBlockEntry(sourcePath string) (fs.Entry, error) {
source, err := resolveSymlink(sourcePath)
if err != nil {
return nil, errors.Wrapf(err, "Unable to get the source device information %s", path)
return nil, errors.Wrap(err, "resolveSymlink")
}
fileInfo, err := os.Lstat(source)
if err != nil {
return nil, errors.Wrapf(err, "unable to get the source device information %s", source)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return nil, errors.Errorf("Source path %s is not a block device", path)
return nil, errors.Errorf("source path %s is not a block device", source)
}
device, err := os.Open(path)
device, err := os.Open(source)
if err != nil {
if os.IsPermission(err) || err.Error() == ErrNotPermitted {
return nil, errors.Wrapf(err, "No permission to open the source device %s, make sure that node agent is running in privileged mode", path)
return nil, errors.Wrapf(err, "no permission to open the source device %s, make sure that node agent is running in privileged mode", source)
}
return nil, errors.Wrapf(err, "Unable to open the source device %s", path)
return nil, errors.Wrapf(err, "unable to open the source device %s", source)
}
return virtualfs.StreamingFileFromReader(kopiaEntry.Name(), device), nil
sf := virtualfs.StreamingFileFromReader(source, device)
return virtualfs.NewStaticDirectory(source, []fs.Entry{sf}), nil
}
```
In `pkg/uploader/kopia/snapshot.go`, this is used in the Backup call like:
```go
if volMode == PersistentVolumeFilesystem {
if volMode == uploader.PersistentVolumeFilesystem {
// to be consistent with restic when backup empty dir returns one error for upper logic handle
dirs, err := os.ReadDir(source)
if err != nil {
@@ -742,15 +747,17 @@ In the `pkg/uploader/kopia/snapshot.go` this is used in the Backup call like
source = filepath.Clean(source)
...
sourceEntry, err := getLocalFSEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "Unable to get local filesystem entry")
}
var sourceEntry fs.Entry
if volMode == PersistentVolumeBlock {
sourceEntry, err = getLocalBlockEntry(sourceEntry, log)
if volMode == uploader.PersistentVolumeBlock {
sourceEntry, err = getLocalBlockEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "Unable to get local block device entry")
return nil, false, errors.Wrap(err, "unable to get local block device entry")
}
} else {
sourceEntry, err = getLocalFSEntry(source)
if err != nil {
return nil, false, errors.Wrap(err, "unable to get local filesystem entry")
}
}
@@ -766,6 +773,8 @@ We only need to extend two functions; the rest will be passed through.
```go
type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
}
var _ restore.Output = &BlockOutput{}
@@ -773,30 +782,15 @@ var _ restore.Output = &BlockOutput{}
const bufferSize = 128 * 1024
func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remoteFile fs.File) error {
targetFileName, err := filepath.EvalSymlinks(o.TargetPath)
if err != nil {
return errors.Wrapf(err, "Unable to evaluate symlinks for %s", targetFileName)
}
fileInfo, err := os.Lstat(targetFileName)
if err != nil {
return errors.Wrapf(err, "Unable to get the target device information for %s", targetFileName)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return errors.Errorf("Target file %s is not a block device", targetFileName)
}
remoteReader, err := remoteFile.Open(ctx)
if err != nil {
return errors.Wrapf(err, "Failed to open remote file %s", remoteFile.Name())
return errors.Wrapf(err, "failed to open remote file %s", remoteFile.Name())
}
defer remoteReader.Close()
targetFile, err := os.Create(targetFileName)
targetFile, err := os.Create(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "Failed to open file %s", targetFileName)
return errors.Wrapf(err, "failed to open file %s", o.targetFileName)
}
defer targetFile.Close()
@@ -807,7 +801,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
bytesToWrite, err := remoteReader.Read(buffer)
if err != nil {
if err != io.EOF {
return errors.Wrapf(err, "Failed to read data from remote file %s", targetFileName)
return errors.Wrapf(err, "failed to read data from remote file %s", o.targetFileName)
}
readData = false
}
@@ -819,7 +813,7 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
bytesToWrite -= bytesWritten
offset += bytesWritten
} else {
return errors.Wrapf(err, "Failed to write data to file %s", targetFileName)
return errors.Wrapf(err, "failed to write data to file %s", o.targetFileName)
}
}
}
@@ -829,42 +823,43 @@ func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remote
}
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
targetFileName, err := filepath.EvalSymlinks(o.TargetPath)
var err error
o.targetFileName, err = filepath.EvalSymlinks(o.TargetPath)
if err != nil {
return errors.Wrapf(err, "Unable to evaluate symlinks for %s", targetFileName)
return errors.Wrapf(err, "unable to evaluate symlinks for %s", o.targetFileName)
}
fileInfo, err := os.Lstat(targetFileName)
fileInfo, err := os.Lstat(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "Unable to get the target device information for %s", o.TargetPath)
return errors.Wrapf(err, "unable to get the target device information for %s", o.TargetPath)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return errors.Errorf("Target file %s is not a block device", o.TargetPath)
return errors.Errorf("target file %s is not a block device", o.TargetPath)
}
return nil
}
```
Of note, we do need to add root access to the daemon set node agent to access the new mount.
An additional mount is required in the node-agent specification to resolve symlinks to the block devices from the /host_pods/POD_ID/volumeDevices/kubernetes.io~csi directory.
```yaml
...
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
....
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
```
...
Privileged mode is required to access the block devices in /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish directory as confirmed by testing on EKS and Minikube.
```yaml
SecurityContext: &corev1.SecurityContext{
Privileged: &c.privilegedAgent,
Privileged: &c.privilegedNodeAgent,
},
```
## Plugin Data Movers

go.mod

@@ -37,9 +37,9 @@ require (
go.uber.org/zap v1.24.0
golang.org/x/exp v0.0.0-20221028150844-83b7d23a625f
golang.org/x/mod v0.10.0
golang.org/x/net v0.9.0
golang.org/x/net v0.17.0
golang.org/x/oauth2 v0.7.0
golang.org/x/text v0.9.0
golang.org/x/text v0.13.0
google.golang.org/api v0.120.0
google.golang.org/grpc v1.54.0
google.golang.org/protobuf v1.30.0
@@ -140,10 +140,10 @@ require (
go.starlark.net v0.0.0-20201006213952-227f4aabceb5 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/term v0.7.0 // indirect
golang.org/x/sys v0.13.0 // indirect
golang.org/x/term v0.13.0 // indirect
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
@@ -158,3 +158,5 @@ require (
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
)
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.13.0-velero.1

go.sum

@@ -493,8 +493,6 @@ github.com/klauspost/reedsolomon v1.11.7/go.mod h1:4bXRN+cVzMdml6ti7qLouuYi32KHJ
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kopia/htmluibuild v0.0.0-20230326183719-f482ef17e2c9 h1:s5Wa89s8RlPjuwqd8K8kuf+T9Kz4+NsbKwR/pJ3PAT0=
github.com/kopia/kopia v0.13.0 h1:efxs/vw1cS9HldlHcQ8TPxlsYz+cWCkiS4IWMbR3D1s=
github.com/kopia/kopia v0.13.0/go.mod h1:Iic7CcKhsq+A7MLR9hh6VJfgpcJhLx3Kn+BgjY+azvI=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -617,6 +615,8 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/project-velero/kopia v0.13.0-velero.1 h1:rjiP7os4Eaek/gbyIKqzwkp2XLbJHl0NkczSgJ7AOEo=
github.com/project-velero/kopia v0.13.0-velero.1/go.mod h1:Iic7CcKhsq+A7MLR9hh6VJfgpcJhLx3Kn+BgjY+azvI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
@@ -822,8 +822,8 @@ golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a/go.mod h1:P+XmwS30IXTQdn5
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -923,8 +923,8 @@ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1037,15 +1037,15 @@ golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.7.0 h1:BEvjmm5fURWqcfbSKTdpkDXYBrUS1c0m8agp14W48vQ=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1056,8 +1056,8 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=linux/amd64 golang:1.20-bullseye
FROM --platform=linux/amd64 golang:1.20.10-bullseye
ARG GOPROXY


@@ -89,7 +89,7 @@ else
fi
if [[ -z "$BUILDX_PLATFORMS" ]]; then
BUILDX_PLATFORMS="linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le"
BUILDX_PLATFORMS="linux/amd64,linux/arm64"
fi
# Debugging info


@@ -1,36 +1,49 @@
diff --git a/go.mod b/go.mod
index 5f939c481..6f281b45d 100644
index 5f939c481..3f5ea4096 100644
--- a/go.mod
+++ b/go.mod
@@ -25,12 +25,12 @@ require (
@@ -24,13 +24,13 @@ require (
github.com/restic/chunker v0.4.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
golang.org/x/crypto v0.5.0
- golang.org/x/crypto v0.5.0
- golang.org/x/net v0.5.0
+ golang.org/x/net v0.7.0
+ golang.org/x/crypto v0.14.0
+ golang.org/x/net v0.17.0
golang.org/x/oauth2 v0.4.0
golang.org/x/sync v0.1.0
- golang.org/x/sys v0.4.0
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
+ golang.org/x/sys v0.5.0
+ golang.org/x/term v0.5.0
+ golang.org/x/text v0.7.0
+ golang.org/x/sys v0.13.0
+ golang.org/x/term v0.13.0
+ golang.org/x/text v0.13.0
google.golang.org/api v0.106.0
)
diff --git a/go.sum b/go.sum
index 026e1d2fa..da35b7a6c 100644
index 026e1d2fa..f24fef5b8 100644
--- a/go.sum
+++ b/go.sum
@@ -172,8 +172,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
+golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -189,8 +189,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
+golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
+golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -40,21 +53,21 @@ index 026e1d2fa..da35b7a6c 100644
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
+golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
+golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
+golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=


@@ -2,7 +2,6 @@ package resourcemodifiers
import (
"fmt"
"io"
"regexp"
"strconv"
"strings"
@@ -10,9 +9,11 @@ import (
jsonpatch "github.com/evanphx/json-patch"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/yaml"
"github.com/vmware-tanzu/velero/pkg/util/collections"
)
@@ -23,26 +24,27 @@ const (
)
type JSONPatch struct {
Operation string `yaml:"operation"`
From string `yaml:"from,omitempty"`
Path string `yaml:"path"`
Value string `yaml:"value,omitempty"`
Operation string `json:"operation"`
From string `json:"from,omitempty"`
Path string `json:"path"`
Value string `json:"value,omitempty"`
}
type Conditions struct {
Namespaces []string `yaml:"namespaces,omitempty"`
GroupKind string `yaml:"groupKind"`
ResourceNameRegex string `yaml:"resourceNameRegex"`
Namespaces []string `json:"namespaces,omitempty"`
GroupResource string `json:"groupResource"`
ResourceNameRegex string `json:"resourceNameRegex,omitempty"`
LabelSelector *metav1.LabelSelector `json:"labelSelector,omitempty"`
}
type ResourceModifierRule struct {
Conditions Conditions `yaml:"conditions"`
Patches []JSONPatch `yaml:"patches"`
Conditions Conditions `json:"conditions"`
Patches []JSONPatch `json:"patches"`
}
type ResourceModifiers struct {
Version string `yaml:"version"`
ResourceModifierRules []ResourceModifierRule `yaml:"resourceModifierRules"`
Version string `json:"version"`
ResourceModifierRules []ResourceModifierRule `json:"resourceModifierRules"`
}
func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error) {
@@ -50,7 +52,7 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
return nil, fmt.Errorf("could not parse config from nil configmap")
}
if len(cm.Data) != 1 {
return nil, fmt.Errorf("illegal resource modifiers %s/%s configmap", cm.Name, cm.Namespace)
return nil, fmt.Errorf("illegal resource modifiers %s/%s configmap", cm.Namespace, cm.Name)
}
var yamlData string
@@ -58,7 +60,7 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
yamlData = v
}
resModifiers, err := unmarshalResourceModifiers(&yamlData)
resModifiers, err := unmarshalResourceModifiers([]byte(yamlData))
if err != nil {
return nil, errors.WithStack(err)
}
@@ -83,9 +85,11 @@ func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResour
if !namespaceInclusion.ShouldInclude(obj.GetNamespace()) {
return nil
}
if !strings.EqualFold(groupResource, r.Conditions.GroupKind) {
if r.Conditions.GroupResource != groupResource {
return nil
}
if r.Conditions.ResourceNameRegex != "" {
match, err := regexp.MatchString(r.Conditions.ResourceNameRegex, obj.GetName())
if err != nil {
@@ -95,6 +99,17 @@ func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResour
return nil
}
}
if r.Conditions.LabelSelector != nil {
selector, err := metav1.LabelSelectorAsSelector(r.Conditions.LabelSelector)
if err != nil {
return errors.Errorf("error in creating label selector %s", err.Error())
}
if !selector.Matches(labels.Set(obj.GetLabels())) {
return nil
}
}
patches, err := r.PatchArrayToByteArray()
if err != nil {
return err
@@ -107,7 +122,7 @@ func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResour
return nil
}
// convert all JsonPatch to string array with the format of jsonpatch.Patch and then convert it to byte array
// PatchArrayToByteArray converts all JsonPatch to string array with the format of jsonpatch.Patch and then convert it to byte array
func (r *ResourceModifierRule) PatchArrayToByteArray() ([]byte, error) {
var patches []string
for _, patch := range r.Patches {
@@ -148,22 +163,15 @@ func ApplyPatch(patch []byte, obj *unstructured.Unstructured, log logrus.FieldLo
return nil
}
func unmarshalResourceModifiers(yamlData *string) (*ResourceModifiers, error) {
func unmarshalResourceModifiers(yamlData []byte) (*ResourceModifiers, error) {
resModifiers := &ResourceModifiers{}
err := decodeStruct(strings.NewReader(*yamlData), resModifiers)
err := yaml.UnmarshalStrict(yamlData, resModifiers)
if err != nil {
return nil, fmt.Errorf("failed to decode yaml data into resource modifiers %v", err)
}
return resModifiers, nil
}
// decodeStruct restrict validate the keys in decoded mappings to exist as fields in the struct being decoded into
func decodeStruct(r io.Reader, s interface{}) error {
dec := yaml.NewDecoder(r)
dec.KnownFields(true)
return dec.Decode(s)
}
func addQuotes(value string) bool {
if value == "" {
return true


@@ -18,7 +18,7 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
Namespace: "test-namespace",
},
Data: map[string]string{
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupKind: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupResource: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
},
}
@@ -27,7 +27,7 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -51,7 +51,7 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
Namespace: "test-namespace",
},
Data: map[string]string{
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupKind: deployments.apps\n resourceNameRegex: \"^test-.*$\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: add\n path: \"/spec/template/spec/containers/0\"\n value: \"{\\\"name\\\": \\\"nginx\\\", \\\"image\\\": \\\"nginx:1.14.2\\\", \\\"ports\\\": [{\\\"containerPort\\\": 80}]}\"\n - operation: copy\n from: \"/spec/template/spec/containers/0\"\n path: \"/spec/template/spec/containers/1\"\n\n\n",
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupResource: deployments.apps\n resourceNameRegex: \"^test-.*$\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: add\n path: \"/spec/template/spec/containers/0\"\n value: \"{\\\"name\\\": \\\"nginx\\\", \\\"image\\\": \\\"nginx:1.14.2\\\", \\\"ports\\\": [{\\\"containerPort\\\": 80}]}\"\n - operation: copy\n from: \"/spec/template/spec/containers/0\"\n path: \"/spec/template/spec/containers/1\"\n\n\n",
},
}
@@ -60,7 +60,7 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: "^test-.*$",
Namespaces: []string{"bar", "foo"},
},
@@ -86,7 +86,33 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
Namespace: "test-namespace",
},
Data: map[string]string{
"sub.yml": "version1: v1\nresourceModifierRules:\n- conditions:\n groupKind: deployments.apps\n resourceNameRegex: \"^test-.*$\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: add\n path: \"/spec/template/spec/containers/0\"\n value: \"{\\\"name\\\": \\\"nginx\\\", \\\"image\\\": \\\"nginx:1.14.2\\\", \\\"ports\\\": [{\\\"containerPort\\\": 80}]}\"\n - operation: copy\n from: \"/spec/template/spec/containers/0\"\n path: \"/spec/template/spec/containers/1\"\n\n\n",
"sub.yml": "version1: v1\nresourceModifierRules:\n- conditions:\n groupResource: deployments.apps\n resourceNameRegex: \"^test-.*$\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: add\n path: \"/spec/template/spec/containers/0\"\n value: \"{\\\"name\\\": \\\"nginx\\\", \\\"image\\\": \\\"nginx:1.14.2\\\", \\\"ports\\\": [{\\\"containerPort\\\": 80}]}\"\n - operation: copy\n from: \"/spec/template/spec/containers/0\"\n path: \"/spec/template/spec/containers/1\"\n\n\n",
},
}
cm4 := &v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "test-configmap",
Namespace: "test-namespace",
},
Data: map[string]string{
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupResource: deployments.apps\n labelSelector:\n matchLabels:\n a: b\n",
},
}
rules4 := &ResourceModifiers{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupResource: "deployments.apps",
LabelSelector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"a": "b",
},
},
},
},
},
}
@@ -123,6 +149,14 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
want: nil,
wantErr: true,
},
{
name: "match labels",
args: args{
cm: cm4,
},
want: rules4,
wantErr: false,
},
{
name: "nil configmap",
args: args{
@@ -183,6 +217,9 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
"metadata": map[string]interface{}{
"name": "test-deployment",
"namespace": "foo",
"labels": map[string]interface{}{
"app": "nginx",
},
},
"spec": map[string]interface{}{
"replicas": int64(1),
@@ -211,6 +248,9 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
"metadata": map[string]interface{}{
"name": "test-deployment",
"namespace": "foo",
"labels": map[string]interface{}{
"app": "nginx",
},
},
"spec": map[string]interface{}{
"replicas": int64(2),
@@ -239,6 +279,9 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
"metadata": map[string]interface{}{
"name": "test-deployment",
"namespace": "foo",
"labels": map[string]interface{}{
"app": "nginx",
},
},
"spec": map[string]interface{}{
"replicas": int64(1),
@@ -287,7 +330,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: "[a-z",
Namespaces: []string{"foo"},
},
@@ -320,7 +363,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"foo"},
},
@@ -353,7 +396,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: "^test-.*$",
Namespaces: []string{"foo"},
},
@@ -386,7 +429,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: "^test-.*$",
Namespaces: []string{"foo"},
},
@@ -419,8 +462,8 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
Namespaces: []string{"foo"},
GroupResource: "deployments.apps",
Namespaces: []string{"foo"},
},
Patches: []JSONPatch{
{
@@ -451,7 +494,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
},
Patches: []JSONPatch{
{
@@ -482,7 +525,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: ".*",
Namespaces: []string{"bar"},
},
@@ -515,7 +558,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: "^test-.*$",
Namespaces: []string{"foo"},
},
@@ -543,7 +586,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "deployments.apps",
GroupResource: "deployments.apps",
ResourceNameRegex: "^test-.*$",
Namespaces: []string{"foo"},
},
@@ -579,6 +622,80 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
wantErr: false,
wantObj: deployNginxMysql.DeepCopy(),
},
{
name: "nginx deployment: match label selector",
fields: fields{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupResource: "deployments.apps",
Namespaces: []string{"foo"},
LabelSelector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "nginx",
},
},
},
Patches: []JSONPatch{
{
Operation: "test",
Path: "/spec/replicas",
Value: "1",
},
{
Operation: "replace",
Path: "/spec/replicas",
Value: "2",
},
},
},
},
},
args: args{
obj: deployNginxOneReplica.DeepCopy(),
groupResource: "deployments.apps",
},
wantErr: false,
wantObj: deployNginxTwoReplica.DeepCopy(),
},
{
name: "nginx deployment: mismatch label selector",
fields: fields{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupResource: "deployments.apps",
Namespaces: []string{"foo"},
LabelSelector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "nginx-mismatch",
},
},
},
Patches: []JSONPatch{
{
Operation: "test",
Path: "/spec/replicas",
Value: "1",
},
{
Operation: "replace",
Path: "/spec/replicas",
Value: "2",
},
},
},
},
},
args: args{
obj: deployNginxOneReplica.DeepCopy(),
groupResource: "deployments.apps",
},
wantErr: false,
wantObj: deployNginxOneReplica.DeepCopy(),
},
}
for _, tt := range tests {


@@ -1,9 +1,8 @@
package resourcemodifiers
import (
"strings"
"fmt"
"strings"
)
func (r *ResourceModifierRule) Validate() error {
@@ -48,8 +47,8 @@ func (p *JSONPatch) Validate() error {
}
func (c *Conditions) Validate() error {
if c.GroupKind == "" {
return fmt.Errorf("groupkind cannot be empty")
if c.GroupResource == "" {
return fmt.Errorf("groupResource cannot be empty")
}
return nil
}


@@ -21,7 +21,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -44,7 +44,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -75,7 +75,7 @@ func TestResourceModifiers_Validate(t *testing.T) {
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "persistentvolumeclaims",
GroupResource: "persistentvolumeclaims",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},
@@ -92,13 +92,13 @@ func TestResourceModifiers_Validate(t *testing.T) {
wantErr: true,
},
{
name: "Condition has empty GroupKind",
name: "Condition has empty GroupResource",
fields: fields{
Version: "v1",
ResourceModifierRules: []ResourceModifierRule{
{
Conditions: Conditions{
GroupKind: "",
GroupResource: "",
ResourceNameRegex: ".*",
Namespaces: []string{"bar", "foo"},
},


@@ -132,7 +132,7 @@ func GetResourcePoliciesFromConfig(cm *v1.ConfigMap) (*Policies, error) {
return nil, fmt.Errorf("could not parse config from nil configmap")
}
if len(cm.Data) != 1 {
return nil, fmt.Errorf("illegal resource policies %s/%s configmap", cm.Name, cm.Namespace)
return nil, fmt.Errorf("illegal resource policies %s/%s configmap", cm.Namespace, cm.Name)
}
var yamlData string


@@ -1,37 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// TODO(2.0) After converting all controllers to runtime-controller,
// the functions in this file will no longer be needed and should be removed.
package managercontroller
import (
"context"
"sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/vmware-tanzu/velero/pkg/controller"
)
// Runnable will turn a "regular" runnable component (such as a controller)
// into a controller-runtime Runnable
func Runnable(p controller.Interface, numWorkers int) manager.Runnable {
// Pass the provided Context down to the run function.
f := func(ctx context.Context) error {
return p.Run(ctx, numWorkers)
}
return manager.RunnableFunc(f)
}


@@ -89,6 +89,14 @@ const (
// ResourceUsageLabel is the label key to explain the Velero resource usage.
ResourceUsageLabel = "velero.io/resource-usage"
// VolumesToBackupAnnotation is the annotation on a pod whose mounted volumes
// need to be backed up using pod volume backup.
VolumesToBackupAnnotation = "backup.velero.io/backup-volumes"
// VolumesToExcludeAnnotation is the annotation on a pod whose mounted volumes
// should be excluded from pod volume backup.
VolumesToExcludeAnnotation = "backup.velero.io/backup-volumes-excludes"
)
type AsyncOperationIDPrefix string


@@ -52,6 +52,7 @@ import (
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
"github.com/vmware-tanzu/velero/pkg/podvolume"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
pdvolumeutil "github.com/vmware-tanzu/velero/pkg/util/podvolume"
"github.com/vmware-tanzu/velero/pkg/volume"
)
@@ -200,7 +201,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
// Get the list of volumes to back up using pod volume backup from the pod's annotations. Remove from this list
// any volumes that use a PVC that we've already backed up (this would be in a read-write-many scenario,
// where it's been backed up from another pod), since we don't need >1 backup per PVC.
includedVolumes, optedOutVolumes := podvolume.GetVolumesByPod(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup))
includedVolumes, optedOutVolumes := pdvolumeutil.GetVolumesByPod(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup))
for _, volume := range includedVolumes {
// track the volumes that are PVCs using the PVC snapshot tracker, so that when we backup PVCs/PVs
// via an item action in the next step, we don't snapshot PVs that will have their data backed up


@@ -196,9 +196,8 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
log.Info("Getting items for resource")
var (
gvr = gv.WithResource(resource.Name)
gr = gvr.GroupResource()
clusterScoped = !resource.Namespaced
gvr = gv.WithResource(resource.Name)
gr = gvr.GroupResource()
)
orders := getOrderedResourcesForType(r.backupRequest.Backup.Spec.OrderedResources, resource.Name)
@@ -272,8 +271,6 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
}
}
namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
// Handle namespace resource here.
// Namespace are only filtered by namespace include/exclude filters.
// Label selectors are not checked.
@@ -289,10 +286,12 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
return nil, errors.WithStack(err)
}
items := r.backupNamespaces(unstructuredList, namespacesToList, gr, preferredGVR, log)
items := r.backupNamespaces(unstructuredList, r.backupRequest.NamespaceIncludesExcludes, gr, preferredGVR, log)
return items, nil
}
clusterScoped := !resource.Namespaced
namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
// If we get here, we're backing up something other than namespaces
if clusterScoped {
@@ -533,31 +532,13 @@ func (r *itemCollector) listItemsForLabel(unstructuredItems []unstructured.Unstr
// backupNamespaces process namespace resource according to namespace filters.
func (r *itemCollector) backupNamespaces(unstructuredList *unstructured.UnstructuredList,
namespacesToList []string, gr schema.GroupResource, preferredGVR schema.GroupVersionResource,
ie *collections.IncludesExcludes, gr schema.GroupResource, preferredGVR schema.GroupVersionResource,
log logrus.FieldLogger) []*kubernetesResource {
var items []*kubernetesResource
for index, unstructured := range unstructuredList.Items {
found := false
if len(namespacesToList) == 0 {
// No namespace found. By far, this condition cannot be triggered. Either way,
// namespacesToList is not empty.
log.Debug("Skip namespace resource, because no item found by namespace filters.")
break
} else if len(namespacesToList) == 1 && namespacesToList[0] == "" {
// All namespaces are included.
log.Debugf("Backup namespace %s due to full cluster backup.", unstructured.GetName())
found = true
} else {
for _, ns := range namespacesToList {
if unstructured.GetName() == ns {
log.Debugf("Backup namespace %s due to namespace filters setting.", unstructured.GetName())
found = true
break
}
}
}
if ie.ShouldInclude(unstructured.GetName()) {
log.Debugf("Backup namespace %s due to namespace filters setting.", unstructured.GetName())
if found {
path, err := r.writeToFile(&unstructuredList.Items[index])
if err != nil {
log.WithError(err).Error("Error writing item to file")
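The refactor above replaces the hand-rolled namespace-matching loop with `IncludesExcludes.ShouldInclude`, which already encodes the "empty/wildcard means all" and "exclude wins" rules. A minimal stand-in for that behavior (the real `collections.IncludesExcludes` also normalizes wildcards and validates its inputs, so treat this as a sketch):

```go
package main

import "fmt"

// includesExcludes is a minimal stand-in for Velero's
// collections.IncludesExcludes type.
type includesExcludes struct {
	includes map[string]bool // "*" means include everything
	excludes map[string]bool
}

// ShouldInclude reports whether name passes the filters:
// excludes always win, then the wildcard or an exact include match applies.
func (ie includesExcludes) ShouldInclude(name string) bool {
	if ie.excludes[name] {
		return false
	}
	return ie.includes["*"] || ie.includes[name]
}

func main() {
	ie := includesExcludes{
		includes: map[string]bool{"foo": true, "bar": true},
		excludes: map[string]bool{"bar": true},
	}
	for _, ns := range []string{"foo", "bar", "baz"} {
		fmt.Println(ns, ie.ShouldInclude(ns))
	}
	// foo true, bar false (excluded), baz false (never included)
}
```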


@@ -95,6 +95,12 @@ func (b *PersistentVolumeBuilder) StorageClass(name string) *PersistentVolumeBui
return b
}
// VolumeMode sets the PersistentVolume's volume mode.
func (b *PersistentVolumeBuilder) VolumeMode(volMode corev1api.PersistentVolumeMode) *PersistentVolumeBuilder {
b.object.Spec.VolumeMode = &volMode
return b
}
// NodeAffinityRequired sets the PersistentVolume's NodeAffinity Requirement.
func (b *PersistentVolumeBuilder) NodeAffinityRequired(req *corev1api.NodeSelector) *PersistentVolumeBuilder {
b.object.Spec.NodeAffinity = &corev1api.VolumeNodeAffinity{


@@ -0,0 +1,21 @@
/*
Copyright 2017 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
// Make sure we import the client-go auth provider plugins.
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)


@@ -18,6 +18,7 @@ package client
import (
"context"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -25,6 +26,7 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/dynamic/dynamicinformer"
)
// DynamicFactory contains methods for retrieving dynamic clients for GroupVersionResources and
@@ -33,6 +35,8 @@ type DynamicFactory interface {
// ClientForGroupVersionResource returns a Dynamic client for the given group/version
// and resource for the given namespace.
ClientForGroupVersionResource(gv schema.GroupVersion, resource metav1.APIResource, namespace string) (Dynamic, error)
// DynamicSharedInformerFactoryForNamespace returns a DynamicSharedInformerFactory for the given namespace.
DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory
}
// dynamicFactory implements DynamicFactory.
@@ -51,6 +55,10 @@ func (f *dynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersion, r
}, nil
}
func (f *dynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
return dynamicinformer.NewFilteredDynamicSharedInformerFactory(f.dynamicClient, time.Minute, namespace, nil)
}
// Creator creates an object.
type Creator interface {
// Create creates an object.

pkg/client/retry.go (new file)

@@ -0,0 +1,44 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"context"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/util/retry"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
)
func CreateRetryGenerateName(client kbclient.Client, ctx context.Context, obj kbclient.Object) error {
return CreateRetryGenerateNameWithFunc(obj, func() error {
return client.Create(ctx, obj, &kbclient.CreateOptions{})
})
}
func CreateRetryGenerateNameWithFunc(obj kbclient.Object, createFn func() error) error {
retryCreateFn := func() error {
// needed to ensure that the name from the failed create isn't left on the object between retries
obj.SetName("")
return createFn()
}
if obj.GetGenerateName() != "" && obj.GetName() == "" {
return retry.OnError(retry.DefaultRetry, apierrors.IsAlreadyExists, retryCreateFn)
} else {
return createFn()
}
}


@@ -95,6 +95,7 @@ type CreateOptions struct {
ExcludeNamespaceScopedResources flag.StringArray
Labels flag.Map
Selector flag.LabelSelector
OrSelector flag.OrLabelSelector
IncludeClusterResources flag.OptionalBool
Wait bool
StorageLocation string
@@ -130,6 +131,7 @@ func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.StringVar(&o.StorageLocation, "storage-location", "", "Location in which to store the backup.")
flags.StringSliceVar(&o.SnapshotLocations, "volume-snapshot-locations", o.SnapshotLocations, "List of locations (at most one per provider) where volume snapshots should be stored.")
flags.VarP(&o.Selector, "selector", "l", "Only back up resources matching this label selector.")
flags.Var(&o.OrSelector, "or-selector", "Backup resources matching at least one of the label selector from the list. Label selectors should be separated by ' or '. For example, foo=bar or app=nginx")
flags.StringVar(&o.OrderedResources, "ordered-resources", "", "Mapping Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and their names are in format 'namespace/resourcename'. For cluster scope resource, simply use resource name. Key-value pairs in the mapping are separated by semi-colon. Example: 'pods=ns1/pod1,ns1/pod2;persistentvolumeclaims=ns1/pvc4,ns1/pvc8'. Optional.")
flags.DurationVar(&o.CSISnapshotTimeout, "csi-snapshot-timeout", o.CSISnapshotTimeout, "How long to wait for CSI snapshot creation before timeout.")
flags.DurationVar(&o.ItemOperationTimeout, "item-operation-timeout", o.ItemOperationTimeout, "How long to wait for async plugin operations before timeout.")
@@ -168,9 +170,8 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
return err
}
client, err := f.KubebuilderWatchClient()
if err != nil {
return err
if o.Selector.LabelSelector != nil && o.OrSelector.OrLabelSelectors != nil {
return fmt.Errorf("either a 'selector' or an 'or-selector' can be specified, but not both")
}
// Ensure that unless FromSchedule is set, args contains a backup name
@@ -191,7 +192,7 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
if o.StorageLocation != "" {
location := &velerov1api.BackupStorageLocation{}
if err := client.Get(context.Background(), kbclient.ObjectKey{
if err := o.client.Get(context.Background(), kbclient.ObjectKey{
Namespace: f.Namespace(),
Name: o.StorageLocation,
}, location); err != nil {
@@ -201,7 +202,7 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
for _, loc := range o.SnapshotLocations {
snapshotLocation := new(velerov1api.VolumeSnapshotLocation)
if err := o.client.Get(context.TODO(), kbclient.ObjectKey{Namespace: f.Namespace(), Name: loc}, snapshotLocation); err != nil {
if err := o.client.Get(context.Background(), kbclient.ObjectKey{Namespace: f.Namespace(), Name: loc}, snapshotLocation); err != nil {
return err
}
}
@@ -365,6 +366,7 @@ func (o *CreateOptions) BuildBackup(namespace string) (*velerov1api.Backup, erro
IncludedNamespaceScopedResources(o.IncludeNamespaceScopedResources...).
ExcludedNamespaceScopedResources(o.ExcludeNamespaceScopedResources...).
LabelSelector(o.Selector.LabelSelector).
OrLabelSelector(o.OrSelector.OrLabelSelectors).
TTL(o.TTL).
StorageLocation(o.StorageLocation).
VolumeSnapshotLocations(o.SnapshotLocations...).
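The new `--or-selector` flag ORs whole selectors together: a resource is picked when any one selector in the list matches all of its labels (unlike `--selector`, where a single selector's requirements must all hold). A sketch of that matching rule, assuming equality-only selectors rather than full `metav1.LabelSelector` expressions:

```go
package main

import "fmt"

// matchesAny mirrors the --or-selector semantics: the object is selected
// when at least one selector matches every one of its own requirements.
// This is an illustrative sketch, not Velero's actual selector code.
func matchesAny(labels map[string]string, selectors []map[string]string) bool {
	for _, sel := range selectors {
		matched := true
		for k, v := range sel {
			if labels[k] != v {
				matched = false
				break
			}
		}
		if matched {
			return true
		}
	}
	return false
}

func main() {
	// Equivalent of: --or-selector "foo=bar or app=nginx"
	selectors := []map[string]string{{"foo": "bar"}, {"app": "nginx"}}
	fmt.Println(matchesAny(map[string]string{"app": "nginx"}, selectors))  // true
	fmt.Println(matchesAny(map[string]string{"app": "apache"}, selectors)) // false
}
```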


@@ -47,6 +47,15 @@ func TestCreateOptions_BuildBackup(t *testing.T) {
orders, err := ParseOrderedResources(o.OrderedResources)
o.CSISnapshotTimeout = 20 * time.Minute
o.ItemOperationTimeout = 20 * time.Minute
orLabelSelectors := []*metav1.LabelSelector{
{
MatchLabels: map[string]string{"k1": "v1", "k2": "v2"},
},
{
MatchLabels: map[string]string{"a1": "b1", "a2": "b2"},
},
}
o.OrSelector.OrLabelSelectors = orLabelSelectors
assert.NoError(t, err)
backup, err := o.BuildBackup(cmdtest.VeleroNameSpace)
@@ -58,6 +67,7 @@ func TestCreateOptions_BuildBackup(t *testing.T) {
SnapshotVolumes: o.SnapshotVolumes.Value,
IncludeClusterResources: o.IncludeClusterResources.Value,
OrderedResources: orders,
OrLabelSelectors: orLabelSelectors,
CSISnapshotTimeout: metav1.Duration{Duration: o.CSISnapshotTimeout},
ItemOperationTimeout: metav1.Duration{Duration: o.ItemOperationTimeout},
}, backup.Spec)


@@ -124,7 +124,7 @@ func Run(o *cli.DeleteOptions) error {
ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, label.GetValidName(b.Name),
velerov1api.BackupUIDLabel, string(b.UID)), builder.WithGenerateName(b.Name+"-")).Result()
if err := o.Client.Create(context.TODO(), deleteRequest, &controllerclient.CreateOptions{}); err != nil {
if err := client.CreateRetryGenerateName(o.Client, context.TODO(), deleteRequest); err != nil {
errs = append(errs, err)
continue
}


@@ -57,6 +57,14 @@ func NewDescribeCommand(f client.Factory, use string) *cobra.Command {
kbClient, err := f.KubebuilderClient()
cmd.CheckError(err)
var csiClient *snapshotv1client.Clientset
if features.IsEnabled(velerov1api.CSIFeatureFlag) {
clientConfig, err := f.ClientConfig()
cmd.CheckError(err)
csiClient, err = snapshotv1client.NewForConfig(clientConfig)
cmd.CheckError(err)
}
if outputFormat != "plaintext" && outputFormat != "json" {
cmd.CheckError(fmt.Errorf("invalid output format '%s'. valid values are 'plaintext', 'json'", outputFormat))
}
@@ -96,16 +104,9 @@ func NewDescribeCommand(f client.Factory, use string) *cobra.Command {
fmt.Fprintf(os.Stderr, "error getting PodVolumeBackups for backup %s: %v\n", backup.Name, err)
}
var csiClient *snapshotv1client.Clientset
// declare vscList up here since it may be empty and we'll pass the empty Items field into DescribeBackup
vscList := new(snapshotv1api.VolumeSnapshotContentList)
if features.IsEnabled(velerov1api.CSIFeatureFlag) {
clientConfig, err := f.ClientConfig()
cmd.CheckError(err)
csiClient, err = snapshotv1client.NewForConfig(clientConfig)
cmd.CheckError(err)
opts := label.NewListOptionsForBackup(backup.Name)
vscList, err = csiClient.SnapshotV1().VolumeSnapshotContents().List(context.TODO(), opts)
if err != nil {


@@ -69,6 +69,7 @@ func TestNewDescribeCommand(t *testing.T) {
if err == nil {
assert.Contains(t, stdout, "Velero-Native Snapshots: <none included>")
assert.Contains(t, stdout, "Or label selector: <none>")
assert.Contains(t, stdout, fmt.Sprintf("Name: %s", backupName))
return
}


@@ -33,8 +33,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/cmd/cli"
)
const bslLabelKey = "velero.io/storage-location"
// NewDeleteCommand creates and returns a new cobra command for deleting backup-locations.
func NewDeleteCommand(f client.Factory, use string) *cobra.Command {
o := cli.NewDeleteOptions("backup-location")
@@ -146,7 +144,7 @@ func findAssociatedBackups(client kbclient.Client, bslName, ns string) (velerov1
var backups velerov1api.BackupList
err := client.List(context.Background(), &backups, &kbclient.ListOptions{
Namespace: ns,
Raw: &metav1.ListOptions{LabelSelector: bslLabelKey + "=" + bslName},
Raw: &metav1.ListOptions{LabelSelector: velerov1api.StorageLocationLabel + "=" + bslName},
})
return backups, err
}
@@ -155,7 +153,7 @@ func findAssociatedBackupRepos(client kbclient.Client, bslName, ns string) (vele
var repos velerov1api.BackupRepositoryList
err := client.List(context.Background(), &repos, &kbclient.ListOptions{
Namespace: ns,
Raw: &metav1.ListOptions{LabelSelector: bslLabelKey + "=" + bslName},
Raw: &metav1.ListOptions{LabelSelector: velerov1api.StorageLocationLabel + "=" + bslName},
})
return repos, err
}


@@ -66,6 +66,7 @@ type Options struct {
BackupStorageConfig flag.Map
VolumeSnapshotConfig flag.Map
UseNodeAgent bool
PrivilegedNodeAgent bool
//TODO remove UseRestic when migration test out of using it
UseRestic bool
Wait bool
@@ -79,6 +80,8 @@ type Options struct {
Features string
DefaultVolumesToFsBackup bool
UploaderType string
DefaultSnapshotMoveData bool
DisableInformerCache bool
}
// BindFlags adds command line values to the options struct.
@@ -109,6 +112,7 @@ func (o *Options) BindFlags(flags *pflag.FlagSet) {
flags.BoolVar(&o.RestoreOnly, "restore-only", o.RestoreOnly, "Run the server in restore-only mode. Optional.")
flags.BoolVar(&o.DryRun, "dry-run", o.DryRun, "Generate resources, but don't send them to the cluster. Use with -o. Optional.")
flags.BoolVar(&o.UseNodeAgent, "use-node-agent", o.UseNodeAgent, "Create Velero node-agent daemonset. Optional. Velero node-agent hosts Velero modules that need to run in one or more nodes(i.e. Restic, Kopia).")
flags.BoolVar(&o.PrivilegedNodeAgent, "privileged-node-agent", o.PrivilegedNodeAgent, "Use privileged mode for the node agent. Optional. Required to backup block devices.")
flags.BoolVar(&o.Wait, "wait", o.Wait, "Wait for Velero deployment to be ready. Optional.")
flags.DurationVar(&o.DefaultRepoMaintenanceFrequency, "default-repo-maintain-frequency", o.DefaultRepoMaintenanceFrequency, "How often 'maintain' is run for backup repositories by default. Optional.")
flags.DurationVar(&o.GarbageCollectionFrequency, "garbage-collection-frequency", o.GarbageCollectionFrequency, "How often the garbage collection runs for expired backups.(default 1h)")
@@ -118,6 +122,8 @@ func (o *Options) BindFlags(flags *pflag.FlagSet) {
flags.StringVar(&o.Features, "features", o.Features, "Comma separated list of Velero feature flags to be set on the Velero deployment and the node-agent daemonset, if node-agent is enabled")
flags.BoolVar(&o.DefaultVolumesToFsBackup, "default-volumes-to-fs-backup", o.DefaultVolumesToFsBackup, "Bool flag to configure Velero server to use pod volume file system backup by default for all volumes on all backups. Optional.")
flags.StringVar(&o.UploaderType, "uploader-type", o.UploaderType, fmt.Sprintf("The type of uploader to transfer the data of pod volumes, the supported values are '%s', '%s'", uploader.ResticType, uploader.KopiaType))
flags.BoolVar(&o.DefaultSnapshotMoveData, "default-snapshot-move-data", o.DefaultSnapshotMoveData, "Bool flag to configure Velero server to move data by default for all snapshots supporting data movement. Optional.")
flags.BoolVar(&o.DisableInformerCache, "disable-informer-cache", o.DisableInformerCache, "Disable informer cache for Get calls on restore. With this enabled, it will speed up restore in cases where there are backup resources which already exist in the cluster, but for very large clusters this will increase velero memory usage. Default is false (don't disable). Optional.")
}
// NewInstallOptions instantiates a new, default InstallOptions struct.
@@ -144,6 +150,8 @@ func NewInstallOptions() *Options {
CRDsOnly: false,
DefaultVolumesToFsBackup: false,
UploaderType: uploader.KopiaType,
DefaultSnapshotMoveData: false,
DisableInformerCache: true,
}
}
@@ -195,6 +203,7 @@ func (o *Options) AsVeleroOptions() (*install.VeleroOptions, error) {
SecretData: secretData,
RestoreOnly: o.RestoreOnly,
UseNodeAgent: o.UseNodeAgent,
PrivilegedNodeAgent: o.PrivilegedNodeAgent,
UseVolumeSnapshots: o.UseVolumeSnapshots,
BSLConfig: o.BackupStorageConfig.Data(),
VSLConfig: o.VolumeSnapshotConfig.Data(),
@@ -206,6 +215,8 @@ func (o *Options) AsVeleroOptions() (*install.VeleroOptions, error) {
Features: strings.Split(o.Features, ","),
DefaultVolumesToFsBackup: o.DefaultVolumesToFsBackup,
UploaderType: o.UploaderType,
DefaultSnapshotMoveData: o.DefaultSnapshotMoveData,
DisableInformerCache: o.DisableInformerCache,
}, nil
}


@@ -361,7 +361,7 @@ func (s *nodeAgentServer) markDataUploadsCancel(r *controller.DataUploadReconcil
return
}
if dataUploads, err := r.FindDataUploads(s.ctx, client, s.namespace); err != nil {
s.logger.WithError(errors.WithStack(err)).Error("failed to find data downloads")
s.logger.WithError(errors.WithStack(err)).Error("failed to find data uploads")
} else {
for i := range dataUploads {
du := dataUploads[i]
@@ -463,7 +463,7 @@ func (s *nodeAgentServer) markInProgressPVRsFailed(client ctrlclient.Client) {
continue
}
if pod.Spec.NodeName != s.nodeName {
s.logger.Debugf("the node of pod referenced by podvolumebackup %q is %q, not %q, skip", pvr.GetName(), pod.Spec.NodeName, s.nodeName)
s.logger.Debugf("the node of pod referenced by podvolumerestore %q is %q, not %q, skip", pvr.GetName(), pod.Spec.NodeName, s.nodeName)
continue
}


@@ -92,6 +92,7 @@ type CreateOptions struct {
StatusExcludeResources flag.StringArray
NamespaceMappings flag.Map
Selector flag.LabelSelector
OrSelector flag.OrLabelSelector
IncludeClusterResources flag.OptionalBool
Wait bool
AllowPartiallyFailed flag.OptionalBool
@@ -124,6 +125,7 @@ func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.Var(&o.StatusIncludeResources, "status-include-resources", "Resources to include in the restore status, formatted as resource.group, such as storageclasses.storage.k8s.io.")
flags.Var(&o.StatusExcludeResources, "status-exclude-resources", "Resources to exclude from the restore status, formatted as resource.group, such as storageclasses.storage.k8s.io.")
flags.VarP(&o.Selector, "selector", "l", "Only restore resources matching this label selector.")
flags.Var(&o.OrSelector, "or-selector", "Restore resources matching at least one of the label selectors in the list. Label selectors should be separated by ' or '. For example, foo=bar or app=nginx")
flags.DurationVar(&o.ItemOperationTimeout, "item-operation-timeout", o.ItemOperationTimeout, "How long to wait for async plugin operations before timeout.")
f := flags.VarPF(&o.RestoreVolumes, "restore-volumes", "", "Whether to restore volumes from snapshots.")
// this allows the user to just specify "--restore-volumes" as shorthand for "--restore-volumes=true"
@@ -185,6 +187,10 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
return errors.New("Velero client is not set; unable to proceed")
}
if o.Selector.LabelSelector != nil && o.OrSelector.OrLabelSelectors != nil {
return errors.New("either a 'selector' or an 'or-selector' can be specified, but not both")
}
if len(o.ExistingResourcePolicy) > 0 && !isResourcePolicyValid(o.ExistingResourcePolicy) {
return errors.New("existing-resource-policy has invalid value, it accepts only none, update as value")
}
@@ -304,6 +310,7 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
ExistingResourcePolicy: api.PolicyType(o.ExistingResourcePolicy),
NamespaceMapping: o.NamespaceMappings.Data(),
LabelSelector: o.Selector.LabelSelector,
OrLabelSelectors: o.OrSelector.OrLabelSelectors,
RestorePVs: o.RestoreVolumes.Value,
PreserveNodePorts: o.PreserveNodePorts.Value,
IncludeClusterResources: o.IncludeClusterResources.Value,


@@ -145,6 +145,7 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
ExcludedNamespaceScopedResources: o.BackupOptions.ExcludeNamespaceScopedResources,
IncludeClusterResources: o.BackupOptions.IncludeClusterResources.Value,
LabelSelector: o.BackupOptions.Selector.LabelSelector,
OrLabelSelectors: o.BackupOptions.OrSelector.OrLabelSelectors,
SnapshotVolumes: o.BackupOptions.SnapshotVolumes.Value,
TTL: metav1.Duration{Duration: o.BackupOptions.TTL},
StorageLocation: o.BackupOptions.StorageLocation,
@@ -153,6 +154,8 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
OrderedResources: orders,
CSISnapshotTimeout: metav1.Duration{Duration: o.BackupOptions.CSISnapshotTimeout},
ItemOperationTimeout: metav1.Duration{Duration: o.BackupOptions.ItemOperationTimeout},
DataMover: o.BackupOptions.DataMover,
SnapshotMoveData: o.BackupOptions.SnapshotMoveData.Value,
},
Schedule: o.Schedule,
UseOwnerReferencesInBackup: &o.UseOwnerReferencesInBackup,


@@ -26,6 +26,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
)
type Getter interface {
@@ -40,7 +41,7 @@ type DefaultServerStatusGetter struct {
func (g *DefaultServerStatusGetter) GetServerStatus(kbClient kbclient.Client) (*velerov1api.ServerStatusRequest, error) {
created := builder.ForServerStatusRequest(g.Namespace, "", "0").ObjectMeta(builder.WithGenerateName("velero-cli-")).Result()
if err := kbClient.Create(context.Background(), created, &kbclient.CreateOptions{}); err != nil {
if err := veleroclient.CreateRetryGenerateName(kbClient, context.Background(), created); err != nil {
return nil, errors.WithStack(err)
}


@@ -19,6 +19,8 @@ package uninstall
import (
"context"
"fmt"
"reflect"
"sync"
"time"
"github.com/pkg/errors"
@@ -40,6 +42,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/cmd"
"github.com/vmware-tanzu/velero/pkg/cmd/cli"
@@ -50,6 +53,8 @@ import (
var gracefulDeletionMaximumDuration = 1 * time.Minute
var resToDelete = []kbclient.ObjectList{}
// uninstallOptions collects all the options for uninstalling Velero from a Kubernetes cluster.
type uninstallOptions struct {
wait bool // deprecated
@@ -167,11 +172,7 @@ func Run(ctx context.Context, kbClient kbclient.Client, namespace string) error
}
func deleteNamespace(ctx context.Context, kbClient kbclient.Client, namespace string) error {
// Deal with resources with attached finalizers to ensure proper handling of those finalizers.
if err := deleteResourcesWithFinalizer(ctx, kbClient, namespace); err != nil {
return errors.Wrap(err, "Failed to remove finalizer from restores")
}
// First check if it's already been deleted
ns := &corev1.Namespace{}
key := kbclient.ObjectKey{Name: namespace}
if err := kbClient.Get(ctx, key, ns); err != nil {
@@ -182,6 +183,11 @@ func deleteNamespace(ctx context.Context, kbClient kbclient.Client, namespace st
return err
}
// Deal with resources with attached finalizers to ensure proper handling of those finalizers.
if err := deleteResourcesWithFinalizer(ctx, kbClient, namespace); err != nil {
return errors.Wrap(err, "Failed to remove finalizer from restores")
}
if err := kbClient.Delete(ctx, ns); err != nil {
if apierrors.IsNotFound(err) {
fmt.Printf("Velero namespace %q does not exist, skipping.\n", namespace)
@@ -223,80 +229,151 @@ func deleteNamespace(ctx context.Context, kbClient kbclient.Client, namespace st
// 2. The controller may encounter errors while handling the finalizer, in such case, the controller will keep trying until it succeeds.
// So it is important to set a timeout, once the process exceed the timeout, we will forcedly delete the resources by removing the finalizer from them,
// otherwise the deletion process may get stuck indefinitely.
// 3. There is only restore finalizer supported as of v1.12. If any new finalizers are added in the future, the corresponding deletion logic can be
// 3. There is only resources finalizer supported as of v1.12. If any new finalizers are added in the future, the corresponding deletion logic can be
// incorporated into this function.
func deleteResourcesWithFinalizer(ctx context.Context, kbClient kbclient.Client, namespace string) error {
fmt.Println("Waiting for resource with attached finalizer to be deleted")
return deleteRestore(ctx, kbClient, namespace)
return deleteResources(ctx, kbClient, namespace)
}
func deleteRestore(ctx context.Context, kbClient kbclient.Client, namespace string) error {
// Check if restore crd exists, if it does not exist, return immediately.
func checkResources(ctx context.Context, kbClient kbclient.Client) error {
checkCRDs := []string{"restores.velero.io", "datauploads.velero.io", "datadownloads.velero.io"}
var err error
v1crd := &apiextv1.CustomResourceDefinition{}
key := kbclient.ObjectKey{Name: "restores.velero.io"}
if err = kbClient.Get(ctx, key, v1crd); err != nil {
if apierrors.IsNotFound(err) {
return nil
for _, crd := range checkCRDs {
key := kbclient.ObjectKey{Name: crd}
if err = kbClient.Get(ctx, key, v1crd); err != nil {
if !apierrors.IsNotFound(err) {
return errors.Wrapf(err, "Error getting %s crd", crd)
}
} else {
return errors.Wrap(err, "Error getting restore crd")
// no error with found CRD that we should delete
switch crd {
case "restores.velero.io":
resToDelete = append(resToDelete, &velerov1api.RestoreList{})
case "datauploads.velero.io":
resToDelete = append(resToDelete, &velerov2alpha1api.DataUploadList{})
case "datadownloads.velero.io":
resToDelete = append(resToDelete, &velerov2alpha1api.DataDownloadList{})
default:
fmt.Printf("Unsupported type %s to be cleared\n", crd)
}
}
}
return nil
}
// First attempt to gracefully delete all the restore within a specified time frame, If the process exceeds the timeout limit,
// it is likely that there may be errors during the finalization of restores. In such cases, we should proceed with forcefully deleting the restores.
err = gracefullyDeleteRestore(ctx, kbClient, namespace)
if err != nil && err != wait.ErrWaitTimeout {
return errors.Wrap(err, "Error deleting restores")
func deleteResources(ctx context.Context, kbClient kbclient.Client, namespace string) error {
// Check if resources crd exists, if it does not exist, return immediately.
err := checkResources(ctx, kbClient)
if err != nil {
return err
}
// First attempt to gracefully delete all the resources within a specified time frame, If the process exceeds the timeout limit,
// it is likely that there may be errors during the finalization of restores. In such cases, we should proceed with forcefully deleting the restores.
err = gracefullyDeleteResources(ctx, kbClient, namespace)
if err != nil && err != wait.ErrWaitTimeout {
return errors.Wrap(err, "Error deleting resources")
}
if err == wait.ErrWaitTimeout {
err = forcedlyDeleteRestore(ctx, kbClient, namespace)
err = forcedlyDeleteResources(ctx, kbClient, namespace)
if err != nil {
return errors.Wrap(err, "Error deleting restores")
return errors.Wrap(err, "Error deleting resources forcedly")
}
}
return nil
}
func gracefullyDeleteRestore(ctx context.Context, kbClient kbclient.Client, namespace string) error {
var err error
restoreList := &velerov1api.RestoreList{}
if err = kbClient.List(ctx, restoreList, &kbclient.ListOptions{Namespace: namespace}); err != nil {
return errors.Wrap(err, "Error getting restores during graceful deletion")
func gracefullyDeleteResources(ctx context.Context, kbClient kbclient.Client, namespace string) error {
errorChan := make(chan error)
var wg sync.WaitGroup
wg.Add(len(resToDelete))
for i := range resToDelete {
go func(index int) {
defer wg.Done()
errorChan <- gracefullyDeleteResource(ctx, kbClient, namespace, resToDelete[index])
}(i)
}
for i := range restoreList.Items {
if err = kbClient.Delete(ctx, &restoreList.Items[i]); err != nil {
go func() {
wg.Wait()
close(errorChan)
}()
for err := range errorChan {
if err != nil {
return err
}
}
return waitDeletingResources(ctx, kbClient, namespace)
}
func gracefullyDeleteResource(ctx context.Context, kbClient kbclient.Client, namespace string, list kbclient.ObjectList) error {
if err := kbClient.List(ctx, list, &kbclient.ListOptions{Namespace: namespace}); err != nil {
return errors.Wrap(err, "Error getting resources during graceful deletion")
}
var objectsToDelete []kbclient.Object
items := reflect.ValueOf(list).Elem().FieldByName("Items")
for i := 0; i < items.Len(); i++ {
item := items.Index(i).Addr().Interface()
// Type assertion to cast item to the appropriate type
switch typedItem := item.(type) {
case *velerov1api.Restore:
objectsToDelete = append(objectsToDelete, typedItem)
case *velerov2alpha1api.DataUpload:
objectsToDelete = append(objectsToDelete, typedItem)
case *velerov2alpha1api.DataDownload:
objectsToDelete = append(objectsToDelete, typedItem)
default:
return errors.New("Unsupported resource type")
}
}
// Delete collected resources in a batch
for _, resource := range objectsToDelete {
if err := kbClient.Delete(ctx, resource); err != nil {
if apierrors.IsNotFound(err) {
continue
}
return errors.Wrap(err, "Error deleting restores during graceful deletion")
return errors.Wrap(err, "Error deleting resources during graceful deletion")
}
}
return nil
}
func waitDeletingResources(ctx context.Context, kbClient kbclient.Client, namespace string) error {
// Wait for the deletion of all the restores within a specified time frame
err = wait.PollImmediate(time.Second, gracefulDeletionMaximumDuration, func() (bool, error) {
restoreList := &velerov1api.RestoreList{}
if errList := kbClient.List(ctx, restoreList, &kbclient.ListOptions{Namespace: namespace}); errList != nil {
return false, errList
err := wait.PollImmediate(time.Second, gracefulDeletionMaximumDuration, func() (bool, error) {
itemsCount := 0
for i := range resToDelete {
if errList := kbClient.List(ctx, resToDelete[i], &kbclient.ListOptions{Namespace: namespace}); errList != nil {
return false, errList
}
itemsCount += reflect.ValueOf(resToDelete[i]).Elem().FieldByName("Items").Len()
if itemsCount > 0 {
fmt.Print(".")
return false, nil
}
}
if len(restoreList.Items) > 0 {
fmt.Print(".")
return false, nil
} else {
return true, nil
}
return true, nil
})
return err
}
func forcedlyDeleteRestore(ctx context.Context, kbClient kbclient.Client, namespace string) error {
func forcedlyDeleteResources(ctx context.Context, kbClient kbclient.Client, namespace string) error {
// Delete velero deployment first in case:
// 1. finalizers will be added back by restore controller after they are removed at next step;
// 2. new restores attached with finalizer will be created by restore controller after we remove all the restores' finalizer at next step;
// 1. finalizers will be added back by resources related controller after they are removed at next step;
// 2. new resources attached with finalizer will be created by controller after we remove all the resources' finalizer at next step;
deploy := &appsv1api.Deployment{
ObjectMeta: metav1.ObjectMeta{
Namespace: "velero",
@@ -330,23 +407,56 @@ func forcedlyDeleteRestore(ctx context.Context, kbClient kbclient.Client, namesp
if err != nil {
return errors.Wrap(err, "Error deleting velero deployment during force deletion")
}
return removeResourcesFinalizer(ctx, kbClient, namespace)
}
// Remove all the restores' finalizer so they can be deleted during the deletion of velero namespace.
restoreList := &velerov1api.RestoreList{}
if err := kbClient.List(ctx, restoreList, &kbclient.ListOptions{Namespace: namespace}); err != nil {
return errors.Wrap(err, "Error getting restores during force deletion")
func removeResourcesFinalizer(ctx context.Context, kbClient kbclient.Client, namespace string) error {
for i := range resToDelete {
if err := removeResourceFinalizer(ctx, kbClient, namespace, resToDelete[i]); err != nil {
return err
}
}
return nil
}
func removeResourceFinalizer(ctx context.Context, kbClient kbclient.Client, namespace string, resourceList kbclient.ObjectList) error {
listOptions := &kbclient.ListOptions{Namespace: namespace}
if err := kbClient.List(ctx, resourceList, listOptions); err != nil {
return errors.Wrap(err, fmt.Sprintf("Error getting resources of type %T during force deletion", resourceList))
}
for i := range restoreList.Items {
if controllerutil.ContainsFinalizer(&restoreList.Items[i], controller.ExternalResourcesFinalizer) {
update := &restoreList.Items[i]
original := update.DeepCopy()
controllerutil.RemoveFinalizer(update, controller.ExternalResourcesFinalizer)
if err := kubeutil.PatchResource(original, update, kbClient); err != nil {
return errors.Wrap(err, "Error removing restore finalizer during force deletion")
}
items := reflect.ValueOf(resourceList).Elem().FieldByName("Items")
var err error
for i := 0; i < items.Len(); i++ {
item := items.Index(i).Addr().Interface()
// Type assertion to cast item to the appropriate type
switch typedItem := item.(type) {
case *velerov1api.Restore:
err = removeFinalizerForObject(typedItem, controller.ExternalResourcesFinalizer, kbClient)
case *velerov2alpha1api.DataUpload:
err = removeFinalizerForObject(typedItem, controller.DataUploadDownloadFinalizer, kbClient)
case *velerov2alpha1api.DataDownload:
err = removeFinalizerForObject(typedItem, controller.DataUploadDownloadFinalizer, kbClient)
default:
err = errors.Errorf("Unsupported resource type %T", typedItem)
}
if err != nil {
return err
}
}
return nil
}
func removeFinalizerForObject(obj kbclient.Object, finalizer string, kbClient kbclient.Client) error {
if controllerutil.ContainsFinalizer(obj, finalizer) {
update := obj.DeepCopyObject().(kbclient.Object)
original := obj.DeepCopyObject().(kbclient.Object)
controllerutil.RemoveFinalizer(update, finalizer)
if err := kubeutil.PatchResource(original, update, kbClient); err != nil {
return errors.Wrap(err, fmt.Sprintf("Error removing finalizer %q during force deletion", finalizer))
}
}
return nil
}


@@ -112,6 +112,7 @@ const (
defaultCredentialsDirectory = "/tmp/credentials"
defaultMaxConcurrentK8SConnections = 30
defaultDisableInformerCache = false
)
type serverConfig struct {
@@ -135,6 +136,8 @@ type serverConfig struct {
defaultVolumesToFsBackup bool
uploaderType string
maxConcurrentK8SConnections int
defaultSnapshotMoveData bool
disableInformerCache bool
}
func NewCommand(f client.Factory) *cobra.Command {
@@ -163,6 +166,8 @@ func NewCommand(f client.Factory) *cobra.Command {
defaultVolumesToFsBackup: podvolume.DefaultVolumesToFsBackup,
uploaderType: uploader.ResticType,
maxConcurrentK8SConnections: defaultMaxConcurrentK8SConnections,
defaultSnapshotMoveData: false,
disableInformerCache: defaultDisableInformerCache,
}
)
@@ -233,6 +238,8 @@ func NewCommand(f client.Factory) *cobra.Command {
command.Flags().DurationVar(&config.defaultItemOperationTimeout, "default-item-operation-timeout", config.defaultItemOperationTimeout, "How long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. Default is 4 hours")
command.Flags().DurationVar(&config.resourceTimeout, "resource-timeout", config.resourceTimeout, "How long to wait for resource processes which are not covered by other specific timeout parameters. Default is 10 minutes.")
command.Flags().IntVar(&config.maxConcurrentK8SConnections, "max-concurrent-k8s-connections", config.maxConcurrentK8SConnections, "Max concurrent connections number that Velero can create with kube-apiserver. Default is 30.")
command.Flags().BoolVar(&config.defaultSnapshotMoveData, "default-snapshot-move-data", config.defaultSnapshotMoveData, "Move data by default for all snapshots supporting data movement.")
command.Flags().BoolVar(&config.disableInformerCache, "disable-informer-cache", config.disableInformerCache, "Disable informer cache for Get calls on restore. With this enabled, it will speed up restore in cases where there are backup resources which already exist in the cluster, but for very large clusters this will increase velero memory usage. Default is false (don't disable).")
return command
}
@@ -757,6 +764,7 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
s.csiSnapshotClient,
s.credentialFileStore,
s.config.maxConcurrentK8SConnections,
s.config.defaultSnapshotMoveData,
).SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", controller.Backup)
}
@@ -933,6 +941,7 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
s.metrics,
s.config.formatFlag.Parse(),
s.config.defaultItemOperationTimeout,
s.config.disableInformerCache,
)
if err = r.SetupWithManager(s.mgr); err != nil {


@@ -0,0 +1,61 @@
/*
Copyright 2017 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package flag
import (
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// OrLabelSelector is a Cobra-compatible wrapper for defining
// a Kubernetes or-label-selector flag.
type OrLabelSelector struct {
OrLabelSelectors []*metav1.LabelSelector
}
// String returns a string representation of the or-label
// selector flag.
func (ls *OrLabelSelector) String() string {
orLabels := []string{}
for _, v := range ls.OrLabelSelectors {
orLabels = append(orLabels, metav1.FormatLabelSelector(v))
}
return strings.Join(orLabels, " or ")
}
// Set parses the provided string and assigns the result
// to the or-label-selector receiver. It returns an error if
// the string is not parseable.
func (ls *OrLabelSelector) Set(s string) error {
orItems := strings.Split(s, " or ")
ls.OrLabelSelectors = make([]*metav1.LabelSelector, 0)
for _, orItem := range orItems {
parsed, err := metav1.ParseToLabelSelector(orItem)
if err != nil {
return err
}
ls.OrLabelSelectors = append(ls.OrLabelSelectors, parsed)
}
return nil
}
// Type returns a string representation of the
// OrLabelSelector type.
func (ls *OrLabelSelector) Type() string {
return "orLabelSelector"
}

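`OrLabelSelector.Set` above hinges on one detail: the flag value is split on the literal separator `" or "`, and each piece is parsed as an ordinary label selector. A dependency-free sketch of that splitting step (the real code hands each piece to `metav1.ParseToLabelSelector`; here the pieces are simply kept as strings):

```go
package main

import (
	"fmt"
	"strings"
)

// splitOrSelector splits an or-selector flag value on the literal
// " or " separator, yielding one selector expression per piece.
// Comma-separated terms within a piece stay together, matching the
// "k1=v1,k2=v2 or a1=b1,a2=b2" form exercised by the tests above.
func splitOrSelector(s string) []string {
	items := strings.Split(s, " or ")
	out := make([]string, 0, len(items))
	for _, item := range items {
		out = append(out, strings.TrimSpace(item))
	}
	return out
}

func main() {
	fmt.Println(splitOrSelector("k1=v1,k2=v2 or a1=b1,a2=b2"))
}
```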

@@ -0,0 +1,102 @@
package flag
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestStringOfOrLabelSelector(t *testing.T) {
tests := []struct {
name string
orLabelSelector *OrLabelSelector
expectedStr string
}{
{
name: "or between two labels",
orLabelSelector: &OrLabelSelector{
OrLabelSelectors: []*metav1.LabelSelector{
{
MatchLabels: map[string]string{"k1": "v1"},
},
{
MatchLabels: map[string]string{"k2": "v2"},
},
},
},
expectedStr: "k1=v1 or k2=v2",
},
{
name: "or between two label groups",
orLabelSelector: &OrLabelSelector{
OrLabelSelectors: []*metav1.LabelSelector{
{
MatchLabels: map[string]string{"k1": "v1", "k2": "v2"},
},
{
MatchLabels: map[string]string{"a1": "b1", "a2": "b2"},
},
},
},
expectedStr: "k1=v1,k2=v2 or a1=b1,a2=b2",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
assert.Equal(t, test.expectedStr, test.orLabelSelector.String())
})
}
}
func TestSetOfOrLabelSelector(t *testing.T) {
tests := []struct {
name string
inputStr string
expectedSelector *OrLabelSelector
}{
{
name: "or between two labels",
inputStr: "k1=v1 or k2=v2",
expectedSelector: &OrLabelSelector{
OrLabelSelectors: []*metav1.LabelSelector{
{
MatchLabels: map[string]string{"k1": "v1"},
},
{
MatchLabels: map[string]string{"k2": "v2"},
},
},
},
},
{
name: "or between two label groups",
inputStr: "k1=v1,k2=v2 or a1=b1,a2=b2",
expectedSelector: &OrLabelSelector{
OrLabelSelectors: []*metav1.LabelSelector{
{
MatchLabels: map[string]string{"k1": "v1", "k2": "v2"},
},
{
MatchLabels: map[string]string{"a1": "b1", "a2": "b2"},
},
},
},
},
}
selector := &OrLabelSelector{}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
require.Nil(t, selector.Set(test.inputStr))
assert.Equal(t, len(test.expectedSelector.OrLabelSelectors), len(selector.OrLabelSelectors))
assert.Equal(t, test.expectedSelector.String(), selector.String())
})
}
}
func TestTypeOfOrLabelSelector(t *testing.T) {
selector := &OrLabelSelector{}
assert.Equal(t, "orLabelSelector", selector.Type())
}


@@ -199,6 +199,18 @@ func DescribeBackupSpec(d *Describer, spec velerov1api.BackupSpec) {
}
d.Printf("Label selector:\t%s\n", s)
d.Println()
if len(spec.OrLabelSelectors) == 0 {
s = emptyDisplay
} else {
orLabelSelectors := []string{}
for _, v := range spec.OrLabelSelectors {
orLabelSelectors = append(orLabelSelectors, metav1.FormatLabelSelector(v))
}
s = strings.Join(orLabelSelectors, " or ")
}
d.Printf("Or label selector:\t%s\n", s)
d.Println()
d.Printf("Storage Location:\t%s\n", spec.StorageLocation)


@@ -91,6 +91,8 @@ Resources:
Label selector: <none>
Or label selector: <none>
Storage Location: backup-location
Velero-Native Snapshot PVs: auto
@@ -153,6 +155,8 @@ Resources:
Label selector: <none>
Or label selector: <none>
Storage Location: backup-location
Velero-Native Snapshot PVs: auto
@@ -208,6 +212,8 @@ Resources:
Label selector: <none>
Or label selector: <none>
Storage Location: backup-location
Velero-Native Snapshot PVs: auto


@@ -146,6 +146,18 @@ func DescribeRestore(ctx context.Context, kbClient kbclient.Client, restore *vel
}
d.Printf("Label selector:\t%s\n", s)
d.Println()
if len(restore.Spec.OrLabelSelectors) == 0 {
s = emptyDisplay
} else {
orLabelSelectors := []string{}
for _, v := range restore.Spec.OrLabelSelectors {
orLabelSelectors = append(orLabelSelectors, metav1.FormatLabelSelector(v))
}
s = strings.Join(orLabelSelectors, " or ")
}
d.Printf("Or label selector:\t%s\n", s)
d.Println()
d.Printf("Restore PVs:\t%s\n", BoolPointerString(restore.Spec.RestorePVs, "false", "true", "auto"))


@@ -38,6 +38,8 @@ Backup Template:
Label selector: <none>
Or label selector: <none>
Storage Location:
Velero-Native Snapshot PVs: auto
@@ -82,6 +84,8 @@ Backup Template:
Label selector: <none>
Or label selector: <none>
Storage Location:
Velero-Native Snapshot PVs: auto


@@ -88,6 +88,7 @@ type backupReconciler struct {
volumeSnapshotClient snapshotterClientSet.Interface
credentialFileStore credentials.FileStore
maxConcurrentK8SConnections int
defaultSnapshotMoveData bool
}
func NewBackupReconciler(
@@ -113,6 +114,7 @@ func NewBackupReconciler(
volumeSnapshotClient snapshotterClientSet.Interface,
credentialStore credentials.FileStore,
maxConcurrentK8SConnections int,
defaultSnapshotMoveData bool,
) *backupReconciler {
b := &backupReconciler{
ctx: ctx,
@@ -138,6 +140,7 @@ func NewBackupReconciler(
volumeSnapshotClient: volumeSnapshotClient,
credentialFileStore: credentialStore,
maxConcurrentK8SConnections: maxConcurrentK8SConnections,
defaultSnapshotMoveData: defaultSnapshotMoveData,
}
b.updateTotalBackupMetric()
return b
@@ -353,6 +356,10 @@ func (b *backupReconciler) prepareBackupRequest(backup *velerov1api.Backup, logg
request.Spec.DefaultVolumesToFsBackup = &b.defaultVolumesToFsBackup
}
if request.Spec.SnapshotMoveData == nil {
request.Spec.SnapshotMoveData = &b.defaultSnapshotMoveData
}
// find which storage location to use
var serverSpecified bool
if request.Spec.StorageLocation == "" {
@@ -533,8 +540,8 @@ func (b *backupReconciler) validateAndGetSnapshotLocations(backup *velerov1api.B
if len(errors) > 0 {
return nil, errors
}
allLocations := &velerov1api.VolumeSnapshotLocationList{}
err := b.kbClient.List(context.Background(), allLocations, &kbclient.ListOptions{Namespace: backup.Namespace, LabelSelector: labels.Everything()})
vsls := &velerov1api.VolumeSnapshotLocationList{}
err := b.kbClient.List(context.Background(), vsls, &kbclient.ListOptions{Namespace: backup.Namespace, LabelSelector: labels.Everything()})
if err != nil {
errors = append(errors, fmt.Sprintf("error listing volume snapshot locations: %v", err))
return nil, errors
@@ -542,8 +549,8 @@ func (b *backupReconciler) validateAndGetSnapshotLocations(backup *velerov1api.B
// build a map of provider->list of all locations for the provider
allProviderLocations := make(map[string][]*velerov1api.VolumeSnapshotLocation)
for i := range allLocations.Items {
loc := allLocations.Items[i]
for i := range vsls.Items {
loc := vsls.Items[i]
allProviderLocations[loc.Spec.Provider] = append(allProviderLocations[loc.Spec.Provider], &loc)
}


@@ -583,6 +583,7 @@ func TestProcessBackupCompletions(t *testing.T) {
backup *velerov1api.Backup
backupLocation *velerov1api.BackupStorageLocation
defaultVolumesToFsBackup bool
defaultSnapshotMoveData bool
expectedResult *velerov1api.Backup
backupExists bool
existenceCheckError error
@@ -615,6 +616,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -651,6 +653,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: "alt-loc",
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -690,6 +693,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: "read-write",
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -727,6 +731,7 @@ func TestProcessBackupCompletions(t *testing.T) {
TTL: metav1.Duration{Duration: 10 * time.Minute},
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -764,6 +769,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -802,6 +808,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -840,6 +847,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -878,6 +886,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -916,6 +925,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -955,6 +965,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFailed,
@@ -994,6 +1005,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.True(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFailed,
@@ -1113,6 +1125,7 @@ func TestProcessBackupCompletions(t *testing.T) {
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
@@ -1126,6 +1139,129 @@ func TestProcessBackupCompletions(t *testing.T) {
},
volumeSnapshot: builder.ForVolumeSnapshot("velero", "testVS").ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, "backup-1")).Result(),
},
{
name: "backup with snapshot data movement set to true and defaultSnapshotMoveData set to false",
backup: defaultBackup().SnapshotMoveData(true).Result(),
backupLocation: defaultBackupLocation,
defaultVolumesToFsBackup: false,
defaultSnapshotMoveData: false,
expectedResult: &velerov1api.Backup{
TypeMeta: metav1.TypeMeta{
Kind: "Backup",
APIVersion: "velero.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: "backup-1",
Annotations: map[string]string{
"velero.io/source-cluster-k8s-major-version": "1",
"velero.io/source-cluster-k8s-minor-version": "16",
"velero.io/source-cluster-k8s-gitversion": "v1.16.4",
"velero.io/resource-timeout": "0s",
},
Labels: map[string]string{
"velero.io/storage-location": "loc-1",
},
},
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
CSIVolumeSnapshotsAttempted: 0,
CSIVolumeSnapshotsCompleted: 0,
},
},
volumeSnapshot: builder.ForVolumeSnapshot("velero", "testVS").ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, "backup-1")).Result(),
},
{
name: "backup with snapshot data movement set to false and defaultSnapshotMoveData set to true",
backup: defaultBackup().SnapshotMoveData(false).Result(),
backupLocation: defaultBackupLocation,
defaultVolumesToFsBackup: false,
defaultSnapshotMoveData: true,
expectedResult: &velerov1api.Backup{
TypeMeta: metav1.TypeMeta{
Kind: "Backup",
APIVersion: "velero.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: "backup-1",
Annotations: map[string]string{
"velero.io/source-cluster-k8s-major-version": "1",
"velero.io/source-cluster-k8s-minor-version": "16",
"velero.io/source-cluster-k8s-gitversion": "v1.16.4",
"velero.io/resource-timeout": "0s",
},
Labels: map[string]string{
"velero.io/storage-location": "loc-1",
},
},
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
CSIVolumeSnapshotsAttempted: 1,
CSIVolumeSnapshotsCompleted: 0,
},
},
volumeSnapshot: builder.ForVolumeSnapshot("velero", "testVS").ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, "backup-1")).Result(),
},
{
name: "backup with snapshot data movement not set and defaultSnapshotMoveData set to true",
backup: defaultBackup().Result(),
backupLocation: defaultBackupLocation,
defaultVolumesToFsBackup: false,
defaultSnapshotMoveData: true,
expectedResult: &velerov1api.Backup{
TypeMeta: metav1.TypeMeta{
Kind: "Backup",
APIVersion: "velero.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: velerov1api.DefaultNamespace,
Name: "backup-1",
Annotations: map[string]string{
"velero.io/source-cluster-k8s-major-version": "1",
"velero.io/source-cluster-k8s-minor-version": "16",
"velero.io/source-cluster-k8s-gitversion": "v1.16.4",
"velero.io/resource-timeout": "0s",
},
Labels: map[string]string{
"velero.io/storage-location": "loc-1",
},
},
Spec: velerov1api.BackupSpec{
StorageLocation: defaultBackupLocation.Name,
DefaultVolumesToFsBackup: boolptr.False(),
SnapshotMoveData: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseFinalizing,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
CSIVolumeSnapshotsAttempted: 0,
CSIVolumeSnapshotsCompleted: 0,
},
},
volumeSnapshot: builder.ForVolumeSnapshot("velero", "testVS").ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, "backup-1")).Result(),
},
}
for _, test := range tests {
@@ -1178,6 +1314,7 @@ func TestProcessBackupCompletions(t *testing.T) {
kbClient: fakeClient,
defaultBackupLocation: defaultBackupLocation.Name,
defaultVolumesToFsBackup: test.defaultVolumesToFsBackup,
defaultSnapshotMoveData: test.defaultSnapshotMoveData,
backupTracker: NewBackupTracker(),
metrics: metrics.NewServerMetrics(),
clock: testclocks.NewFakeClock(now),


@@ -570,7 +570,7 @@ func (r *backupDeletionReconciler) patchDeleteBackupRequest(ctx context.Context,
}
func (r *backupDeletionReconciler) patchBackup(ctx context.Context, backup *velerov1api.Backup, mutate func(*velerov1api.Backup)) (*velerov1api.Backup, error) {
//TODO: The patchHelper can't be used here because the `backup/xxx/status` does not exist, until the bakcup resource is refactored
//TODO: The patchHelper can't be used here because the `backup/xxx/status` does not exist, until the backup resource is refactored
// Record original json
oldData, err := json.Marshal(backup)


@@ -123,9 +123,9 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
// Add finalizer
// Logic for clear resources when datadownload been deleted
if dd.DeletionTimestamp.IsZero() { // add finalizer for all cr at beginning
if !isDataDownloadInFinalState(dd) && !controllerutil.ContainsFinalizer(dd, dataUploadDownloadFinalizer) {
if !isDataDownloadInFinalState(dd) && !controllerutil.ContainsFinalizer(dd, DataUploadDownloadFinalizer) {
succeeded, err := r.exclusiveUpdateDataDownload(ctx, dd, func(dd *velerov2alpha1api.DataDownload) {
controllerutil.AddFinalizer(dd, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(dd, DataUploadDownloadFinalizer)
})
if err != nil {
log.Errorf("failed to add finalizer with error %s for %s/%s", err.Error(), dd.Namespace, dd.Name)
@@ -135,7 +135,7 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
return ctrl.Result{Requeue: true}, nil
}
}
} else if controllerutil.ContainsFinalizer(dd, dataUploadDownloadFinalizer) && !dd.Spec.Cancel && !isDataDownloadInFinalState(dd) {
} else if controllerutil.ContainsFinalizer(dd, DataUploadDownloadFinalizer) && !dd.Spec.Cancel && !isDataDownloadInFinalState(dd) {
// when delete cr we need to clear up internal resources created by Velero, here we use the cancel mechanism
// to help clear up resources instead of clear them directly in case of some conflict with Expose action
if err := UpdateDataDownloadWithRetry(ctx, r.client, req.NamespacedName, log, func(dataDownload *velerov2alpha1api.DataDownload) {
@@ -309,9 +309,11 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
} else {
// put the finilizer remove action here for all cr will goes to the final status, we could check finalizer and do remove action in final status
// instead of intermediate state
if isDataDownloadInFinalState(dd) && !dd.DeletionTimestamp.IsZero() && controllerutil.ContainsFinalizer(dd, dataUploadDownloadFinalizer) {
// remove finalizer no matter whether the cr is being deleted or not for it is no longer needed when internal resources are all cleaned up
// also in final status cr won't block the direct delete of the velero namespace
if isDataDownloadInFinalState(dd) && controllerutil.ContainsFinalizer(dd, DataUploadDownloadFinalizer) {
original := dd.DeepCopy()
controllerutil.RemoveFinalizer(dd, dataUploadDownloadFinalizer)
controllerutil.RemoveFinalizer(dd, DataUploadDownloadFinalizer)
if err := r.client.Patch(ctx, dd, client.MergeFrom(original)); err != nil {
log.WithError(err).Error("error to remove finalizer")
}


@@ -299,7 +299,7 @@ func TestDataDownloadReconcile(t *testing.T) {
name: "dataDownload with enabled cancel",
dd: func() *velerov2alpha1api.DataDownload {
dd := dataDownloadBuilder().Phase(velerov2alpha1api.DataDownloadPhaseAccepted).Result()
controllerutil.AddFinalizer(dd, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(dd, DataUploadDownloadFinalizer)
dd.DeletionTimestamp = &metav1.Time{Time: time.Now()}
return dd
}(),
@@ -312,12 +312,12 @@ func TestDataDownloadReconcile(t *testing.T) {
name: "dataDownload with remove finalizer and should not be retrieved",
dd: func() *velerov2alpha1api.DataDownload {
dd := dataDownloadBuilder().Phase(velerov2alpha1api.DataDownloadPhaseFailed).Cancel(true).Result()
controllerutil.AddFinalizer(dd, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(dd, DataUploadDownloadFinalizer)
dd.DeletionTimestamp = &metav1.Time{Time: time.Now()}
return dd
}(),
checkFunc: func(dd velerov2alpha1api.DataDownload) bool {
return !controllerutil.ContainsFinalizer(&dd, dataUploadDownloadFinalizer)
return !controllerutil.ContainsFinalizer(&dd, DataUploadDownloadFinalizer)
},
},
}
@@ -428,7 +428,7 @@ func TestDataDownloadReconcile(t *testing.T) {
assert.Contains(t, dd.Status.Message, test.expectedStatusMsg)
}
if test.dd.Namespace == velerov1api.DefaultNamespace {
if controllerutil.ContainsFinalizer(test.dd, dataUploadDownloadFinalizer) {
if controllerutil.ContainsFinalizer(test.dd, DataUploadDownloadFinalizer) {
assert.True(t, true, apierrors.IsNotFound(err))
} else {
require.Nil(t, err)


@@ -58,7 +58,7 @@ import (
const (
dataUploadDownloadRequestor = "snapshot-data-upload-download"
acceptNodeLabelKey = "velero.io/accepted-by"
dataUploadDownloadFinalizer = "velero.io/data-upload-download-finalizer"
DataUploadDownloadFinalizer = "velero.io/data-upload-download-finalizer"
preparingMonitorFrequency = time.Minute
)
@@ -132,9 +132,9 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
// Logic for clear resources when dataupload been deleted
if du.DeletionTimestamp.IsZero() { // add finalizer for all cr at beginning
if !isDataUploadInFinalState(du) && !controllerutil.ContainsFinalizer(du, dataUploadDownloadFinalizer) {
if !isDataUploadInFinalState(du) && !controllerutil.ContainsFinalizer(du, DataUploadDownloadFinalizer) {
succeeded, err := r.exclusiveUpdateDataUpload(ctx, du, func(du *velerov2alpha1api.DataUpload) {
controllerutil.AddFinalizer(du, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(du, DataUploadDownloadFinalizer)
})
if err != nil {
@@ -145,7 +145,7 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return ctrl.Result{Requeue: true}, nil
}
}
} else if controllerutil.ContainsFinalizer(du, dataUploadDownloadFinalizer) && !du.Spec.Cancel && !isDataUploadInFinalState(du) {
} else if controllerutil.ContainsFinalizer(du, DataUploadDownloadFinalizer) && !du.Spec.Cancel && !isDataUploadInFinalState(du) {
// when delete cr we need to clear up internal resources created by Velero, here we use the cancel mechanism
// to help clear up resources instead of clear them directly in case of some conflict with Expose action
if err := UpdateDataUploadWithRetry(ctx, r.client, req.NamespacedName, log, func(dataUpload *velerov2alpha1api.DataUpload) {
@@ -177,7 +177,10 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return ctrl.Result{}, nil
}
exposeParam := r.setupExposeParam(du)
exposeParam, err := r.setupExposeParam(du)
if err != nil {
return r.errorOut(ctx, du, err, "failed to set exposer parameters", log)
}
// Expose() will trigger to create one pod whose volume is restored by a given volume snapshot,
// but the pod maybe is not in the same node of the current controller, so we need to return it here.
@@ -308,10 +311,12 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return ctrl.Result{}, nil
} else {
// put the finilizer remove action here for all cr will goes to the final status, we could check finalizer and do remove action in final status
// instead of intermediate state
if isDataUploadInFinalState(du) && !du.DeletionTimestamp.IsZero() && controllerutil.ContainsFinalizer(du, dataUploadDownloadFinalizer) {
// instead of intermediate state.
// remove finalizer no matter whether the cr is being deleted or not for it is no longer needed when internal resources are all cleaned up
// also in final status cr won't block the direct delete of the velero namespace
if isDataUploadInFinalState(du) && controllerutil.ContainsFinalizer(du, DataUploadDownloadFinalizer) {
original := du.DeepCopy()
controllerutil.RemoveFinalizer(du, dataUploadDownloadFinalizer)
controllerutil.RemoveFinalizer(du, DataUploadDownloadFinalizer)
if err := r.client.Patch(ctx, du, client.MergeFrom(original)); err != nil {
log.WithError(err).Error("error to remove finalizer")
}
@@ -733,18 +738,33 @@ func (r *DataUploadReconciler) closeDataPath(ctx context.Context, duName string)
r.dataPathMgr.RemoveAsyncBR(duName)
}
func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload) interface{} {
func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload) (interface{}, error) {
if du.Spec.SnapshotType == velerov2alpha1api.SnapshotTypeCSI {
pvc := &corev1.PersistentVolumeClaim{}
err := r.client.Get(context.Background(), types.NamespacedName{
Namespace: du.Spec.SourceNamespace,
Name: du.Spec.SourcePVC,
}, pvc)
if err != nil {
return nil, errors.Wrapf(err, "failed to get PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)
}
accessMode := exposer.AccessModeFileSystem
if pvc.Spec.VolumeMode != nil && *pvc.Spec.VolumeMode == corev1.PersistentVolumeBlock {
accessMode = exposer.AccessModeBlock
}
return &exposer.CSISnapshotExposeParam{
SnapshotName: du.Spec.CSISnapshot.VolumeSnapshot,
SourceNamespace: du.Spec.SourceNamespace,
StorageClass: du.Spec.CSISnapshot.StorageClass,
HostingPodLabels: map[string]string{velerov1api.DataUploadLabel: du.Name},
AccessMode: exposer.AccessModeFileSystem,
AccessMode: accessMode,
Timeout: du.Spec.OperationTimeout.Duration,
}
}, nil
}
return nil
return nil, nil
}
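The new `setupExposeParam` fetches the source PVC and chooses the exposer access mode from its `spec.volumeMode`. The decision can be sketched with simplified stand-ins for the Kubernetes and exposer types (the access-mode constant values here are illustrative, not the exact strings used by the exposer package):

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the exposer access-mode constants.
const (
	AccessModeFileSystem = "by-file-system"
	AccessModeBlock      = "by-block-device"
)

// pvc is a simplified stand-in for corev1.PersistentVolumeClaim.
type pvc struct {
	volumeMode string // "Filesystem" or "Block"; empty defaults to Filesystem
}

// accessModeForPVC mirrors the patched logic: default to filesystem access,
// switch to block access only when the source PVC explicitly declares
// volumeMode: Block, and fail early when the PVC cannot be fetched.
func accessModeForPVC(p *pvc) (string, error) {
	if p == nil {
		return "", errors.New("failed to get PVC")
	}
	if p.volumeMode == "Block" {
		return AccessModeBlock, nil
	}
	return AccessModeFileSystem, nil
}

func main() {
	mode, err := accessModeForPVC(&pvc{volumeMode: "Block"})
	fmt.Println(mode, err)
}
```

Failing the DataUpload when the PVC lookup errors (rather than silently assuming filesystem mode) is what the new "should fail to get PVC information" test case below exercises.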
func (r *DataUploadReconciler) setupWaitExposePara(du *velerov2alpha1api.DataUpload) interface{} {


@@ -306,6 +306,7 @@ func TestReconcile(t *testing.T) {
name string
du *velerov2alpha1api.DataUpload
pod *corev1.Pod
pvc *corev1.PersistentVolumeClaim
snapshotExposerList map[velerov2alpha1api.SnapshotType]exposer.SnapshotExposer
dataMgr *datapath.Manager
expectedProcessed bool
@@ -345,11 +346,21 @@ func TestReconcile(t *testing.T) {
}, {
name: "Dataupload should be accepted",
du: dataUploadBuilder().Result(),
pod: builder.ForPod(velerov1api.DefaultNamespace, dataUploadName).Volumes(&corev1.Volume{Name: "dataupload-1"}).Result(),
pod: builder.ForPod("fake-ns", dataUploadName).Volumes(&corev1.Volume{Name: "test-pvc"}).Result(),
pvc: builder.ForPersistentVolumeClaim("fake-ns", "test-pvc").Result(),
expectedProcessed: false,
expected: dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhaseAccepted).Result(),
expectedRequeue: ctrl.Result{},
},
{
name: "Dataupload should fail to get PVC information",
du: dataUploadBuilder().Result(),
pod: builder.ForPod("fake-ns", dataUploadName).Volumes(&corev1.Volume{Name: "wrong-pvc"}).Result(),
expectedProcessed: true,
expected: dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhaseFailed).Result(),
expectedRequeue: ctrl.Result{},
expectedErrMsg: "failed to get PVC",
},
{
name: "Dataupload should be prepared",
du: dataUploadBuilder().SnapshotType(fakeSnapshotType).Result(),
@@ -399,7 +410,7 @@ func TestReconcile(t *testing.T) {
pod: builder.ForPod(velerov1api.DefaultNamespace, dataUploadName).Volumes(&corev1.Volume{Name: "dataupload-1"}).Result(),
du: func() *velerov2alpha1api.DataUpload {
du := dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhaseAccepted).SnapshotType(fakeSnapshotType).Result()
controllerutil.AddFinalizer(du, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(du, DataUploadDownloadFinalizer)
du.DeletionTimestamp = &metav1.Time{Time: time.Now()}
return du
}(),
@@ -415,13 +426,13 @@ func TestReconcile(t *testing.T) {
pod: builder.ForPod(velerov1api.DefaultNamespace, dataUploadName).Volumes(&corev1.Volume{Name: "dataupload-1"}).Result(),
du: func() *velerov2alpha1api.DataUpload {
du := dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhaseFailed).SnapshotType(fakeSnapshotType).Cancel(true).Result()
controllerutil.AddFinalizer(du, dataUploadDownloadFinalizer)
controllerutil.AddFinalizer(du, DataUploadDownloadFinalizer)
du.DeletionTimestamp = &metav1.Time{Time: time.Now()}
return du
}(),
expectedProcessed: false,
checkFunc: func(du velerov2alpha1api.DataUpload) bool {
return !controllerutil.ContainsFinalizer(&du, dataUploadDownloadFinalizer)
return !controllerutil.ContainsFinalizer(&du, DataUploadDownloadFinalizer)
},
expectedRequeue: ctrl.Result{},
},
@@ -448,6 +459,11 @@ func TestReconcile(t *testing.T) {
require.NoError(t, err)
}
if test.pvc != nil {
err = r.client.Create(ctx, test.pvc)
require.NoError(t, err)
}
if test.dataMgr != nil {
r.dataPathMgr = test.dataMgr
} else {


@@ -32,6 +32,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
pkgbackup "github.com/vmware-tanzu/velero/pkg/backup"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
@@ -187,7 +188,7 @@ func (c *gcReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
log.Info("Creating a new deletion request")
ndbr := pkgbackup.NewDeleteBackupRequest(backup.Name, string(backup.UID))
ndbr.SetNamespace(backup.Namespace)
if err := c.Create(ctx, ndbr); err != nil {
if err := veleroclient.CreateRetryGenerateName(c, ctx, ndbr); err != nil {
log.WithError(err).Error("error creating DeleteBackupRequests")
return ctrl.Result{}, errors.Wrap(err, "error creating DeleteBackupRequest")
}


@@ -101,8 +101,6 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
)
}
log.Info("PodVolumeBackup starting")
// Only process items for this node.
if pvb.Spec.Node != r.nodeName {
return ctrl.Result{}, nil
@@ -116,6 +114,8 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
return ctrl.Result{}, nil
}
log.Info("PodVolumeBackup starting")
callbacks := datapath.Callbacks{
OnCompleted: r.OnDataPathCompleted,
OnFailed: r.OnDataPathFailed,


@@ -101,6 +101,7 @@ type restoreReconciler struct {
logFormat logging.Format
clock clock.WithTickerAndDelayedExecution
defaultItemOperationTimeout time.Duration
disableInformerCache bool
newPluginManager func(logger logrus.FieldLogger) clientmgmt.Manager
backupStoreGetter persistence.ObjectBackupStoreGetter
@@ -123,6 +124,7 @@ func NewRestoreReconciler(
metrics *metrics.ServerMetrics,
logFormat logging.Format,
defaultItemOperationTimeout time.Duration,
disableInformerCache bool,
) *restoreReconciler {
r := &restoreReconciler{
ctx: ctx,
@@ -135,6 +137,7 @@ func NewRestoreReconciler(
logFormat: logFormat,
clock: &clock.RealClock{},
defaultItemOperationTimeout: defaultItemOperationTimeout,
disableInformerCache: disableInformerCache,
// use variables to refer to these functions so they can be
// replaced with fakes for testing.
@@ -519,13 +522,14 @@ func (r *restoreReconciler) runValidatedRestore(restore *api.Restore, info backu
}
restoreReq := &pkgrestore.Request{
Log: restoreLog,
Restore: restore,
Backup: info.backup,
PodVolumeBackups: podVolumeBackups,
VolumeSnapshots: volumeSnapshots,
BackupReader: backupFile,
ResourceModifiers: resourceModifiers,
Log: restoreLog,
Restore: restore,
Backup: info.backup,
PodVolumeBackups: podVolumeBackups,
VolumeSnapshots: volumeSnapshots,
BackupReader: backupFile,
ResourceModifiers: resourceModifiers,
DisableInformerCache: r.disableInformerCache,
}
restoreWarnings, restoreErrors := r.restorer.RestoreWithResolvers(restoreReq, actionsResolver, pluginManager)
@@ -671,14 +675,13 @@ func (r *restoreReconciler) deleteExternalResources(restore *api.Restore) error
backupInfo, err := r.fetchBackupInfo(restore.Spec.BackupName)
if err != nil {
if apierrors.IsNotFound(err) {
r.logger.Errorf("got not found error: %v, skip deleting the restore files in object storage", err)
return nil
}
return errors.Wrap(err, fmt.Sprintf("can't get backup info, backup: %s", restore.Spec.BackupName))
}
// if storage locations is read-only, skip deletion
if backupInfo.location.Spec.AccessMode == api.BackupStorageLocationAccessModeReadOnly {
return nil
}
// delete restore files in object storage
pluginManager := r.newPluginManager(r.logger)
defer pluginManager.CleanupClients()


@@ -114,6 +114,7 @@ func TestFetchBackupInfo(t *testing.T) {
metrics.NewServerMetrics(),
formatFlag,
60*time.Minute,
false,
)
if test.backupStoreError == nil {
@@ -191,6 +192,7 @@ func TestProcessQueueItemSkips(t *testing.T) {
metrics.NewServerMetrics(),
formatFlag,
60*time.Minute,
false,
)
_, err := r.Reconcile(context.Background(), ctrl.Request{NamespacedName: types.NamespacedName{
@@ -445,6 +447,7 @@ func TestRestoreReconcile(t *testing.T) {
metrics.NewServerMetrics(),
formatFlag,
60*time.Minute,
false,
)
r.clock = clocktesting.NewFakeClock(now)
@@ -616,6 +619,7 @@ func TestValidateAndCompleteWhenScheduleNameSpecified(t *testing.T) {
metrics.NewServerMetrics(),
formatFlag,
60*time.Minute,
false,
)
restore := &velerov1api.Restore{
@@ -708,6 +712,7 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
metrics.NewServerMetrics(),
formatFlag,
60*time.Minute,
false,
)
restore := &velerov1api.Restore{
@@ -760,7 +765,7 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
Namespace: velerov1api.DefaultNamespace,
},
Data: map[string]string{
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupKind: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupResource: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
},
}
require.NoError(t, r.kbClient.Create(context.Background(), cm1))
@@ -788,7 +793,7 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
Namespace: velerov1api.DefaultNamespace,
},
Data: map[string]string{
"sub.yml": "version1: v1\nresourceModifierRules:\n- conditions:\n groupKind: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
"sub.yml": "version1: v1\nresourceModifierRules:\n- conditions:\n groupResource: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: replace\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
},
}
require.NoError(t, r.kbClient.Create(context.Background(), invalidVersionCm))
@@ -816,7 +821,7 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
Namespace: velerov1api.DefaultNamespace,
},
Data: map[string]string{
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupKind: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: invalid\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n groupResource: persistentvolumeclaims\n resourceNameRegex: \".*\"\n namespaces:\n - bar\n - foo\n patches:\n - operation: invalid\n path: \"/spec/storageClassName\"\n value: \"premium\"\n - operation: remove\n path: \"/metadata/labels/test\"\n\n\n",
},
}
require.NoError(t, r.kbClient.Create(context.Background(), invalidOperatorCm))


@@ -139,18 +139,6 @@ func (r *restoreOperationsReconciler) Reconcile(ctx context.Context, req ctrl.Re
return ctrl.Result{}, errors.Wrap(err, "error getting backup info")
}
if info.location.Spec.AccessMode == velerov1api.BackupStorageLocationAccessModeReadOnly {
log.Infof("Cannot check progress on Restore operations because backup storage location %s is currently in read-only mode; marking restore PartiallyFailed", info.location.Name)
restore.Status.Phase = velerov1api.RestorePhasePartiallyFailed
restore.Status.CompletionTimestamp = &metav1.Time{Time: r.clock.Now()}
r.metrics.RegisterRestorePartialFailure(restore.Spec.ScheduleName)
err := r.updateRestoreAndOperationsJSON(ctx, original, restore, nil, &itemoperationmap.OperationsForRestore{ErrsSinceUpdate: []string{"BSL is read-only"}}, false, false)
if err != nil {
log.WithError(err).Error("error updating Restore")
}
return ctrl.Result{}, nil
}
pluginManager := r.newPluginManager(r.logger)
defer pluginManager.CleanupClients()
backupStore, err := r.backupStoreGetter.Get(info.location, pluginManager, r.logger)


@@ -133,10 +133,10 @@ func (fs *fileSystemBR) StartBackup(source AccessPoint, realSource string, paren
if !fs.initialized {
return errors.New("file system data path is not initialized")
}
volMode := getPersistentVolumeMode(source)
go func() {
snapshotID, emptySnapshot, err := fs.uploaderProv.RunBackup(fs.ctx, source.ByPath, realSource, tags, forceFull, parentSnapshot, volMode, fs)
snapshotID, emptySnapshot, err := fs.uploaderProv.RunBackup(fs.ctx, source.ByPath, realSource, tags, forceFull,
parentSnapshot, source.VolMode, fs)
if err == provider.ErrorCanceled {
fs.callbacks.OnCancelled(context.Background(), fs.namespace, fs.jobName)
@@ -155,10 +155,8 @@ func (fs *fileSystemBR) StartRestore(snapshotID string, target AccessPoint) erro
return errors.New("file system data path is not initialized")
}
volMode := getPersistentVolumeMode(target)
go func() {
err := fs.uploaderProv.RunRestore(fs.ctx, snapshotID, target.ByPath, volMode, fs)
err := fs.uploaderProv.RunRestore(fs.ctx, snapshotID, target.ByPath, target.VolMode, fs)
if err == provider.ErrorCanceled {
fs.callbacks.OnCancelled(context.Background(), fs.namespace, fs.jobName)
@@ -172,13 +170,6 @@ func (fs *fileSystemBR) StartRestore(snapshotID string, target AccessPoint) erro
return nil
}
func getPersistentVolumeMode(source AccessPoint) uploader.PersistentVolumeMode {
if source.ByBlock != "" {
return uploader.PersistentVolumeBlock
}
return uploader.PersistentVolumeFilesystem
}
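The removed `getPersistentVolumeMode` helper inferred the volume mode from which path field happened to be populated; after this change the producer of the `AccessPoint` states `VolMode` explicitly, so the data path no longer guesses. A simplified before/after sketch of that refactor, with local stand-ins for the uploader types:

```go
package main

import "fmt"

type PersistentVolumeMode string

const (
	PersistentVolumeFilesystem PersistentVolumeMode = "Filesystem"
	PersistentVolumeBlock      PersistentVolumeMode = "Block"
)

// accessPointOld sketches the old shape: mode was inferred from whether the
// block-device path was set.
type accessPointOld struct {
	ByPath  string
	ByBlock string
}

func inferMode(a accessPointOld) PersistentVolumeMode {
	if a.ByBlock != "" {
		return PersistentVolumeBlock
	}
	return PersistentVolumeFilesystem
}

// AccessPoint sketches the new shape from the diff: the mode is carried
// explicitly alongside the path.
type AccessPoint struct {
	ByPath  string
	VolMode PersistentVolumeMode
}

func main() {
	fmt.Println(inferMode(accessPointOld{ByBlock: "/dev/xvda"}))
	fmt.Println(AccessPoint{ByPath: "/host_pods/x", VolMode: PersistentVolumeFilesystem}.VolMode)
}
```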
// UpdateProgress which implement ProgressUpdater interface to update progress status
func (fs *fileSystemBR) UpdateProgress(p *uploader.Progress) {
if fs.callbacks.OnProgress != nil {


@@ -53,7 +53,7 @@ type Callbacks struct {
// AccessPoint represents an access point that has been exposed to a data path instance
type AccessPoint struct {
ByPath string
ByBlock string
VolMode uploader.PersistentVolumeMode
}
// AsyncBR is the interface for asynchronous data path methods


@@ -18,6 +18,7 @@ package discovery
import (
"sort"
"strings"
"sync"
"github.com/pkg/errors"
@@ -170,7 +171,7 @@ func (h *helper) Refresh() error {
}
h.resources = discovery.FilteredBy(
discovery.ResourcePredicateFunc(filterByVerbs),
And(filterByVerbs, skipSubresource),
serverResources,
)
@@ -240,10 +241,34 @@ func refreshServerGroupsAndResources(discoveryClient serverResourcesInterface, l
return serverGroups, serverResources, err
}
// And returns a composite predicate that implements a logical AND of the predicates passed to it.
func And(predicates ...discovery.ResourcePredicateFunc) discovery.ResourcePredicate {
return and{predicates}
}
type and struct {
predicates []discovery.ResourcePredicateFunc
}
func (a and) Match(groupVersion string, r *metav1.APIResource) bool {
for _, p := range a.predicates {
if !p(groupVersion, r) {
return false
}
}
return true
}
func filterByVerbs(groupVersion string, r *metav1.APIResource) bool {
return discovery.SupportsAllVerbs{Verbs: []string{"list", "create", "get", "delete"}}.Match(groupVersion, r)
}
func skipSubresource(_ string, r *metav1.APIResource) bool {
// if we have a slash, then this is a subresource and we shouldn't include it.
return !strings.Contains(r.Name, "/")
}
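The `And` composite added here lets the discovery helper chain the existing verb filter with the new subresource filter. The composition can be sketched with a local predicate type standing in for client-go's `discovery.ResourcePredicateFunc` (whose real signature takes a `*metav1.APIResource`):

```go
package main

import (
	"fmt"
	"strings"
)

// predicate is a simplified stand-in for discovery.ResourcePredicateFunc.
type predicate func(groupVersion, name string) bool

// and returns a predicate that matches only when every child predicate does,
// mirroring the And composite in the diff.
func and(preds ...predicate) predicate {
	return func(gv, name string) bool {
		for _, p := range preds {
			if !p(gv, name) {
				return false
			}
		}
		return true
	}
}

// supportsVerbs is a placeholder for the list/create/get/delete verb check.
func supportsVerbs(gv, name string) bool { return true }

// skipSubresource drops entries like "pods/status": a slash marks a
// subresource, which should not be treated as a resource of its own.
func skipSubresource(gv, name string) bool { return !strings.Contains(name, "/") }

func main() {
	keep := and(supportsVerbs, skipSubresource)
	fmt.Println(keep("v1", "pods"), keep("v1", "pods/status"))
}
```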
// sortResources sources resources by moving extensions to the end of the slice. The order of all
// the other resources is preserved.
func sortResources(resources []*metav1.APIResourceList) {


@@ -233,9 +233,12 @@ func (e *csiSnapshotExposer) CleanUp(ctx context.Context, ownerObject corev1.Obj
}
func getVolumeModeByAccessMode(accessMode string) (corev1.PersistentVolumeMode, error) {
if accessMode == AccessModeFileSystem {
switch accessMode {
case AccessModeFileSystem:
return corev1.PersistentVolumeFilesystem, nil
} else {
case AccessModeBlock:
return corev1.PersistentVolumeBlock, nil
default:
return "", errors.Errorf("unsupported access mode %s", accessMode)
}
}
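Rewriting the if/else as a switch changes the behavior for unknown values: anything other than the two supported access modes is now rejected instead of silently falling through to block mode. A stand-alone sketch of that mapping (the constant values here are illustrative, not the exact exposer strings):

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the exposer access-mode constants.
const (
	AccessModeFileSystem = "by-file-system"
	AccessModeBlock      = "by-block-device"
)

// volumeModeForAccessMode mirrors the rewritten helper: each supported
// access mode maps to exactly one volume mode, and anything else is an
// error rather than a silent default.
func volumeModeForAccessMode(accessMode string) (string, error) {
	switch accessMode {
	case AccessModeFileSystem:
		return "Filesystem", nil
	case AccessModeBlock:
		return "Block", nil
	default:
		return "", errors.New("unsupported access mode " + accessMode)
	}
}

func main() {
	m, err := volumeModeForAccessMode(AccessModeBlock)
	fmt.Println(m, err)
}
```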
@@ -246,8 +249,9 @@ func (e *csiSnapshotExposer) createBackupVS(ctx context.Context, ownerObject cor
vs := &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: backupVSName,
Namespace: ownerObject.Namespace,
Name: backupVSName,
Namespace: ownerObject.Namespace,
Annotations: snapshotVS.Annotations,
// Don't add ownerReference to SnapshotBackup.
// The backupPVC should be deleted before backupVS, otherwise, the deletion of backupVS will fail since
// backupPVC has its dataSource referring to it
@@ -268,7 +272,8 @@ func (e *csiSnapshotExposer) createBackupVSC(ctx context.Context, ownerObject co
vsc := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: backupVSCName,
Name: backupVSCName,
Annotations: snapshotVSC.Annotations,
},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
VolumeSnapshotRef: corev1.ObjectReference{
@@ -280,7 +285,7 @@ func (e *csiSnapshotExposer) createBackupVSC(ctx context.Context, ownerObject co
Source: snapshotv1api.VolumeSnapshotContentSource{
SnapshotHandle: snapshotVSC.Status.SnapshotHandle,
},
DeletionPolicy: snapshotVSC.Spec.DeletionPolicy,
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: snapshotVSC.Spec.Driver,
VolumeSnapshotClassName: snapshotVSC.Spec.VolumeSnapshotClassName,
},
@@ -354,6 +359,13 @@ func (e *csiSnapshotExposer) createBackupPod(ctx context.Context, ownerObject co
}
var gracePeriod int64 = 0
volumeMounts, volumeDevices := kube.MakePodPVCAttachment(volumeName, backupPVC.Spec.VolumeMode)
if label == nil {
label = make(map[string]string)
}
label[podGroupLabel] = podGroupSnapshot
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
@@ -371,16 +383,26 @@ func (e *csiSnapshotExposer) createBackupPod(ctx context.Context, ownerObject co
Labels: label,
},
Spec: corev1.PodSpec{
TopologySpreadConstraints: []corev1.TopologySpreadConstraint{
{
MaxSkew: 1,
TopologyKey: "kubernetes.io/hostname",
WhenUnsatisfiable: corev1.ScheduleAnyway,
LabelSelector: &metav1.LabelSelector{
MatchLabels: map[string]string{
podGroupLabel: podGroupSnapshot,
},
},
},
},
Containers: []corev1.Container{
{
Name: containerName,
Image: podInfo.image,
ImagePullPolicy: corev1.PullNever,
Command: []string{"/velero-helper", "pause"},
VolumeMounts: []corev1.VolumeMount{{
Name: volumeName,
MountPath: "/" + volumeName,
}},
VolumeMounts: volumeMounts,
VolumeDevices: volumeDevices,
},
},
ServiceAccountName: podInfo.serviceAccount,


@@ -58,10 +58,22 @@ func TestExpose(t *testing.T) {
UID: "fake-uid",
},
}
snapshotClass := "fake-snapshot-class"
vsObject := &snapshotv1api.VolumeSnapshot{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-vs",
Namespace: "fake-ns",
Annotations: map[string]string{
"fake-key-1": "fake-value-1",
"fake-key-2": "fake-value-2",
},
},
Spec: snapshotv1api.VolumeSnapshotSpec{
Source: snapshotv1api.VolumeSnapshotSource{
VolumeSnapshotContentName: &vscName,
},
VolumeSnapshotClassName: &snapshotClass,
},
Status: &snapshotv1api.VolumeSnapshotStatus{
BoundVolumeSnapshotContentName: &vscName,
@@ -71,15 +83,23 @@ func TestExpose(t *testing.T) {
}
var restoreSize int64
snapshotHandle := "fake-handle"
vscObj := &snapshotv1api.VolumeSnapshotContent{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-vsc",
Name: vscName,
Annotations: map[string]string{
"fake-key-3": "fake-value-3",
"fake-key-4": "fake-value-4",
},
},
Spec: snapshotv1api.VolumeSnapshotContentSpec{
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
DeletionPolicy: snapshotv1api.VolumeSnapshotContentDelete,
Driver: "fake-driver",
VolumeSnapshotClassName: &snapshotClass,
},
Status: &snapshotv1api.VolumeSnapshotContentStatus{
RestoreSize: &restoreSize,
RestoreSize: &restoreSize,
SnapshotHandle: &snapshotHandle,
},
}
@@ -284,6 +304,23 @@ func TestExpose(t *testing.T) {
},
err: "error to create backup pod: fake-create-error",
},
{
name: "success",
ownerBackup: backup,
exposeParam: CSISnapshotExposeParam{
SnapshotName: "fake-vs",
SourceNamespace: "fake-ns",
AccessMode: AccessModeFileSystem,
Timeout: time.Millisecond,
},
snapshotClientObj: []runtime.Object{
vsObject,
vscObj,
},
kubeClientObj: []runtime.Object{
daemonSet,
},
},
}
for _, test := range tests {
@@ -317,7 +354,33 @@ func TestExpose(t *testing.T) {
}
err := exposer.Expose(context.Background(), ownerObject, &test.exposeParam)
assert.EqualError(t, err, test.err)
if err == nil {
assert.NoError(t, err)
_, err = exposer.kubeClient.CoreV1().Pods(ownerObject.Namespace).Get(context.Background(), ownerObject.Name, metav1.GetOptions{})
assert.NoError(t, err)
_, err = exposer.kubeClient.CoreV1().PersistentVolumeClaims(ownerObject.Namespace).Get(context.Background(), ownerObject.Name, metav1.GetOptions{})
assert.NoError(t, err)
expectedVS, err := exposer.csiSnapshotClient.VolumeSnapshots(ownerObject.Namespace).Get(context.Background(), ownerObject.Name, metav1.GetOptions{})
assert.NoError(t, err)
expectedVSC, err := exposer.csiSnapshotClient.VolumeSnapshotContents().Get(context.Background(), ownerObject.Name, metav1.GetOptions{})
assert.NoError(t, err)
assert.Equal(t, expectedVS.Annotations, vsObject.Annotations)
assert.Equal(t, *expectedVS.Spec.VolumeSnapshotClassName, *vsObject.Spec.VolumeSnapshotClassName)
assert.Equal(t, *expectedVS.Spec.Source.VolumeSnapshotContentName, expectedVSC.Name)
assert.Equal(t, expectedVSC.Annotations, vscObj.Annotations)
assert.Equal(t, expectedVSC.Spec.DeletionPolicy, vscObj.Spec.DeletionPolicy)
assert.Equal(t, expectedVSC.Spec.Driver, vscObj.Spec.Driver)
assert.Equal(t, *expectedVSC.Spec.VolumeSnapshotClassName, *vscObj.Spec.VolumeSnapshotClassName)
} else {
assert.EqualError(t, err, test.err)
}
})
}
}


@@ -82,7 +82,7 @@ func (e *genericRestoreExposer) Expose(ctx context.Context, ownerObject corev1.O
return errors.Errorf("Target PVC %s/%s has already been bound, abort", sourceNamespace, targetPVCName)
}
restorePod, err := e.createRestorePod(ctx, ownerObject, hostingPodLabels, selectedNode)
restorePod, err := e.createRestorePod(ctx, ownerObject, targetPVC, hostingPodLabels, selectedNode)
if err != nil {
return errors.Wrapf(err, "error to create restore pod")
}
@@ -221,7 +221,7 @@ func (e *genericRestoreExposer) RebindVolume(ctx context.Context, ownerObject co
}
restorePVName := restorePV.Name
restorePV, err = kube.ResetPVBinding(ctx, e.kubeClient.CoreV1(), restorePV, matchLabel)
restorePV, err = kube.ResetPVBinding(ctx, e.kubeClient.CoreV1(), restorePV, matchLabel, targetPVC)
if err != nil {
return errors.Wrapf(err, "error to reset binding info for restore PV %s", restorePVName)
}
@@ -247,7 +247,8 @@ func (e *genericRestoreExposer) RebindVolume(ctx context.Context, ownerObject co
return nil
}
func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObject corev1.ObjectReference, label map[string]string, selectedNode string) (*corev1.Pod, error) {
func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObject corev1.ObjectReference, targetPVC *corev1.PersistentVolumeClaim,
label map[string]string, selectedNode string) (*corev1.Pod, error) {
restorePodName := ownerObject.Name
restorePVCName := ownerObject.Name
@@ -260,6 +261,7 @@ func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObjec
}
var gracePeriod int64 = 0
volumeMounts, volumeDevices := kube.MakePodPVCAttachment(volumeName, targetPVC.Spec.VolumeMode)
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
@@ -283,10 +285,8 @@ func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObjec
Image: podInfo.image,
ImagePullPolicy: corev1.PullNever,
Command: []string{"/velero-helper", "pause"},
VolumeMounts: []corev1.VolumeMount{{
Name: volumeName,
MountPath: "/" + volumeName,
}},
VolumeMounts: volumeMounts,
VolumeDevices: volumeDevices,
},
},
ServiceAccountName: podInfo.serviceAccount,


@@ -26,11 +26,13 @@ import (
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/vmware-tanzu/velero/pkg/datapath"
"github.com/vmware-tanzu/velero/pkg/uploader"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
var getVolumeDirectory = kube.GetVolumeDirectory
var getVolumeMode = kube.GetVolumeMode
var singlePathMatch = kube.SinglePathMatch
// GetPodVolumeHostPath returns a path that can be accessed from the host for a given volume of a pod
@@ -45,7 +47,17 @@ func GetPodVolumeHostPath(ctx context.Context, pod *corev1.Pod, volumeName strin
logger.WithField("volDir", volDir).Info("Got volume dir")
pathGlob := fmt.Sprintf("/host_pods/%s/volumes/*/%s", string(pod.GetUID()), volDir)
volMode, err := getVolumeMode(ctx, logger, pod, volumeName, cli)
if err != nil {
return datapath.AccessPoint{}, errors.Wrapf(err, "error getting volume mode for volume %s in pod %s", volumeName, pod.Name)
}
volSubDir := "volumes"
if volMode == uploader.PersistentVolumeBlock {
volSubDir = "volumeDevices"
}
pathGlob := fmt.Sprintf("/host_pods/%s/%s/*/%s", string(pod.GetUID()), volSubDir, volDir)
logger.WithField("pathGlob", pathGlob).Debug("Looking for path matching glob")
path, err := singlePathMatch(pathGlob, fs, logger)
@@ -56,6 +68,7 @@ func GetPodVolumeHostPath(ctx context.Context, pod *corev1.Pod, volumeName strin
logger.WithField("path", path).Info("Found path matching glob")
return datapath.AccessPoint{
ByPath: path,
ByPath: path,
VolMode: volMode,
}, nil
}
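The hunk above switches the host path glob between the kubelet's `volumes` and `volumeDevices` subdirectories depending on the volume mode. A small sketch of just that glob construction (the literal `"PersistentVolumeBlock"` stands in for the `uploader.PersistentVolumeBlock` constant):

```go
package main

import "fmt"

// hostPathGlob sketches the pathGlob logic in GetPodVolumeHostPath: for a
// block-mode volume the kubelet publishes the device under
// /host_pods/<uid>/volumeDevices/, while filesystem volumes live under
// /host_pods/<uid>/volumes/.
func hostPathGlob(podUID, volDir, volMode string) string {
	volSubDir := "volumes"
	if volMode == "PersistentVolumeBlock" {
		volSubDir = "volumeDevices"
	}
	return fmt.Sprintf("/host_pods/%s/%s/*/%s", podUID, volSubDir, volDir)
}
```

The resulting glob is then resolved to a single concrete path by `singlePathMatch`, exactly as in the filesystem-only code path before this change.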


@@ -29,17 +29,19 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/uploader"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
func TestGetPodVolumeHostPath(t *testing.T) {
tests := []struct {
name string
getVolumeDirFunc func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (string, error)
pathMatchFunc func(string, filesystem.Interface, logrus.FieldLogger) (string, error)
pod *corev1.Pod
pvc string
err string
name string
getVolumeDirFunc func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (string, error)
getVolumeModeFunc func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (uploader.PersistentVolumeMode, error)
pathMatchFunc func(string, filesystem.Interface, logrus.FieldLogger) (string, error)
pod *corev1.Pod
pvc string
err string
}{
{
name: "get volume dir fail",
@@ -55,6 +57,9 @@ func TestGetPodVolumeHostPath(t *testing.T) {
getVolumeDirFunc: func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (string, error) {
return "", nil
},
getVolumeModeFunc: func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (uploader.PersistentVolumeMode, error) {
return uploader.PersistentVolumeFilesystem, nil
},
pathMatchFunc: func(string, filesystem.Interface, logrus.FieldLogger) (string, error) {
return "", errors.New("fake-error-2")
},
@@ -62,6 +67,18 @@ func TestGetPodVolumeHostPath(t *testing.T) {
pvc: "fake-pvc-1",
err: "error identifying unique volume path on host for volume fake-pvc-1 in pod fake-pod-2: fake-error-2",
},
{
name: "get block volume dir success",
getVolumeDirFunc: func(context.Context, logrus.FieldLogger, *corev1.Pod, string, ctrlclient.Client) (
string, error) {
return "fake-pvc-1", nil
},
pathMatchFunc: func(string, filesystem.Interface, logrus.FieldLogger) (string, error) {
return "/host_pods/fake-pod-1-id/volumeDevices/kubernetes.io~csi/fake-pvc-1-id", nil
},
pod: builder.ForPod(velerov1api.DefaultNamespace, "fake-pod-1").Result(),
pvc: "fake-pvc-1",
},
}
for _, test := range tests {
@@ -70,12 +87,18 @@ func TestGetPodVolumeHostPath(t *testing.T) {
getVolumeDirectory = test.getVolumeDirFunc
}
if test.getVolumeModeFunc != nil {
getVolumeMode = test.getVolumeModeFunc
}
if test.pathMatchFunc != nil {
singlePathMatch = test.pathMatchFunc
}
_, err := GetPodVolumeHostPath(context.Background(), test.pod, test.pvc, nil, nil, velerotest.NewLogger())
assert.EqualError(t, err, test.err)
if test.err != "" || err != nil {
assert.EqualError(t, err, test.err)
}
})
}
}


@@ -22,6 +22,9 @@ import (
const (
AccessModeFileSystem = "by-file-system"
AccessModeBlock = "by-block-device"
podGroupLabel = "velero.io/exposer-pod-group"
podGroupSnapshot = "snapshot-exposer"
)
// ExposeResult defines the result of expose.


@@ -86,6 +86,14 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1.DaemonSet {
},
},
},
{
Name: "host-plugins",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/var/lib/kubelet/plugins",
},
},
},
{
Name: "scratch",
VolumeSource: corev1.VolumeSource{
@@ -102,13 +110,20 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1.DaemonSet {
"/velero",
},
Args: daemonSetArgs,
SecurityContext: &corev1.SecurityContext{
Privileged: &c.privilegedNodeAgent,
},
VolumeMounts: []corev1.VolumeMount{
{
Name: "host-pods",
MountPath: "/host_pods",
MountPropagation: &mountPropagationMode,
},
{
Name: "host-plugins",
MountPath: "/var/lib/kubelet/plugins",
MountPropagation: &mountPropagationMode,
},
{
Name: "scratch",
MountPath: "/scratch",


@@ -35,7 +35,7 @@ func TestDaemonSet(t *testing.T) {
ds = DaemonSet("velero", WithSecret(true))
assert.Equal(t, 7, len(ds.Spec.Template.Spec.Containers[0].Env))
assert.Equal(t, 3, len(ds.Spec.Template.Spec.Volumes))
assert.Equal(t, 4, len(ds.Spec.Template.Spec.Volumes))
ds = DaemonSet("velero", WithFeatures([]string{"foo,bar,baz"}))
assert.Len(t, ds.Spec.Template.Spec.Containers[0].Args, 3)


@@ -46,6 +46,9 @@ type podTemplateConfig struct {
defaultVolumesToFsBackup bool
serviceAccountName string
uploaderType string
privilegedNodeAgent bool
defaultSnapshotMoveData bool
disableInformerCache bool
}
func WithImage(image string) podTemplateOption {
@@ -136,12 +139,30 @@ func WithDefaultVolumesToFsBackup() podTemplateOption {
}
}
func WithDefaultSnapshotMoveData() podTemplateOption {
return func(c *podTemplateConfig) {
c.defaultSnapshotMoveData = true
}
}
func WithDisableInformerCache() podTemplateOption {
return func(c *podTemplateConfig) {
c.disableInformerCache = true
}
}
func WithServiceAccountName(sa string) podTemplateOption {
return func(c *podTemplateConfig) {
c.serviceAccountName = sa
}
}
func WithPrivilegedNodeAgent() podTemplateOption {
return func(c *podTemplateConfig) {
c.privilegedNodeAgent = true
}
}
func Deployment(namespace string, opts ...podTemplateOption) *appsv1.Deployment {
// TODO: Add support for server args
c := &podTemplateConfig{
@@ -167,6 +188,14 @@ func Deployment(namespace string, opts ...podTemplateOption) *appsv1.Deployment
args = append(args, "--default-volumes-to-fs-backup=true")
}
if c.defaultSnapshotMoveData {
args = append(args, "--default-snapshot-move-data=true")
}
if c.disableInformerCache {
args = append(args, "--disable-informer-cache=true")
}
if len(c.uploaderType) > 0 {
args = append(args, fmt.Sprintf("--uploader-type=%s", c.uploaderType))
}


@@ -64,4 +64,8 @@ func TestDeployment(t *testing.T) {
deploy = Deployment("velero", WithServiceAccountName("test-sa"))
assert.Equal(t, "test-sa", deploy.Spec.Template.Spec.ServiceAccountName)
deploy = Deployment("velero", WithDisableInformerCache())
assert.Len(t, deploy.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--disable-informer-cache=true", deploy.Spec.Template.Spec.Containers[0].Args[1])
}


@@ -32,7 +32,11 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
const defaultServiceAccountName = "velero"
const (
defaultServiceAccountName = "velero"
podSecurityLevel = "privileged"
podSecurityVersion = "latest"
)
var (
DefaultVeleroPodCPURequest = "500m"
@@ -148,8 +152,12 @@ func Namespace(namespace string) *corev1.Namespace {
},
}
ns.Labels["pod-security.kubernetes.io/enforce"] = "privileged"
ns.Labels["pod-security.kubernetes.io/enforce-version"] = "latest"
ns.Labels["pod-security.kubernetes.io/enforce"] = podSecurityLevel
ns.Labels["pod-security.kubernetes.io/enforce-version"] = podSecurityVersion
ns.Labels["pod-security.kubernetes.io/audit"] = podSecurityLevel
ns.Labels["pod-security.kubernetes.io/audit-version"] = podSecurityVersion
ns.Labels["pod-security.kubernetes.io/warn"] = podSecurityLevel
ns.Labels["pod-security.kubernetes.io/warn-version"] = podSecurityVersion
return ns
}
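The hunk above extends the namespace from only `enforce`/`enforce-version` to all three Pod Security Admission modes, so `audit` and `warn` agree with what is enforced. A sketch of that labeling as a loop over the modes (the real code sets each label explicitly):

```go
package main

// applyPodSecurityLabels sketches the Namespace labeling above: the same
// PSA level and version are applied to the enforce, audit, and warn modes,
// keeping admission warnings and audit annotations consistent with
// enforcement.
func applyPodSecurityLabels(labels map[string]string, level, version string) {
	for _, mode := range []string{"enforce", "audit", "warn"} {
		labels["pod-security.kubernetes.io/"+mode] = level
		labels["pod-security.kubernetes.io/"+mode+"-version"] = version
	}
}
```

With `level = "privileged"` and `version = "latest"` this yields the six labels asserted in `TestResources` below.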
@@ -232,6 +240,7 @@ type VeleroOptions struct {
SecretData []byte
RestoreOnly bool
UseNodeAgent bool
PrivilegedNodeAgent bool
UseVolumeSnapshots bool
BSLConfig map[string]string
VSLConfig map[string]string
@@ -243,6 +252,8 @@ type VeleroOptions struct {
Features []string
DefaultVolumesToFsBackup bool
UploaderType string
DefaultSnapshotMoveData bool
DisableInformerCache bool
}
func AllCRDs() *unstructured.UnstructuredList {
@@ -343,6 +354,14 @@ func AllResources(o *VeleroOptions) *unstructured.UnstructuredList {
deployOpts = append(deployOpts, WithDefaultVolumesToFsBackup())
}
if o.DefaultSnapshotMoveData {
deployOpts = append(deployOpts, WithDefaultSnapshotMoveData())
}
if o.DisableInformerCache {
deployOpts = append(deployOpts, WithDisableInformerCache())
}
deploy := Deployment(o.Namespace, deployOpts...)
if err := appendUnstructured(resources, deploy); err != nil {
@@ -361,6 +380,9 @@ func AllResources(o *VeleroOptions) *unstructured.UnstructuredList {
if len(o.Features) > 0 {
dsOpts = append(dsOpts, WithFeatures(o.Features))
}
if o.PrivilegedNodeAgent {
dsOpts = append(dsOpts, WithPrivilegedNodeAgent())
}
ds := DaemonSet(o.Namespace, dsOpts...)
if err := appendUnstructured(resources, ds); err != nil {
fmt.Printf("error appending DaemonSet %s: %s\n", ds.GetName(), err.Error())


@@ -47,6 +47,10 @@ func TestResources(t *testing.T) {
// PSA(Pod Security Admission) and PSS(Pod Security Standards).
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/enforce"], "privileged")
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/enforce-version"], "latest")
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/audit"], "privileged")
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/audit-version"], "latest")
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/warn"], "privileged")
assert.Equal(t, ns.Labels["pod-security.kubernetes.io/warn-version"], "latest")
crb := ClusterRoleBinding(DefaultVeleroNamespace)
// The CRB is a cluster-scoped resource


@@ -31,6 +31,7 @@ import (
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
clientset "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/nodeagent"
@@ -200,10 +201,11 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
b.resultsLock.Unlock()
var (
errs []error
podVolumeBackups []*velerov1api.PodVolumeBackup
podVolumes = make(map[string]corev1api.Volume)
mountedPodVolumes = sets.String{}
errs []error
podVolumeBackups []*velerov1api.PodVolumeBackup
podVolumes = make(map[string]corev1api.Volume)
mountedPodVolumes = sets.String{}
attachedPodDevices = sets.String{}
)
pvcSummary := NewPVCBackupSummary()
@@ -233,6 +235,9 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
for _, volumeMount := range container.VolumeMounts {
mountedPodVolumes.Insert(volumeMount.Name)
}
for _, volumeDevice := range container.VolumeDevices {
attachedPodDevices.Insert(volumeDevice.Name)
}
}
var numVolumeSnapshots int
@@ -263,6 +268,15 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
continue
}
// check if volume is a block volume
if attachedPodDevices.Has(volumeName) {
msg := fmt.Sprintf("volume %s declared in pod %s/%s is a block volume. Block volumes are not supported for fs backup, skipping",
volumeName, pod.Namespace, pod.Name)
log.Warn(msg)
pvcSummary.addSkipped(volumeName, msg)
continue
}
// volumes that are not mounted by any container should not be backed up, because
// its directory is not created
if !mountedPodVolumes.Has(volumeName) {
@@ -284,7 +298,11 @@ func (b *backupper) BackupPodVolumes(backup *velerov1api.Backup, pod *corev1api.
}
volumeBackup := newPodVolumeBackup(backup, pod, volume, repo.Spec.ResticIdentifier, b.uploaderType, pvc)
if _, err = b.veleroClient.VeleroV1().PodVolumeBackups(volumeBackup.Namespace).Create(context.TODO(), volumeBackup, metav1.CreateOptions{}); err != nil {
// TODO: once backupper is refactored to use controller-runtime, just pass client instead of anonymous func
if err := veleroclient.CreateRetryGenerateNameWithFunc(volumeBackup, func() error {
_, err := b.veleroClient.VeleroV1().PodVolumeBackups(volumeBackup.Namespace).Create(context.TODO(), volumeBackup, metav1.CreateOptions{})
return err
}); err != nil {
errs = append(errs, err)
continue
}


@@ -31,6 +31,7 @@ import (
"k8s.io/client-go/tools/cache"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
clientset "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/nodeagent"
@@ -172,7 +173,10 @@ func (r *restorer) RestorePodVolumes(data RestoreData) []error {
volumeRestore := newPodVolumeRestore(data.Restore, data.Pod, data.BackupLocation, volume, backupInfo.snapshotID, repo.Spec.ResticIdentifier, backupInfo.uploaderType, data.SourceNamespace, pvc)
if err := errorOnly(r.veleroClient.VeleroV1().PodVolumeRestores(volumeRestore.Namespace).Create(context.TODO(), volumeRestore, metav1.CreateOptions{})); err != nil {
// TODO: once restorer is refactored to use controller-runtime, just pass client instead of anonymous func
if err := veleroclient.CreateRetryGenerateNameWithFunc(volumeRestore, func() error {
return errorOnly(r.veleroClient.VeleroV1().PodVolumeRestores(volumeRestore.Namespace).Create(context.TODO(), volumeRestore, metav1.CreateOptions{}))
}); err != nil {
errs = append(errs, errors.WithStack(err))
continue
}


@@ -37,14 +37,6 @@ const (
// TODO(2.0): remove
podAnnotationPrefix = "snapshot.velero.io/"
// VolumesToBackupAnnotation is the annotation on a pod whose mounted volumes
// need to be backed up using pod volume backup.
VolumesToBackupAnnotation = "backup.velero.io/backup-volumes"
// VolumesToExcludeAnnotation is the annotation on a pod whose mounted volumes
// should be excluded from pod volume backup.
VolumesToExcludeAnnotation = "backup.velero.io/backup-volumes-excludes"
// DefaultVolumesToFsBackup specifies whether pod volume backup should be used, by default, to
// take backup of all pod volumes.
DefaultVolumesToFsBackup = false
@@ -216,85 +208,3 @@ func getPodSnapshotAnnotations(obj metav1.Object) map[string]string {
return res
}
// GetVolumesToBackup returns a list of volume names to backup for
// the provided pod.
// Deprecated: Use GetVolumesByPod instead.
func GetVolumesToBackup(obj metav1.Object) []string {
annotations := obj.GetAnnotations()
if annotations == nil {
return nil
}
backupsValue := annotations[VolumesToBackupAnnotation]
if backupsValue == "" {
return nil
}
return strings.Split(backupsValue, ",")
}
func getVolumesToExclude(obj metav1.Object) []string {
annotations := obj.GetAnnotations()
if annotations == nil {
return nil
}
return strings.Split(annotations[VolumesToExcludeAnnotation], ",")
}
func contains(list []string, k string) bool {
for _, i := range list {
if i == k {
return true
}
}
return false
}
// GetVolumesByPod returns a list of volume names to backup for the provided pod.
func GetVolumesByPod(pod *corev1api.Pod, defaultVolumesToFsBackup bool) ([]string, []string) {
// tracks the volumes that have been explicitly opted out of backup via the annotation in the pod
optedOutVolumes := make([]string, 0)
if !defaultVolumesToFsBackup {
return GetVolumesToBackup(pod), optedOutVolumes
}
volsToExclude := getVolumesToExclude(pod)
podVolumes := []string{}
for _, pv := range pod.Spec.Volumes {
// cannot backup hostpath volumes as they are not mounted into /var/lib/kubelet/pods
// and therefore not accessible to the node agent daemon set.
if pv.HostPath != nil {
continue
}
// don't backup volumes mounting secrets. Secrets will be backed up separately.
if pv.Secret != nil {
continue
}
// don't backup volumes mounting config maps. Config maps will be backed up separately.
if pv.ConfigMap != nil {
continue
}
// don't backup volumes mounted as projected volumes, all data in those come from kube state.
if pv.Projected != nil {
continue
}
// don't backup DownwardAPI volumes, all data in those come from kube state.
if pv.DownwardAPI != nil {
continue
}
// don't backup volumes that are included in the exclude list.
if contains(volsToExclude, pv.Name) {
optedOutVolumes = append(optedOutVolumes, pv.Name)
continue
}
// don't include volumes that mount the default service account token.
if strings.HasPrefix(pv.Name, "default-token") {
continue
}
podVolumes = append(podVolumes, pv.Name)
}
return podVolumes, optedOutVolumes
}


@@ -17,12 +17,10 @@ limitations under the License.
package podvolume
import (
"sort"
"testing"
"github.com/stretchr/testify/assert"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
@@ -303,322 +301,3 @@ func TestVolumeHasNonRestorableSource(t *testing.T) {
}
}
func TestGetVolumesToBackup(t *testing.T) {
tests := []struct {
name string
annotations map[string]string
expected []string
}{
{
name: "nil annotations",
annotations: nil,
expected: nil,
},
{
name: "no volumes to backup",
annotations: map[string]string{"foo": "bar"},
expected: nil,
},
{
name: "one volume to backup",
annotations: map[string]string{"foo": "bar", VolumesToBackupAnnotation: "volume-1"},
expected: []string{"volume-1"},
},
{
name: "multiple volumes to backup",
annotations: map[string]string{"foo": "bar", VolumesToBackupAnnotation: "volume-1,volume-2,volume-3"},
expected: []string{"volume-1", "volume-2", "volume-3"},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
pod := &corev1api.Pod{}
pod.Annotations = test.annotations
res := GetVolumesToBackup(pod)
// sort to ensure good compare of slices
sort.Strings(test.expected)
sort.Strings(res)
assert.Equal(t, test.expected, res)
})
}
}
func TestGetVolumesByPod(t *testing.T) {
testCases := []struct {
name string
pod *corev1api.Pod
expected struct {
included []string
optedOut []string
}
defaultVolumesToFsBackup bool
}{
{
name: "should get PVs from VolumesToBackupAnnotation when defaultVolumesToFsBackup is false",
defaultVolumesToFsBackup: false,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToBackupAnnotation: "pvbPV1,pvbPV2,pvbPV3",
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{},
},
},
{
name: "should get all pod volumes when defaultVolumesToFsBackup is true and no PVs are excluded",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{},
},
},
{
name: "should get all pod volumes except ones excluded when defaultVolumesToFsBackup is true",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from PVB through annotation
{Name: "nonPvbPV1"}, {Name: "nonPvbPV2"}, {Name: "nonPvbPV3"},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{"nonPvbPV1", "nonPvbPV2", "nonPvbPV3"},
},
},
{
name: "should exclude default service account token from pod volume backup",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from PVB because volume mounting default service account token
{Name: "default-token-5xq45"},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{},
},
},
{
name: "should exclude host path volumes from pod volume backups",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from pod volume backup through annotation
{Name: "nonPvbPV1"}, {Name: "nonPvbPV2"}, {Name: "nonPvbPV3"},
// Excluded from pod volume backup because hostpath
{Name: "hostPath1", VolumeSource: corev1api.VolumeSource{HostPath: &corev1api.HostPathVolumeSource{Path: "/hostpathVol"}}},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{"nonPvbPV1", "nonPvbPV2", "nonPvbPV3"},
},
},
{
name: "should exclude volumes mounting secrets",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from pod volume backup through annotation
{Name: "nonPvbPV1"}, {Name: "nonPvbPV2"}, {Name: "nonPvbPV3"},
// Excluded from pod volume backup because it mounts a secret
{Name: "superSecret", VolumeSource: corev1api.VolumeSource{Secret: &corev1api.SecretVolumeSource{SecretName: "super-secret"}}},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{"nonPvbPV1", "nonPvbPV2", "nonPvbPV3"},
},
},
{
name: "should exclude volumes mounting config maps",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
// PVB Volumes
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
/// Excluded from pod volume backup through annotation
{Name: "nonPvbPV1"}, {Name: "nonPvbPV2"}, {Name: "nonPvbPV3"},
// Excluded from pod volume backup because it mounts a config map
{Name: "appConfig", VolumeSource: corev1api.VolumeSource{ConfigMap: &corev1api.ConfigMapVolumeSource{LocalObjectReference: corev1api.LocalObjectReference{Name: "app-config"}}}},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{"nonPvbPV1", "nonPvbPV2", "nonPvbPV3"},
},
},
{
name: "should exclude projected volumes",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
{
Name: "projected",
VolumeSource: corev1api.VolumeSource{
Projected: &corev1api.ProjectedVolumeSource{
Sources: []corev1api.VolumeProjection{{
Secret: &corev1api.SecretProjection{
LocalObjectReference: corev1api.LocalObjectReference{},
Items: nil,
Optional: nil,
},
DownwardAPI: nil,
ConfigMap: nil,
ServiceAccountToken: nil,
}},
DefaultMode: nil,
},
},
},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{},
},
},
{
name: "should exclude DownwardAPI volumes",
defaultVolumesToFsBackup: true,
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
VolumesToExcludeAnnotation: "nonPvbPV1,nonPvbPV2,nonPvbPV3",
},
},
Spec: corev1api.PodSpec{
Volumes: []corev1api.Volume{
{Name: "pvbPV1"}, {Name: "pvbPV2"}, {Name: "pvbPV3"},
{
Name: "downwardAPI",
VolumeSource: corev1api.VolumeSource{
DownwardAPI: &corev1api.DownwardAPIVolumeSource{
Items: []corev1api.DownwardAPIVolumeFile{
{
Path: "labels",
FieldRef: &corev1api.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "metadata.labels",
},
},
},
},
},
},
},
},
},
expected: struct {
included []string
optedOut []string
}{
included: []string{"pvbPV1", "pvbPV2", "pvbPV3"},
optedOut: []string{},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actualIncluded, actualOptedOut := GetVolumesByPod(tc.pod, tc.defaultVolumesToFsBackup)
sort.Strings(tc.expected.included)
sort.Strings(actualIncluded)
assert.Equal(t, tc.expected.included, actualIncluded)
sort.Strings(tc.expected.optedOut)
sort.Strings(actualOptedOut)
assert.Equal(t, tc.expected.optedOut, actualOptedOut)
})
}
}


@@ -107,9 +107,9 @@ func NewBackupRepository(namespace string, key BackupRepositoryKey) *velerov1api
}
func isBackupRepositoryNotFoundError(err error) bool {
return (err == errBackupRepoNotFound)
return err == errBackupRepoNotFound
}
func isBackupRepositoryNotProvisionedError(err error) bool {
return (err == errBackupRepoNotProvisioned)
return err == errBackupRepoNotProvisioned
}


@@ -81,7 +81,7 @@ func GetS3Credentials(config map[string]string) (*credentials.Value, error) {
opts := session.Options{}
credentialsFile := config[CredentialsFileKey]
if credentialsFile == "" {
credentialsFile = os.Getenv("AWS_SHARED_CREDENTIALS_FILE")
credentialsFile = os.Getenv(awsCredentialsFileEnvVar)
}
if credentialsFile != "" {
opts.SharedConfigFiles = append(opts.SharedConfigFiles, credentialsFile)


@@ -42,5 +42,5 @@ func GetGCPCredentials(config map[string]string) string {
if credentialsFile, ok := config[CredentialsFileKey]; ok {
return credentialsFile
}
return os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
return os.Getenv(gcpCredentialsFileEnvVar)
}


@@ -28,6 +28,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
)
// Ensurer ensures that backup repositories are created and ready.
@@ -107,7 +108,7 @@ func (r *Ensurer) repoLock(key BackupRepositoryKey) *sync.Mutex {
func (r *Ensurer) createBackupRepositoryAndWait(ctx context.Context, namespace string, backupRepoKey BackupRepositoryKey) (*velerov1api.BackupRepository, error) {
toCreate := NewBackupRepository(namespace, backupRepoKey)
if err := r.repoClient.Create(ctx, toCreate, &client.CreateOptions{}); err != nil {
if err := veleroclient.CreateRetryGenerateName(r.repoClient, ctx, toCreate); err != nil {
return nil, errors.Wrap(err, "unable to create backup repository resource")
}


@@ -480,9 +480,11 @@ func getStorageVariables(backupLocation *velerov1api.BackupStorageLocation, repo
var err error
if s3URL == "" {
-region, err = getS3BucketRegion(bucket)
-if err != nil {
-return map[string]string{}, errors.Wrap(err, "error get s3 bucket region")
+if region == "" {
+region, err = getS3BucketRegion(bucket)
+if err != nil {
+return map[string]string{}, errors.Wrap(err, "error get s3 bucket region")
+}
}
s3URL = fmt.Sprintf("s3-%s.amazonaws.com", region)
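The `udmrepo` fix above makes `getStorageVariables` consult the bucket-region lookup only when the BSL supplies neither `s3Url` nor `region`. The control flow, reduced to a standalone sketch (`lookupBucketRegion` is a hypothetical stand-in for the real `getS3BucketRegion`):

```go
package main

import (
	"errors"
	"fmt"
)

// resolveS3URL prefers an explicit s3Url, then an explicit region from the
// BSL config, and only as a last resort queries AWS for the bucket's region.
func resolveS3URL(s3URL, region string, lookupBucketRegion func() (string, error)) (string, error) {
	if s3URL == "" {
		if region == "" {
			var err error
			region, err = lookupBucketRegion()
			if err != nil {
				return "", errors.New("error get s3 bucket region: " + err.Error())
			}
		}
		s3URL = fmt.Sprintf("s3-%s.amazonaws.com", region)
	}
	return s3URL, nil
}

func main() {
	// Region is set in the BSL: no network lookup should happen.
	url, err := resolveS3URL("", "us-west-2", func() (string, error) {
		return "", errors.New("lookup should not be called")
	})
	fmt.Println(url, err) // s3-us-west-2.amazonaws.com <nil>
}
```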

View File

@@ -30,6 +30,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
veleroclient "github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
@@ -104,7 +105,7 @@ func (d *DataUploadRetrieveAction) Execute(input *velero.RestoreItemActionExecut
},
}
-err = d.client.Create(context.Background(), &cm, &client.CreateOptions{})
+err = veleroclient.CreateRetryGenerateName(d.client, context.Background(), &cm)
if err != nil {
d.logger.Errorf("fail to create DataUploadResult ConfigMap %s/%s: %s", cm.Namespace, cm.Name, err.Error())
return nil, errors.Wrap(err, "fail to create DataUploadResult ConfigMap")

View File

@@ -26,6 +26,11 @@ import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
const (
legacyControllerUIDLabel = "controller-uid" // <=1.27 This still exists in 1.27 for backward compatibility, maybe remove in 1.28?
controllerUIDLabel = "batch.kubernetes.io/controller-uid" // >=1.27 https://github.com/kubernetes/kubernetes/pull/114930#issuecomment-1384667494
)
type JobAction struct {
logger logrus.FieldLogger
}
@@ -47,9 +52,11 @@ func (a *JobAction) Execute(input *velero.RestoreItemActionExecuteInput) (*veler
}
if job.Spec.Selector != nil {
-delete(job.Spec.Selector.MatchLabels, "controller-uid")
+delete(job.Spec.Selector.MatchLabels, controllerUIDLabel)
+delete(job.Spec.Selector.MatchLabels, legacyControllerUIDLabel)
}
-delete(job.Spec.Template.ObjectMeta.Labels, "controller-uid")
+delete(job.Spec.Template.ObjectMeta.Labels, controllerUIDLabel)
+delete(job.Spec.Template.ObjectMeta.Labels, legacyControllerUIDLabel)
res, err := runtime.DefaultUnstructuredConverter.ToUnstructured(job)
if err != nil {
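Kubernetes 1.27 renamed the Job label from `controller-uid` to `batch.kubernetes.io/controller-uid` (kubernetes/kubernetes#114930), which is why the restore action now strips both keys. As a standalone sketch of the stripping step:

```go
package main

import "fmt"

const (
	legacyControllerUIDLabel = "controller-uid"                     // set by Kubernetes <= 1.26 (still mirrored in 1.27)
	controllerUIDLabel       = "batch.kubernetes.io/controller-uid" // set by Kubernetes >= 1.27
)

// stripControllerUID removes both spellings of the controller-uid label so a
// restored Job's selector no longer pins it to the old, backed-up UID.
func stripControllerUID(labels map[string]string) map[string]string {
	delete(labels, controllerUIDLabel)
	delete(labels, legacyControllerUIDLabel)
	return labels
}

func main() {
	labels := map[string]string{
		"controller-uid":                     "123",
		"batch.kubernetes.io/controller-uid": "123",
		"app":                                "web",
	}
	fmt.Println(stripControllerUID(labels)) // map[app:web]
}
```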

View File

@@ -51,14 +51,15 @@ func resourceKey(obj runtime.Object) string {
type Request struct {
*velerov1api.Restore
Log logrus.FieldLogger
Backup *velerov1api.Backup
PodVolumeBackups []*velerov1api.PodVolumeBackup
VolumeSnapshots []*volume.Snapshot
BackupReader io.Reader
RestoredItems map[itemKey]restoredItemStatus
itemOperationsList *[]*itemoperation.RestoreOperation
ResourceModifiers *resourcemodifiers.ResourceModifiers
+DisableInformerCache bool
}
type restoredItemStatus struct {

View File

@@ -22,6 +22,7 @@ import (
"fmt"
"io"
"os"
"os/signal"
"path/filepath"
"sort"
"strings"
@@ -42,6 +43,8 @@ import (
kubeerrs "k8s.io/apimachinery/pkg/util/errors"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/dynamic/dynamicinformer"
"k8s.io/client-go/informers"
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/tools/cache"
crclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -71,6 +74,10 @@ import (
"github.com/vmware-tanzu/velero/pkg/volume"
)
var resourceMustHave = []string{
"datauploads.velero.io",
}
type VolumeSnapshotterGetter interface {
GetVolumeSnapshotter(name string) (vsv1.VolumeSnapshotter, error)
}
@@ -276,6 +283,7 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
resourceIncludesExcludes: resourceIncludesExcludes,
resourceStatusIncludesExcludes: restoreStatusIncludesExcludes,
namespaceIncludesExcludes: namespaceIncludesExcludes,
resourceMustHave: sets.NewString(resourceMustHave...),
chosenGrpVersToRestore: make(map[string]ChosenGroupVersion),
selector: selector,
OrSelectors: OrSelectors,
@@ -294,6 +302,8 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
resourceTimeout: kr.resourceTimeout,
resourceClients: make(map[resourceClientKey]client.Dynamic),
dynamicInformerFactories: make(map[string]*informerFactoryWithContext),
resourceInformers: make(map[resourceClientKey]informers.GenericInformer),
restoredItems: req.RestoredItems,
renamedPVs: make(map[string]string),
pvRenamer: kr.pvRenamer,
@@ -307,6 +317,7 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
kbClient: kr.kbClient,
itemOperationsList: req.GetItemOperationsList(),
resourceModifiers: req.ResourceModifiers,
disableInformerCache: req.DisableInformerCache,
}
return restoreCtx.execute()
@@ -320,6 +331,7 @@ type restoreContext struct {
resourceIncludesExcludes *collections.IncludesExcludes
resourceStatusIncludesExcludes *collections.IncludesExcludes
namespaceIncludesExcludes *collections.IncludesExcludes
resourceMustHave sets.String
chosenGrpVersToRestore map[string]ChosenGroupVersion
selector labels.Selector
OrSelectors []labels.Selector
@@ -339,6 +351,8 @@ type restoreContext struct {
resourceTerminatingTimeout time.Duration
resourceTimeout time.Duration
resourceClients map[resourceClientKey]client.Dynamic
dynamicInformerFactories map[string]*informerFactoryWithContext
resourceInformers map[resourceClientKey]informers.GenericInformer
restoredItems map[itemKey]restoredItemStatus
renamedPVs map[string]string
pvRenamer func(string) (string, error)
@@ -353,6 +367,7 @@ type restoreContext struct {
kbClient crclient.Client
itemOperationsList *[]*itemoperation.RestoreOperation
resourceModifiers *resourcemodifiers.ResourceModifiers
disableInformerCache bool
}
type resourceClientKey struct {
@@ -360,6 +375,12 @@ type resourceClientKey struct {
namespace string
}
type informerFactoryWithContext struct {
factory dynamicinformer.DynamicSharedInformerFactory
context go_context.Context
cancel go_context.CancelFunc
}
// getOrderedResources returns an ordered list of resource identifiers to restore,
// based on the provided resource priorities and backup contents. The returned list
// begins with all of the high prioritized resources (in order), ends with all of
@@ -410,6 +431,17 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
}
}()
// Need to stop all informers if enabled
if !ctx.disableInformerCache {
defer func() {
// Call the cancel func to close the channel for each started informer
for _, factory := range ctx.dynamicInformerFactories {
factory.cancel()
}
// After upgrading to client-go 0.27 or newer, also call Shutdown for each informer factory
}()
}
// Need to set this for additionalItems to be restored.
ctx.restoreDir = dir
@@ -514,6 +546,32 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
warnings.Merge(&w)
errs.Merge(&e)
// initialize informer caches for selected resources if enabled
if !ctx.disableInformerCache {
// CRD informer will have already been initialized if any CRDs were created,
// but already-initialized informers aren't re-initialized because getGenericInformer
// looks for an existing one first.
factoriesToStart := make(map[string]*informerFactoryWithContext)
for _, informerResource := range selectedResourceCollection {
gr := schema.ParseGroupResource(informerResource.resource)
for _, items := range informerResource.selectedItemsByNamespace {
// don't use ns key since it represents original ns, not mapped ns
if len(items) == 0 {
continue
}
// use the first item in the list to initialize the informer. The rest of the list
// should share the same gvr and namespace
_, factory := ctx.getGenericInformerInternal(gr, items[0].version, items[0].targetNamespace)
if factory != nil {
factoriesToStart[items[0].targetNamespace] = factory
}
}
}
for _, factoryWithContext := range factoriesToStart {
factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
}
}
// reset processedItems and totalItems before processing full resource list
processedItems = 0
totalItems = 0
@@ -928,11 +986,14 @@ func (ctx *restoreContext) itemsAvailable(action framework.RestoreItemResolvedAc
return available, err
}
-func (ctx *restoreContext) getResourceClient(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) (client.Dynamic, error) {
-key := resourceClientKey{
-resource: groupResource.WithVersion(obj.GroupVersionKind().Version),
+func getResourceClientKey(groupResource schema.GroupResource, version, namespace string) resourceClientKey {
+return resourceClientKey{
+resource: groupResource.WithVersion(version),
+namespace: namespace,
+}
+}
+func (ctx *restoreContext) getResourceClient(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) (client.Dynamic, error) {
+key := getResourceClientKey(groupResource, obj.GroupVersionKind().Version, namespace)
if client, ok := ctx.resourceClients[key]; ok {
return client, nil
@@ -956,6 +1017,49 @@ func (ctx *restoreContext) getResourceClient(groupResource schema.GroupResource,
return client, nil
}
// if new informer is created, non-nil factory is returned
func (ctx *restoreContext) getGenericInformerInternal(groupResource schema.GroupResource, version, namespace string) (informers.GenericInformer, *informerFactoryWithContext) {
var returnFactory *informerFactoryWithContext
key := getResourceClientKey(groupResource, version, namespace)
factoryWithContext, ok := ctx.dynamicInformerFactories[key.namespace]
if !ok {
factory := ctx.dynamicFactory.DynamicSharedInformerFactoryForNamespace(namespace)
informerContext, informerCancel := signal.NotifyContext(go_context.Background(), os.Interrupt)
factoryWithContext = &informerFactoryWithContext{
factory: factory,
context: informerContext,
cancel: informerCancel,
}
ctx.dynamicInformerFactories[key.namespace] = factoryWithContext
}
informer, ok := ctx.resourceInformers[key]
if !ok {
ctx.log.Infof("[debug] Creating factory for %s in namespace %s", key.resource, key.namespace)
informer = factoryWithContext.factory.ForResource(key.resource)
factoryWithContext.factory.Start(factoryWithContext.context.Done())
ctx.resourceInformers[key] = informer
returnFactory = factoryWithContext
}
return informer, returnFactory
}
func (ctx *restoreContext) getGenericInformer(groupResource schema.GroupResource, version, namespace string) informers.GenericInformer {
informer, factoryWithContext := ctx.getGenericInformerInternal(groupResource, version, namespace)
if factoryWithContext != nil {
factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
}
return informer
}
func (ctx *restoreContext) getResourceLister(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) cache.GenericNamespaceLister {
informer := ctx.getGenericInformer(groupResource, obj.GroupVersionKind().Version, namespace)
if namespace == "" {
return informer.Lister()
} else {
return informer.Lister().ByNamespace(namespace)
}
}
func getResourceID(groupResource schema.GroupResource, namespace, name string) string {
if namespace == "" {
return fmt.Sprintf("%s/%s", groupResource.String(), name)
@@ -964,6 +1068,20 @@ func getResourceID(groupResource schema.GroupResource, namespace, name string) s
return fmt.Sprintf("%s/%s/%s", groupResource.String(), namespace, name)
}
func (ctx *restoreContext) getResource(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace, name string) (*unstructured.Unstructured, error) {
lister := ctx.getResourceLister(groupResource, obj, namespace)
clusterObj, err := lister.Get(name)
if err != nil {
return nil, errors.Wrapf(err, "error getting resource from lister for %s, %s/%s", groupResource, namespace, name)
}
u, ok := clusterObj.(*unstructured.Unstructured)
if !ok {
ctx.log.WithError(errors.WithStack(fmt.Errorf("expected *unstructured.Unstructured but got %T", clusterObj))).Error("unable to understand entry returned from client")
return nil, fmt.Errorf("expected *unstructured.Unstructured but got %T", clusterObj)
}
return u, nil
}
func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupResource schema.GroupResource, namespace string) (results.Result, results.Result, bool) {
warnings, errs := results.Result{}, results.Result{}
// itemExists bool is used to determine whether to include this item in the "wait for additional items" list
@@ -989,7 +1107,7 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
// via obj.GetNamespace()) instead of the namespace parameter, because we want
// to check the *original* namespace, not the remapped one if it's been remapped.
if namespace != "" {
-if !ctx.namespaceIncludesExcludes.ShouldInclude(obj.GetNamespace()) {
+if !ctx.namespaceIncludesExcludes.ShouldInclude(obj.GetNamespace()) && !ctx.resourceMustHave.Has(groupResource.String()) {
ctx.log.WithFields(logrus.Fields{
"namespace": obj.GetNamespace(),
"name": obj.GetName(),
@@ -1157,6 +1275,7 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
ctx.renamedPVs[oldName] = pvName
obj.SetName(pvName)
name = pvName
// Add the original PV name as an annotation.
annotations := obj.GetAnnotations()
@@ -1346,6 +1465,14 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
}
}
if ctx.resourceModifiers != nil {
if errList := ctx.resourceModifiers.ApplyResourceModifierRules(obj, groupResource.String(), ctx.log); errList != nil {
for _, err := range errList {
errs.Add(namespace, err)
}
}
}
// Necessary because we may have remapped the namespace if the namespace is
// blank, don't create the key.
originalNamespace := obj.GetNamespace()
@@ -1358,14 +1485,6 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
// and which backup they came from.
addRestoreLabels(obj, ctx.restore.Name, ctx.restore.Spec.BackupName)
if ctx.resourceModifiers != nil {
if errList := ctx.resourceModifiers.ApplyResourceModifierRules(obj, groupResource.String(), ctx.log); errList != nil {
for _, err := range errList {
errs.Add(namespace, err)
}
}
}
// The object apiVersion might get modified by a RestorePlugin so we need to
// get a new client to reflect updated resource path.
newGR := schema.GroupResource{Group: obj.GroupVersionKind().Group, Resource: groupResource.Resource}
@@ -1376,27 +1495,44 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
}
ctx.log.Infof("Attempting to restore %s: %v", obj.GroupVersionKind().Kind, name)
-createdObj, restoreErr := resourceClient.Create(obj)
-if restoreErr == nil {
-itemExists = true
-ctx.restoredItems[itemKey] = restoredItemStatus{action: itemRestoreResultCreated, itemExists: itemExists}
+// check if we want to treat the error as a warning, in some cases the creation call might not get executed due to object API validations
+// and Velero might not get the already exists error type but in reality the object already exists
+var fromCluster, createdObj *unstructured.Unstructured
+var restoreErr error
+// only attempt Get before Create if using informer cache, otherwise this will slow down restore into
+// new namespace
+if !ctx.disableInformerCache {
+ctx.log.Debugf("Checking for existence %s: %v", obj.GroupVersionKind().Kind, name)
+fromCluster, err = ctx.getResource(groupResource, obj, namespace, name)
+}
+if err != nil || fromCluster == nil {
+// couldn't find the resource, attempt to create
+ctx.log.Debugf("Creating %s: %v", obj.GroupVersionKind().Kind, name)
+createdObj, restoreErr = resourceClient.Create(obj)
+if restoreErr == nil {
+itemExists = true
+ctx.restoredItems[itemKey] = restoredItemStatus{action: itemRestoreResultCreated, itemExists: itemExists}
+}
+}
isAlreadyExistsError, err := isAlreadyExistsError(ctx, obj, restoreErr, resourceClient)
if err != nil {
errs.Add(namespace, err)
return warnings, errs, itemExists
}
-// check if we want to treat the error as a warning, in some cases the creation call might not get executed due to object API validations
-// and Velero might not get the already exists error type but in reality the object already exists
-var fromCluster *unstructured.Unstructured
if restoreErr != nil {
// check for the existence of the object in cluster, if no error then it implies that object exists
// and if err then we want to judge whether there is an existing error in the previous creation.
// if so, we will return the 'get' error.
// otherwise, we will return the original creation error.
-fromCluster, err = resourceClient.Get(name, metav1.GetOptions{})
+if !ctx.disableInformerCache {
+fromCluster, err = ctx.getResource(groupResource, obj, namespace, name)
+} else {
+fromCluster, err = resourceClient.Get(name, metav1.GetOptions{})
+}
if err != nil && isAlreadyExistsError {
ctx.log.Errorf("Error retrieving in-cluster version of %s: %v", kube.NamespaceAndName(obj), err)
errs.Add(namespace, err)
@@ -1941,6 +2077,7 @@ type restoreableItem struct {
path string
targetNamespace string
name string
version string // used for initializing informer cache
}
// getOrderedResourceCollection iterates over list of ordered resource
@@ -2016,7 +2153,7 @@ func (ctx *restoreContext) getOrderedResourceCollection(
// Iterate through each namespace that contains instances of the
// resource and append to the list of to-be restored resources.
for namespace, items := range resourceList.ItemsByNamespace {
-if namespace != "" && !ctx.namespaceIncludesExcludes.ShouldInclude(namespace) {
+if namespace != "" && !ctx.namespaceIncludesExcludes.ShouldInclude(namespace) && !ctx.resourceMustHave.Has(groupResource.String()) {
ctx.log.Infof("Skipping namespace %s", namespace)
continue
}
@@ -2130,6 +2267,7 @@ func (ctx *restoreContext) getSelectedRestoreableItems(resource, targetNamespace
path: itemPath,
name: item,
targetNamespace: targetNamespace,
version: obj.GroupVersionKind().Version,
}
restorable.selectedItemsByNamespace[originalNamespace] =
append(restorable.selectedItemsByNamespace[originalNamespace], selectedItem)
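The comment in the restore hunk above captures the trade-off: with the informer cache enabled, Velero does a local Get before Create, so restoring into an already-populated cluster skips doomed Create calls; with the cache disabled it creates first and falls back to a live Get on conflict. A toy model of that flow, with an in-memory map standing in for both the informer lister and the API server (all names here are illustrative, not Velero's):

```go
package main

import "fmt"

// fakeCluster stands in for the API server; its creates counter lets us
// observe how many Create round trips the restore flow would issue.
type fakeCluster struct {
	objects map[string]string
	creates int
}

// restoreItem sketches get-before-create: consult the local cache when
// enabled, fall back to the cluster's copy on conflict, create otherwise.
func (c *fakeCluster) restoreItem(cache map[string]string, useCache bool, name, payload string) string {
	if useCache {
		if v, ok := cache[name]; ok {
			return v // cache hit: object already exists, no Create issued
		}
	}
	if v, ok := c.objects[name]; ok {
		return v // Create would fail with AlreadyExists; use the live copy
	}
	c.creates++
	c.objects[name] = payload
	return payload
}

func main() {
	cluster := &fakeCluster{objects: map[string]string{"cm-1": "old"}}
	cache := map[string]string{"cm-1": "old"}
	got := cluster.restoreItem(cache, true, "cm-1", "new")
	fmt.Println(got, cluster.creates) // old 0
}
```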

View File

@@ -861,6 +861,7 @@ func TestRestoreItems(t *testing.T) {
tarball io.Reader
want []*test.APIResource
expectedRestoreItems map[itemKey]restoredItemStatus
disableInformer bool
}{
{
name: "metadata uid/resourceVersion/etc. gets removed",
@@ -1017,6 +1018,26 @@ func TestRestoreItems(t *testing.T) {
apiResources: []*test.APIResource{
test.Secrets(builder.ForSecret("ns-1", "sa-1").Data(map[string][]byte{"foo": []byte("bar")}).Result()),
},
disableInformer: true,
want: []*test.APIResource{
test.Secrets(builder.ForSecret("ns-1", "sa-1").ObjectMeta(builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1")).Data(map[string][]byte{"key-1": []byte("value-1")}).Result()),
},
expectedRestoreItems: map[itemKey]restoredItemStatus{
{resource: "v1/Namespace", namespace: "", name: "ns-1"}: {action: "created", itemExists: true},
{resource: "v1/Secret", namespace: "ns-1", name: "sa-1"}: {action: "updated", itemExists: true},
},
},
{
name: "update secret data and labels when secret exists in cluster and is not identical to the backed up one, existing resource policy is update, using informer cache",
restore: defaultRestore().ExistingResourcePolicy("update").Result(),
backup: defaultBackup().Result(),
tarball: test.NewTarWriter(t).
AddItems("secrets", builder.ForSecret("ns-1", "sa-1").Data(map[string][]byte{"key-1": []byte("value-1")}).Result()).
Done(),
apiResources: []*test.APIResource{
test.Secrets(builder.ForSecret("ns-1", "sa-1").Data(map[string][]byte{"foo": []byte("bar")}).Result()),
},
disableInformer: false,
want: []*test.APIResource{
test.Secrets(builder.ForSecret("ns-1", "sa-1").ObjectMeta(builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1")).Data(map[string][]byte{"key-1": []byte("value-1")}).Result()),
},
@@ -1175,13 +1196,14 @@ func TestRestoreItems(t *testing.T) {
}
data := &Request{
Log: h.log,
Restore: tc.restore,
Backup: tc.backup,
PodVolumeBackups: nil,
VolumeSnapshots: nil,
BackupReader: tc.tarball,
RestoredItems: map[itemKey]restoredItemStatus{},
+DisableInformerCache: tc.disableInformer,
}
warnings, errs := h.restorer.Restore(
data,

View File

@@ -59,6 +59,7 @@ func NewAPIServer(t *testing.T) *APIServer {
{Group: "velero.io", Version: "v1", Resource: "backups"}: "BackupList",
{Group: "extensions", Version: "v1", Resource: "deployments"}: "ExtDeploymentsList",
{Group: "velero.io", Version: "v1", Resource: "deployments"}: "VeleroDeploymentsList",
{Group: "velero.io", Version: "v2alpha1", Resource: "datauploads"}: "DataUploadsList",
})
discoveryClient = &DiscoveryClient{FakeDiscovery: kubeClient.Discovery().(*discoveryfake.FakeDiscovery)}
)

View File

@@ -22,6 +22,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic/dynamicinformer"
"github.com/vmware-tanzu/velero/pkg/client"
)
@@ -37,6 +38,11 @@ func (df *FakeDynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersi
return args.Get(0).(client.Dynamic), args.Error(1)
}
func (df *FakeDynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
args := df.Called(namespace)
return args.Get(0).(dynamicinformer.DynamicSharedInformerFactory)
}
type FakeDynamicClient struct {
mock.Mock
}

View File

@@ -0,0 +1,58 @@
//go:build !windows
// +build !windows
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"os"
"syscall"
"github.com/kopia/kopia/fs"
"github.com/kopia/kopia/fs/virtualfs"
"github.com/pkg/errors"
)
const ErrNotPermitted = "operation not permitted"
func getLocalBlockEntry(sourcePath string) (fs.Entry, error) {
source, err := resolveSymlink(sourcePath)
if err != nil {
return nil, errors.Wrap(err, "resolveSymlink")
}
fileInfo, err := os.Lstat(source)
if err != nil {
return nil, errors.Wrapf(err, "unable to get the source device information %s", source)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return nil, errors.Errorf("source path %s is not a block device", source)
}
device, err := os.Open(source)
if err != nil {
if os.IsPermission(err) || err.Error() == ErrNotPermitted {
return nil, errors.Wrapf(err, "no permission to open the source device %s, make sure that node agent is running in privileged mode", source)
}
return nil, errors.Wrapf(err, "unable to open the source device %s", source)
}
sf := virtualfs.StreamingFileFromReader(source, device)
return virtualfs.NewStaticDirectory(source, []fs.Entry{sf}), nil
}

View File

@@ -0,0 +1,30 @@
//go:build windows
// +build windows
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"fmt"
"github.com/kopia/kopia/fs"
)
func getLocalBlockEntry(sourcePath string) (fs.Entry, error) {
return nil, fmt.Errorf("block mode is not supported for Windows")
}

View File

@@ -0,0 +1,102 @@
//go:build !windows
// +build !windows
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"context"
"io"
"os"
"path/filepath"
"syscall"
"github.com/kopia/kopia/fs"
"github.com/kopia/kopia/snapshot/restore"
"github.com/pkg/errors"
)
type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
}
var _ restore.Output = &BlockOutput{}
const bufferSize = 128 * 1024
func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remoteFile fs.File) error {
remoteReader, err := remoteFile.Open(ctx)
if err != nil {
return errors.Wrapf(err, "failed to open remote file %s", remoteFile.Name())
}
defer remoteReader.Close()
targetFile, err := os.Create(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "failed to open file %s", o.targetFileName)
}
defer targetFile.Close()
buffer := make([]byte, bufferSize)
readData := true
for readData {
bytesToWrite, err := remoteReader.Read(buffer)
if err != nil {
if err != io.EOF {
return errors.Wrapf(err, "failed to read data from remote file %s", remoteFile.Name())
}
readData = false
}
if bytesToWrite > 0 {
offset := 0
for bytesToWrite > 0 {
if bytesWritten, err := targetFile.Write(buffer[offset : offset+bytesToWrite]); err == nil {
bytesToWrite -= bytesWritten
offset += bytesWritten
} else {
return errors.Wrapf(err, "failed to write data to file %s", o.targetFileName)
}
}
}
}
return nil
}
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
var err error
o.targetFileName, err = filepath.EvalSymlinks(o.TargetPath)
if err != nil {
return errors.Wrapf(err, "unable to evaluate symlinks for %s", o.targetFileName)
}
fileInfo, err := os.Lstat(o.targetFileName)
if err != nil {
return errors.Wrapf(err, "unable to get the target device information for %s", o.TargetPath)
}
if (fileInfo.Sys().(*syscall.Stat_t).Mode & syscall.S_IFMT) != syscall.S_IFBLK {
return errors.Errorf("target file %s is not a block device", o.TargetPath)
}
return nil
}

View File

@@ -0,0 +1,42 @@
//go:build windows
// +build windows
/*
Copyright The Velero Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kopia
import (
"context"
"fmt"
"github.com/kopia/kopia/fs"
"github.com/kopia/kopia/snapshot/restore"
)
type BlockOutput struct {
*restore.FilesystemOutput
targetFileName string
}
func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remoteFile fs.File) error {
return fmt.Errorf("block mode is not supported for Windows")
}
func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error {
return fmt.Errorf("block mode is not supported for Windows")
}

View File

@@ -210,7 +210,25 @@ func (sr *shimRepository) DeleteManifest(ctx context.Context, id manifest.ID) er
}
func (sr *shimRepository) ReplaceManifests(ctx context.Context, labels map[string]string, payload interface{}) (manifest.ID, error) {
-return manifest.ID(""), errors.New("ReplaceManifests is not supported")
+const minReplaceManifestTimeDelta = 100 * time.Millisecond
+md, err := sr.FindManifests(ctx, labels)
+if err != nil {
+return "", errors.Wrap(err, "unable to load manifests")
+}
+for _, em := range md {
+age := sr.Time().Sub(em.ModTime)
+if age < minReplaceManifestTimeDelta {
+time.Sleep(minReplaceManifestTimeDelta)
+}
+if err := sr.DeleteManifest(ctx, em.ID); err != nil {
+return "", errors.Wrapf(err, "unable to delete previous manifest %v", em.ID)
+}
+}
+return sr.PutManifest(ctx, labels, payload)
}
// Flush all the unified repository data

View File

@@ -49,7 +49,6 @@ func TestShimRepo(t *testing.T) {
shim.PrefetchObjects(ctx, []object.ID{}, "hint")
shim.UpdateDescription("desc")
shim.NewWriter(ctx, repo.WriteSessionOptions{})
-shim.ReplaceManifests(ctx, map[string]string{}, nil)
shim.OnSuccessfulFlush(func(ctx context.Context, w repo.RepositoryWriter) error { return nil })
backupRepo.On("Close", mock.Anything).Return(nil)
@@ -202,3 +201,92 @@ func TestShimObjWriter(t *testing.T) {
objWriter.On("Close").Return(nil)
writer.Close()
}
func TestReplaceManifests(t *testing.T) {
meta1 := udmrepo.ManifestEntryMetadata{
ID: "mani-1",
}
meta2 := udmrepo.ManifestEntryMetadata{
ID: "mani-2",
}
tests := []struct {
name string
backupRepo *mocks.BackupRepo
isGetManifestError bool
expectedError string
expectedID manifest.ID
}{
{
name: "Failed to find manifest",
isGetManifestError: true,
backupRepo: func() *mocks.BackupRepo {
backupRepo := &mocks.BackupRepo{}
backupRepo.On("FindManifests", mock.Anything, mock.Anything).Return([]*udmrepo.ManifestEntryMetadata{},
errors.New("fake-find-error"))
return backupRepo
}(),
expectedError: "unable to load manifests: failed to get manifests with labels map[]: fake-find-error",
},
{
name: "Failed to delete manifest",
isGetManifestError: true,
backupRepo: func() *mocks.BackupRepo {
backupRepo := &mocks.BackupRepo{}
backupRepo.On("FindManifests", mock.Anything, mock.Anything).Return([]*udmrepo.ManifestEntryMetadata{
&meta1,
&meta2,
}, nil)
backupRepo.On("Time").Return(time.Now())
backupRepo.On("DeleteManifest", mock.Anything, mock.Anything).Return(errors.New("fake-delete-error"))
return backupRepo
}(),
expectedError: "unable to delete previous manifest mani-1: fake-delete-error",
},
{
name: "Failed to put manifest",
backupRepo: func() *mocks.BackupRepo {
backupRepo := &mocks.BackupRepo{}
backupRepo.On("FindManifests", mock.Anything, mock.Anything).Return([]*udmrepo.ManifestEntryMetadata{
&meta1,
&meta2,
}, nil)
backupRepo.On("Time").Return(time.Now())
backupRepo.On("DeleteManifest", mock.Anything, mock.Anything).Return(nil)
backupRepo.On("PutManifest", mock.Anything, mock.Anything).Return(udmrepo.ID(""), errors.New("fake-put-error"))
return backupRepo
}(),
expectedError: "fake-put-error",
},
{
name: "Success",
backupRepo: func() *mocks.BackupRepo {
backupRepo := &mocks.BackupRepo{}
backupRepo.On("FindManifests", mock.Anything, mock.Anything).Return([]*udmrepo.ManifestEntryMetadata{
&meta1,
&meta2,
}, nil)
backupRepo.On("Time").Return(time.Now())
backupRepo.On("DeleteManifest", mock.Anything, mock.Anything).Return(nil)
backupRepo.On("PutManifest", mock.Anything, mock.Anything).Return(udmrepo.ID("fake-id"), nil)
return backupRepo
}(),
expectedID: manifest.ID("fake-id"),
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
ctx := context.Background()
id, err := NewShimRepo(tc.backupRepo).ReplaceManifests(ctx, map[string]string{}, nil)
if tc.expectedError != "" {
assert.EqualError(t, err, tc.expectedError)
} else {
assert.NoError(t, err)
}
assert.Equal(t, tc.expectedID, id)
})
}
}

Some files were not shown because too many files have changed in this diff.