Compare commits

...

45 Commits

Author SHA1 Message Date
lyndon-li
a60808256d Merge pull request #9108 from blackpiglet/bump_e2e_upgrade_versions
Bump the Velero and plugin image versions for the upgrade and migrati…
2025-07-24 17:44:48 +08:00
Xun Jiang
befd9d4b51 Bump the Velero and plugin image versions for the upgrade and migration tests.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-24 16:35:54 +08:00
lyndon-li
5ae1caef9d Merge pull request #9107 from Lyndon-Li/release-1.16
1.16.2 changelog
2025-07-24 13:53:12 +08:00
Lyndon-Li
cc2dc02cbc 1.16.2 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-07-24 13:28:38 +08:00
Xun Jiang/Bruce Jiang
189a5b2836 Bump Golang, Ubuntu, and golang.org/x/oauth2 to fix CVEs. (#9104)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-24 12:35:09 +08:00
Xun Jiang/Bruce Jiang
0fc7e2f98a Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9102)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-07-23 22:28:28 -05:00
Shubham Pampattiwar
8adfd8d0b1 Merge pull request #9103 from shubham-pampattiwar/fix-backup-desc-cp
[release-1.16] Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056)
2025-07-23 15:51:14 -07:00
Shubham Pampattiwar
78fd58fb43 Update Backup describe string for DefaultVolumesToFSBackup flag (#9105)
add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit aa2e09c69e)
2025-07-23 15:01:21 -07:00
Shubham Pampattiwar
8f51c1c08c Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056)
add changelog file

Show defaultVolumesToFsBackup in describe only when set by the user

minor ut fix

minor fix

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
(cherry picked from commit 60a6c7384f)

update changelog filename

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
2025-07-23 15:01:21 -07:00
lyndon-li
fd9f3fe79f issue 9077: don't block backup deletion on list VS error (#9101)
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-07-23 11:04:48 -04:00
Scott Seago
043005c7a4 Mounted cloud credentials should not be world-readable (#8919) (#9094)
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-07-21 11:11:38 +08:00
Wenkai Yin(尹文开)
1017d7aa6a Merge pull request #9060 from sseago/multiple-hook-tracking-1.16
[release-1.16] Allow for proper tracking of multiple hooks per container
2025-07-07 11:29:05 +08:00
Scott Seago
6709a8a24b Allow for proper tracking of multiple hooks per container
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-07-02 14:21:32 -04:00
Adarsh Saxena
3415f39a76 Bump golang to v1.23.10 to fix CVEs for 1.16.2 release (#9058)
* Bump golang to v1.23.10 to fix CVEs

Signed-off-by: Adarsh Saxena <adarsh.saxena@acquia.com>

* Dockerfile restic miss 1.23.10

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* restic cve go1.23.10

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Adarsh Saxena <adarsh.saxena@acquia.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
2025-07-02 11:49:30 -04:00
Wenkai Yin(尹文开)
8aeb8a2e70 Merge pull request #9010 from blackpiglet/7785_1.16
[cherry-pick] [release-1.16] Add BSL status check for backup/restore operations.
2025-06-20 14:31:38 +08:00
Xun Jiang
a8ce0fe3a4 Add BSL status check for backup/restore operations.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-06-09 14:53:53 +08:00
Wenkai Yin(尹文开)
2eb97fa8b1 Merge pull request #8940 from ywk253100/250514_fix
Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic
2025-05-14 15:57:37 +08:00
Wenkai Yin(尹文开)
f64fb36508 Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic
Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2025-05-14 15:34:24 +08:00
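
The pattern behind this fix is worth spelling out: a WaitGroup panics if Done() drives its counter negative, so a controller that may observe the same PVB reaching a final status more than once must count each PVB exactly once. A minimal sketch of that pattern in Go follows; the pvbTracker type and isFinal helper are illustrative stand-ins, not Velero's actual implementation.

package main

import (
	"fmt"
	"sync"
)

// pvbTracker counts each PVB's first transition to a final status
// exactly once, so duplicate status events cannot panic the WaitGroup.
type pvbTracker struct {
	wg   sync.WaitGroup
	mu   sync.Mutex
	done map[string]bool // PVBs whose final status was already counted
}

func (t *pvbTracker) add(name string) {
	t.wg.Add(1)
	t.mu.Lock()
	defer t.mu.Unlock()
	t.done[name] = false
}

func (t *pvbTracker) observe(name, status string) {
	if !isFinal(status) {
		return
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.done[name] {
		return // already counted; a second Done() would drive the counter negative
	}
	t.done[name] = true
	t.wg.Done()
}

func isFinal(status string) bool {
	return status == "Completed" || status == "Failed"
}

func main() {
	t := &pvbTracker{done: map[string]bool{}}
	t.add("pvb-1")
	t.observe("pvb-1", "InProgress") // not final: ignored
	t.observe("pvb-1", "Completed")  // first final status: Done() called
	t.observe("pvb-1", "Completed")  // duplicate event: safely ignored
	t.wg.Wait()
	fmt.Println("all PVBs reached a final status exactly once")
}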
Xun Jiang/Bruce Jiang
4bd86f1275 Merge pull request #8939 from blackpiglet/modify_image_usage_1.16
[cherry-pick] [1.16] Modify image usage
2025-05-14 12:49:54 +08:00
Xun Jiang
18ef5e61ad Support using image registry proxy in more cases.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-14 11:09:03 +08:00
Xun Jiang
01aa5385b5 Add default backup repository configuration for E2E.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-14 11:02:14 +08:00
Xun Jiang/Bruce Jiang
361717296b Merge pull request #8928 from Lyndon-Li/release-1.16
1.16.1 changelog update
2025-05-12 19:43:36 +08:00
Lyndon-Li
82dce51004 1.16.1 changelog update
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-12 18:38:50 +08:00
Xun Jiang/Bruce Jiang
659a352ed1 Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8926)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-12 17:00:33 +08:00
lyndon-li
9eeea4f211 Merge pull request #8922 from Lyndon-Li/release-1.16
Bump up base image
2025-05-09 17:41:34 +08:00
Lyndon-Li
e1068d6062 bump up base image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-09 17:11:36 +08:00
Xun Jiang/Bruce Jiang
bcd3d513c4 Merge pull request #8921 from Lyndon-Li/release-1.16
1.16.1 changelog
2025-05-09 16:40:01 +08:00
Lyndon-Li
5e87c3d48e 1.16.1 changelog
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-09 16:10:31 +08:00
lyndon-li
ed68b43acd Merge pull request #8911 from Lyndon-Li/release-1.16
[1.16] issue-8878: relieve node OS deduction error checks
2025-05-09 12:44:03 +08:00
lyndon-li
acc8cc41c3 Merge branch 'release-1.16' into release-1.16
2025-05-09 12:07:51 +08:00
lyndon-li
f1271372e8 Merge pull request #8916 from sseago/warn-managed-fields-patch-1.16
[1.16] Warn managed fields patch 1.16
2025-05-09 12:07:06 +08:00
lyndon-li
4b39481776 Merge branch 'release-1.16' into release-1.16
2025-05-09 11:27:08 +08:00
lyndon-li
80837ee2ac Merge branch 'release-1.16' into warn-managed-fields-patch-1.16
2025-05-09 11:27:03 +08:00
lyndon-li
8de844b8d3 Merge pull request #8920 from blackpiglet/remove_gcr_1.16
[1.16][cherry-pick] Remove pushing images to GCR.
2025-05-09 11:26:15 +08:00
Xun Jiang
2809de9ead Remove pushing images to GCR.
Remove the dependency on GCR.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-05-09 10:39:01 +08:00
Scott Seago
ea9b4f37f3 For not found errors on managed fields, add restore warning
Signed-off-by: Scott Seago <sseago@redhat.com>
2025-05-07 11:28:16 -04:00
Lyndon-Li
7bad9df51d issue-8878: relieve node OS deduction error checks
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-05-07 10:57:44 +08:00
lyndon-li
0c36cc82c1 Merge pull request #8889 from blackpiglet/fix_cve_for_1.16.1
Bump Golang and golang.org/x/net to fix CVEs.
2025-04-28 15:59:18 +08:00
Xun Jiang
0d4fb1fd5e Bump Golang and golang.org/x/net to fix CVEs.
Also fix CVE for Restic.

Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-04-27 15:09:57 +08:00
lyndon-li
8f31599fe4 Merge pull request #8849 from Lyndon-Li/release-1.16
[1.16] Issue 8847: inherit pod info from node-agent-windows
2025-04-08 13:38:04 +08:00
lyndon-li
f8ae1495ac Merge branch 'release-1.16' into release-1.16
2025-04-08 12:57:26 +08:00
Xun Jiang/Bruce Jiang
b469d9f427 Bump base image to 0.2.57 to fix CVEs. (#8853)
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
2025-04-07 23:27:46 -04:00
Lyndon-Li
87084ce3c7 issue 8847: inherit pod info from node-agent-windows
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-04-07 19:41:14 +08:00
lyndon-li
3df026ffdb Merge pull request #8834 from Lyndon-Li/release-1.16
Pin velero image for 1.16.0
2025-03-31 15:15:43 +08:00
Lyndon-Li
406a730c2a pin velero image
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2025-03-31 13:39:27 +08:00
80 changed files with 1371 additions and 602 deletions


@@ -121,6 +121,8 @@ jobs:
curl -LO https://dl.k8s.io/release/v${{ matrix.k8s }}/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
git clone https://github.com/vmware-tanzu-experiments/distributed-data-generator.git -b main /tmp/kibishii
GOPATH=~/go \
CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws \
@@ -132,7 +134,9 @@ jobs:
ADDITIONAL_CREDS_FILE=/tmp/credential \
ADDITIONAL_BSL_BUCKET=additional-bucket \
VELERO_IMAGE=velero:pr-test-linux-amd64 \
PLUGINS=velero/velero-plugin-for-aws:latest \
GINKGO_LABELS="${{ matrix.labels }}" \
KIBISHII_DIRECTORY=/tmp/kibishii/kubernetes/yaml/ \
make -C test/ run-e2e
timeout-minutes: 30
- name: Upload debug bundle


@@ -20,15 +20,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
- id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GCS_SA_KEY }}'
- name: 'set up GCloud SDK'
uses: google-github-actions/setup-gcloud@v2
- name: 'use gcloud CLI'
run: |
gcloud info
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
@@ -52,12 +43,6 @@ jobs:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Use the JSON key in secret to login gcr.io
- uses: 'docker/login-action@v3'
with:
registry: 'gcr.io' # or REGION.docker.pkg.dev
username: '_json_key'
password: '${{ secrets.GCR_SA_KEY }}'
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'


@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.23-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.23.11-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.23-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.23.11-bookworm AS restic-builder
ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.73
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
@@ -82,4 +82,3 @@ COPY --from=velero-builder /output /
COPY --from=restic-builder /output /
USER cnb:cnb


@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.23-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.23.10-bookworm AS velero-builder
ARG GOPROXY
ARG BIN


@@ -34,11 +34,9 @@ REGISTRY ?= velero
# docker buildx create --name=velero-builder --driver=docker-container --bootstrap --use --config ./buildkitd.toml
# Refer to https://github.com/docker/buildx/issues/1370#issuecomment-1288516840 for more details
INSECURE_REGISTRY ?= false
GCR_REGISTRY ?= gcr.io/velero-gcp
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
GCR_IMAGE ?= $(GCR_REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
@@ -81,10 +79,8 @@ TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION) $(GCR_IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION)
endif
# check buildx is enabled only if docker is in path
@@ -116,7 +112,6 @@ CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 win
BUILD_OUTPUT_TYPE ?= docker
BUILD_OS ?= linux
BUILD_ARCH ?= amd64
BUILD_TAG_GCR ?= false
BUILD_WINDOWS_VERSION ?= ltsc2022
ifeq ($(BUILD_OUTPUT_TYPE), docker)
@@ -134,9 +129,6 @@ ALL_OS_ARCH.windows = $(foreach os, $(filter windows,$(ALL_OS)), $(foreach arch,
ALL_OS_ARCH = $(ALL_OS_ARCH.linux)$(ALL_OS_ARCH.windows)
ALL_IMAGE_TAGS = $(IMAGE_TAGS)
ifeq ($(BUILD_TAG_GCR), true)
ALL_IMAGE_TAGS += $(GCR_IMAGE_TAGS)
endif
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)


@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.23 as tilt-helper
FROM golang:1.23.11 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \


@@ -1,3 +1,48 @@
## v1.16.2
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.2
### Container Image
`velero/velero:v1.16.2`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### All Changes
* Update "Default Volumes to Fs Backup" to "File System Backup (Default)" (#9105, @shubham-pampattiwar)
* Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9103, @shubham-pampattiwar)
* Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9102, @blackpiglet)
* Fix issue #9077, don't block backup deletion on list VS error (#9101, @Lyndon-Li)
* Mounted cloud credentials should not be world-readable (#9094, @sseago)
* Allow for proper tracking of multiple hooks per container (#9060, @sseago)
* Add BSL status check for backup/restore operations. (#9010, @blackpiglet)
## v1.16.1
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.1
### Container Image
`velero/velero:v1.16.1`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### All Changes
* Call WaitGroup.Done() once only when PVB changes to final status the first time to avoid panic (#8940, @ywk253100)
* Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8926, @blackpiglet)
* Warn for not found error in patching managed fields (#8916, @sseago)
* Fix issue 8878, relieve node OS deduction error checks (#8911, @Lyndon-Li)
## v1.16
### Download

go.mod

@@ -2,7 +2,7 @@ module github.com/vmware-tanzu/velero
go 1.23.0
toolchain go1.23.6
toolchain go1.23.11
require (
cloud.google.com/go/storage v1.50.0
@@ -44,9 +44,9 @@ require (
go.uber.org/zap v1.27.0
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1
golang.org/x/mod v0.22.0
golang.org/x/net v0.36.0
golang.org/x/net v0.38.0
golang.org/x/oauth2 v0.27.0
golang.org/x/text v0.22.0
golang.org/x/text v0.23.0
google.golang.org/api v0.218.0
google.golang.org/grpc v1.69.4
google.golang.org/protobuf v1.36.3
@@ -179,10 +179,10 @@ require (
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.35.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/term v0.29.0 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/time v0.9.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

go.sum

@@ -780,8 +780,8 @@ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.35.0 h1:b15kiHdrGCHrP6LvwaQ3c03kgNhhiMgvlhxHQhmg2Xs=
golang.org/x/crypto v0.35.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -866,8 +866,8 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.36.0 h1:vWF2fRbw4qslQsQzgFqZff+BItCvGFQqKzKIzx1rmoA=
golang.org/x/net v0.36.0/go.mod h1:bFmbeoIPfrw4sMHNhb4J9f6+tPziuGjq7Jk/38fxi1I=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -894,8 +894,8 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -959,14 +959,14 @@ golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -976,8 +976,8 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$TARGETPLATFORM golang:1.23-bookworm
FROM --platform=$TARGETPLATFORM golang:1.23.11-bookworm
ARG GOPROXY


@@ -113,5 +113,4 @@ TAG_LATEST="$TAG_LATEST" \
BUILD_OS="$BUILD_OS" \
BUILD_ARCH="$BUILD_ARCH" \
BUILD_OUTPUT_TYPE=$OUTPUT_TYPE \
BUILD_TAG_GCR=true \
make all-containers


@@ -1,8 +1,8 @@
diff --git a/go.mod b/go.mod
index 5f939c481..0a0a353a7 100644
index 5f939c481..3ff6e6fa1 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,32 @@ require (
@@ -24,32 +24,31 @@ require (
github.com/restic/chunker v0.4.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
@@ -14,23 +14,23 @@ index 5f939c481..0a0a353a7 100644
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.35.0
+ golang.org/x/net v0.36.0
+ golang.org/x/oauth2 v0.7.0
+ golang.org/x/sync v0.11.0
+ golang.org/x/sys v0.30.0
+ golang.org/x/term v0.29.0
+ golang.org/x/text v0.22.0
+ golang.org/x/crypto v0.36.0
+ golang.org/x/net v0.38.0
+ golang.org/x/oauth2 v0.27.0
+ golang.org/x/sync v0.12.0
+ golang.org/x/sys v0.31.0
+ golang.org/x/term v0.30.0
+ golang.org/x/text v0.23.0
+ google.golang.org/api v0.114.0
)
require (
- cloud.google.com/go v0.108.0 // indirect
- cloud.google.com/go/compute v1.15.1 // indirect
+ cloud.google.com/go v0.110.0 // indirect
+ cloud.google.com/go/compute v1.19.1 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
- cloud.google.com/go/compute/metadata v0.2.3 // indirect
- cloud.google.com/go/iam v0.10.0 // indirect
+ cloud.google.com/go v0.110.0 // indirect
+ cloud.google.com/go/compute/metadata v0.3.0 // indirect
+ cloud.google.com/go/iam v0.13.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
@@ -49,7 +49,7 @@ index 5f939c481..0a0a353a7 100644
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.3 // indirect
@@ -63,11 +63,13 @@ require (
@@ -63,11 +62,13 @@ require (
go.opencensus.io v0.24.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/appengine v1.6.7 // indirect
@@ -66,26 +66,27 @@ index 5f939c481..0a0a353a7 100644
-go 1.18
+go 1.23.0
+
+toolchain go1.23.7
+toolchain go1.23.11
\ No newline at end of file
diff --git a/go.sum b/go.sum
index 026e1d2fa..5ebc8e609 100644
index 026e1d2fa..d7857bb2b 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,26 @@
@@ -1,23 +1,24 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.108.0 h1:xntQwnfn8oHGX0crLVinvHM+AhXvi3QHQIEcX/2hiWk=
-cloud.google.com/go v0.108.0/go.mod h1:lNUfQqusBJp0bgAg6qrHgYFYbTB+dOiob1itwnlD33Q=
-cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE=
-cloud.google.com/go/compute v1.15.1/go.mod h1:bjjoF/NtFUrkD/urWfdHaKuOPDR5nWIs63rR+SXhcpA=
+cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys=
+cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY=
+cloud.google.com/go/compute v1.19.1 h1:am86mquDUgjGNWxiGn+5PGLbmgiWXlE/yNWpIpNvuXY=
+cloud.google.com/go/compute v1.19.1/go.mod h1:6ylj3a05WF8leseCdIf77NK0g1ey+nj5IKd5/kvShxE=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
-cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
-cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
-cloud.google.com/go/iam v0.10.0 h1:fpP/gByFs6US1ma53v7VxhvbJpO2Aapng6wabJ99MuI=
-cloud.google.com/go/iam v0.10.0/go.mod h1:nXAECrMt2qHpF6RZUZseteD6QyanL68reN4OXPw0UWM=
-cloud.google.com/go/longrunning v0.3.0 h1:NjljC+FYPV3uh5/OwWT6pVU+doBqMg2x/rZlE+CamDs=
+cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys=
+cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY=
+cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
+cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
+cloud.google.com/go/iam v0.13.0 h1:+CmB+K0J/33d0zSQ9SlFWUeCCEn5XJA0ZMZ3pHE9u8k=
+cloud.google.com/go/iam v0.13.0/go.mod h1:ljOg+rcNfzZ5d6f1nAUJ8ZIxOaZUVoS14bKCtaLZ/D0=
+cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM=
@@ -105,7 +106,7 @@ index 026e1d2fa..5ebc8e609 100644
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Julusian/godocdown v0.0.0-20170816220326-6d19f8ff2df8/go.mod h1:INZr5t32rG59/5xeltqoCJoNY7e5x/3xoY9WSWVWg74=
github.com/anacrolix/fuse v0.2.0 h1:pc+To78kI2d/WUjIyrsdqeJQAesuwpGxlI3h1nAv3Do=
@@ -54,6 +57,7 @@ github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNu
@@ -54,6 +55,7 @@ github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNu
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
@@ -113,7 +114,7 @@ index 026e1d2fa..5ebc8e609 100644
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -70,8 +74,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
@@ -70,8 +72,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
@@ -124,7 +125,7 @@ index 026e1d2fa..5ebc8e609 100644
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -82,17 +86,18 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
@@ -82,17 +84,18 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -148,7 +149,7 @@ index 026e1d2fa..5ebc8e609 100644
github.com/hashicorp/golang-lru/v2 v2.0.1 h1:5pv5N1lT1fjLg2VQ5KWc7kmucp2x/kvFOnxuVTqZ6x4=
github.com/hashicorp/golang-lru/v2 v2.0.1/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
@@ -114,6 +119,7 @@ github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
@@ -114,6 +117,7 @@ github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kurin/blazer v0.5.4-0.20211030221322-ba894c124ac6 h1:nz7i1au+nDzgExfqW5Zl6q85XNTvYoGnM5DHiQC0yYs=
github.com/kurin/blazer v0.5.4-0.20211030221322-ba894c124ac6/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
@@ -156,7 +157,7 @@ index 026e1d2fa..5ebc8e609 100644
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.46 h1:Vo3tNmNXuj7ME5qrvN4iadO7b4mzu/RSFdUkUhaPldk=
@@ -129,6 +135,7 @@ github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3P
@@ -129,6 +133,7 @@ github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3P
github.com/ncw/swift/v2 v2.0.1 h1:q1IN8hNViXEv8Zvg3Xdis4a3c4IlIGezkYz09zQL5J0=
github.com/ncw/swift/v2 v2.0.1/go.mod h1:z0A9RVdYPjNjXVo2pDOPxZ4eu3oarO1P91fTItcb+Kg=
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 h1:Qj1ukM4GlMWXNdMBuXcXfz/Kw9s1qm0CLY32QxuSImI=
@@ -164,66 +165,66 @@ index 026e1d2fa..5ebc8e609 100644
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
@@ -172,8 +179,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
@@ -172,8 +177,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.35.0 h1:b15kiHdrGCHrP6LvwaQ3c03kgNhhiMgvlhxHQhmg2Xs=
+golang.org/x/crypto v0.35.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
+golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
+golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -189,17 +196,17 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
@@ -189,17 +194,17 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.36.0 h1:vWF2fRbw4qslQsQzgFqZff+BItCvGFQqKzKIzx1rmoA=
+golang.org/x/net v0.36.0/go.mod h1:bFmbeoIPfrw4sMHNhb4J9f6+tPziuGjq7Jk/38fxi1I=
+golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
+golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/oauth2 v0.7.0 h1:qe6s0zUXlPX80/dITx3440hWZ7GwMwgDDyrSGTPJG/g=
+golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4=
+golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M=
+golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
+golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
+golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -214,17 +221,17 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
@@ -214,17 +219,17 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
+golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
+golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
+golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
+golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
+golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
+golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
+golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
+golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -237,8 +244,8 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
@@ -237,8 +242,8 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
@@ -234,7 +235,7 @@ index 026e1d2fa..5ebc8e609 100644
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
@@ -246,15 +253,15 @@ google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
@@ -246,15 +251,15 @@ google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
@@ -254,7 +255,7 @@ index 026e1d2fa..5ebc8e609 100644
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -266,14 +273,15 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
@@ -266,14 +271,15 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=


@@ -71,7 +71,8 @@ func (n *namespacedFileStore) Path(selector *corev1api.SecretKeySelector) (strin
keyFilePath := filepath.Join(n.fsRoot, fmt.Sprintf("%s-%s", selector.Name, selector.Key))
file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
// owner RW perms, group R perms, no public perms
file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0640)
if err != nil {
return "", errors.Wrap(err, "unable to open credentials file for writing")
}
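
For context on the permission change above: mode 0644 leaves the credentials file world-readable (owner rw-, group r--, other r--), while 0640 drops all access for "other" (owner rw-, group r--, other ---). A minimal standalone sketch of the new call follows; the /tmp path is illustrative, and the effective mode is still subject to the process umask.

package main

import (
	"fmt"
	"os"
)

func main() {
	// 0640: owner read/write, group read, no access for others. The
	// requested mode is further masked by the process umask, so the
	// on-disk permissions can end up stricter than requested.
	f, err := os.OpenFile("/tmp/credentials-example",
		os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0640)
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Printf("mode: %v\n", info.Mode().Perm()) // typically -rw-r-----
}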


@@ -46,6 +46,9 @@ type hookKey struct {
// Container indicates the container hooks use.
// For hooks specified in the backup/restore spec, the container might be the same under different hookName.
container string
// hookIndex contains the slice index for the specific hook, in order to track multiple hooks
// for the same container
hookIndex int
}
// hookStatus records the execution status of a specific hook.
@@ -83,7 +86,7 @@ func NewHookTracker() *HookTracker {
// Add adds a hook to the hook tracker
// Add must precede the Record for each individual hook.
// In other words, a hook must be added to the tracker before its execution result is recorded.
func (ht *HookTracker) Add(podNamespace, podName, container, source, hookName string, hookPhase HookPhase) {
func (ht *HookTracker) Add(podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookIndex int) {
ht.lock.Lock()
defer ht.lock.Unlock()
@@ -94,6 +97,7 @@ func (ht *HookTracker) Add(podNamespace, podName, container, source, hookName st
container: container,
hookPhase: hookPhase,
hookName: hookName,
hookIndex: hookIndex,
}
if _, ok := ht.tracker[key]; !ok {
@@ -108,7 +112,7 @@ func (ht *HookTracker) Add(podNamespace, podName, container, source, hookName st
// Record records the hook's execution status
// Add must precede the Record for each individual hook.
// In other words, a hook must be added to the tracker before its execution result is recorded.
func (ht *HookTracker) Record(podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookFailed bool, hookErr error) error {
func (ht *HookTracker) Record(podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookIndex int, hookFailed bool, hookErr error) error {
ht.lock.Lock()
defer ht.lock.Unlock()
@@ -119,6 +123,7 @@ func (ht *HookTracker) Record(podNamespace, podName, container, source, hookName
container: container,
hookPhase: hookPhase,
hookName: hookName,
hookIndex: hookIndex,
}
if _, ok := ht.tracker[key]; !ok {
@@ -179,24 +184,24 @@ func NewMultiHookTracker() *MultiHookTracker {
}
// Add adds a backup/restore hook to the tracker
func (mht *MultiHookTracker) Add(name, podNamespace, podName, container, source, hookName string, hookPhase HookPhase) {
func (mht *MultiHookTracker) Add(name, podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookIndex int) {
mht.lock.Lock()
defer mht.lock.Unlock()
if _, ok := mht.trackers[name]; !ok {
mht.trackers[name] = NewHookTracker()
}
mht.trackers[name].Add(podNamespace, podName, container, source, hookName, hookPhase)
mht.trackers[name].Add(podNamespace, podName, container, source, hookName, hookPhase, hookIndex)
}
// Record records a backup/restore hook execution status
func (mht *MultiHookTracker) Record(name, podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookFailed bool, hookErr error) error {
func (mht *MultiHookTracker) Record(name, podNamespace, podName, container, source, hookName string, hookPhase HookPhase, hookIndex int, hookFailed bool, hookErr error) error {
mht.lock.RLock()
defer mht.lock.RUnlock()
var err error
if _, ok := mht.trackers[name]; ok {
err = mht.trackers[name].Record(podNamespace, podName, container, source, hookName, hookPhase, hookFailed, hookErr)
err = mht.trackers[name].Record(podNamespace, podName, container, source, hookName, hookPhase, hookIndex, hookFailed, hookErr)
} else {
err = fmt.Errorf("the backup/restore not exist in hook tracker, backup/restore name: %s", name)
}
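
The hookIndex field added here makes the tracker's map key unique per position in the hook slice, so two hooks that target the same container (and may even share a hook name) no longer collide on a single key. A reduced sketch of the idea follows; the types are simplified stand-ins, not Velero's actual ones.

package main

import "fmt"

// hookKey is a simplified stand-in for the tracker key above: without
// hookIndex, two hooks for the same container and name collapse into one
// map entry and get undercounted.
type hookKey struct {
	container string
	hookName  string
	hookIndex int // position of the hook in the spec's slice
}

func main() {
	tracker := map[hookKey]bool{}

	// Two spec hooks with the same name and container, at indices 0 and 1.
	tracker[hookKey{container: "app", hookName: "h2", hookIndex: 0}] = true
	tracker[hookKey{container: "app", hookName: "h2", hookIndex: 1}] = true

	// With hookIndex both entries survive; without it the second Add would
	// overwrite the first, which is why the updated tests expect an
	// attempted count of 3 rather than 2.
	fmt.Println("tracked hooks:", len(tracker)) // 2
}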


@@ -33,7 +33,7 @@ func TestNewHookTracker(t *testing.T) {
func TestHookTracker_Add(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
key := hookKey{
podNamespace: "ns1",
@@ -50,8 +50,8 @@ func TestHookTracker_Add(t *testing.T) {
func TestHookTracker_Record(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
err := tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
err := tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
key := hookKey{
podNamespace: "ns1",
@@ -67,10 +67,10 @@ func TestHookTracker_Record(t *testing.T) {
assert.True(t, info.hookExecuted)
assert.NoError(t, err)
err = tracker.Record("ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
err = tracker.Record("ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
assert.Error(t, err)
err = tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", false, nil)
err = tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, false, nil)
assert.NoError(t, err)
assert.True(t, info.hookFailed)
}
@@ -78,29 +78,30 @@ func TestHookTracker_Record(t *testing.T) {
func TestHookTracker_Stat(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
tracker.Add("ns2", "pod2", "container1", HookSourceAnnotation, "h2", "")
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
tracker.Add("ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 0)
tracker.Add("ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 1)
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
attempted, failed := tracker.Stat()
assert.Equal(t, 2, attempted)
assert.Equal(t, 3, attempted)
assert.Equal(t, 1, failed)
}
func TestHookTracker_IsComplete(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, true, fmt.Errorf("err"))
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, 0)
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, 0, true, fmt.Errorf("err"))
assert.True(t, tracker.IsComplete())
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
assert.False(t, tracker.IsComplete())
}
func TestHookTracker_HookErrs(t *testing.T) {
tracker := NewHookTracker()
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
tracker.Add("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
tracker.Record("ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
hookErrs := tracker.HookErrs()
assert.Len(t, hookErrs, 1)
@@ -109,7 +110,7 @@ func TestHookTracker_HookErrs(t *testing.T) {
func TestMultiHookTracker_Add(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
key := hookKey{
podNamespace: "ns1",
@@ -118,6 +119,7 @@ func TestMultiHookTracker_Add(t *testing.T) {
hookPhase: "",
hookSource: HookSourceAnnotation,
hookName: "h1",
hookIndex: 0,
}
_, ok := mht.trackers["restore1"].tracker[key]
@@ -126,8 +128,8 @@ func TestMultiHookTracker_Add(t *testing.T) {
func TestMultiHookTracker_Record(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
err := mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
err := mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
key := hookKey{
podNamespace: "ns1",
@@ -136,6 +138,7 @@ func TestMultiHookTracker_Record(t *testing.T) {
hookPhase: "",
hookSource: HookSourceAnnotation,
hookName: "h1",
hookIndex: 0,
}
info := mht.trackers["restore1"].tracker[key]
@@ -143,29 +146,31 @@ func TestMultiHookTracker_Record(t *testing.T) {
assert.True(t, info.hookExecuted)
assert.NoError(t, err)
err = mht.Record("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
err = mht.Record("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
assert.Error(t, err)
err = mht.Record("restore2", "ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
err = mht.Record("restore2", "ns2", "pod2", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
assert.Error(t, err)
}
func TestMultiHookTracker_Stat(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
mht.Add("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "")
mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
mht.Record("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", false, nil)
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
mht.Add("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 0)
mht.Add("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 1)
mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
mht.Record("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 0, false, nil)
mht.Record("restore1", "ns2", "pod2", "container1", HookSourceAnnotation, "h2", "", 1, false, nil)
attempted, failed := mht.Stat("restore1")
assert.Equal(t, 2, attempted)
assert.Equal(t, 3, attempted)
assert.Equal(t, 1, failed)
}
func TestMultiHookTracker_Delete(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
mht.Delete("restore1")
_, ok := mht.trackers["restore1"]
@@ -174,11 +179,11 @@ func TestMultiHookTracker_Delete(t *testing.T) {
func TestMultiHookTracker_IsComplete(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("backup1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre)
mht.Record("backup1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, true, fmt.Errorf("err"))
mht.Add("backup1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, 0)
mht.Record("backup1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", PhasePre, 0, true, fmt.Errorf("err"))
assert.True(t, mht.IsComplete("backup1"))
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
assert.False(t, mht.IsComplete("restore1"))
assert.True(t, mht.IsComplete("restore2"))
@@ -186,8 +191,8 @@ func TestMultiHookTracker_IsComplete(t *testing.T) {
func TestMultiHookTracker_HookErrs(t *testing.T) {
mht := NewMultiHookTracker()
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "")
mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", true, fmt.Errorf("err"))
mht.Add("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0)
mht.Record("restore1", "ns1", "pod1", "container1", HookSourceAnnotation, "h1", "", 0, true, fmt.Errorf("err"))
hookErrs := mht.HookErrs("restore1")
assert.Len(t, hookErrs, 1)


@@ -223,7 +223,7 @@ func (h *DefaultItemHookHandler) HandleHooks(
hookFromAnnotations = getPodExecHookFromAnnotations(metadata.GetAnnotations(), "", log)
}
if hookFromAnnotations != nil {
hookTracker.Add(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase)
hookTracker.Add(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase, 0)
hookLog := log.WithFields(
logrus.Fields{
@@ -239,7 +239,7 @@ func (h *DefaultItemHookHandler) HandleHooks(
hookLog.WithError(errExec).Error("Error executing hook")
hookFailed = true
}
errTracker := hookTracker.Record(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase, hookFailed, errExec)
errTracker := hookTracker.Record(namespace, name, hookFromAnnotations.Container, HookSourceAnnotation, "", phase, 0, hookFailed, errExec)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -267,10 +267,10 @@ func (h *DefaultItemHookHandler) HandleHooks(
hooks = resourceHook.Post
}
for _, hook := range hooks {
for i, hook := range hooks {
if groupResource == kuberesource.Pods {
if hook.Exec != nil {
hookTracker.Add(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase)
hookTracker.Add(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase, i)
// The remaining hooks will only be executed if modeFailError is nil.
// Otherwise, execution will stop and only hook collection will occur.
if modeFailError == nil {
@@ -291,7 +291,7 @@ func (h *DefaultItemHookHandler) HandleHooks(
modeFailError = err
}
}
errTracker := hookTracker.Record(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase, hookFailed, err)
errTracker := hookTracker.Record(namespace, name, hook.Exec.Container, HookSourceSpec, resourceHook.Name, phase, i, hookFailed, err)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -534,6 +534,11 @@ type PodExecRestoreHook struct {
HookSource string
Hook velerov1api.ExecRestoreHook
executed bool
// hookIndex contains the slice index for this hook in the restore spec,
// allowing multiple hooks for the same container to be tracked. It is stored
// here because restore hook results are recorded outside of the original
// slice iteration.
hookIndex int
}
// GroupRestoreExecHooks returns a list of hooks to be executed in a pod grouped by
@@ -561,12 +566,13 @@ func GroupRestoreExecHooks(
if hookFromAnnotation.Container == "" {
hookFromAnnotation.Container = pod.Spec.Containers[0].Name
}
hookTrack.Add(restoreName, metadata.GetNamespace(), metadata.GetName(), hookFromAnnotation.Container, HookSourceAnnotation, "<from-annotation>", HookPhase(""))
hookTrack.Add(restoreName, metadata.GetNamespace(), metadata.GetName(), hookFromAnnotation.Container, HookSourceAnnotation, "<from-annotation>", HookPhase(""), 0)
byContainer[hookFromAnnotation.Container] = []PodExecRestoreHook{
{
HookName: "<from-annotation>",
HookSource: HookSourceAnnotation,
Hook: *hookFromAnnotation,
hookIndex: 0,
},
}
return byContainer, nil
@@ -579,7 +585,7 @@ func GroupRestoreExecHooks(
if !rrh.Selector.applicableTo(kuberesource.Pods, namespace, labels) {
continue
}
for _, rh := range rrh.RestoreHooks {
for i, rh := range rrh.RestoreHooks {
if rh.Exec == nil {
continue
}
@@ -587,6 +593,7 @@ func GroupRestoreExecHooks(
HookName: rrh.Name,
Hook: *rh.Exec,
HookSource: HookSourceSpec,
hookIndex: i,
}
// default to false if attr WaitForReady not set
if named.Hook.WaitForReady == nil {
@@ -596,7 +603,7 @@ func GroupRestoreExecHooks(
if named.Hook.Container == "" {
named.Hook.Container = pod.Spec.Containers[0].Name
}
hookTrack.Add(restoreName, metadata.GetNamespace(), metadata.GetName(), named.Hook.Container, HookSourceSpec, rrh.Name, HookPhase(""))
hookTrack.Add(restoreName, metadata.GetNamespace(), metadata.GetName(), named.Hook.Container, HookSourceSpec, rrh.Name, HookPhase(""), i)
byContainer[named.Hook.Container] = append(byContainer[named.Hook.Container], named)
}
}

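GroupRestoreExecHooks now captures the loop index (for i, rh := range rrh.RestoreHooks) and stores it in the unexported hookIndex field, because results are recorded later, while walking the byContainer map, when the original slice position is no longer in scope. A compact sketch of this capture-now, record-later pattern (names illustrative):

```go
package main

import "fmt"

// namedHook mirrors PodExecRestoreHook: the spec slice index is
// remembered at grouping time so the later, out-of-order Record call
// can still address the right tracker entry.
type namedHook struct {
	container string
	index     int
}

func group(containers []string) map[string][]namedHook {
	byContainer := map[string][]namedHook{}
	for i, c := range containers { // i is the position in the restore spec
		byContainer[c] = append(byContainer[c], namedHook{container: c, index: i})
	}
	return byContainer
}

func main() {
	grouped := group([]string{"c1", "c1", "c2"})
	for c, hooks := range grouped {
		for _, h := range hooks {
			fmt.Printf("%s -> spec index %d\n", c, h.index)
		}
	}
}
```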

@@ -1151,6 +1151,7 @@ func TestGroupRestoreExecHooks(t *testing.T) {
WaitTimeout: metav1.Duration{Duration: time.Minute},
WaitForReady: boolptr.False(),
},
hookIndex: 0,
},
{
HookName: "hook1",
@@ -1163,6 +1164,7 @@ func TestGroupRestoreExecHooks(t *testing.T) {
WaitTimeout: metav1.Duration{Duration: time.Minute * 2},
WaitForReady: boolptr.False(),
},
hookIndex: 2,
},
{
HookName: "hook2",
@@ -1175,6 +1177,7 @@ func TestGroupRestoreExecHooks(t *testing.T) {
WaitTimeout: metav1.Duration{Duration: time.Minute * 4},
WaitForReady: boolptr.True(),
},
hookIndex: 0,
},
},
"container2": {
@@ -1189,6 +1192,7 @@ func TestGroupRestoreExecHooks(t *testing.T) {
WaitTimeout: metav1.Duration{Duration: time.Second * 3},
WaitForReady: boolptr.False(),
},
hookIndex: 1,
},
},
},


@@ -169,7 +169,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
hookLog.Error(err)
errors = append(errors, err)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), true, err)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, true, err)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -195,7 +195,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
hookFailed = true
}
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), hookFailed, hookErr)
errTracker := multiHookTracker.Record(restoreName, newPod.Namespace, newPod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, hookFailed, hookErr)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}
@@ -239,7 +239,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
// containers to become ready.
// Each unexecuted hook is logged as an error and this error will be returned from this function.
for _, hooks := range byContainer {
for _, hook := range hooks {
for i, hook := range hooks {
if hook.executed {
continue
}
@@ -252,7 +252,7 @@ func (e *DefaultWaitExecHookHandler) HandleHooks(
},
)
errTracker := multiHookTracker.Record(restoreName, pod.Namespace, pod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), true, err)
errTracker := multiHookTracker.Record(restoreName, pod.Namespace, pod.Name, hook.Hook.Container, hook.HookSource, hook.HookName, HookPhase(""), i, true, err)
if errTracker != nil {
hookLog.WithError(errTracker).Warn("Error recording the hook in hook tracker")
}


@@ -1007,17 +1007,17 @@ func TestRestoreHookTrackerUpdate(t *testing.T) {
}
hookTracker1 := NewMultiHookTracker()
hookTracker1.Add("restore1", "default", "my-pod", "container1", HookSourceAnnotation, "<from-annotation>", HookPhase(""))
hookTracker1.Add("restore1", "default", "my-pod", "container1", HookSourceAnnotation, "<from-annotation>", HookPhase(""), 0)
hookTracker2 := NewMultiHookTracker()
hookTracker2.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""))
hookTracker2.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""), 0)
hookTracker3 := NewMultiHookTracker()
hookTracker3.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""))
hookTracker3.Add("restore1", "default", "my-pod", "container2", HookSourceSpec, "my-hook-2", HookPhase(""))
hookTracker3.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""), 0)
hookTracker3.Add("restore1", "default", "my-pod", "container2", HookSourceSpec, "my-hook-2", HookPhase(""), 0)
hookTracker4 := NewMultiHookTracker()
hookTracker4.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""))
hookTracker4.Add("restore1", "default", "my-pod", "container1", HookSourceSpec, "my-hook-1", HookPhase(""), 0)
tests1 := []struct {
name string


@@ -217,6 +217,9 @@ func DescribeBackupSpec(d *Describer, spec velerov1api.BackupSpec) {
d.Println()
d.Printf("Velero-Native Snapshot PVs:\t%s\n", BoolPointerString(spec.SnapshotVolumes, "false", "true", "auto"))
if spec.DefaultVolumesToFsBackup != nil {
d.Printf("File System Backup (Default):\t%s\n", BoolPointerString(spec.DefaultVolumesToFsBackup, "false", "true", ""))
}
d.Printf("Snapshot Move Data:\t%s\n", BoolPointerString(spec.SnapshotMoveData, "false", "true", "auto"))
if len(spec.DataMover) == 0 {
s = defaultDataMover

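The describe change prints the File System Backup line only when spec.DefaultVolumesToFsBackup is non-nil, so an unset flag no longer renders at all. It relies on the tri-state pointer-to-string helper used throughout this file; the body below is a plausible implementation inferred from the call sites, not copied from Velero:

```go
package main

import "fmt"

// BoolPointerString renders a *bool as one of three strings; the call
// sites above pass ("false", "true", "auto") or ("false", "true", "").
// Body inferred from usage, treat it as an assumption.
func BoolPointerString(b *bool, falseStr, trueStr, nilStr string) string {
	switch {
	case b == nil:
		return nilStr
	case *b:
		return trueStr
	default:
		return falseStr
	}
}

func main() {
	var unset *bool
	set := true
	fmt.Println(BoolPointerString(unset, "false", "true", "auto")) // auto
	fmt.Println(BoolPointerString(&set, "false", "true", "auto"))  // true
}
```

Since the print is guarded by a non-nil check, the nil branch is never hit for this particular line; the empty string argument is just a safe default.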

@@ -281,6 +281,71 @@ Hooks:
OrderedResources:
kind1: rs1-1, rs1-2
`
input4 := builder.ForBackup("test-ns", "test-backup-4").
DefaultVolumesToFsBackup(true).
StorageLocation("backup-location").
Result().Spec
expect4 := `Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Or label selector: <none>
Storage Location: backup-location
Velero-Native Snapshot PVs: auto
File System Backup (Default): true
Snapshot Move Data: auto
Data Mover: velero
TTL: 0s
CSISnapshotTimeout: 0s
ItemOperationTimeout: 0s
Hooks: <none>
`
input5 := builder.ForBackup("test-ns", "test-backup-5").
DefaultVolumesToFsBackup(false).
StorageLocation("backup-location").
Result().Spec
expect5 := `Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Or label selector: <none>
Storage Location: backup-location
Velero-Native Snapshot PVs: auto
File System Backup (Default): false
Snapshot Move Data: auto
Data Mover: velero
TTL: 0s
CSISnapshotTimeout: 0s
ItemOperationTimeout: 0s
Hooks: <none>
`
testcases := []struct {
@@ -303,6 +368,16 @@ OrderedResources:
input: input3,
expect: expect3,
},
{
name: "DefaultVolumesToFsBackup is true",
input: input4,
expect: expect4,
},
{
name: "DefaultVolumesToFsBackup is false",
input: input5,
expect: expect5,
},
}
for _, tc := range testcases {


@@ -102,6 +102,100 @@ Backup Template:
Hooks: <none>
Last Backup: 2023-06-25 15:04:05 +0000 UTC
`
input3 := builder.ForSchedule("velero", "schedule-3").
Phase(velerov1api.SchedulePhaseEnabled).
CronSchedule("0 0 * * *").
Template(builder.ForBackup("velero", "backup-1").DefaultVolumesToFsBackup(true).Result().Spec).
LastBackupTime("2023-06-25 15:04:05").Result()
expect3 := `Name: schedule-3
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Enabled
Paused: false
Schedule: 0 0 * * *
Backup Template:
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Or label selector: <none>
Storage Location:
Velero-Native Snapshot PVs: auto
File System Backup (Default): true
Snapshot Move Data: auto
Data Mover: velero
TTL: 0s
CSISnapshotTimeout: 0s
ItemOperationTimeout: 0s
Hooks: <none>
Last Backup: 2023-06-25 15:04:05 +0000 UTC
`
input4 := builder.ForSchedule("velero", "schedule-4").
Phase(velerov1api.SchedulePhaseEnabled).
CronSchedule("0 0 * * *").
Template(builder.ForBackup("velero", "backup-1").DefaultVolumesToFsBackup(false).Result().Spec).
LastBackupTime("2023-06-25 15:04:05").Result()
expect4 := `Name: schedule-4
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Enabled
Paused: false
Schedule: 0 0 * * *
Backup Template:
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Or label selector: <none>
Storage Location:
Velero-Native Snapshot PVs: auto
File System Backup (Default): false
Snapshot Move Data: auto
Data Mover: velero
TTL: 0s
CSISnapshotTimeout: 0s
ItemOperationTimeout: 0s
Hooks: <none>
Last Backup: 2023-06-25 15:04:05 +0000 UTC
`
@@ -120,6 +214,16 @@ Last Backup: 2023-06-25 15:04:05 +0000 UTC
input: input2,
expect: expect2,
},
{
name: "schedule with DefaultVolumesToFsBackup is true",
input: input3,
expect: expect3,
},
{
name: "schedule with DefaultVolumesToFsBackup is false",
input: input4,
expect: expect4,
},
}
for _, tc := range testcases {


@@ -56,6 +56,7 @@ import (
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/pkg/util/logging"
"github.com/vmware-tanzu/velero/pkg/util/results"
veleroutil "github.com/vmware-tanzu/velero/pkg/util/velero"
)
const (
@@ -417,6 +418,13 @@ func (b *backupReconciler) prepareBackupRequest(backup *velerov1api.Backup, logg
request.Status.ValidationErrors = append(request.Status.ValidationErrors,
fmt.Sprintf("backup can't be created because backup storage location %s is currently in read-only mode", request.StorageLocation.Name))
}
if !veleroutil.BSLIsAvailable(*request.StorageLocation) {
request.Status.ValidationErrors = append(
request.Status.ValidationErrors,
fmt.Sprintf("backup can't be created because BackupStorageLocation %s is in Unavailable status. please create a new backup after the BSL becomes available", request.StorageLocation.Name),
)
}
}
// add the storage location as a label for easy filtering later.

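Several controllers in this change set gate on veleroutil.BSLIsAvailable, whose body is not part of the diff. Given that the updated tests mark locations with Phase(velerov1api.BackupStorageLocationPhaseAvailable), the helper is presumably a phase check along these lines (treat the body as an assumption):

```go
package veleroutil

import (
	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

// BSLIsAvailable reports whether a BackupStorageLocation can currently
// serve requests. The diff only shows call sites; this one-line phase
// check is the implementation the surrounding tests imply.
func BSLIsAvailable(bsl velerov1api.BackupStorageLocation) bool {
	return bsl.Status.Phase == velerov1api.BackupStorageLocationPhaseAvailable
}
```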

@@ -156,7 +156,7 @@ func TestProcessBackupNonProcessedItems(t *testing.T) {
}
func TestProcessBackupValidationFailures(t *testing.T) {
defaultBackupLocation := builder.ForBackupStorageLocation("velero", "loc-1").Result()
defaultBackupLocation := builder.ForBackupStorageLocation("velero", "loc-1").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
tests := []struct {
name string
@@ -184,7 +184,7 @@ func TestProcessBackupValidationFailures(t *testing.T) {
{
name: "backup for read-only backup location fails validation",
backup: defaultBackup().StorageLocation("read-only").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-only").AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-only").AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result(),
expectedErrs: []string{"backup can't be created because backup storage location read-only is currently in read-only mode"},
},
{
@@ -200,6 +200,12 @@ func TestProcessBackupValidationFailures(t *testing.T) {
backupLocation: defaultBackupLocation,
expectedErrs: []string{"include-resources, exclude-resources and include-cluster-resources are old filter parameters.\ninclude-cluster-scoped-resources, exclude-cluster-scoped-resources, include-namespace-scoped-resources and exclude-namespace-scoped-resources are new filter parameters.\nThey cannot be used together"},
},
{
name: "BSL in unavailable state",
backup: defaultBackup().StorageLocation("unavailable").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "unavailable").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result(),
expectedErrs: []string{"backup can't be created because BackupStorageLocation unavailable is in Unavailable status. please create a new backup after the BSL becomes available"},
},
}
for _, test := range tests {
@@ -593,7 +599,7 @@ func TestDefaultVolumesToResticDeprecation(t *testing.T) {
}
func TestProcessBackupCompletions(t *testing.T) {
defaultBackupLocation := builder.ForBackupStorageLocation("velero", "loc-1").Default(true).Bucket("store-1").Result()
defaultBackupLocation := builder.ForBackupStorageLocation("velero", "loc-1").Default(true).Bucket("store-1").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
now, err := time.Parse(time.RFC1123Z, time.RFC1123Z)
require.NoError(t, err)
@@ -653,7 +659,7 @@ func TestProcessBackupCompletions(t *testing.T) {
{
name: "backup with a specific backup location keeps it",
backup: defaultBackup().StorageLocation("alt-loc").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "alt-loc").Bucket("store-1").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "alt-loc").Bucket("store-1").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result(),
defaultVolumesToFsBackup: false,
expectedResult: &velerov1api.Backup{
TypeMeta: metav1.TypeMeta{
@@ -693,6 +699,7 @@ func TestProcessBackupCompletions(t *testing.T) {
backupLocation: builder.ForBackupStorageLocation("velero", "read-write").
Bucket("store-1").
AccessMode(velerov1api.BackupStorageLocationAccessModeReadWrite).
Phase(velerov1api.BackupStorageLocationPhaseAvailable).
Result(),
defaultVolumesToFsBackup: true,
expectedResult: &velerov1api.Backup{
@@ -1415,11 +1422,13 @@ func TestProcessBackupCompletions(t *testing.T) {
}
func TestValidateAndGetSnapshotLocations(t *testing.T) {
defaultBSL := builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "bsl").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
tests := []struct {
name string
backup *velerov1api.Backup
locations []*velerov1api.VolumeSnapshotLocation
defaultLocations map[string]string
bsl velerov1api.BackupStorageLocation
expectedVolumeSnapshotLocationNames []string // adding these in the expected order will allow to test with better msgs in case of a test failure
expectedErrors string
expectedSuccess bool
@@ -1433,6 +1442,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "some-name").Provider("fake-provider").Result(),
},
expectedErrors: "a VolumeSnapshotLocation CRD for the location random-name with the name specified in the backup spec needs to be created before this snapshot can be executed. Error: volumesnapshotlocations.velero.io \"random-name\" not found", expectedSuccess: false,
bsl: *defaultBSL,
},
{
name: "duplicate locationName per provider: should filter out dups",
@@ -1443,6 +1453,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-west-1"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "multiple non-dupe location names per provider should error",
@@ -1454,6 +1465,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedErrors: "more than one VolumeSnapshotLocation name specified for provider aws: aws-us-west-1; unexpected name was aws-us-east-1",
expectedSuccess: false,
bsl: *defaultBSL,
},
{
name: "no location name for the provider exists, only one VSL for the provider: use it",
@@ -1463,6 +1475,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-east-1"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "no location name for the provider exists, no default, more than one VSL for the provider: error",
@@ -1472,6 +1485,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-west-1").Provider("aws").Result(),
},
expectedErrors: "provider aws has more than one possible volume snapshot location, and none were specified explicitly or as a default",
bsl: *defaultBSL,
},
{
name: "no location name for the provider exists, more than one VSL for the provider: the provider's default should be added",
@@ -1483,11 +1497,13 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-east-1"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "no existing location name and no default location name given",
backup: defaultBackup().Phase(velerov1api.BackupPhaseNew).Result(),
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "multiple location names for a provider, default location name for another provider",
@@ -1499,6 +1515,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-west-1", "some-name"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "location name does not correspond to any existing location and snapshotvolume disabled; should return error",
@@ -1510,6 +1527,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: nil,
expectedErrors: "a VolumeSnapshotLocation CRD for the location random-name with the name specified in the backup spec needs to be created before this snapshot can be executed. Error: volumesnapshotlocations.velero.io \"random-name\" not found", expectedSuccess: false,
bsl: *defaultBSL,
},
{
name: "duplicate locationName per provider and snapshotvolume disabled; should return only one BSL",
@@ -1520,6 +1538,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-west-1"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "no location name for the provider exists, only one VSL created and snapshotvolume disabled; should return the VSL",
@@ -1529,6 +1548,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: []string{"aws-us-east-1"},
expectedSuccess: true,
bsl: *defaultBSL,
},
{
name: "multiple location names for a provider, no default location and backup has no location defined, but snapshotvolume disabled, should return error",
@@ -1539,6 +1559,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
},
expectedVolumeSnapshotLocationNames: nil,
expectedErrors: "provider aws has more than one possible volume snapshot location, and none were specified explicitly or as a default",
bsl: *defaultBSL,
},
}


@@ -22,9 +22,8 @@ import (
"fmt"
"time"
"github.com/vmware-tanzu/velero/pkg/util/csi"
jsonpatch "github.com/evanphx/json-patch/v5"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
@@ -37,8 +36,6 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
"github.com/vmware-tanzu/velero/internal/credentials"
"github.com/vmware-tanzu/velero/internal/delete"
"github.com/vmware-tanzu/velero/internal/volume"
@@ -56,8 +53,10 @@ import (
repomanager "github.com/vmware-tanzu/velero/pkg/repository/manager"
repotypes "github.com/vmware-tanzu/velero/pkg/repository/types"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/csi"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
"github.com/vmware-tanzu/velero/pkg/util/kube"
veleroutil "github.com/vmware-tanzu/velero/pkg/util/velero"
)
const (
@@ -202,6 +201,11 @@ func (r *backupDeletionReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, err
}
if !veleroutil.BSLIsAvailable(*location) {
err := r.patchDeleteBackupRequestWithError(ctx, dbr, fmt.Errorf("cannot delete backup because backup storage location %s is currently in Unavailable state", location.Name))
return ctrl.Result{}, err
}
// if the request object has no labels defined, initialize an empty map since
// we will be updating labels
if dbr.Labels == nil {
@@ -264,9 +268,7 @@ func (r *backupDeletionReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if err != nil {
log.WithError(err).Errorf("Unable to download tarball for backup %s, skipping associated DeleteItemAction plugins", backup.Name)
log.Info("Cleaning up CSI volumesnapshots")
if err := r.deleteCSIVolumeSnapshots(ctx, backup, log); err != nil {
errs = append(errs, err.Error())
}
r.deleteCSIVolumeSnapshotsIfAny(ctx, backup, log)
} else {
defer closeAndRemoveFile(backupFile, r.logger)
deleteCtx := &delete.Context{
@@ -503,22 +505,22 @@ func (r *backupDeletionReconciler) deleteExistingDeletionRequests(ctx context.Co
return errs
}
// deleteCSIVolumeSnapshots cleans up the CSI snapshots created by the backup; it should be called when the backup failed
// deleteCSIVolumeSnapshotsIfAny cleans up the CSI snapshots created by the backup; it should be called when the backup failed
// while running, e.g. due to a velero pod restart, and the backup.tar could not be downloaded from storage.
func (r *backupDeletionReconciler) deleteCSIVolumeSnapshots(ctx context.Context, backup *velerov1api.Backup, log logrus.FieldLogger) error {
func (r *backupDeletionReconciler) deleteCSIVolumeSnapshotsIfAny(ctx context.Context, backup *velerov1api.Backup, log logrus.FieldLogger) {
vsList := snapshotv1api.VolumeSnapshotList{}
if err := r.Client.List(ctx, &vsList, &client.ListOptions{
LabelSelector: labels.SelectorFromSet(map[string]string{
velerov1api.BackupNameLabel: label.GetValidName(backup.Name),
}),
}); err != nil {
return errors.Wrap(err, "error listing volume snapshots")
log.WithError(err).Warnf("Could not list volume snapshots, abort")
return
}
for _, item := range vsList.Items {
vs := item
csi.CleanupVolumeSnapshot(&vs, r.Client, log)
}
return nil
}
func (r *backupDeletionReconciler) deletePodVolumeSnapshots(ctx context.Context, backup *velerov1api.Backup) []error {


@@ -126,6 +126,9 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
dbr := defaultTestDbr()
td := setupBackupDeletionControllerTest(t, dbr, location, backup)
@@ -254,7 +257,7 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
t.Run("backup storage location is in read-only mode", func(t *testing.T) {
backup := builder.ForBackup(velerov1api.DefaultNamespace, "foo").StorageLocation("default").Result()
location := builder.ForBackupStorageLocation("velero", "default").AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Result()
location := builder.ForBackupStorageLocation("velero", "default").Phase(velerov1api.BackupStorageLocationPhaseAvailable).AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Result()
td := setupBackupDeletionControllerTest(t, defaultTestDbr(), location, backup)
@@ -268,6 +271,24 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
assert.Len(t, res.Status.Errors, 1)
assert.Equal(t, "cannot delete backup because backup storage location default is currently in read-only mode", res.Status.Errors[0])
})
t.Run("backup storage location is in unavailable state", func(t *testing.T) {
backup := builder.ForBackup(velerov1api.DefaultNamespace, "foo").StorageLocation("default").Result()
location := builder.ForBackupStorageLocation("velero", "default").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result()
td := setupBackupDeletionControllerTest(t, defaultTestDbr(), location, backup)
_, err := td.controller.Reconcile(context.TODO(), td.req)
require.NoError(t, err)
res := &velerov1api.DeleteBackupRequest{}
err = td.fakeClient.Get(ctx, td.req.NamespacedName, res)
require.NoError(t, err)
assert.Equal(t, "Processed", string(res.Status.Phase))
assert.Len(t, res.Status.Errors, 1)
assert.Equal(t, "cannot delete backup because backup storage location default is currently in Unavailable state", res.Status.Errors[0])
})
t.Run("full delete, no errors", func(t *testing.T) {
input := defaultTestDbr()
@@ -297,6 +318,9 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
snapshotLocation := &velerov1api.VolumeSnapshotLocation{
@@ -416,6 +440,9 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
snapshotLocation := &velerov1api.VolumeSnapshotLocation{
@@ -518,6 +545,9 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
snapshotLocation := &velerov1api.VolumeSnapshotLocation{
@@ -600,6 +630,9 @@ func TestBackupDeletionControllerReconcile(t *testing.T) {
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
snapshotLocation := &velerov1api.VolumeSnapshotLocation{


@@ -41,6 +41,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/persistence"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
"github.com/vmware-tanzu/velero/pkg/util/kube"
veleroutil "github.com/vmware-tanzu/velero/pkg/util/velero"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -92,6 +93,10 @@ func (b *backupSyncReconciler) Reconcile(ctx context.Context, req ctrl.Request)
}
return ctrl.Result{}, errors.Wrapf(err, "error getting BackupStorageLocation %s", req.String())
}
if !veleroutil.BSLIsAvailable(*location) {
log.Errorf("BackupStorageLocation is in unavailable state, skip syncing backup from it.")
return ctrl.Result{}, nil
}
pluginManager := b.newPluginManager(log)
defer pluginManager.CleanupClients()


@@ -62,6 +62,9 @@ func defaultLocation(namespace string) *velerov1api.BackupStorageLocation {
},
Default: true,
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
}
@@ -141,6 +144,9 @@ func defaultLocationWithLongerLocationName(namespace string) *velerov1api.Backup
},
},
},
Status: velerov1api.BackupStorageLocationStatus{
Phase: velerov1api.BackupStorageLocationPhaseAvailable,
},
}
}
@@ -177,6 +183,21 @@ var _ = Describe("Backup Sync Reconciler", func() {
namespace: "ns-1",
location: defaultLocation("ns-1"),
},
{
name: "unavailable BSL",
namespace: "ns-1",
location: builder.ForBackupStorageLocation("ns-1", "default").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result(),
cloudBackups: []*cloudBackupData{
{
backup: builder.ForBackup("ns-1", "backup-1").Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-2").Result(),
backupShouldSkipSync: true,
},
},
},
{
name: "normal case",
namespace: "ns-1",


@@ -810,10 +810,7 @@ func (r *DataUploadReconciler) setupExposeParam(du *velerov2alpha1api.DataUpload
return nil, errors.Wrapf(err, "failed to get PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)
}
nodeOS, err := kube.GetPVCAttachingNodeOS(pvc, r.kubeClient.CoreV1(), r.kubeClient.StorageV1(), log)
if err != nil {
return nil, errors.Wrapf(err, "failed to get attaching node OS for PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)
}
nodeOS := kube.GetPVCAttachingNodeOS(pvc, r.kubeClient.CoreV1(), r.kubeClient.StorageV1(), log)
if err := kube.HasNodeWithOS(context.Background(), nodeOS, r.kubeClient.CoreV1()); err != nil {
return nil, errors.Wrapf(err, "no appropriate node to run data upload for PVC %s/%s", du.Spec.SourceNamespace, du.Spec.SourcePVC)


@@ -408,16 +408,6 @@ func TestReconcile(t *testing.T) {
expectedRequeue: ctrl.Result{},
expectedErrMsg: "failed to get PVC",
},
{
name: "Dataupload should fail to get PVC attaching node",
du: dataUploadBuilder().Result(),
pod: builder.ForPod("fake-ns", dataUploadName).Volumes(&corev1.Volume{Name: "test-pvc"}).Result(),
pvc: builder.ForPersistentVolumeClaim("fake-ns", "test-pvc").StorageClass("fake-sc").Result(),
expectedProcessed: true,
expected: dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhaseFailed).Result(),
expectedRequeue: ctrl.Result{},
expectedErrMsg: "error to get storage class",
},
{
name: "Dataupload should fail because expected node doesn't exist",
du: dataUploadBuilder().Result(),


@@ -18,6 +18,7 @@ package controller
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
@@ -36,6 +37,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/constant"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/util/kube"
veleroutil "github.com/vmware-tanzu/velero/pkg/util/velero"
)
const (
@@ -44,6 +46,7 @@ const (
gcFailureBSLNotFound = "BSLNotFound"
gcFailureBSLCannotGet = "BSLCannotGet"
gcFailureBSLReadOnly = "BSLReadOnly"
gcFailureBSLUnavailable = "BSLUnavailable"
)
// gcReconciler creates DeleteBackupRequests for expired backups.
@@ -144,12 +147,18 @@ func (c *gcReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
} else {
backup.Labels[garbageCollectionFailure] = gcFailureBSLCannotGet
}
if err := c.Update(ctx, backup); err != nil {
log.WithError(err).Error("error updating backup labels")
}
return ctrl.Result{}, errors.Wrap(err, "error getting backup storage location")
}
if !veleroutil.BSLIsAvailable(*loc) {
log.Infof("BSL %s is unavailable, cannot gc backup", loc.Name)
return ctrl.Result{}, fmt.Errorf("bsl %s is unavailable, cannot gc backup", loc.Name)
}
if loc.Spec.AccessMode == velerov1api.BackupStorageLocationAccessModeReadOnly {
log.Infof("Backup cannot be garbage-collected because backup storage location %s is currently in read-only mode", loc.Name)
backup.Labels[garbageCollectionFailure] = gcFailureBSLReadOnly

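Note the two different outcomes chosen for an unavailable BSL: the GC reconciler returns an error, so controller-runtime requeues the backup with backoff and garbage collection is retried once the BSL recovers, while the backup sync reconciler above returns an empty result with a nil error, dropping the key until the next resync. A sketch of the contrast (illustrative only):

```go
package controller

import (
	"fmt"

	ctrl "sigs.k8s.io/controller-runtime"
)

// reconcileSketch contrasts the two outcomes used in this change set.
// An error return makes controller-runtime requeue the key with
// exponential backoff; an empty Result with a nil error drops the key
// until the next watch event or periodic resync.
func reconcileSketch(bslAvailable, retryWanted bool) (ctrl.Result, error) {
	if bslAvailable {
		return ctrl.Result{}, nil
	}
	if retryWanted { // gc controller: retry the expired backup later
		return ctrl.Result{}, fmt.Errorf("bsl is unavailable")
	}
	return ctrl.Result{}, nil // backup sync: skip until the next resync
}
```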

@@ -46,7 +46,7 @@ func mockGCReconciler(fakeClient kbclient.Client, fakeClock *testclocks.FakeCloc
func TestGCReconcile(t *testing.T) {
fakeClock := testclocks.NewFakeClock(time.Now())
defaultBackupLocation := builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "default").Result()
defaultBackupLocation := builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "default").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
tests := []struct {
name string
@@ -66,12 +66,12 @@ func TestGCReconcile(t *testing.T) {
{
name: "expired backup in read-only storage location is not deleted",
backup: defaultBackup().Expiration(fakeClock.Now().Add(-time.Minute)).StorageLocation("read-only").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-only").AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-only").AccessMode(velerov1api.BackupStorageLocationAccessModeReadOnly).Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result(),
},
{
name: "expired backup in read-write storage location is deleted",
backup: defaultBackup().Expiration(fakeClock.Now().Add(-time.Minute)).StorageLocation("read-write").Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-write").AccessMode(velerov1api.BackupStorageLocationAccessModeReadWrite).Result(),
backupLocation: builder.ForBackupStorageLocation("velero", "read-write").AccessMode(velerov1api.BackupStorageLocationAccessModeReadWrite).Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result(),
},
{
name: "expired backup with no pending deletion requests is deleted",
@@ -118,6 +118,12 @@ func TestGCReconcile(t *testing.T) {
},
},
},
{
name: "BSL is unavailable",
backup: defaultBackup().Expiration(fakeClock.Now().Add(-time.Second)).StorageLocation("default").Result(),
backupLocation: builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "default").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result(),
expectError: true,
},
}
for _, test := range tests {


@@ -58,6 +58,7 @@ import (
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/pkg/util/logging"
"github.com/vmware-tanzu/velero/pkg/util/results"
veleroutil "github.com/vmware-tanzu/velero/pkg/util/velero"
pkgrestoreUtil "github.com/vmware-tanzu/velero/pkg/util/velero/restore"
)
@@ -393,6 +394,11 @@ func (r *restoreReconciler) validateAndComplete(restore *api.Restore) (backupInf
return backupInfo{}, nil
}
if !veleroutil.BSLIsAvailable(*info.location) {
restore.Status.ValidationErrors = append(restore.Status.ValidationErrors, fmt.Sprintf("the BSL %s is unavailable, cannot retrieve the backup. please retry the restore after the BSL becomes available", info.location.Name))
return backupInfo{}, nil
}
// Fill in the ScheduleName so it's easier to consume for metrics.
if restore.Spec.ScheduleName == "" {
restore.Spec.ScheduleName = info.backup.GetLabels()[api.ScheduleNameLabel]
@@ -728,6 +734,10 @@ func (r *restoreReconciler) deleteExternalResources(restore *api.Restore) error
return errors.Wrap(err, fmt.Sprintf("can't get backup info, backup: %s", restore.Spec.BackupName))
}
if !veleroutil.BSLIsAvailable(*backupInfo.location) {
return fmt.Errorf("bsl %s is unavailable, cannot get the backup info", backupInfo.location.Name)
}
// delete restore files in object storage
pluginManager := r.newPluginManager(r.logger)
defer pluginManager.CleanupClients()


@@ -66,7 +66,7 @@ func TestFetchBackupInfo(t *testing.T) {
{
name: "lister has backup",
backupName: "backup-1",
informerLocations: []*velerov1api.BackupStorageLocation{builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Result()},
informerLocations: []*velerov1api.BackupStorageLocation{builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()},
informerBackups: []*velerov1api.Backup{defaultBackup().StorageLocation("default").Result()},
expectedRes: defaultBackup().StorageLocation("default").Result(),
},
@@ -74,7 +74,7 @@ func TestFetchBackupInfo(t *testing.T) {
name: "lister does not have a backup, but backupSvc does",
backupName: "backup-1",
backupStoreBackup: defaultBackup().StorageLocation("default").Result(),
informerLocations: []*velerov1api.BackupStorageLocation{builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Result()},
informerLocations: []*velerov1api.BackupStorageLocation{builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()},
informerBackups: []*velerov1api.Backup{defaultBackup().StorageLocation("default").Result()},
expectedRes: defaultBackup().StorageLocation("default").Result(),
},
@@ -211,7 +211,7 @@ func TestProcessQueueItemSkips(t *testing.T) {
}
func TestRestoreReconcile(t *testing.T) {
defaultStorageLocation := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Result()
defaultStorageLocation := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
now, err := time.Parse(time.RFC1123Z, time.RFC1123Z)
require.NoError(t, err)
@@ -464,6 +464,22 @@ func TestRestoreReconcile(t *testing.T) {
expectedCompletedTime: &timestamp,
expectedRestorerCall: NewRestore("foo", "bar", "backup-1", "ns-1", "", velerov1api.RestorePhaseInProgress).Result(),
},
{
name: "Restore creation is rejected when BSL is unavailable",
location: builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result(),
restore: NewRestore("foo", "bar", "backup-1", "ns-1", "", velerov1api.RestorePhaseNew).Result(),
backup: defaultBackup().StorageLocation("default").Result(),
expectedErr: false,
expectedPhase: string(velerov1api.RestorePhaseNew),
expectedValidationErrors: []string{"the BSL %s is unavailable, cannot retrieve the backup. please retry the restore after the BSL becomes available"},
},
{
name: "Restore deletion is rejected when BSL is unavailable.",
location: builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result(),
restore: NewRestore("foo", "bar", "backup-1", "ns-1", "", velerov1api.RestorePhaseCompleted).ObjectMeta(builder.WithFinalizers(ExternalResourcesFinalizer), builder.WithDeletionTimestamp(timestamp.Time)).Result(),
backup: defaultBackup().StorageLocation("default").Result(),
expectedErr: true,
},
}
formatFlag := logging.FormatText
@@ -738,7 +754,7 @@ func TestValidateAndCompleteWhenScheduleNameSpecified(t *testing.T) {
Result(),
))
location := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Result()
location := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
require.NoError(t, r.kbClient.Create(context.Background(), location))
restore = &velerov1api.Restore{
@@ -797,7 +813,7 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
},
}
location := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Result()
location := builder.ForBackupStorageLocation("velero", "default").Provider("myCloud").Bucket("bucket").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
require.NoError(t, r.kbClient.Create(context.Background(), location))
require.NoError(t, r.kbClient.Create(


@@ -471,14 +471,14 @@ func TestWaitRestoreExecHook(t *testing.T) {
hookTracker2 := hook.NewMultiHookTracker()
restoreName2 := "restore2"
hookTracker2.Add(restoreName2, "ns", "pod", "con1", "s1", "h1", "")
hookTracker2.Record(restoreName2, "ns", "pod", "con1", "s1", "h1", "", false, nil)
hookTracker2.Add(restoreName2, "ns", "pod", "con1", "s1", "h1", "", 0)
hookTracker2.Record(restoreName2, "ns", "pod", "con1", "s1", "h1", "", 0, false, nil)
hookTracker3 := hook.NewMultiHookTracker()
restoreName3 := "restore3"
podNs, podName, container, source, hookName := "ns", "pod", "con1", "s1", "h1"
hookFailed, hookErr := true, fmt.Errorf("hook failed")
hookTracker3.Add(restoreName3, podNs, podName, container, source, hookName, hook.PhasePre)
hookTracker3.Add(restoreName3, podNs, podName, container, source, hookName, hook.PhasePre, 0)
tests := []struct {
name string
@@ -546,7 +546,7 @@ func TestWaitRestoreExecHook(t *testing.T) {
if tc.waitSec > 0 {
go func() {
time.Sleep(time.Second * time.Duration(tc.waitSec))
tc.hookTracker.Record(tc.restore.Name, tc.podNs, tc.podName, tc.Container, tc.Source, tc.hookName, hook.PhasePre, tc.hookFailed, tc.hookErr)
tc.hookTracker.Record(tc.restore.Name, tc.podNs, tc.podName, tc.Container, tc.Source, tc.hookName, hook.PhasePre, 0, tc.hookFailed, tc.hookErr)
}()
}


@@ -681,6 +681,7 @@ func (e *csiSnapshotExposer) createBackupPod(
RestartPolicy: corev1.RestartPolicyNever,
SecurityContext: securityCtx,
Tolerations: toleration,
ImagePullSecrets: podInfo.imagePullSecrets,
},
}


@@ -377,14 +377,14 @@ func (e *genericRestoreExposer) RebindVolume(ctx context.Context, ownerObject co
}
func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObject corev1.ObjectReference, targetPVC *corev1.PersistentVolumeClaim,
operationTimeout time.Duration, label map[string]string, annotation map[string]string, selectedNode string, resources corev1.ResourceRequirements, nodeType string) (*corev1.Pod, error) {
operationTimeout time.Duration, label map[string]string, annotation map[string]string, selectedNode string, resources corev1.ResourceRequirements, nodeOS string) (*corev1.Pod, error) {
restorePodName := ownerObject.Name
restorePVCName := ownerObject.Name
containerName := string(ownerObject.UID)
volumeName := string(ownerObject.UID)
podInfo, err := getInheritedPodInfo(ctx, e.kubeClient, ownerObject.Namespace, kube.NodeOSLinux)
podInfo, err := getInheritedPodInfo(ctx, e.kubeClient, ownerObject.Namespace, nodeOS)
if err != nil {
return nil, errors.Wrap(err, "error to get inherited pod info from node-agent")
}
@@ -427,7 +427,7 @@ func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObjec
nodeSelector := map[string]string{}
podOS := corev1.PodOS{}
toleration := []corev1.Toleration{}
if nodeType == kube.NodeOSWindows {
if nodeOS == kube.NodeOSWindows {
userID := "ContainerAdministrator"
securityCtx = &corev1.PodSecurityContext{
WindowsOptions: &corev1.WindowsSecurityContextOptions{
@@ -510,6 +510,7 @@ func (e *genericRestoreExposer) createRestorePod(ctx context.Context, ownerObjec
RestartPolicy: corev1.RestartPolicyNever,
SecurityContext: securityCtx,
Tolerations: toleration,
ImagePullSecrets: podInfo.imagePullSecrets,
},
}


@@ -28,14 +28,15 @@ import (
)
type inheritedPodInfo struct {
image string
serviceAccount string
env []v1.EnvVar
envFrom []v1.EnvFromSource
volumeMounts []v1.VolumeMount
volumes []v1.Volume
logLevelArgs []string
logFormatArgs []string
image string
serviceAccount string
env []v1.EnvVar
envFrom []v1.EnvFromSource
volumeMounts []v1.VolumeMount
volumes []v1.Volume
logLevelArgs []string
logFormatArgs []string
imagePullSecrets []v1.LocalObjectReference
}
func getInheritedPodInfo(ctx context.Context, client kubernetes.Interface, veleroNamespace string, osType string) (inheritedPodInfo, error) {
@@ -71,5 +72,7 @@ func getInheritedPodInfo(ctx context.Context, client kubernetes.Interface, veler
}
}
podInfo.imagePullSecrets = podSpec.ImagePullSecrets
return podInfo, nil
}

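getInheritedPodInfo now also carries over ImagePullSecrets, so VGDP pods (and, further below, repository maintenance jobs) can pull images from private registries just like the node-agent or Velero server pod they inherit from. The inheritance step itself is a verbatim copy of the field, roughly:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// copyPullSecrets shows the inheritance step added in this change set:
// whatever pull secrets the source (node-agent / velero server) pod
// spec lists are carried over verbatim to the spawned pod.
func copyPullSecrets(src corev1.PodSpec) []corev1.LocalObjectReference {
	return src.ImagePullSecrets
}

func main() {
	src := corev1.PodSpec{
		ImagePullSecrets: []corev1.LocalObjectReference{{Name: "imagePullSecret1"}},
	}
	spawned := corev1.PodSpec{ImagePullSecrets: copyPullSecrets(src)}
	fmt.Println(spawned.ImagePullSecrets[0].Name) // imagePullSecret1
}
```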

@@ -26,11 +26,11 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes"
"github.com/vmware-tanzu/velero/pkg/util/kube"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes/fake"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
func TestGetInheritedPodInfo(t *testing.T) {
@@ -177,6 +177,11 @@ func TestGetInheritedPodInfo(t *testing.T) {
},
},
ServiceAccountName: "sa-1",
ImagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
},
},
},
},
@@ -317,6 +322,11 @@ func TestGetInheritedPodInfo(t *testing.T) {
"--log-level",
"debug",
},
imagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
},
},
},
}


@@ -23,6 +23,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
"github.com/vmware-tanzu/velero/internal/velero"
)
@@ -177,7 +178,9 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1.DaemonSet {
Name: "cloud-credentials",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: "cloud-credentials",
// read-only for Owner, Group, Public
DefaultMode: ptr.To(int32(0444)),
SecretName: "cloud-credentials",
},
},
},


@@ -24,6 +24,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
"github.com/vmware-tanzu/velero/internal/velero"
"github.com/vmware-tanzu/velero/pkg/builder"
@@ -404,7 +405,9 @@ func Deployment(namespace string, opts ...podTemplateOption) *appsv1.Deployment
Name: "cloud-credentials",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: "cloud-credentials",
// read-only for Owner, Group, Public
DefaultMode: ptr.To(int32(0444)),
SecretName: "cloud-credentials",
},
},
},


@@ -173,11 +173,28 @@ func newBackupper(
return
}
statusChangedToFinal := true
existObj, exist, err := b.pvbIndexer.Get(pvb)
if err == nil && exist {
existPVB, ok := existObj.(*velerov1api.PodVolumeBackup)
// the PVB in the indexer is already in final status, no need to call WaitGroup.Done()
if ok && (existPVB.Status.Phase == velerov1api.PodVolumeBackupPhaseCompleted ||
existPVB.Status.Phase == velerov1api.PodVolumeBackupPhaseFailed) {
statusChangedToFinal = false
}
}
// the indexer inserts the PVB directly if the PVB to be updated doesn't exist
if err := b.pvbIndexer.Update(pvb); err != nil {
log.WithError(err).Errorf("failed to update PVB %s/%s in indexer", pvb.Namespace, pvb.Name)
}
b.wg.Done()
// call WaitGroup.Done() only once, when the PVB first changes to a final status.
// This avoids the case where the handler receives multiple update events whose PVBs are all in final status,
// which would panic with a "negative WaitGroup counter" error.
if statusChangedToFinal {
b.wg.Done()
}
},
},
)

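Informer update handlers can fire more than once for a PVB that is already in a terminal phase (resyncs, updates that do not touch status), and every extra wg.Done() panics with a negative WaitGroup counter. The patch consults the indexer for the previously seen copy and only calls Done on the first transition to a final phase. A standalone sketch of the guard, assuming a simplified phase type:

```go
package main

import (
	"fmt"
	"sync"
)

type phase string

const (
	phaseInProgress phase = "InProgress"
	phaseCompleted  phase = "Completed"
)

// tracker calls Done exactly once per key, on the first transition to a
// final phase, which is the same idea as checking the indexer's
// previous copy in the patch above.
type tracker struct {
	mu   sync.Mutex
	wg   sync.WaitGroup
	last map[string]phase
}

func (t *tracker) expect(key string) {
	t.wg.Add(1)
	t.mu.Lock()
	t.last[key] = phaseInProgress
	t.mu.Unlock()
}

func (t *tracker) onUpdate(key string, p phase) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.last[key] == phaseCompleted { // already final: no extra Done
		return
	}
	t.last[key] = p
	if p == phaseCompleted {
		t.wg.Done()
	}
}

func main() {
	t := &tracker{last: map[string]phase{}}
	t.expect("pvb-1")
	t.onUpdate("pvb-1", phaseCompleted)
	t.onUpdate("pvb-1", phaseCompleted) // duplicate event: safely ignored
	t.wg.Wait()
	fmt.Println("no panic")
}
```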

@@ -407,8 +407,16 @@ func StartNewJob(cli client.Client, ctx context.Context, repo *velerov1api.Backu
return maintenanceJob.Name, nil
}
func buildJob(cli client.Client, ctx context.Context, repo *velerov1api.BackupRepository, bslName string, config *JobConfigs,
podResources kube.PodResources, logLevel logrus.Level, logFormat *logging.FormatFlag) (*batchv1.Job, error) {
func buildJob(
cli client.Client,
ctx context.Context,
repo *velerov1api.BackupRepository,
bslName string,
config *JobConfigs,
podResources kube.PodResources,
logLevel logrus.Level,
logFormat *logging.FormatFlag,
) (*batchv1.Job, error) {
// Get the Velero server deployment
deployment := &appsv1.Deployment{}
err := cli.Get(ctx, types.NamespacedName{Name: "velero", Namespace: repo.Namespace}, deployment)
@@ -431,6 +439,8 @@ func buildJob(cli client.Client, ctx context.Context, repo *velerov1api.BackupRe
// Get the service account from the Velero server deployment
serviceAccount := veleroutil.GetServiceAccountFromVeleroServer(deployment)
imagePullSecrets := veleroutil.GetImagePullSecretsFromVeleroServer(deployment)
// Get image
image := veleroutil.GetVeleroServerImage(deployment)
@@ -520,6 +530,7 @@ func buildJob(cli client.Client, ctx context.Context, repo *velerov1api.BackupRe
Value: "windows",
},
},
ImagePullSecrets: imagePullSecrets,
},
},
},

View File

@@ -903,6 +903,11 @@ func TestBuildJob(t *testing.T) {
},
},
},
ImagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
},
},
},
},
@@ -912,17 +917,18 @@ func TestBuildJob(t *testing.T) {
deploy2.Spec.Template.Labels = map[string]string{"azure.workload.identity/use": "fake-label-value"}
testCases := []struct {
name string
m *JobConfigs
deploy *appsv1.Deployment
logLevel logrus.Level
logFormat *logging.FormatFlag
thirdPartyLabel map[string]string
expectedJobName string
expectedError bool
expectedEnv []v1.EnvVar
expectedEnvFrom []v1.EnvFromSource
expectedPodLabel map[string]string
name string
m *JobConfigs
deploy *appsv1.Deployment
logLevel logrus.Level
logFormat *logging.FormatFlag
thirdPartyLabel map[string]string
expectedJobName string
expectedError bool
expectedEnv []v1.EnvVar
expectedEnvFrom []v1.EnvFromSource
expectedPodLabel map[string]string
expectedImagePullSecrets []v1.LocalObjectReference
}{
{
name: "Valid maintenance job without third party labels",
@@ -964,6 +970,11 @@ func TestBuildJob(t *testing.T) {
expectedPodLabel: map[string]string{
RepositoryNameLabel: "test-123",
},
expectedImagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
},
},
{
name: "Valid maintenance job with third party labels",
@@ -1006,6 +1017,11 @@ func TestBuildJob(t *testing.T) {
RepositoryNameLabel: "test-123",
"azure.workload.identity/use": "fake-label-value",
},
expectedImagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
},
},
{
name: "Error getting Velero server deployment",
@@ -1057,7 +1073,16 @@ func TestBuildJob(t *testing.T) {
cli := fake.NewClientBuilder().WithScheme(scheme).WithRuntimeObjects(objs...).Build()
// Call the function to test
job, err := buildJob(cli, context.TODO(), param.BackupRepo, param.BackupLocation.Name, tc.m, *tc.m.PodResources, tc.logLevel, tc.logFormat)
job, err := buildJob(
cli,
context.TODO(),
param.BackupRepo,
param.BackupLocation.Name,
tc.m,
*tc.m.PodResources,
tc.logLevel,
tc.logFormat,
)
// Check the error
if tc.expectedError {
@@ -1108,6 +1133,8 @@ func TestBuildJob(t *testing.T) {
assert.Equal(t, expectedArgs, container.Args)
assert.Equal(t, tc.expectedPodLabel, job.Spec.Template.Labels)
assert.Equal(t, tc.expectedImagePullSecrets, job.Spec.Template.Spec.ImagePullSecrets)
}
})
}


@@ -17,6 +17,8 @@ limitations under the License.
package csi
import (
"fmt"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v7/apis/volumesnapshot/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -26,6 +28,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
plugincommon "github.com/vmware-tanzu/velero/pkg/plugin/framework/common"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/util"
@@ -106,12 +109,23 @@ func (p *volumeSnapshotRestoreItemAction) Execute(
return nil, errors.WithStack(err)
}
if vsFromBackup.Status == nil ||
vsFromBackup.Status.BoundVolumeSnapshotContentName == nil {
p.log.Errorf("VS %s doesn't have bound VSC", vsFromBackup.Name)
return nil, fmt.Errorf("VS %s doesn't have bound VSC", vsFromBackup.Name)
}
vsc := velero.ResourceIdentifier{
GroupResource: kuberesource.VolumeSnapshotContents,
Name: *vsFromBackup.Status.BoundVolumeSnapshotContentName,
}
p.log.Infof(`Returning from VolumeSnapshotRestoreItemAction with
the bound VolumeSnapshotContent as an additional item`)
return &velero.RestoreItemActionExecuteOutput{
UpdatedItem: &unstructured.Unstructured{Object: vsMap},
AdditionalItems: []velero.ResourceIdentifier{},
AdditionalItems: []velero.ResourceIdentifier{vsc},
}, nil
}


@@ -83,6 +83,7 @@ const ObjectStatusRestoreAnnotationKey = "velero.io/restore-status"
var resourceMustHave = []string{
"datauploads.velero.io",
"volumesnapshotcontents.snapshot.storage.k8s.io",
}
type VolumeSnapshotterGetter interface {
@@ -1704,11 +1705,13 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
}
if patchBytes != nil {
if _, err = resourceClient.Patch(obj.GetName(), patchBytes); err != nil {
restoreLogger.Errorf("error patch for managed fields %s: %s", kube.NamespaceAndName(obj), err.Error())
if !apierrors.IsNotFound(err) {
restoreLogger.Errorf("error patch for managed fields %s: %s", kube.NamespaceAndName(obj), err.Error())
errs.Add(namespace, err)
return warnings, errs, itemExists
}
restoreLogger.Warnf("item not found when patching managed fields %s: %s", kube.NamespaceAndName(obj), err.Error())
warnings.Add(namespace, err)
} else {
restoreLogger.Infof("the managed fields for %s is patched", kube.NamespaceAndName(obj))
}

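The managed-fields patch can now race with the item being deleted, so a NotFound is downgraded from a hard error to a warning. A self-contained sketch of the triage (the classify helper is hypothetical; the apierrors calls are the real k8s.io/apimachinery API):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// classify mirrors the triage above: a NotFound from the follow-up
// managed-fields patch is a benign race (the object vanished between
// create and patch) and is recorded as a warning, not a failure.
func classify(err error) string {
	if err == nil {
		return "patched"
	}
	if apierrors.IsNotFound(err) {
		return "warning"
	}
	return "error"
}

func main() {
	gr := schema.GroupResource{Group: "", Resource: "pods"}
	fmt.Println(classify(apierrors.NewNotFound(gr, "my-pod"))) // warning
	fmt.Println(classify(fmt.Errorf("server timeout")))        // error
}
```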

@@ -63,14 +63,6 @@ type FakeRestoreProgressUpdater struct {
func (f *FakeRestoreProgressUpdater) UpdateProgress(p *uploader.Progress) {}
func TestRunBackup(t *testing.T) {
mockBRepo := udmrepomocks.NewBackupRepo(t)
mockBRepo.On("GetAdvancedFeatures").Return(udmrepo.AdvancedFeatureInfo{})
var kp kopiaProvider
kp.log = logrus.New()
kp.bkRepo = mockBRepo
updater := FakeBackupProgressUpdater{PodVolumeBackup: &velerov1api.PodVolumeBackup{}, Log: kp.log, Ctx: context.Background(), Cli: fake.NewClientBuilder().WithScheme(util.VeleroScheme).Build()}
testCases := []struct {
name string
hookBackupFunc func(ctx context.Context, fsUploader kopia.SnapshotUploader, repoWriter repo.RepositoryWriter, sourcePath string, realSource string, forceFull bool, parentSnapshot string, volMode uploader.PersistentVolumeMode, uploaderCfg map[string]string, tags map[string]string, log logrus.FieldLogger) (*uploader.SnapshotInfo, bool, error)
@@ -102,6 +94,14 @@ func TestRunBackup(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
mockBRepo := udmrepomocks.NewBackupRepo(t)
mockBRepo.On("GetAdvancedFeatures").Return(udmrepo.AdvancedFeatureInfo{})
var kp kopiaProvider
kp.log = logrus.New()
kp.bkRepo = mockBRepo
updater := FakeBackupProgressUpdater{PodVolumeBackup: &velerov1api.PodVolumeBackup{}, Log: kp.log, Ctx: context.Background(), Cli: fake.NewClientBuilder().WithScheme(util.VeleroScheme).Build()}
if tc.volMode == "" {
tc.volMode = uploader.PersistentVolumeFilesystem
}
@@ -117,10 +117,6 @@ func TestRunBackup(t *testing.T) {
}
func TestRunRestore(t *testing.T) {
var kp kopiaProvider
kp.log = logrus.New()
updater := FakeRestoreProgressUpdater{PodVolumeRestore: &velerov1api.PodVolumeRestore{}, Log: kp.log, Ctx: context.Background(), Cli: fake.NewClientBuilder().WithScheme(util.VeleroScheme).Build()}
testCases := []struct {
name string
hookRestoreFunc func(ctx context.Context, rep repo.RepositoryWriter, progress *kopia.Progress, snapshotID, dest string, volMode uploader.PersistentVolumeMode, uploaderCfg map[string]string, log logrus.FieldLogger, cancleCh chan struct{}) (int64, int32, error)
@@ -153,6 +149,10 @@ func TestRunRestore(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
var kp kopiaProvider
kp.log = logrus.New()
updater := FakeRestoreProgressUpdater{PodVolumeRestore: &velerov1api.PodVolumeRestore{}, Log: kp.log, Ctx: context.Background(), Cli: fake.NewClientBuilder().WithScheme(util.VeleroScheme).Build()}
if tc.volMode == "" {
tc.volMode = uploader.PersistentVolumeFilesystem
}

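The kopia provider tests move the mock and fixture construction inside each t.Run closure. A mockery mock shared across subtests accumulates .On expectations, so one case's setup can satisfy, or double-count against, another's. A minimal testify example of the per-subtest pattern (illustrative names, not Velero's types):

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

type fakeRepo struct{ mock.Mock }

func (f *fakeRepo) Features() string {
	return f.Called().String(0)
}

// Per-subtest construction, as the patch above does: each iteration
// gets a fresh mock, so expectations registered for one case cannot
// leak into the next one.
func TestPerSubtestMocks(t *testing.T) {
	cases := []string{"case-a", "case-b"}
	for _, name := range cases {
		t.Run(name, func(t *testing.T) {
			repo := &fakeRepo{}
			repo.On("Features").Return("none").Once()
			if got := repo.Features(); got != "none" {
				t.Fatalf("unexpected features: %s", got)
			}
			repo.AssertExpectations(t)
		})
	}
}
```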

@@ -467,13 +467,13 @@ func DiagnosePV(pv *corev1api.PersistentVolume) string {
}
func GetPVCAttachingNodeOS(pvc *corev1api.PersistentVolumeClaim, nodeClient corev1client.CoreV1Interface,
storageClient storagev1.StorageV1Interface, log logrus.FieldLogger) (string, error) {
storageClient storagev1.StorageV1Interface, log logrus.FieldLogger) string {
var nodeOS string
var scFsType string
if pvc.Spec.VolumeMode != nil && *pvc.Spec.VolumeMode == corev1api.PersistentVolumeBlock {
log.Infof("Use linux node for block mode PVC %s/%s", pvc.Namespace, pvc.Name)
return NodeOSLinux, nil
return NodeOSLinux
}
if pvc.Spec.VolumeName == "" {
@@ -485,53 +485,53 @@ func GetPVCAttachingNodeOS(pvc *corev1api.PersistentVolumeClaim, nodeClient core
}
nodeName := ""
if value := pvc.Annotations[KubeAnnSelectedNode]; value != "" {
nodeName = value
}
if nodeName == "" {
if pvc.Spec.VolumeName != "" {
n, err := GetPVAttachedNode(context.Background(), pvc.Spec.VolumeName, storageClient)
if err != nil {
return "", errors.Wrapf(err, "error to get attached node for PVC %s/%s", pvc.Namespace, pvc.Name)
}
if pvc.Spec.VolumeName != "" {
if n, err := GetPVAttachedNode(context.Background(), pvc.Spec.VolumeName, storageClient); err != nil {
log.WithError(err).Warnf("Failed to get attached node for PVC %s/%s", pvc.Namespace, pvc.Name)
} else {
nodeName = n
}
}
if nodeName != "" {
os, err := GetNodeOS(context.Background(), nodeName, nodeClient)
if err != nil {
return "", errors.Wrapf(err, "error to get os from node %s for PVC %s/%s", nodeName, pvc.Namespace, pvc.Name)
if nodeName == "" {
if value := pvc.Annotations[KubeAnnSelectedNode]; value != "" {
nodeName = value
}
}
nodeOS = os
if nodeName != "" {
if os, err := GetNodeOS(context.Background(), nodeName, nodeClient); err != nil {
log.WithError(err).Warnf("Failed to get os from node %s for PVC %s/%s", nodeName, pvc.Namespace, pvc.Name)
} else {
nodeOS = os
}
}
if pvc.Spec.StorageClassName != nil {
sc, err := storageClient.StorageClasses().Get(context.Background(), *pvc.Spec.StorageClassName, metav1.GetOptions{})
if err != nil {
return "", errors.Wrapf(err, "error to get storage class %s", *pvc.Spec.StorageClassName)
}
if sc.Parameters != nil {
if sc, err := storageClient.StorageClasses().Get(context.Background(), *pvc.Spec.StorageClassName, metav1.GetOptions{}); err != nil {
log.WithError(err).Warnf("Failed to get storage class %s for PVC %s/%s", *pvc.Spec.StorageClassName, pvc.Namespace, pvc.Name)
} else if sc.Parameters != nil {
scFsType = strings.ToLower(sc.Parameters["csi.storage.k8s.io/fstype"])
}
}
if nodeOS != "" {
log.Infof("Deduced node os %s from selected node for PVC %s/%s (fsType %s)", nodeOS, pvc.Namespace, pvc.Name, scFsType)
return nodeOS, nil
log.Infof("Deduced node os %s from selected/attached node for PVC %s/%s (fsType %s)", nodeOS, pvc.Namespace, pvc.Name, scFsType)
return nodeOS
}
if scFsType == "ntfs" {
log.Infof("Deduced Windows node os from fsType for PVC %s/%s", pvc.Namespace, pvc.Name)
return NodeOSWindows, nil
log.Infof("Deduced Windows node os from fsType %s for PVC %s/%s", scFsType, pvc.Namespace, pvc.Name)
return NodeOSWindows
}
if scFsType != "" {
log.Infof("Deduced linux node os from fsType %s for PVC %s/%s", scFsType, pvc.Namespace, pvc.Name)
return NodeOSLinux
}
log.Warnf("Cannot deduce node os for PVC %s/%s, default to linux", pvc.Namespace, pvc.Name)
return NodeOSLinux, nil
return NodeOSLinux
}
func GetPVAttachedNode(ctx context.Context, pv string, storageClient storagev1.StorageV1Interface) (string, error) {

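Taken together, the rewrite above makes GetPVCAttachingNodeOS best-effort: every lookup miss (attached node, selected-node annotation, node OS label, storage class) is logged as a warning and the deduction falls through to the Linux default instead of returning an error. A minimal caller sketch under the new signature (the client and logger names are illustrative):

nodeOS := GetPVCAttachingNodeOS(pvc, kubeClient.CoreV1(), kubeClient.StorageV1(), log)
if nodeOS == NodeOSWindows {
	// e.g. schedule the data-mover pod onto a Windows node
}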

@@ -1674,34 +1674,6 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
},
}
pvcObjWithNode := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "fake-namespace",
Name: "fake-pvc",
Annotations: map[string]string{KubeAnnSelectedNode: "fake-node"},
},
}
pvcObjWithVolume := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "fake-namespace",
Name: "fake-pvc",
},
Spec: corev1api.PersistentVolumeClaimSpec{
VolumeName: "fake-volume-name",
},
}
pvcObjWithStorageClass := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "fake-namespace",
Name: "fake-pvc",
},
Spec: corev1api.PersistentVolumeClaimSpec{
StorageClassName: &storageClass,
},
}
pvName := "fake-volume-name"
pvcObjWithAll := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
@@ -1715,17 +1687,6 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
},
}
pvcObjWithVolumeSC := &corev1api.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "fake-namespace",
Name: "fake-pvc",
},
Spec: corev1api.PersistentVolumeClaimSpec{
VolumeName: pvName,
StorageClassName: &storageClass,
},
}
scObjWithoutFSType := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
@@ -1739,6 +1700,13 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
Parameters: map[string]string{"csi.storage.k8s.io/fstype": "ntfs"},
}
scObjWithFSTypeExt := &storagev1api.StorageClass{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-storage-class",
},
Parameters: map[string]string{"csi.storage.k8s.io/fstype": "ext4"},
}
volAttachEmpty := &storagev1api.VolumeAttachment{
ObjectMeta: metav1.ObjectMeta{
Name: "fake-volume-attach-1",
@@ -1753,6 +1721,7 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
Source: storagev1api.VolumeAttachmentSource{
PersistentVolumeName: &pvName,
},
NodeName: "fake-node",
},
}
@@ -1773,7 +1742,6 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
pvc *corev1api.PersistentVolumeClaim
kubeClientObj []runtime.Object
expectedNodeOS string
err string
}{
{
name: "no selected node, volume name and storage class",
@@ -1781,53 +1749,51 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
expectedNodeOS: NodeOSLinux,
},
{
name: "node doesn't exist",
pvc: pvcObjWithNode,
err: "error to get os from node fake-node for PVC fake-namespace/fake-pvc: error getting node fake-node: nodes \"fake-node\" not found",
name: "fallback",
pvc: pvcObjWithAll,
expectedNodeOS: NodeOSLinux,
},
{
name: "node without os label",
pvc: pvcObjWithNode,
name: "with selected node, but node without label",
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
nodeNoOSLabel,
},
expectedNodeOS: NodeOSLinux,
},
{
name: "no attach volume",
pvc: pvcObjWithVolume,
expectedNodeOS: NodeOSLinux,
},
{
name: "sc doesn't exist",
pvc: pvcObjWithStorageClass,
err: "error to get storage class fake-storage-class: storageclasses.storage.k8s.io \"fake-storage-class\" not found",
},
{
name: "volume attachment not exist",
pvc: pvcObjWithVolume,
name: "volume attachment exist, but get node os fails",
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
nodeWindows,
scObjWithFSType,
volAttachEmpty,
volAttachWithOtherVolume,
volAttachWithVolume,
},
expectedNodeOS: NodeOSLinux,
expectedNodeOS: NodeOSWindows,
},
{
name: "volume attachment exist, node without label",
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
nodeNoOSLabel,
scObjWithFSType,
volAttachWithVolume,
},
expectedNodeOS: NodeOSWindows,
},
{
name: "sc without fsType",
pvc: pvcObjWithStorageClass,
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
scObjWithoutFSType,
},
expectedNodeOS: NodeOSLinux,
},
{
name: "deduce from node os",
name: "deduce from node os by selected node",
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
nodeWindows,
scObjWithFSType,
scObjWithFSTypeExt,
},
expectedNodeOS: NodeOSWindows,
},
@@ -1842,13 +1808,13 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
},
{
name: "deduce from attached node os",
pvc: pvcObjWithVolumeSC,
pvc: pvcObjWithAll,
kubeClientObj: []runtime.Object{
nodeWindows,
scObjWithFSType,
scObjWithFSTypeExt,
volAttachEmpty,
volAttachWithOtherVolume,
volAttachWithVolume,
volAttachWithOtherVolume,
},
expectedNodeOS: NodeOSWindows,
},
@@ -1864,13 +1830,7 @@ func TestGetPVCAttachingNodeOS(t *testing.T) {
var kubeClient kubernetes.Interface = fakeKubeClient
nodeOS, err := GetPVCAttachingNodeOS(test.pvc, kubeClient.CoreV1(), kubeClient.StorageV1(), velerotest.NewLogger())
if err != nil {
assert.EqualError(t, err, test.err)
} else {
assert.NoError(t, err)
}
nodeOS := GetPVCAttachingNodeOS(test.pvc, kubeClient.CoreV1(), kubeClient.StorageV1(), velerotest.NewLogger())
assert.Equal(t, test.expectedNodeOS, nodeOS)
})


@@ -19,6 +19,8 @@ package velero
import (
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
// GetNodeSelectorFromVeleroServer gets the node selector from the Velero server deployment
@@ -73,6 +75,11 @@ func GetServiceAccountFromVeleroServer(deployment *appsv1.Deployment) string {
return deployment.Spec.Template.Spec.ServiceAccountName
}
// GetImagePullSecretsFromVeleroServer gets the image pull secrets from the Velero server deployment
func GetImagePullSecretsFromVeleroServer(deployment *appsv1.Deployment) []v1.LocalObjectReference {
return deployment.Spec.Template.Spec.ImagePullSecrets
}
// GetVeleroServerImage gets the image of the Velero server deployment
func GetVeleroServerImage(deployment *appsv1.Deployment) string {
return deployment.Spec.Template.Spec.Containers[0].Image
@@ -105,3 +112,7 @@ func GetVeleroServerAnnotationValue(deployment *appsv1.Deployment, key string) s
return deployment.Spec.Template.Annotations[key]
}
func BSLIsAvailable(bsl velerov1api.BackupStorageLocation) bool {
return bsl.Status.Phase == velerov1api.BackupStorageLocationPhaseAvailable
}

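A hedged sketch of how the two new helpers might compose, e.g. letting a maintenance job inherit the server's pull secrets while refusing to run against an unavailable BSL; the deployment, podSpec, and bsl values are assumed to be fetched elsewhere:

podSpec.ImagePullSecrets = GetImagePullSecretsFromVeleroServer(deployment)
if !BSLIsAvailable(*bsl) {
	return errors.Errorf("BSL %s is not available", bsl.Name)
}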

@@ -21,9 +21,13 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
)
func TestGetNodeSelectorFromVeleroServer(t *testing.T) {
@@ -579,6 +583,63 @@ func TestGetServiceAccountFromVeleroServer(t *testing.T) {
}
}
func TestGetImagePullSecretsFromVeleroServer(t *testing.T) {
tests := []struct {
name string
deploy *appsv1.Deployment
want []v1.LocalObjectReference
}{
{
name: "no image pull secrets",
deploy: &appsv1.Deployment{
Spec: appsv1.DeploymentSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
ServiceAccountName: "",
},
},
},
},
want: nil,
},
{
name: "image pull secrets",
deploy: &appsv1.Deployment{
Spec: appsv1.DeploymentSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
ImagePullSecrets: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
{
Name: "imagePullSecret2",
},
},
},
},
},
},
want: []v1.LocalObjectReference{
{
Name: "imagePullSecret1",
},
{
Name: "imagePullSecret2",
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
got := GetImagePullSecretsFromVeleroServer(test.deploy)
require.Equal(t, test.want, got)
})
}
}
func TestGetVeleroServerImage(t *testing.T) {
tests := []struct {
name string
@@ -759,3 +820,11 @@ func TestGetVeleroServerLabelValue(t *testing.T) {
})
}
}
func TestBSLIsAvailable(t *testing.T) {
availableBSL := builder.ForBackupStorageLocation("velero", "available").Phase(velerov1api.BackupStorageLocationPhaseAvailable).Result()
unavailableBSL := builder.ForBackupStorageLocation("velero", "unavailable").Phase(velerov1api.BackupStorageLocationPhaseUnavailable).Result()
assert.True(t, BSLIsAvailable(*availableBSL))
assert.False(t, BSLIsAvailable(*unavailableBSL))
}


@@ -102,6 +102,7 @@ OBJECT_STORE_PROVIDER ?=
INSTALL_VELERO ?= true
REGISTRY_CREDENTIAL_FILE ?=
KIBISHII_DIRECTORY ?= github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/
IMAGE_REGISTRY_PROXY ?=
# Flags to create an additional BSL for multiple credentials tests
@@ -216,7 +217,8 @@ run-e2e: ginkgo
--default-cls-service-account-name=$(DEFAULT_CLS_SERVICE_ACCOUNT_NAME) \
--standby-cls-service-account-name=$(STANDBY_CLS_SERVICE_ACCOUNT_NAME) \
--kibishii-directory=$(KIBISHII_DIRECTORY) \
--disable-informer-cache=$(DISABLE_INFORMER_CACHE)
--disable-informer-cache=$(DISABLE_INFORMER_CACHE) \
--image-registry-proxy=$(IMAGE_REGISTRY_PROXY)
.PHONY: run-perf
run-perf: ginkgo

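With the Makefile and Ginkgo wiring above, the proxy can be supplied per run, for example (hypothetical host) make run-e2e IMAGE_REGISTRY_PROXY=proxy.example.com; leaving the variable empty keeps the default direct-pull behavior.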

@@ -328,26 +328,26 @@ STANDBY_CLUSTER=wl-antreav1311 \
DEFAULT_CLUSTER_NAME=192.168.0.4 \
STANDBY_CLUSTER_NAME=192.168.0.3 \
FEATURES=EnableCSI \
PLUGINS=gcr.io/velero-gcp/velero-plugin-for-aws:main \
PLUGINS=velero/velero-plugin-for-aws:main \
HAS_VSPHERE_PLUGIN=false \
OBJECT_STORE_PROVIDER=aws \
CREDS_FILE=$HOME/aws-credential \
BSL_CONFIG=region=us-east-1 \
BSL_BUCKET=nightly-normal-account4-test \
BSL_PREFIX=nightly \
ADDITIONAL_BSL_PLUGINS=gcr.io/velero-gcp/velero-plugin-for-aws:main \
ADDITIONAL_BSL_PLUGINS=velero/velero-plugin-for-aws:main \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws \
ADDITIONAL_BSL_CONFIG=region=us-east-1 \
ADDITIONAL_BSL_BUCKET=nightly-restrict-account-test \
ADDITIONAL_BSL_PREFIX=nightly \
ADDITIONAL_CREDS_FILE=$HOME/aws-credential \
VELERO_IMAGE=gcr.io/velero-gcp/velero:main \
RESTORE_HELPER_IMAGE=gcr.io/velero-gcp/velero-restore-helper:main \
VELERO_IMAGE=velero/velero:main \
RESTORE_HELPER_IMAGE=velero/velero:main \
VERSION=main \
SNAPSHOT_MOVE_DATA=true \
STANDBY_CLUSTER_CLOUD_PROVIDER=vsphere \
STANDBY_CLUSTER_OBJECT_STORE_PROVIDER=aws \
STANDBY_CLUSTER_PLUGINS=gcr.io/velero-gcp/velero-plugin-for-aws:main \
STANDBY_CLUSTER_PLUGINS=velero/velero-plugin-for-aws:main \
DISABLE_INFORMER_CACHE=true \
REGISTRY_CREDENTIAL_FILE=$HOME/.docker/config.json \
GINKGO_LABELS=Migration \


@@ -172,7 +172,9 @@ func BackupRestoreTest(backupRestoreTestConfig BackupRestoreTestConfig) {
Expect(VeleroInstall(context.Background(), &veleroCfg, false)).To(Succeed())
}
Expect(VeleroAddPluginsForProvider(context.TODO(), veleroCfg.VeleroCLI, veleroCfg.VeleroNamespace, veleroCfg.AdditionalBSLProvider, veleroCfg.AddBSLPlugins)).To(Succeed())
plugins, err := GetPlugins(context.TODO(), veleroCfg, false)
Expect(err).To(Succeed())
Expect(AddPlugins(plugins, veleroCfg)).To(Succeed())
// Create Secret for additional BSL
secretName := fmt.Sprintf("bsl-credentials-%s", UUIDgen)

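The additional-BSL plugin setup is now an explicit two-step flow instead of the former VeleroAddPluginsForProvider call; reduced to a sketch (passing defaultBSL=false selects the additional BSL provider's plugins):

plugins, err := GetPlugins(context.TODO(), veleroCfg, false)
if err != nil {
	return err
}
if err := AddPlugins(plugins, veleroCfg); err != nil {
	return err
}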

@@ -115,8 +115,17 @@ func runBackupDeletionTests(client TestClient, veleroCfg VeleroConfig, backupLoc
}()
}
if err := KibishiiPrepareBeforeBackup(oneHourTimeout, client, providerName, ns,
registryCredentialFile, veleroFeatures, kibishiiDirectory, useVolumeSnapshots, DefaultKibishiiData); err != nil {
if err := KibishiiPrepareBeforeBackup(
oneHourTimeout,
client,
providerName,
ns,
registryCredentialFile,
veleroFeatures,
kibishiiDirectory,
DefaultKibishiiData,
veleroCfg.ImageRegistryProxy,
); err != nil {
return errors.Wrapf(err, "Failed to install and prepare data for kibishii %s", ns)
}
err := ObjectsShouldNotBeInBucket(veleroCfg.ObjectStoreProvider, veleroCfg.CloudCredentialsFile, veleroCfg.BSLBucket, veleroCfg.BSLPrefix, veleroCfg.BSLConfig, backupName, BackupObjectsPrefix, 1)


@@ -100,9 +100,17 @@ func TTLTest() {
})
By("Deploy sample workload of Kibishii", func() {
Expect(KibishiiPrepareBeforeBackup(ctx, client, veleroCfg.CloudProvider,
test.testNS, veleroCfg.RegistryCredentialFile, veleroCfg.Features,
veleroCfg.KibishiiDirectory, useVolumeSnapshots, DefaultKibishiiData)).To(Succeed())
Expect(KibishiiPrepareBeforeBackup(
ctx,
client,
veleroCfg.CloudProvider,
test.testNS,
veleroCfg.RegistryCredentialFile,
veleroCfg.Features,
veleroCfg.KibishiiDirectory,
DefaultKibishiiData,
veleroCfg.ImageRegistryProxy,
)).To(Succeed())
})
var BackupCfg BackupConfig


@@ -121,7 +121,7 @@ func (v *BackupVolumeInfo) CreateResources() error {
volumeName := fmt.Sprintf("volume-info-pv-%d", i)
vols = append(vols, CreateVolumes(pvc.Name, []string{volumeName})...)
}
deployment := NewDeployment(v.CaseBaseName, createNSName, 1, labels, nil).WithVolume(vols).Result()
deployment := NewDeployment(v.CaseBaseName, createNSName, 1, labels, v.VeleroCfg.ImageRegistryProxy).WithVolume(vols).Result()
deployment, err := CreateDeployment(v.Client.ClientGo, createNSName, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to delete the namespace %q", createNSName))


@@ -91,9 +91,17 @@ func (n *NamespaceMapping) CreateResources() error {
Expect(CreateNamespace(n.Ctx, n.Client, ns)).To(Succeed(), fmt.Sprintf("Failed to create namespace %s", ns))
})
By("Deploy sample workload of Kibishii", func() {
Expect(KibishiiPrepareBeforeBackup(n.Ctx, n.Client, n.VeleroCfg.CloudProvider,
ns, n.VeleroCfg.RegistryCredentialFile, n.VeleroCfg.Features,
n.VeleroCfg.KibishiiDirectory, false, n.kibishiiData)).To(Succeed())
Expect(KibishiiPrepareBeforeBackup(
n.Ctx,
n.Client,
n.VeleroCfg.CloudProvider,
ns,
n.VeleroCfg.RegistryCredentialFile,
n.VeleroCfg.Features,
n.VeleroCfg.KibishiiDirectory,
n.kibishiiData,
n.VeleroCfg.ImageRegistryProxy,
)).To(Succeed())
})
}
return nil


@@ -75,7 +75,8 @@ func (p *PVCSelectedNodeChanging) CreateResources() error {
p.oldNodeName = nodeName
fmt.Printf("Create PVC on node %s\n", p.oldNodeName)
pvcAnn := map[string]string{p.ann: nodeName}
_, err := CreatePod(p.Client, p.namespace, p.podName, StorageClassName, p.pvcName, []string{p.volume}, pvcAnn, nil)
_, err := CreatePod(p.Client, p.namespace, p.podName, StorageClassName, p.pvcName, []string{p.volume},
pvcAnn, nil, p.VeleroCfg.ImageRegistryProxy)
Expect(err).To(Succeed())
err = WaitForPods(p.Ctx, p.Client, p.namespace, []string{p.podName})
Expect(err).To(Succeed())


@@ -82,7 +82,7 @@ func (s *StorageClasssChanging) CreateResources() error {
Expect(err).To(Succeed())
vols := CreateVolumes(pvc.Name, []string{s.volume})
deployment := NewDeployment(s.CaseBaseName, s.namespace, 1, label, nil).WithVolume(vols).Result()
deployment := NewDeployment(s.CaseBaseName, s.namespace, 1, label, s.VeleroCfg.ImageRegistryProxy).WithVolume(vols).Result()
deployment, err = CreateDeployment(s.Client.ClientGo, s.namespace, deployment)
Expect(err).To(Succeed())
s.deploymentName = deployment.Name


@@ -106,7 +106,9 @@ func BslDeletionTest(useVolumeSnapshots bool) {
}
By(fmt.Sprintf("Add an additional plugin for provider %s", veleroCfg.AdditionalBSLProvider), func() {
Expect(VeleroAddPluginsForProvider(context.TODO(), veleroCfg.VeleroCLI, veleroCfg.VeleroNamespace, veleroCfg.AdditionalBSLProvider, veleroCfg.AddBSLPlugins)).To(Succeed())
plugins, err := GetPlugins(context.TODO(), veleroCfg, false)
Expect(err).To(Succeed())
Expect(AddPlugins(plugins, veleroCfg)).To(Succeed())
})
additionalBsl := fmt.Sprintf("bsl-%s", UUIDgen)
@@ -150,9 +152,17 @@ func BslDeletionTest(useVolumeSnapshots bool) {
})
By("Deploy sample workload of Kibishii", func() {
Expect(KibishiiPrepareBeforeBackup(oneHourTimeout, *veleroCfg.ClientToInstallVelero, veleroCfg.CloudProvider,
bslDeletionTestNs, veleroCfg.RegistryCredentialFile, veleroCfg.Features,
veleroCfg.KibishiiDirectory, useVolumeSnapshots, DefaultKibishiiData)).To(Succeed())
Expect(KibishiiPrepareBeforeBackup(
oneHourTimeout,
*veleroCfg.ClientToInstallVelero,
veleroCfg.CloudProvider,
bslDeletionTestNs,
veleroCfg.RegistryCredentialFile,
veleroCfg.Features,
veleroCfg.KibishiiDirectory,
DefaultKibishiiData,
veleroCfg.ImageRegistryProxy,
)).To(Succeed())
})
// Restic cannot back up a PV on its own, so the pod needs to be labeled as well


@@ -55,6 +55,7 @@ import (
func init() {
test.VeleroCfg.Options = install.Options{}
test.VeleroCfg.BackupRepoConfigMap = test.BackupRepositoryConfigName // Set to the default value
flag.StringVar(
&test.VeleroCfg.CloudProvider,
"cloud-provider",
@@ -343,6 +344,18 @@ func init() {
false,
"a switch for installing vSphere plugin.",
)
flag.IntVar(
&test.VeleroCfg.ItemBlockWorkerCount,
"item-block-worker-count",
1,
"Velero backup's item block worker count.",
)
flag.StringVar(
&test.VeleroCfg.ImageRegistryProxy,
"image-registry-proxy",
"",
"The image registry proxy, e.g. when the DockerHub access limitation is reached, can use available proxy to replace. Default is nil.",
)
}
// Add label [SkipVanillaZfs]:
@@ -687,6 +700,8 @@ func TestE2e(t *testing.T) {
t.FailNow()
}
veleroutil.UpdateImagesMatrixByProxy(test.VeleroCfg.ImageRegistryProxy)
RegisterFailHandler(Fail)
testSuitePassed = RunSpecs(t, "E2e Suite")
}


@@ -195,8 +195,8 @@ func (m *migrationE2E) Backup() error {
OriginVeleroCfg.RegistryCredentialFile,
OriginVeleroCfg.Features,
OriginVeleroCfg.KibishiiDirectory,
OriginVeleroCfg.UseVolumeSnapshots,
&m.kibishiiData,
OriginVeleroCfg.ImageRegistryProxy,
)).To(Succeed())
})


@@ -95,7 +95,8 @@ func (p *ParallelFilesDownload) CreateResources() error {
})
By(fmt.Sprintf("Create pod %s in namespace %s", p.pod, p.namespace), func() {
_, err := CreatePod(p.Client, p.namespace, p.pod, StorageClassName, p.pvc, []string{p.volume}, nil, nil)
_, err := CreatePod(p.Client, p.namespace, p.pod, StorageClassName, p.pvc, []string{p.volume},
nil, nil, p.VeleroCfg.ImageRegistryProxy)
Expect(err).To(Succeed())
err = WaitForPods(p.Ctx, p.Client, p.namespace, []string{p.pod})
Expect(err).To(Succeed())


@@ -86,7 +86,8 @@ func (p *ParallelFilesUpload) CreateResources() error {
})
By(fmt.Sprintf("Create pod %s in namespace %s", p.pod, p.namespace), func() {
_, err := CreatePod(p.Client, p.namespace, p.pod, StorageClassName, p.pvc, []string{p.volume}, nil, nil)
_, err := CreatePod(p.Client, p.namespace, p.pod, StorageClassName, p.pvc,
[]string{p.volume}, nil, nil, p.VeleroCfg.ImageRegistryProxy)
Expect(err).To(Succeed())
err = WaitForPods(p.Ctx, p.Client, p.namespace, []string{p.pod})
Expect(err).To(Succeed())


@@ -87,7 +87,8 @@ func (p *PVBackupFiltering) CreateResources() error {
podName := fmt.Sprintf("pod-%d", i)
pods = append(pods, podName)
By(fmt.Sprintf("Create pod %s in namespace %s", podName, ns), func() {
pod, err := CreatePod(p.Client, ns, podName, StorageClassName, "", volumes, nil, nil)
pod, err := CreatePod(p.Client, ns, podName, StorageClassName, "",
volumes, nil, nil, p.VeleroCfg.ImageRegistryProxy)
Expect(err).To(Succeed())
ann := map[string]string{
p.annotation: volumesToAnnotation,


@@ -68,7 +68,7 @@ func (f *FilteringCase) CreateResources() error {
}
//Create deployment
fmt.Printf("Creating deployment in namespaces ...%s\n", namespace)
deployment := NewDeployment(f.CaseBaseName, namespace, f.replica, f.labels, nil).Result()
deployment := NewDeployment(f.CaseBaseName, namespace, f.replica, f.labels, f.VeleroCfg.ImageRegistryProxy).Result()
deployment, err := CreateDeployment(f.Client.ClientGo, namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to delete the namespace %q", namespace))


@@ -88,7 +88,7 @@ func (e *ExcludeFromBackup) CreateResources() error {
}
//Create deployment: to be included
fmt.Printf("Creating deployment in namespaces ...%s\n", namespace)
deployment := NewDeployment(e.CaseBaseName, namespace, e.replica, label2, nil).Result()
deployment := NewDeployment(e.CaseBaseName, namespace, e.replica, label2, e.VeleroCfg.ImageRegistryProxy).Result()
deployment, err := CreateDeployment(e.Client.ClientGo, namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to delete the namespace %q", namespace))


@@ -88,7 +88,7 @@ func (l *LabelSelector) CreateResources() error {
//Create deployment
fmt.Printf("Creating deployment in namespaces ...%s\n", namespace)
deployment := NewDeployment(l.CaseBaseName, namespace, l.replica, labels, nil).Result()
deployment := NewDeployment(l.CaseBaseName, namespace, l.replica, labels, l.VeleroCfg.ImageRegistryProxy).Result()
deployment, err := CreateDeployment(l.Client.ClientGo, namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to delete the namespace %q", namespace))


@@ -145,7 +145,7 @@ func (r *ResourceModifiersCase) Clean() error {
}
func (r *ResourceModifiersCase) createDeployment(namespace string) error {
deployment := NewDeployment(r.CaseBaseName, namespace, 1, map[string]string{"app": "test"}, nil).Result()
deployment := NewDeployment(r.CaseBaseName, namespace, 1, map[string]string{"app": "test"}, r.VeleroCfg.ImageRegistryProxy).Result()
deployment, err := CreateDeployment(r.Client.ClientGo, namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to create deloyment %s the namespace %q", deployment.Name, namespace))


@@ -23,7 +23,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
. "github.com/vmware-tanzu/velero/test"
@@ -186,7 +186,7 @@ func (r *ResourcePoliciesCase) Clean() error {
return nil
}
func (r *ResourcePoliciesCase) createPVC(index int, namespace string, volList []*v1.Volume) error {
func (r *ResourcePoliciesCase) createPVC(index int, namespace string, volList []*corev1api.Volume) error {
var err error
for i := range volList {
pvcName := fmt.Sprintf("pvc-%d", i)
@@ -208,8 +208,8 @@ func (r *ResourcePoliciesCase) createPVC(index int, namespace string, volList []
return nil
}
func (r *ResourcePoliciesCase) createDeploymentWithVolume(namespace string, volList []*v1.Volume) error {
deployment := NewDeployment(r.CaseBaseName, namespace, 1, map[string]string{"resource-policies": "resource-policies"}, nil).WithVolume(volList).Result()
func (r *ResourcePoliciesCase) createDeploymentWithVolume(namespace string, volList []*corev1api.Volume) error {
deployment := NewDeployment(r.CaseBaseName, namespace, 1, map[string]string{"resource-policies": "resource-policies"}, r.VeleroCfg.ImageRegistryProxy).WithVolume(volList).Result()
deployment, err := CreateDeployment(r.Client.ClientGo, namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to create deloyment %s the namespace %q", deployment.Name, namespace))


@@ -84,6 +84,7 @@ func (s *InProgressCase) CreateResources() error {
[]string{s.volume},
nil,
s.podAnn,
s.VeleroCfg.ImageRegistryProxy,
)
Expect(err).To(Succeed())


@@ -99,7 +99,7 @@ func (o *OrderedResources) CreateResources() error {
//Create deployment
deploymentName := fmt.Sprintf("deploy-%s", o.CaseBaseName)
fmt.Printf("Creating deployment %s in %s namespaces ...\n", deploymentName, o.Namespace)
deployment := k8sutil.NewDeployment(deploymentName, o.Namespace, 1, label, nil).Result()
deployment := k8sutil.NewDeployment(deploymentName, o.Namespace, 1, label, o.VeleroCfg.ImageRegistryProxy).Result()
_, err := k8sutil.CreateDeployment(o.Client.ClientGo, o.Namespace, deployment)
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to create namespace %q with err %v", o.Namespace, err))


@@ -126,6 +126,15 @@ func BackupUpgradeRestoreTest(useVolumeSnapshots bool, veleroCLI2Version VeleroC
tmpCfgForOldVeleroInstall.UpgradeFromVeleroVersion = veleroCLI2Version.VeleroVersion
tmpCfgForOldVeleroInstall.VeleroCLI = veleroCLI2Version.VeleroCLI
// CLI versions below v1.15 don't support the backup repository ConfigMap
if veleroCLI2Version.VeleroVersion < "v1.15" {
tmpCfgForOldVeleroInstall.BackupRepoConfigMap = ""
fmt.Printf(
"CLI version %s is lower than v1.15. Set BackupRepoConfigMap to empty, because it's not supported",
veleroCLI2Version.VeleroVersion,
)
}
tmpCfgForOldVeleroInstall, err = SetImagesToDefaultValues(
tmpCfgForOldVeleroInstall,
veleroCLI2Version.VeleroVersion,
@@ -157,9 +166,17 @@ func BackupUpgradeRestoreTest(useVolumeSnapshots bool, veleroCLI2Version VeleroC
})
By("Deploy sample workload of Kibishii", func() {
Expect(KibishiiPrepareBeforeBackup(oneHourTimeout, *veleroCfg.ClientToInstallVelero, tmpCfg.CloudProvider,
upgradeNamespace, tmpCfg.RegistryCredentialFile, tmpCfg.Features,
tmpCfg.KibishiiDirectory, useVolumeSnapshots, DefaultKibishiiData)).To(Succeed())
Expect(KibishiiPrepareBeforeBackup(
oneHourTimeout,
*veleroCfg.ClientToInstallVelero,
tmpCfg.CloudProvider,
upgradeNamespace,
tmpCfg.RegistryCredentialFile,
tmpCfg.Features,
tmpCfg.KibishiiDirectory,
DefaultKibishiiData,
tmpCfg.ImageRegistryProxy,
)).To(Succeed())
})
By(fmt.Sprintf("Backup namespace %s", upgradeNamespace), func() {
@@ -239,6 +256,9 @@ func BackupUpgradeRestoreTest(useVolumeSnapshots bool, veleroCLI2Version VeleroC
}
})
// Wait for 70s to make sure the backups are synced after Velero reinstall
time.Sleep(70 * time.Second)
By(fmt.Sprintf("Restore %s", upgradeNamespace), func() {
Expect(VeleroRestore(oneHourTimeout, tmpCfg.VeleroCLI,
tmpCfg.VeleroNamespace, restoreName, backupName, "")).To(Succeed(), func() string {


@@ -44,13 +44,17 @@ const CSI = "csi"
const Velero = "velero"
const VeleroRestoreHelper = "velero-restore-helper"
const UploaderTypeRestic = "restic"
const (
UploaderTypeRestic = "restic"
UploaderTypeKopia = "kopia"
)
const (
KubeSystemNamespace = "kube-system"
VSphereCSIControllerNamespace = "vmware-system-csi"
VeleroVSphereSecretName = "velero-vsphere-config-secret"
VeleroVSphereConfigMapName = "velero-vsphere-plugin-config"
BackupRepositoryConfigName = "backup-repository-config"
)
var PublicCloudProviders = []string{AWS, Azure, GCP, Vsphere}
@@ -124,6 +128,7 @@ type VeleroConfig struct {
EKSPolicyARN string
FailFast bool
HasVspherePlugin bool
ImageRegistryProxy string
}
type VeleroCfgInPerf struct {


@@ -18,11 +18,12 @@ package k8s
import (
"fmt"
"path"
"time"
"golang.org/x/net/context"
apps "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
clientset "k8s.io/client-go/kubernetes"
@@ -36,6 +37,7 @@ const (
PollInterval = 2 * time.Second
PollTimeout = 15 * time.Minute
DefaultContainerName = "container-busybox"
TestImage = "busybox:1.37.0"
)
// DeploymentBuilder builds Deployment objects.
@@ -48,29 +50,33 @@ func (d *DeploymentBuilder) Result() *apps.Deployment {
}
// NewDeployment returns a RollingUpdate Deployment with the busybox test image
func NewDeployment(name, ns string, replicas int32, labels map[string]string, containers []v1.Container) *DeploymentBuilder {
if containers == nil {
containers = []v1.Container{
{
Name: DefaultContainerName,
Image: "gcr.io/velero-gcp/busybox:latest",
Command: []string{"sleep", "1000000"},
// Make pod obeys the restricted pod security standards.
SecurityContext: &v1.SecurityContext{
AllowPrivilegeEscalation: boolptr.False(),
Capabilities: &v1.Capabilities{
Drop: []v1.Capability{"ALL"},
},
RunAsNonRoot: boolptr.True(),
RunAsUser: func(i int64) *int64 { return &i }(65534),
RunAsGroup: func(i int64) *int64 { return &i }(65534),
SeccompProfile: &v1.SeccompProfile{
Type: v1.SeccompProfileTypeRuntimeDefault,
},
func NewDeployment(name, ns string, replicas int32, labels map[string]string, imageRegistryProxy string) *DeploymentBuilder {
imageAddress := TestImage
if imageRegistryProxy != "" {
imageAddress = path.Join(imageRegistryProxy, TestImage)
}
containers := []corev1api.Container{
{
Name: DefaultContainerName,
Image: imageAddress,
Command: []string{"sleep", "1000000"},
// Make the pod obey the restricted pod security standards.
SecurityContext: &corev1api.SecurityContext{
AllowPrivilegeEscalation: boolptr.False(),
Capabilities: &corev1api.Capabilities{
Drop: []corev1api.Capability{"ALL"},
},
RunAsNonRoot: boolptr.True(),
RunAsUser: func(i int64) *int64 { return &i }(65534),
RunAsGroup: func(i int64) *int64 { return &i }(65534),
SeccompProfile: &corev1api.SeccompProfile{
Type: corev1api.SeccompProfileTypeRuntimeDefault,
},
},
}
},
}
return &DeploymentBuilder{
&apps.Deployment{
TypeMeta: metav1.TypeMeta{
@@ -89,14 +95,14 @@ func NewDeployment(name, ns string, replicas int32, labels map[string]string, co
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: new(apps.RollingUpdateDeployment),
},
Template: v1.PodTemplateSpec{
Template: corev1api.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: v1.PodSpec{
SecurityContext: &v1.PodSecurityContext{
Spec: corev1api.PodSpec{
SecurityContext: &corev1api.PodSecurityContext{
FSGroup: func(i int64) *int64 { return &i }(65534),
FSGroupChangePolicy: func(policy v1.PodFSGroupChangePolicy) *v1.PodFSGroupChangePolicy { return &policy }(v1.FSGroupChangeAlways),
FSGroupChangePolicy: func(policy corev1api.PodFSGroupChangePolicy) *corev1api.PodFSGroupChangePolicy { return &policy }(corev1api.FSGroupChangeAlways),
},
Containers: containers,
},
@@ -106,10 +112,10 @@ func NewDeployment(name, ns string, replicas int32, labels map[string]string, co
}
}
func (d *DeploymentBuilder) WithVolume(volumes []*v1.Volume) *DeploymentBuilder {
vmList := []v1.VolumeMount{}
func (d *DeploymentBuilder) WithVolume(volumes []*corev1api.Volume) *DeploymentBuilder {
vmList := []corev1api.VolumeMount{}
for _, v := range volumes {
vmList = append(vmList, v1.VolumeMount{
vmList = append(vmList, corev1api.VolumeMount{
Name: v.Name,
MountPath: "/" + v.Name,
})

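NewDeployment's last parameter is now the registry proxy string rather than a container list, and the container image is pinned to busybox:1.37.0; a sketch of both call styles (the proxy host is illustrative):

d := NewDeployment("demo", "demo-ns", 1, map[string]string{"app": "demo"}, "").Result()
// container image: busybox:1.37.0
d = NewDeployment("demo", "demo-ns", 1, map[string]string{"app": "demo"}, "proxy.example.com").Result()
// container image: proxy.example.com/busybox:1.37.0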

@@ -19,20 +19,32 @@ package k8s
import (
"context"
"fmt"
"path"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
)
func CreatePod(client TestClient, ns, name, sc, pvcName string, volumeNameList []string, pvcAnn, ann map[string]string) (*corev1.Pod, error) {
func CreatePod(
client TestClient,
ns, name, sc, pvcName string,
volumeNameList []string,
pvcAnn, ann map[string]string,
imageRegistryProxy string,
) (*corev1api.Pod, error) {
if pvcName != "" && len(volumeNameList) != 1 {
return nil, errors.New("Volume name list should contain only 1 since PVC name is not empty")
}
volumes := []corev1.Volume{}
imageAddress := TestImage
if imageRegistryProxy != "" {
imageAddress = path.Join(imageRegistryProxy, TestImage)
}
volumes := []corev1api.Volume{}
for _, volume := range volumeNameList {
var _pvcName string
if pvcName == "" {
@@ -45,10 +57,10 @@ func CreatePod(client TestClient, ns, name, sc, pvcName string, volumeNameList [
return nil, err
}
volumes = append(volumes, corev1.Volume{
volumes = append(volumes, corev1api.Volume{
Name: volume,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: pvc.Name,
ReadOnly: false,
},
@@ -56,41 +68,41 @@ func CreatePod(client TestClient, ns, name, sc, pvcName string, volumeNameList [
})
}
vmList := []corev1.VolumeMount{}
vmList := []corev1api.VolumeMount{}
for _, v := range volumes {
vmList = append(vmList, corev1.VolumeMount{
vmList = append(vmList, corev1api.VolumeMount{
Name: v.Name,
MountPath: "/" + v.Name,
})
}
p := &corev1.Pod{
p := &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Annotations: ann,
},
Spec: corev1.PodSpec{
SecurityContext: &v1.PodSecurityContext{
Spec: corev1api.PodSpec{
SecurityContext: &corev1api.PodSecurityContext{
FSGroup: func(i int64) *int64 { return &i }(65534),
FSGroupChangePolicy: func(policy v1.PodFSGroupChangePolicy) *v1.PodFSGroupChangePolicy { return &policy }(v1.FSGroupChangeAlways),
FSGroupChangePolicy: func(policy corev1api.PodFSGroupChangePolicy) *corev1api.PodFSGroupChangePolicy { return &policy }(corev1api.FSGroupChangeAlways),
},
Containers: []corev1.Container{
Containers: []corev1api.Container{
{
Name: name,
Image: "gcr.io/velero-gcp/busybox",
Image: imageAddress,
Command: []string{"sleep", "3600"},
VolumeMounts: vmList,
// Make the pod obey the restricted pod security standards.
SecurityContext: &v1.SecurityContext{
SecurityContext: &corev1api.SecurityContext{
AllowPrivilegeEscalation: boolptr.False(),
Capabilities: &v1.Capabilities{
Drop: []v1.Capability{"ALL"},
Capabilities: &corev1api.Capabilities{
Drop: []corev1api.Capability{"ALL"},
},
RunAsNonRoot: boolptr.True(),
RunAsUser: func(i int64) *int64 { return &i }(65534),
RunAsGroup: func(i int64) *int64 { return &i }(65534),
SeccompProfile: &v1.SeccompProfile{
Type: v1.SeccompProfileTypeRuntimeDefault,
SeccompProfile: &corev1api.SeccompProfile{
Type: corev1api.SeccompProfileTypeRuntimeDefault,
},
},
},
@@ -102,11 +114,11 @@ func CreatePod(client TestClient, ns, name, sc, pvcName string, volumeNameList [
return client.ClientGo.CoreV1().Pods(ns).Create(context.TODO(), p, metav1.CreateOptions{})
}
func GetPod(ctx context.Context, client TestClient, namespace string, pod string) (*corev1.Pod, error) {
func GetPod(ctx context.Context, client TestClient, namespace string, pod string) (*corev1api.Pod, error) {
return client.ClientGo.CoreV1().Pods(namespace).Get(ctx, pod, metav1.GetOptions{})
}
func AddAnnotationToPod(ctx context.Context, client TestClient, namespace, podName string, ann map[string]string) (*corev1.Pod, error) {
func AddAnnotationToPod(ctx context.Context, client TestClient, namespace, podName string, ann map[string]string) (*corev1api.Pod, error) {
newPod, err := GetPod(ctx, client, namespace, podName)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("Fail to ge pod %s in namespace %s", podName, namespace))
@@ -125,6 +137,6 @@ func AddAnnotationToPod(ctx context.Context, client TestClient, namespace, podNa
return client.ClientGo.CoreV1().Pods(namespace).Update(ctx, newPod, metav1.UpdateOptions{})
}
func ListPods(ctx context.Context, client TestClient, namespace string) (*corev1.PodList, error) {
func ListPods(ctx context.Context, client TestClient, namespace string) (*corev1api.PodList, error) {
return client.ClientGo.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
}

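CreatePod grows the same trailing imageRegistryProxy parameter, joined onto TestImage exactly as in NewDeployment; a minimal call sketch (the literal values are illustrative):

pod, err := CreatePod(client, "demo-ns", "demo-pod", StorageClassName, "", []string{"vol-1"}, nil, nil, veleroCfg.ImageRegistryProxy)
if err != nil {
	return err
}
fmt.Printf("created pod %s\n", pod.Name)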

@@ -18,7 +18,10 @@ package kibishii
import (
"fmt"
"html/template"
"os"
"os/exec"
"path"
"strconv"
"strings"
"time"
@@ -26,7 +29,10 @@ import (
. "github.com/onsi/ginkgo/v2"
"github.com/pkg/errors"
"golang.org/x/net/context"
appsv1api "k8s.io/api/apps/v1"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/yaml"
veleroexec "github.com/vmware-tanzu/velero/pkg/util/exec"
. "github.com/vmware-tanzu/velero/test"
@@ -101,9 +107,17 @@ func RunKibishiiTests(
}
}()
fmt.Printf("KibishiiPrepareBeforeBackup %s\n", time.Now().Format("2006-01-02 15:04:05"))
if err := KibishiiPrepareBeforeBackup(oneHourTimeout, client, providerName,
kibishiiNamespace, registryCredentialFile, veleroFeatures,
kibishiiDirectory, useVolumeSnapshots, DefaultKibishiiData); err != nil {
if err := KibishiiPrepareBeforeBackup(
oneHourTimeout,
client,
providerName,
kibishiiNamespace,
registryCredentialFile,
veleroFeatures,
kibishiiDirectory,
DefaultKibishiiData,
veleroCfg.ImageRegistryProxy,
); err != nil {
return errors.Wrapf(err, "Failed to install and prepare data for kibishii %s", kibishiiNamespace)
}
fmt.Printf("KibishiiPrepareBeforeBackup done %s\n", time.Now().Format("2006-01-02 15:04:05"))
@@ -263,8 +277,15 @@ func RunKibishiiTests(
return nil
}
func installKibishii(ctx context.Context, namespace string, cloudPlatform, veleroFeatures,
kibishiiDirectory string, useVolumeSnapshots bool, workerReplicas int) error {
func installKibishii(
ctx context.Context,
namespace string,
cloudPlatform,
veleroFeatures,
kibishiiDirectory string,
workerReplicas int,
imageRegistryProxy string,
) error {
if strings.EqualFold(cloudPlatform, Azure) &&
strings.EqualFold(veleroFeatures, FeatureCSI) {
cloudPlatform = AzureCSI
@@ -273,9 +294,32 @@ func installKibishii(ctx context.Context, namespace string, cloudPlatform, veler
strings.EqualFold(veleroFeatures, FeatureCSI) {
cloudPlatform = AwsCSI
}
if strings.EqualFold(cloudPlatform, Vsphere) {
if strings.HasPrefix(kibishiiDirectory, "https://") {
return errors.New("vSphere needs to download the Kibishii repository first because it needs to inject some image patch file to work.")
}
kibishiiImage := readBaseKibishiiImage(path.Join(kibishiiDirectory, "base", "kibishii.yaml"))
if err := generateKibishiiImagePatch(
path.Join(imageRegistryProxy, kibishiiImage),
path.Join(kibishiiDirectory, cloudPlatform, "worker-image-patch.yaml"),
); err != nil {
return err
}
jumpPadImage := readBaseJumpPadImage(path.Join(kibishiiDirectory, "base", "jump-pad.yaml"))
if err := generateJumpPadPatch(
path.Join(imageRegistryProxy, jumpPadImage),
path.Join(kibishiiDirectory, cloudPlatform, "jump-pad-image-patch.yaml"),
); err != nil {
return err
}
}
// We use kustomize to generate YAML for Kibishii from the checked-in yaml directories
kibishiiInstallCmd := exec.CommandContext(ctx, "kubectl", "apply", "-n", namespace, "-k",
kibishiiDirectory+cloudPlatform, "--timeout=90s")
path.Join(kibishiiDirectory, cloudPlatform), "--timeout=90s")
_, stderr, err := veleroexec.RunCommand(kibishiiInstallCmd)
fmt.Printf("Install Kibishii cmd: %s\n", kibishiiInstallCmd)
if err != nil {
@@ -312,16 +356,134 @@ func installKibishii(ctx context.Context, namespace string, cloudPlatform, veler
return err
}
func readBaseKibishiiImage(kibishiiFilePath string) string {
bytes, err := os.ReadFile(kibishiiFilePath)
if err != nil {
return ""
}
sts := &appsv1api.StatefulSet{}
if err := yaml.UnmarshalStrict(bytes, sts); err != nil {
return ""
}
kibishiiImage := ""
if len(sts.Spec.Template.Spec.Containers) > 0 {
kibishiiImage = sts.Spec.Template.Spec.Containers[0].Image
}
return kibishiiImage
}
func readBaseJumpPadImage(jumpPadFilePath string) string {
bytes, err := os.ReadFile(jumpPadFilePath)
if err != nil {
return ""
}
pod := &corev1api.Pod{}
if err := yaml.UnmarshalStrict(bytes, pod); err != nil {
return ""
}
jumpPadImage := ""
if len(pod.Spec.Containers) > 0 {
jumpPadImage = pod.Spec.Containers[0].Image
}
return jumpPadImage
}
type patchImageData struct {
Image string
}
func generateKibishiiImagePatch(kibishiiImage string, patchDirectory string) error {
patchString := `
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
name: kibishii-deployment
spec:
template:
spec:
containers:
- name: kibishii
image: {{.Image}}
`
file, err := os.OpenFile(patchDirectory, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return err
}
defer file.Close()
patchTemplate, err := template.New("imagePatch").Parse(patchString)
if err != nil {
return err
}
if err := patchTemplate.Execute(file, patchImageData{Image: kibishiiImage}); err != nil {
return err
}
return nil
}
func generateJumpPadPatch(jumpPadImage string, patchDirectory string) error {
patchString := `
apiVersion: v1
kind: Pod
metadata:
name: jump-pad
spec:
containers:
- name: jump-pad
image: {{.Image}}
`
file, err := os.OpenFile(patchDirectory, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return err
}
defer file.Close()
patchTemplate, err := template.New("imagePatch").Parse(patchString)
if err != nil {
return err
}
if err := patchTemplate.Execute(file, patchImageData{Image: jumpPadImage}); err != nil {
return err
}
return nil
}
func generateData(ctx context.Context, namespace string, kibishiiData *KibishiiData) error {
timeout := 30 * time.Minute
interval := 1 * time.Second
err := wait.PollImmediate(interval, timeout, func() (bool, error) {
err := wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
timeout, ctxCancel := context.WithTimeout(context.Background(), time.Minute*20)
defer ctxCancel()
kibishiiGenerateCmd := exec.CommandContext(timeout, "kubectl", "exec", "-n", namespace, "jump-pad", "--",
"/usr/local/bin/generate.sh", strconv.Itoa(kibishiiData.Levels), strconv.Itoa(kibishiiData.DirsPerLevel),
strconv.Itoa(kibishiiData.FilesPerLevel), strconv.Itoa(kibishiiData.FileLength),
strconv.Itoa(kibishiiData.BlockSize), strconv.Itoa(kibishiiData.PassNum), strconv.Itoa(kibishiiData.ExpectedNodes))
kibishiiGenerateCmd := exec.CommandContext(
timeout,
"kubectl",
"exec",
"-n",
namespace,
"jump-pad",
"--",
"/usr/local/bin/generate.sh",
strconv.Itoa(kibishiiData.Levels),
strconv.Itoa(kibishiiData.DirsPerLevel),
strconv.Itoa(kibishiiData.FilesPerLevel),
strconv.Itoa(kibishiiData.FileLength),
strconv.Itoa(kibishiiData.BlockSize),
strconv.Itoa(kibishiiData.PassNum),
strconv.Itoa(kibishiiData.ExpectedNodes),
)
fmt.Printf("kibishiiGenerateCmd cmd =%v\n", kibishiiGenerateCmd)
stdout, stderr, err := veleroexec.RunCommand(kibishiiGenerateCmd)
@@ -341,26 +503,44 @@ func generateData(ctx context.Context, namespace string, kibishiiData *KibishiiD
func verifyData(ctx context.Context, namespace string, kibishiiData *KibishiiData) error {
timeout := 10 * time.Minute
interval := 5 * time.Second
err := wait.PollImmediate(interval, timeout, func() (bool, error) {
timeout, ctxCancel := context.WithTimeout(context.Background(), time.Minute*20)
defer ctxCancel()
kibishiiVerifyCmd := exec.CommandContext(timeout, "kubectl", "exec", "-n", namespace, "jump-pad", "--",
"/usr/local/bin/verify.sh", strconv.Itoa(kibishiiData.Levels), strconv.Itoa(kibishiiData.DirsPerLevel),
strconv.Itoa(kibishiiData.FilesPerLevel), strconv.Itoa(kibishiiData.FileLength),
strconv.Itoa(kibishiiData.BlockSize), strconv.Itoa(kibishiiData.PassNum),
strconv.Itoa(kibishiiData.ExpectedNodes))
fmt.Printf("kibishiiVerifyCmd cmd =%v\n", kibishiiVerifyCmd)
err := wait.PollUntilContextTimeout(
ctx,
interval,
timeout,
true,
func(ctx context.Context) (bool, error) {
timeout, ctxCancel := context.WithTimeout(context.Background(), time.Minute*20)
defer ctxCancel()
kibishiiVerifyCmd := exec.CommandContext(
timeout,
"kubectl",
"exec",
"-n",
namespace,
"jump-pad",
"--",
"/usr/local/bin/verify.sh",
strconv.Itoa(kibishiiData.Levels),
strconv.Itoa(kibishiiData.DirsPerLevel),
strconv.Itoa(kibishiiData.FilesPerLevel),
strconv.Itoa(kibishiiData.FileLength),
strconv.Itoa(kibishiiData.BlockSize),
strconv.Itoa(kibishiiData.PassNum),
strconv.Itoa(kibishiiData.ExpectedNodes),
)
fmt.Printf("kibishiiVerifyCmd cmd =%v\n", kibishiiVerifyCmd)
stdout, stderr, err := veleroexec.RunCommand(kibishiiVerifyCmd)
if strings.Contains(stderr, "Timeout occurred") {
return false, nil
}
if err != nil {
fmt.Printf("Kibishi verify stdout Timeout occurred: %s stderr: %s err: %s\n", stdout, stderr, err)
return false, nil
}
return true, nil
})
stdout, stderr, err := veleroexec.RunCommand(kibishiiVerifyCmd)
if strings.Contains(stderr, "Timeout occurred") {
return false, nil
}
if err != nil {
fmt.Printf("Kibishi verify stdout Timeout occurred: %s stderr: %s err: %s\n", stdout, stderr, err)
return false, nil
}
return true, nil
},
)
if err != nil {
return errors.Wrapf(err, "Failed to verify kibishii data in namespace %s\n", namespace)
@@ -370,7 +550,12 @@ func verifyData(ctx context.Context, namespace string, kibishiiData *KibishiiDat
}
func waitForKibishiiPods(ctx context.Context, client TestClient, kibishiiNamespace string) error {
return WaitForPods(ctx, client, kibishiiNamespace, []string{"jump-pad", "etcd0", "etcd1", "etcd2", "kibishii-deployment-0", "kibishii-deployment-1"})
return WaitForPods(
ctx,
client,
kibishiiNamespace,
[]string{"jump-pad", "etcd0", "etcd1", "etcd2", "kibishii-deployment-0", "kibishii-deployment-1"},
)
}
func KibishiiGenerateData(oneHourTimeout context.Context, kibishiiNamespace string, kibishiiData *KibishiiData) error {
@@ -382,9 +567,17 @@ func KibishiiGenerateData(oneHourTimeout context.Context, kibishiiNamespace stri
return nil
}
func KibishiiPrepareBeforeBackup(oneHourTimeout context.Context, client TestClient,
providerName, kibishiiNamespace, registryCredentialFile, veleroFeatures,
kibishiiDirectory string, useVolumeSnapshots bool, kibishiiData *KibishiiData) error {
func KibishiiPrepareBeforeBackup(
oneHourTimeout context.Context,
client TestClient,
providerName,
kibishiiNamespace,
registryCredentialFile,
veleroFeatures,
kibishiiDirectory string,
kibishiiData *KibishiiData,
imageRegistryProxy string,
) error {
fmt.Printf("installKibishii %s\n", time.Now().Format("2006-01-02 15:04:05"))
serviceAccountName := "default"
@@ -398,8 +591,15 @@ func KibishiiPrepareBeforeBackup(oneHourTimeout context.Context, client TestClie
return errors.Wrapf(err, "failed to patch the service account %q under the namespace %q", serviceAccountName, kibishiiNamespace)
}
if err := installKibishii(oneHourTimeout, kibishiiNamespace, providerName, veleroFeatures,
kibishiiDirectory, useVolumeSnapshots, kibishiiData.ExpectedNodes); err != nil {
if err := installKibishii(
oneHourTimeout,
kibishiiNamespace,
providerName,
veleroFeatures,
kibishiiDirectory,
kibishiiData.ExpectedNodes,
imageRegistryProxy,
); err != nil {
return errors.Wrap(err, "Failed to install Kibishii workload")
}
// wait for kibishii pod startup

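Besides the image plumbing, the generate and verify loops migrate from the deprecated wait.PollImmediate to the context-aware wait.PollUntilContextTimeout; the bare shape of that pattern, with the interval, timeout, and probe function as illustrative stand-ins:

err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true,
	func(ctx context.Context) (bool, error) {
		done, err := probe(ctx) // hypothetical condition check
		if err != nil {
			return false, nil // treat errors as retryable until the timeout
		}
		return done, nil
	},
)
if err != nil {
	return errors.Wrap(err, "condition not met before timeout")
}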

@@ -84,7 +84,7 @@ func VeleroInstall(ctx context.Context, veleroCfg *test.VeleroConfig, isStandbyC
}
}
pluginsTmp, err := getPlugins(ctx, *veleroCfg)
pluginsTmp, err := GetPlugins(ctx, *veleroCfg, true)
if err != nil {
return errors.WithMessage(err, "Failed to get provider plugins")
}
@@ -120,29 +120,49 @@ func VeleroInstall(ctx context.Context, veleroCfg *test.VeleroConfig, isStandbyC
return errors.WithMessagef(err, "Failed to get Velero InstallOptions for plugin provider %s", veleroCfg.ObjectStoreProvider)
}
_, err = k8s.GetNamespace(ctx, *veleroCfg.ClientToInstallVelero, veleroCfg.VeleroNamespace)
// We should uninstall Velero for a new installation
if !apierrors.IsNotFound(err) {
if err := VeleroUninstall(context.Background(), *veleroCfg); err != nil {
return errors.Wrapf(err, "Failed to uninstall velero %s", veleroCfg.VeleroNamespace)
}
}
// If velero namespace does not exist, we should create it for service account creation
if err := k8s.KubectlCreateNamespace(ctx, veleroCfg.VeleroNamespace); err != nil {
return errors.Wrapf(err, "Failed to create namespace %s to install Velero", veleroCfg.VeleroNamespace)
}
// Create the backup repository ConfigMap.
if _, err := k8s.CreateConfigMap(
veleroCfg.ClientToInstallVelero.ClientGo,
veleroCfg.VeleroNamespace,
test.BackupRepositoryConfigName,
nil,
map[string]string{
test.UploaderTypeKopia: "{\"cacheLimitMB\": 2048, \"fullMaintenanceInterval\": \"normalGC\"}",
},
); err != nil {
return errors.WithMessagef(err,
"Failed to create %s ConfigMap in %s namespace",
test.BackupRepositoryConfigName,
veleroCfg.VeleroNamespace,
)
}
// For AWS IRSA credential test, AWS IAM service account is required, so if ServiceAccountName and EKSPolicyARN
// are both provided, we assume IRSA test is running, otherwise skip this IAM service account creation part.
if veleroCfg.CloudProvider == test.AWS && veleroInstallOptions.ServiceAccountName != "" {
if veleroCfg.EKSPolicyARN == "" {
return errors.New("Please provide EKSPolicyARN for IRSA test.")
}
_, err = k8s.GetNamespace(ctx, *veleroCfg.ClientToInstallVelero, veleroCfg.VeleroNamespace)
// We should uninstall Velero for a new service account creation.
if !apierrors.IsNotFound(err) {
if err := VeleroUninstall(context.Background(), *veleroCfg); err != nil {
return errors.Wrapf(err, "Failed to uninstall velero %s", veleroCfg.VeleroNamespace)
}
}
// If velero namespace does not exist, we should create it for service account creation
if err := k8s.KubectlCreateNamespace(ctx, veleroCfg.VeleroNamespace); err != nil {
return errors.Wrapf(err, "Failed to create namespace %s to install Velero", veleroCfg.VeleroNamespace)
}
if err := k8s.KubectlDeleteClusterRoleBinding(ctx, "velero-cluster-role"); err != nil {
return errors.Wrapf(err, "Failed to delete clusterrolebinding %s to %s namespace", "velero-cluster-role", veleroCfg.VeleroNamespace)
}
if err := k8s.KubectlCreateClusterRoleBinding(ctx, "velero-cluster-role", "cluster-admin", veleroCfg.VeleroNamespace, veleroInstallOptions.ServiceAccountName); err != nil {
return errors.Wrapf(err, "Failed to create clusterrolebinding %s to %s namespace", "velero-cluster-role", veleroCfg.VeleroNamespace)
}
if err := eksutil.KubectlDeleteIAMServiceAcount(ctx, veleroInstallOptions.ServiceAccountName, veleroCfg.VeleroNamespace, veleroCfg.ClusterToInstallVelero); err != nil {
return errors.Wrapf(err, "Failed to delete service account %s to %s namespace", veleroInstallOptions.ServiceAccountName, veleroCfg.VeleroNamespace)
}
@@ -367,6 +387,14 @@ func installVeleroServer(ctx context.Context, cli, cloudProvider string, options
args = append(args, fmt.Sprintf("--uploader-type=%v", options.UploaderType))
}
if options.ItemBlockWorkerCount > 1 {
args = append(args, fmt.Sprintf("--item-block-worker-count=%d", options.ItemBlockWorkerCount))
}
if options.BackupRepoConfigMap != "" {
args = append(args, fmt.Sprintf("--backup-repository-configmap=%s", options.BackupRepoConfigMap))
}
if err := createVeleroResources(ctx, cli, namespace, args, options); err != nil {
return err
}

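The ConfigMap seeded above keys a JSON blob per uploader type; a hypothetical Go decoding of the kopia entry, only to make the implied schema explicit (the struct and field names are assumptions, using encoding/json):

type kopiaRepoOptions struct {
	CacheLimitMB            int    `json:"cacheLimitMB"`
	FullMaintenanceInterval string `json:"fullMaintenanceInterval"`
}
var opts kopiaRepoOptions
_ = json.Unmarshal([]byte(`{"cacheLimitMB": 2048, "fullMaintenanceInterval": "normalGC"}`), &opts)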

@@ -27,6 +27,7 @@ import (
"os"
"os/exec"
"os/user"
"path"
"path/filepath"
"reflect"
"regexp"
@@ -57,81 +58,65 @@ const RestoreObjectsPrefix = "restores"
const PluginsObjectsPrefix = "plugins"
var ImagesMatrix = map[string]map[string][]string{
"v1.10": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.6.0"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.6.0"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.1"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.6.0"},
"csi": {"gcr.io/velero-gcp/velero-plugin-for-csi:v0.4.0"},
"velero": {"gcr.io/velero-gcp/velero:v1.10.2"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.10.2"},
},
"v1.11": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.7.0"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.7.0"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.1"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.7.0"},
"csi": {"gcr.io/velero-gcp/velero-plugin-for-csi:v0.5.0"},
"velero": {"gcr.io/velero-gcp/velero:v1.11.1"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.11.1"},
},
"v1.12": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.8.0"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.8.0"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.1"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.8.0"},
"csi": {"gcr.io/velero-gcp/velero-plugin-for-csi:v0.6.0"},
"velero": {"gcr.io/velero-gcp/velero:v1.12.4"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.12.4"},
},
"v1.13": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.9.2"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.9.2"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.9.2"},
"csi": {"gcr.io/velero-gcp/velero-plugin-for-csi:v0.7.1"},
"datamover": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.9.2"},
"velero": {"gcr.io/velero-gcp/velero:v1.13.2"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.13.2"},
"aws": {"velero/velero-plugin-for-aws:v1.9.2"},
"azure": {"velero/velero-plugin-for-microsoft-azure:v1.9.2"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:v1.9.2"},
"csi": {"velero/velero-plugin-for-csi:v0.7.1"},
"datamover": {"velero/velero-plugin-for-aws:v1.9.2"},
"velero": {"velero/velero:v1.13.2"},
"velero-restore-helper": {"velero/velero-restore-helper:v1.13.2"},
},
"v1.14": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.10.1"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.10.1"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.10.1"},
"datamover": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.10.1"},
"velero": {"gcr.io/velero-gcp/velero:v1.14.1"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.14.1"},
"aws": {"velero/velero-plugin-for-aws:v1.10.1"},
"azure": {"velero/velero-plugin-for-microsoft-azure:v1.10.1"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:v1.10.1"},
"datamover": {"velero/velero-plugin-for-aws:v1.10.1"},
"velero": {"velero/velero:v1.14.1"},
"velero-restore-helper": {"velero/velero-restore-helper:v1.14.1"},
},
"v1.15": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.11.0"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.11.0"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.11.0"},
"datamover": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.11.0"},
"velero": {"gcr.io/velero-gcp/velero:v1.15.2"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:v1.15.2"},
"aws": {"velero/velero-plugin-for-aws:v1.11.0"},
"azure": {"velero/velero-plugin-for-microsoft-azure:v1.11.0"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:v1.11.0"},
"datamover": {"velero/velero-plugin-for-aws:v1.11.0"},
"velero": {"velero/velero:v1.15.2"},
"velero-restore-helper": {"velero/velero-restore-helper:v1.15.2"},
},
"v1.16": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.12.0"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:v1.12.0"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:v1.12.0"},
"datamover": {"gcr.io/velero-gcp/velero-plugin-for-aws:v1.12.0"},
"velero": {"gcr.io/velero-gcp/velero:v1.15.0"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero:v1.16.0"},
"aws": {"velero/velero-plugin-for-aws:v1.12.1"},
"azure": {"velero/velero-plugin-for-microsoft-azure:v1.12.1"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:v1.12.1"},
"datamover": {"velero/velero-plugin-for-aws:v1.12.1"},
"velero": {"velero/velero:v1.16.1"},
"velero-restore-helper": {"velero/velero:v1.16.1"},
},
"main": {
"aws": {"gcr.io/velero-gcp/velero-plugin-for-aws:main"},
"azure": {"gcr.io/velero-gcp/velero-plugin-for-microsoft-azure:main"},
"vsphere": {"gcr.io/velero-gcp/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"gcr.io/velero-gcp/velero-plugin-for-gcp:main"},
"datamover": {"gcr.io/velero-gcp/velero-plugin-for-aws:main"},
"velero": {"gcr.io/velero-gcp/velero:main"},
"velero-restore-helper": {"gcr.io/velero-gcp/velero-restore-helper:main"},
"aws": {"velero/velero-plugin-for-aws:main"},
"azure": {"velero/velero-plugin-for-microsoft-azure:main"},
"vsphere": {"vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.2"},
"gcp": {"velero/velero-plugin-for-gcp:main"},
"datamover": {"velero/velero-plugin-for-aws:main"},
"velero": {"velero/velero:main"},
"velero-restore-helper": {"velero/velero-restore-helper:main"},
},
}
// UpdateImagesMatrixByProxy is used to append the proxy to the image lists.
func UpdateImagesMatrixByProxy(imageRegistryProxy string) {
if imageRegistryProxy != "" {
for i := range ImagesMatrix {
for j := range ImagesMatrix[i] {
ImagesMatrix[i][j][0] = path.Join(imageRegistryProxy, ImagesMatrix[i][j][0])
}
}
}
}
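// Illustration (hypothetical proxy host): image references are slash-separated
// registry paths, so path.Join is enough to reroute them through a proxy:
//   path.Join("proxy.example.com", "velero/velero:main") == "proxy.example.com/velero/velero:main"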
func SetImagesToDefaultValues(config VeleroConfig, version string) (VeleroConfig, error) {
fmt.Printf("Get the images for version %s\n", version)
@@ -672,37 +657,44 @@ func VeleroVersion(ctx context.Context, veleroCLI, veleroNamespace string) error
return nil
}
// getProviderPlugins only provide plugin for specific cloud provider
func getProviderPlugins(ctx context.Context, veleroCLI string, cloudProvider string) ([]string, error) {
if cloudProvider == "" {
return []string{}, errors.New("CloudProvider should be provided")
}
// GetPlugins collects all kinds of plugins for VeleroInstall, such as provider
// plugins (cloud provider/object store provider; if the object store provider is not
// provided, it defaults to the cloud provider's value) and feature plugins (CSI/datamover)
func GetPlugins(ctx context.Context, veleroCfg VeleroConfig, defaultBSL bool) ([]string, error) {
veleroCLI := veleroCfg.VeleroCLI
cloudProvider := veleroCfg.CloudProvider
version, err := GetVeleroVersion(ctx, veleroCLI, true)
if err != nil {
return nil, errors.WithMessage(err, "failed to get velero version")
}
plugins, err := getPluginsByVersion(version, cloudProvider, false)
if err != nil {
return nil, errors.WithMessagef(err, "failed to get plugins for provider %s and version %s", cloudProvider, version)
}
// Read the plugins for the additional BSL here.
if !defaultBSL {
fmt.Printf("Additional BSL provider = %s\n", veleroCfg.AdditionalBSLProvider)
fmt.Printf("Additional BSL plugins = %v\n", veleroCfg.AddBSLPlugins)
if veleroCfg.AddBSLPlugins == "" {
if veleroCfg.AdditionalBSLProvider == "" {
return []string{}, errors.New("AdditionalBSLProvider should be provided.")
}
plugins, err = getPluginsByVersion(version, veleroCfg.AdditionalBSLProvider, false)
if err != nil {
return nil, errors.WithMessagef(err, "Fail to get plugin by provider %s and version %s", veleroCfg.AdditionalBSLProvider, version)
}
} else {
plugins = append(plugins, veleroCfg.AddBSLPlugins)
}
return plugins, nil
}
return plugins, nil
}
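// Usage sketch (caller code assumed for illustration, not from this file):
// defaultBSL selects which provider's plugins are resolved.
//
//	// plugins for the primary provider
//	plugins, err := GetPlugins(ctx, veleroCfg, true)
//	// plugins for the additional BSL; AdditionalBSLProvider or AddBSLPlugins must be set
//	addlPlugins, err := GetPlugins(ctx, veleroCfg, false)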
// getPlugins collects all the plugins needed for VeleroInstall: provider
// plugins (cloud provider/object store provider; if no object store provider is
// given, it defaults to the cloud provider's value) and feature plugins
// (CSI/data mover).
func getPlugins(ctx context.Context, veleroCfg VeleroConfig) ([]string, error) {
veleroCLI := veleroCfg.VeleroCLI
cloudProvider := veleroCfg.CloudProvider
objectStoreProvider := veleroCfg.ObjectStoreProvider
providerPlugins := veleroCfg.Plugins
needDataMoverPlugin := false
// Fetch the plugins for the provider before checking for the object store provider below.
var plugins []string
if len(providerPlugins) > 0 {
plugins = strings.Split(providerPlugins, ",")
} else {
@@ -713,47 +705,30 @@ func getPlugins(ctx context.Context, veleroCfg VeleroConfig) ([]string, error) {
objectStoreProvider = cloudProvider
}
var version string
var err error
if veleroCfg.VeleroVersion != "" {
version = veleroCfg.VeleroVersion
} else {
version, err = GetVeleroVersion(ctx, veleroCLI, true)
if err != nil {
return nil, errors.WithMessage(err, "failed to get velero version")
}
}
if veleroCfg.SnapshotMoveData && veleroCfg.DataMoverPlugin == "" && !veleroCfg.IsUpgradeTest {
needDataMoverPlugin = true
}
plugins, err = getPluginsByVersion(version, cloudProvider, needDataMoverPlugin)
if err != nil {
return nil, errors.WithMessagef(err, "Fail to get plugin by provider %s and version %s", objectStoreProvider, version)
}
}
return plugins, nil
}
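// Decision sketch for the data-mover plugin above (field values assumed for
// illustration): needDataMoverPlugin becomes true only when all three
// conditions hold, in which case getPluginsByVersion is asked to include the
// data-mover plugin for that version as well.
//
//	veleroCfg.SnapshotMoveData = true // data movement requested
//	veleroCfg.DataMoverPlugin = ""    // no explicit mover plugin supplied
//	veleroCfg.IsUpgradeTest = false   // not an upgrade test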
// AddPlugins installs the given plugins in the current Velero installation, skipping over those that are already installed.
func AddPlugins(plugins []string, veleroCfg VeleroConfig) error {
for _, plugin := range plugins {
stdoutBuf := new(bytes.Buffer)
stderrBuf := new(bytes.Buffer)
installPluginCmd := exec.CommandContext(context.TODO(), veleroCfg.VeleroCLI, "--namespace", veleroCfg.VeleroNamespace, "plugin", "add", plugin, "--confirm")
fmt.Printf("installPluginCmd cmd = %v\n", installPluginCmd)
installPluginCmd.Stdout = stdoutBuf
installPluginCmd.Stderr = stderrBuf
@@ -1446,14 +1421,6 @@ func UpdateVeleroDeployment(ctx context.Context, veleroCfg VeleroConfig) ([]stri
}
cmds = append(cmds, cmd)
cmd = &common.OsCommandLine{
Cmd: "sed",
Args: []string{fmt.Sprintf("s#\\\"server\\\",#\\\"server\\\",\\\"--uploader-type=%s\\\",#g", veleroCfg.UploaderType)},
@@ -1496,14 +1463,6 @@ func UpdateNodeAgent(ctx context.Context, veleroCfg VeleroConfig, dsjson string)
}
cmds = append(cmds, cmd)
cmd = &common.OsCommandLine{
Cmd: "sed",
Args: []string{"s#\\\"name\\\"\\: \\\"restic\\\"#\\\"name\\\"\\: \\\"node-agent\\\"#g"},