Compare commits

...

9 Commits

Author SHA1 Message Date
Bridget McErlean
525705bceb Add cherry-pick commits and changelog for v1.5.4 (#3651)
* Restore CAPI cluster objects in a better order

Restoring CAPI workload clusters without this ordering caused the
capi-controller-manager code to panic, resulting in an unhealthy cluster
state.

This can be worked around manually
(https://community.pivotal.io/s/article/5000e00001pJyN41611954332537?language=en_US),
but including these resources by default provides a better out-of-the-box
experience.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
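For illustration, a minimal Go sketch of the ordering this change relies on; the real list is defaultRestorePriorities in pkg/cmd/server/server.go, and only its CAPI-related tail is shown here (see the server.go diff further down):

package main

import "fmt"

// Sketch of the tail of the default restore priority list: CAPI Clusters are
// restored before ClusterResourceSets, and both before ClusterResourceSetBindings
// (which are not prioritized and so come later), avoiding the
// capi-controller-manager panic described above.
var defaultRestorePriorities = []string{
	// ...core resources (CRDs, namespaces, PVs, PVCs, pods, ...) come first...
	"replicasets.apps",
	"clusters.cluster.x-k8s.io",
	"clusterresourcesets.addons.cluster.x-k8s.io",
}

func main() { fmt.Println(defaultRestorePriorities) }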

* Add changelog

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Use pod namespace from backup when matching PVBs (#3475)

* Use pod namespace from backup when matching PVBs

In #3051, we introduced an additional check to ensure that a PodVolumeBackup
(PVB) matched a particular pod by checking both the name and the namespace of
the pod. This caused an issue when a namespace mapping is used on restore: the
check fails because the PVB was created for the original pod namespace and is
not aware of the new namespace mapping. As a result, PodVolumeRestores (PVRs)
were not created for pods being restored into new namespaces. The restic init
containers were still created to wait on the volume restore; this caused the
restored pods to block indefinitely, waiting for a volume restore that was
never scheduled.

To fix this, we use the original namespace of the pod from the backup to
match the PVB to the pod being restored, not the new namespace that the pod
is being restored into.

Fixes #3467.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
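A minimal sketch of the matching rule described above, using simplified stand-in types; the actual change to isPVBMatchPod and GetVolumeBackupsForPod is in the pkg/restic diff further down:

package main

import "fmt"

// podVolumeBackup stands in for the pod reference inside a PodVolumeBackup spec.
type podVolumeBackup struct {
	PodName      string
	PodNamespace string
}

// pvbMatchesPod compares against the pod's original (backup) namespace rather
// than the namespace it is being restored into, so namespace mappings no
// longer break the match.
func pvbMatchesPod(pvb podVolumeBackup, podName, sourcePodNamespace string) bool {
	return pvb.PodName == podName && pvb.PodNamespace == sourcePodNamespace
}

func main() {
	pvb := podVolumeBackup{PodName: "app-0", PodNamespace: "original-ns"}
	// The pod is restored into "mapped-ns", but is matched using the original
	// namespace recorded in the backup.
	fmt.Println(pvbMatchesPod(pvb, "app-0", "original-ns")) // true
	fmt.Println(pvbMatchesPod(pvb, "app-0", "mapped-ns"))   // false
}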

* Explain why the namespace mapping can't be used

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Allow Dockerfiles to be configurable (#3634)

For internal builds of Velero, we need to be able to specify an alternative
Dockerfile that pulls its base images from a different registry. This change
adapts our Makefile so that both the main Dockerfile and the build-image
Dockerfile can be overridden.

The build image normally only rebuilds when its Dockerfile has changed. We now
also check whether a custom Dockerfile has been provided and always rebuild in
that case. Custom build-image Dockerfiles use a fixed tag rather than one based
on the commit SHA of the original file.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
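As a usage illustration (the Dockerfile paths here are hypothetical): an internal build might invoke make with VELERO_DOCKERFILE=Dockerfile-internal and BUILDER_IMAGE_DOCKERFILE=hack/build-image/Dockerfile-internal; per the Makefile diff below, the builder image is then tagged "custom", always rebuilt, and never pushed.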

* Combine CRD install verification into 1 job, and update k8s versions (#3448)

* Validate CRDs against latest Kubernetes versions

Add Kubernetes v1.19 and v1.20 series images, and consolidate the job
into a single file to reduce repetition.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Ignore job if the changes are only site/design

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix codespell error

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Cache Velero binary for reuse on workers

This will cache the Velero binary based on the PR number and a SHA256 of
the generated binary.

This way, the runners testing each version of Kubernetes do not need to
build it independently.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
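As a rough sketch of how such a cache key can be derived (the workflow itself uses actions/cache with a key of the form velero-<PR number>-<hash of the binary>, visible in the new crds-verify-kind.yaml below; this Go helper is illustrative only and approximates hashFiles with a plain SHA-256 of the file):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// cacheKey builds a key of the form velero-<pr>-<sha256 of the binary>, so a
// rebuilt binary yields a new key while an unchanged binary hits the cache.
func cacheKey(prNumber int, binaryPath string) (string, error) {
	f, err := os.Open(binaryPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("velero-%d-%x", prNumber, h.Sum(nil)), nil
}

func main() {
	key, err := cacheKey(3651, "./_output/bin/linux/amd64/velero")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(key)
}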

* Fix GitHub event access

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Wrap output path in quotes

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Move code checkout to build step

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Also cache go modules

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix syntax issues

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Download cached binary on each node

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Use cached go modules on main CI

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add changelog for v1.5.4

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

Co-authored-by: Nolan Brubaker <brubakern@vmware.com>
2021-04-01 11:32:03 -07:00
Bridget McErlean
123109a3bc Add changelog for v1.5.3
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-14 10:13:14 -05:00
Dave Smith-Uchida
9e4f4dc8c5 Increased limit for Velero pod to 512M. Fixes #3234
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-01-14 09:50:16 -05:00
Ashish Amarnath
f9cc5af2fd 🐛 BSLs with validation disabled should be validated at least once (#3084)
* 🐛 BSLs with validation disabled should be validated at least once

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
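A minimal sketch of the reordered checks, mirroring the IsReadyToValidate change shown in the diff below (the negative-frequency fallback handled by the real function is omitted here):

package main

import (
	"fmt"
	"time"
)

// readyToValidate: a location that has never been validated (lastValidation == nil)
// is validated once regardless of frequency; after that, a zero frequency means
// validation is disabled.
func readyToValidate(frequency time.Duration, lastValidation *time.Time) bool {
	if lastValidation == nil {
		// Validate every BSL at least once.
		return true
	}
	if frequency == 0 {
		// Validation disabled and this BSL has already been validated.
		return false
	}
	return !time.Now().Before(lastValidation.Add(frequency))
}

func main() {
	fmt.Println(readyToValidate(0, nil)) // true: never validated yet
	now := time.Now()
	fmt.Println(readyToValidate(0, &now)) // false: disabled and already validated once
}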

* review comments

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-01-14 09:24:35 -05:00
Bridget McErlean
864ff9a13c Don't fail backup deletion if downloading tarball fails (#2993)
* Don't fail backup if downloading tarball fails

Previously, we always attempted to download a backup's tarball for processing
by DeleteItemAction plugins, even if there weren't any. This caused backup
deletion to fail for some users when the tarball had already been deleted from
object storage.

Now, we only attempt to download the tarball when there are DeleteItemAction
plugins. If downloading the tarball fails, we log the error, skip the
DeleteItemAction plugins, and proceed with the rest of the deletion.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
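A minimal sketch of the new control flow, with simplified stand-in names; the real change is in the backup deletion controller diff further down:

package main

import (
	"errors"
	"fmt"
)

type deleteItemAction struct{ name string }

// downloadTarball simulates fetching the backup tarball from object storage.
func downloadTarball(backupName string) (string, error) {
	return "", errors.New("tarball not found in object storage")
}

// runDeleteItemActions only downloads the tarball when plugins are registered;
// a failed download is logged and the plugins are skipped, but deletion continues.
func runDeleteItemActions(backupName string, actions []deleteItemAction) {
	if len(actions) == 0 {
		return // nothing to do, so no download is attempted
	}
	tarball, err := downloadTarball(backupName)
	if err != nil {
		fmt.Printf("unable to download tarball for backup %s, skipping DeleteItemAction plugins: %v\n", backupName, err)
		return
	}
	fmt.Println("invoking DeleteItemAction plugins with", tarball)
}

func main() {
	runDeleteItemActions("foo", nil)                             // no plugins: no download
	runDeleteItemActions("foo", []deleteItemAction{{name: "x"}}) // download fails: logged, deletion proceeds
}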

* Skip file removal in closeAndRemoveFile if nil

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-14 09:24:15 -05:00
Ashish Amarnath
bc0be36b8e 🐛 Do not run ItemAction plugins for unresolvable types for all types (#3059)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-01-14 09:24:04 -05:00
Ashish Amarnath
cd26cd0455 🐛 Use namespace and name to match PVB to Pod restore (#3051)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-01-14 09:23:49 -05:00
Piper Dougherty
9fa278f572 Adding fix for restic init container index on restores. (#3011)
* Adding handling of restic-wait init container at any order with warning.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
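A minimal sketch of the lookup and warning, mirroring getResticInitContainerIndex in the pod volume restore controller diff below ("restic-wait" is the init container name from the commit title):

package main

import "fmt"

const resticInitContainerName = "restic-wait"

// resticInitContainerIndex returns the position of the restic-wait init
// container, or -1 when it is absent; callers warn when it is present but not first.
func resticInitContainerIndex(initContainerNames []string) int {
	for i, name := range initContainerNames {
		if name == resticInitContainerName {
			return i
		}
	}
	return -1
}

func main() {
	idx := resticInitContainerIndex([]string{"setup", "restic-wait"})
	if idx > 0 {
		fmt.Printf("init containers before %s may interfere with restored volumes (index %d)\n", resticInitContainerName, idx)
	}
}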

* Adding newline at end of files to match convention.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Formatting.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Update copyright year on modified files.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
2021-01-14 09:23:23 -05:00
Nolan Brubaker
e115e5a191 v1.5.2 changelogs and cherry-picks (#3023)
* Ensure PVs and PVCs remain bound when doing a restore (#3007)

* Only remove the UID from a PV's claimRef

The UID is the only part of a claimRef that might prevent it from being
rebound correctly on a restore. The namespace and name within the
claimRef should be preserved in order to ensure that the PV is claimed
by the correct PVC on restore.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remap PVs claimRef.namespace on relevant restores

When remapping namespaces, any included PVs need their claimRef.namespace
updated to the new namespace name so that they are bound to the correct PVC.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
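A minimal sketch of both adjustments on an unstructured PV, assuming the namespace mapping is available as a simple map; the actual restore-package change is summarized by the #3007 changelog entry below:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// adjustClaimRef drops only the claimRef UID (so the PV can be rebound) and, when
// a namespace mapping applies, rewrites claimRef.namespace to the target namespace.
func adjustClaimRef(pv *unstructured.Unstructured, namespaceMapping map[string]string) error {
	// Removing the UID is enough to let the PV rebind; name and namespace are
	// kept so it binds to the correct PVC.
	unstructured.RemoveNestedField(pv.Object, "spec", "claimRef", "uid")

	ns, found, err := unstructured.NestedString(pv.Object, "spec", "claimRef", "namespace")
	if err != nil || !found {
		return err
	}
	if target, ok := namespaceMapping[ns]; ok {
		return unstructured.SetNestedField(pv.Object, target, "spec", "claimRef", "namespace")
	}
	return nil
}

func main() {
	pv := &unstructured.Unstructured{Object: map[string]interface{}{
		"spec": map[string]interface{}{
			"claimRef": map[string]interface{}{"namespace": "old-ns", "name": "data", "uid": "123"},
		},
	}}
	_ = adjustClaimRef(pv, map[string]string{"old-ns": "new-ns"})
	fmt.Println(pv.Object["spec"]) // claimRef now has no uid and points at new-ns
}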

* Update tests and ensure claimRef namespace remaps

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove lowercased uid field from unstructured PV

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix issues that prevented PVs from being restored

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add changelog

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Dynamically reprovision volumes without snapshots

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update test for lower case uid field

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove stray debugging print statement

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix typo, remove extra code, add tests.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* restore proper lowercase/plural CRD resource (#2949)

* restore proper lowercase/plural CRD resource

This commit restores the proper resource string
"customresourcedefinitions" for CRDs. The prior change to
"CustomResourceDefinition" was made because this value was also being used
to populate the CRD "Kind" field in remap_crd_version_action.go; there, we
now use the correct Kind string directly instead of pulling it from Resource.

Signed-off-by: Scott Seago <sseago@redhat.com>
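For illustration, the distinction this commit restores, mirroring the kuberesource and remap_crd_version_action changes in the diff below:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// The GroupResource uses the lowercase plural resource name, which is what the
// CRD wait logic expects...
var customResourceDefinitions = schema.GroupResource{
	Group:    "apiextensions.k8s.io",
	Resource: "customresourcedefinitions",
}

func main() {
	// ...while the object Kind is the singular CamelCase string, now set
	// directly rather than derived from the resource name.
	const kind = "CustomResourceDefinition"
	fmt.Println(customResourceDefinitions.String(), kind)
}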

* add changelog

Signed-off-by: Scott Seago <sseago@redhat.com>

* create CRB with velero-<namespace> (#2886)

* create CRB with velero-<namespace>

This will allow creating multiple instances of Velero across two different
namespaces

Signed-off-by: Alay Patel <alay1431@gmail.com>
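A minimal sketch of the naming rule, matching the pkg/install changes in the diff below:

package main

import "fmt"

const defaultVeleroNamespace = "velero"

// clusterRoleBindingName gives each installation its own CRB name, so Velero
// instances in different namespaces do not collide on this cluster-scoped resource.
func clusterRoleBindingName(namespace string) string {
	if namespace == defaultVeleroNamespace {
		return "velero"
	}
	return "velero-" + namespace
}

func main() {
	fmt.Println(clusterRoleBindingName("velero")) // velero
	fmt.Println(clusterRoleBindingName("foo"))    // velero-foo
}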

* add changelog

Signed-off-by: Alay Patel <alay1431@gmail.com>

* add package var DefaultVeleroNamespace and use it wherever needed

Signed-off-by: Alay Patel <alay1431@gmail.com>

* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992)

Signed-off-by: Bett, Antony <antony.bett@dell.com>

* Fix version cmd getting nil pointer (#2996)

Signed-off-by: Carlisia <carlisia@vmware.com>

* Changelogs for v1.5.2

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Scott Seago <sseago@redhat.com>
Co-authored-by: Alay Patel <alay1431@gmail.com>
Co-authored-by: Antony S Bett <antony.bett@dell.com>
Co-authored-by: Carlisia Thompson <carlisia@vmware.com>
2020-10-20 14:51:30 -04:00
36 changed files with 1046 additions and 268 deletions

View File

@@ -1,20 +0,0 @@
name: "Verify Velero CRDs on k8s 1.16.9"
on: [pull_request]
jobs:
kind:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: engineerd/setup-kind@v0.4.0
with:
image: "kindest/node:v1.16.9"
- name: Testing
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
make local
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

View File

@@ -1,20 +0,0 @@
name: "Verify Velero CRDs on k8s 1.17"
on: [pull_request]
jobs:
kind:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: engineerd/setup-kind@v0.4.0
with:
image: "kindest/node:v1.17.0"
- name: Testing
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
make local
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

View File

@@ -1,20 +0,0 @@
name: "Verify Velero CRDs on k8s 1.18.4"
on: [pull_request]
jobs:
kind:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: engineerd/setup-kind@v0.4.0
with:
image: "kindest/node:v1.18.4"
- name: Testing
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
make local
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

.github/workflows/crds-verify-kind.yaml (new file)
View File

@@ -0,0 +1,86 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

View File

@@ -1,14 +1,20 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
build:
name: Run CI
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Make ci
run: make ci
- name: Make ci
run: make ci

View File

@@ -26,13 +26,29 @@ REGISTRY ?= velero
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
# Build image handling. We push a build image for every changed version of
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
# directory before this Dockerfile is used and any relative path will not be valid.
BUILDER_IMAGE_DOCKERFILE_REALPATH := $(shell realpath $(BUILDER_IMAGE_DOCKERFILE))
# Build image handling. We push a build image for every changed version of
# /hack/build-image/Dockerfile. We tag the dockerfile with the short commit hash
# of the commit that changed it. When determining if there is a build image in
# the registry to use we look for one that matches the current "commit" for the
# Dockerfile else we make one.
# In the case where the Dockerfile for the build image has been overridden using
# the BUILDER_IMAGE_DOCKERFILE variable, we always force a build.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
BUILDER_IMAGE_TAG := "custom"
else
BUILDER_IMAGE_TAG := $(shell git log -1 --pretty=%h $(BUILDER_IMAGE_DOCKERFILE))
endif
BUILDER_IMAGE_TAG := $(shell git log -1 --pretty=%h hack/build-image/Dockerfile)
BUILDER_IMAGE := $(REGISTRY)/build-image:$(BUILDER_IMAGE_TAG)
BUILDER_IMAGE_CACHED := $(shell docker images -q ${BUILDER_IMAGE} 2>/dev/null )
@@ -170,7 +186,7 @@ endif
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
-f Dockerfile .
-f $(VELERO_DOCKERFILE) .
container:
ifneq ($(BUILDX_ENABLED), true)
@@ -186,7 +202,7 @@ endif
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
-f Dockerfile .
-f $(VELERO_DOCKERFILE) .
@echo "container: $(IMAGE):$(VERSION)"
SKIP_TESTS ?=
@@ -233,11 +249,17 @@ build-dirs:
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build .go/golangci-lint
build-env:
@# if we detect changes in dockerfile force a new build-image
@# if we have overridden the value for the build-image Dockerfile,
@# force a build using that Dockerfile
@# if we detect changes in dockerfile force a new build-image
@# else if we dont have a cached image make one
@# finally use the cached image
ifneq ($(shell git diff --quiet HEAD -- hack/build-image/Dockerfile; echo $$?), 0)
@echo "Local changes detected in hack/build-image/Dockerfile"
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden to $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
$(MAKE) build-image
else ifneq ($(shell git diff --quiet HEAD -- $(BUILDER_IMAGE_DOCKERFILE); echo $$?), 0)
@echo "Local changes detected in $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
$(MAKE) build-image
else ifneq ($(BUILDER_IMAGE_CACHED),)
@@ -252,9 +274,9 @@ build-image:
@# This makes sure we don't leave the orphaned image behind.
$(eval old_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
ifeq ($(BUILDX_ENABLED), true)
@cd hack/build-image && docker buildx build --build-arg=GOPROXY=$(GOPROXY) --output=type=docker --pull -t $(BUILDER_IMAGE) .
@cd hack/build-image && docker buildx build --build-arg=GOPROXY=$(GOPROXY) --output=type=docker --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
else
@cd hack/build-image && docker build --build-arg=GOPROXY=$(GOPROXY) --pull -t $(BUILDER_IMAGE) .
@cd hack/build-image && docker build --build-arg=GOPROXY=$(GOPROXY) --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
endif
$(eval new_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
@if [ "$(old_id)" != "" ] && [ "$(old_id)" != "$(new_id)" ]; then \
@@ -264,7 +286,13 @@ endif
push-build-image:
@# this target will push the build-image it assumes you already have docker
@# credentials needed to accomplish this.
docker push $(BUILDER_IMAGE)
@# Pushing will be skipped if a custom Dockerfile was used to build the image.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden"
@echo "Skipping push of custom image"
else
docker push $(BUILDER_IMAGE)
endif
build-image-hugo:
cd site && docker build --pull -t $(HUGO_IMAGE) .

View File

@@ -1,3 +1,62 @@
## v1.5.4
### 2021-03-31
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.4
### Container Image
`velero/velero:v1.5.4`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron)
* Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb)
## v1.5.3
### 2021-01-14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.3
### Container Image
`velero/velero:v1.5.3`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida)
* 🐛 BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath)
* Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object storage. Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron)
* 🐛 ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath)
* 🐛 Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath)
* Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi)
## v1.5.2
### 2020-10-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.2
### Container Image
`velero/velero:v1.5.2`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1)
* cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07)
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
## v1.5.1
### 2020-09-16
@@ -80,3 +139,4 @@ Displays the Timestamps when issued a print or describe (#2748, @thejasbabu)
* when creating new backup from schedule from cli, allow backup name to be automatically generated (#2569, @cblecker)
* Convert manifests + BSL api client to kubebuilder (#2561, @carlisia)
* backup/restore: reinstantiate backup store just before uploading artifacts to ensure credentials are up-to-date (#2550, @skriss)

View File

@@ -51,24 +51,28 @@ func IsReadyToValidate(bslValidationFrequency *metav1.Duration, lastValidationTi
validationFrequency = bslValidationFrequency.Duration
}
if validationFrequency == 0 {
log.Debug("Validation period for this backup location is set to 0, skipping validation")
return false
}
if validationFrequency < 0 {
log.Debugf("Validation period must be non-negative, changing from %d to %d", validationFrequency, defaultLocationInfo.StoreValidationFrequency)
validationFrequency = defaultLocationInfo.StoreValidationFrequency
}
lastValidation := lastValidationTime
if lastValidation != nil { // always ready to validate the first time around, so only even do this check if this has happened before
nextValidation := lastValidation.Add(validationFrequency) // next validation time: last validation time + validation frequency
if time.Now().UTC().Before(nextValidation) { // ready only when NOW is equal to or after the next validation time
return false
}
if lastValidation == nil {
// Regardless of validation frequency, we want to validate all BSLs at least once.
return true
}
if validationFrequency == 0 {
// Validation was disabled so return false.
log.Debug("Validation period for this backup location is set to 0, skipping validation")
return false
}
// We want to validate BSL only if the set validation frequency/ interval has elapsed.
nextValidation := lastValidation.Add(validationFrequency) // next validation time: last validation time + validation frequency
if time.Now().UTC().Before(nextValidation) { // ready only when NOW is equal to or after the next validation time
return false
}
return true
}

View File

@@ -36,17 +36,23 @@ func TestIsReadyToValidate(t *testing.T) {
bslValidationFrequency *metav1.Duration
lastValidationTime *metav1.Time
defaultLocationInfo DefaultBackupLocationInfo
// serverDefaultValidationFrequency time.Duration
// backupLocation *velerov1api.BackupStorageLocation
ready bool
ready bool
}{
{
name: "don't validate, since frequency is set to zero",
name: "validate when true when validation frequency is zero and lastValidationTime is nil",
bslValidationFrequency: &metav1.Duration{Duration: 0},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: true,
},
{
name: "don't validate when false when validation is disabled and lastValidationTime is not nil",
bslValidationFrequency: &metav1.Duration{Duration: 0},
lastValidationTime: &metav1.Time{Time: time.Now()},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: false,
},
{
@@ -63,7 +69,8 @@ func TestIsReadyToValidate(t *testing.T) {
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 1,
},
ready: false,
lastValidationTime: &metav1.Time{Time: time.Now()},
ready: false,
},
{
name: "validate as per default setting when location setting is not set",
@@ -77,7 +84,8 @@ func TestIsReadyToValidate(t *testing.T) {
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: false,
lastValidationTime: &metav1.Time{Time: time.Now()},
ready: false,
},
{
name: "don't validate when now is before the NEXT validation time (validation frequency + last validation time)",
@@ -112,8 +120,8 @@ func TestIsReadyToValidate(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
g := NewWithT(t)
log := velerotest.NewLogger()
g.Expect(IsReadyToValidate(tt.bslValidationFrequency, tt.lastValidationTime, tt.defaultLocationInfo, log)).To(BeIdenticalTo(tt.ready))
actual := IsReadyToValidate(tt.bslValidationFrequency, tt.lastValidationTime, tt.defaultLocationInfo, log)
g.Expect(actual).To(BeIdenticalTo(tt.ready))
})
}
}

View File

@@ -127,7 +127,7 @@ func resolveActions(actions []velero.BackupItemAction, helper discovery.Helper)
return nil, err
}
resources := getResourceIncludesExcludes(helper, resourceSelector.IncludedResources, resourceSelector.ExcludedResources)
resources := collections.GetResourceIncludesExcludes(helper, resourceSelector.IncludedResources, resourceSelector.ExcludedResources)
namespaces := collections.NewIncludesExcludes().Includes(resourceSelector.IncludedNamespaces...).Excludes(resourceSelector.ExcludedNamespaces...)
selector := labels.Everything()
@@ -150,30 +150,6 @@ func resolveActions(actions []velero.BackupItemAction, helper discovery.Helper)
return resolved, nil
}
// getResourceIncludesExcludes takes the lists of resources to include and exclude, uses the
// discovery helper to resolve them to fully-qualified group-resource names, and returns an
// IncludesExcludes list.
func getResourceIncludesExcludes(helper discovery.Helper, includes, excludes []string) *collections.IncludesExcludes {
resources := collections.GenerateIncludesExcludes(
includes,
excludes,
func(item string) string {
gvr, _, err := helper.ResourceFor(schema.ParseGroupResource(item).WithVersion(""))
if err != nil {
// If we can't resolve it, return it as-is. This prevents the generated
// includes-excludes list from including *everything*, if none of the includes
// can be resolved. ref. https://github.com/vmware-tanzu/velero/issues/2461
return item
}
gr := gvr.GroupResource()
return gr.String()
},
)
return resources
}
// getNamespaceIncludesExcludes returns an IncludesExcludes list containing which namespaces to
// include and exclude from the backup.
func getNamespaceIncludesExcludes(backup *velerov1api.Backup) *collections.IncludesExcludes {
@@ -200,7 +176,7 @@ func getResourceHook(hookSpec velerov1api.BackupResourceHookSpec, discoveryHelpe
Name: hookSpec.Name,
Selector: hook.ResourceHookSelector{
Namespaces: collections.NewIncludesExcludes().Includes(hookSpec.IncludedNamespaces...).Excludes(hookSpec.ExcludedNamespaces...),
Resources: getResourceIncludesExcludes(discoveryHelper, hookSpec.IncludedResources, hookSpec.ExcludedResources),
Resources: collections.GetResourceIncludesExcludes(discoveryHelper, hookSpec.IncludedResources, hookSpec.ExcludedResources),
},
Pre: hookSpec.PreHooks,
Post: hookSpec.PostHooks,
@@ -242,7 +218,7 @@ func (kb *kubernetesBackupper) Backup(log logrus.FieldLogger, backupRequest *Req
log.Infof("Including namespaces: %s", backupRequest.NamespaceIncludesExcludes.IncludesString())
log.Infof("Excluding namespaces: %s", backupRequest.NamespaceIncludesExcludes.ExcludesString())
backupRequest.ResourceIncludesExcludes = getResourceIncludesExcludes(kb.discoveryHelper, backupRequest.Spec.IncludedResources, backupRequest.Spec.ExcludedResources)
backupRequest.ResourceIncludesExcludes = collections.GetResourceIncludesExcludes(kb.discoveryHelper, backupRequest.Spec.IncludedResources, backupRequest.Spec.ExcludedResources)
log.Infof("Including resources: %s", backupRequest.ResourceIncludesExcludes.IncludesString())
log.Infof("Excluding resources: %s", backupRequest.ResourceIncludesExcludes.ExcludesString())
log.Infof("Backing up all pod volumes using restic: %t", *backupRequest.Backup.Spec.DefaultVolumesToRestic)

View File

@@ -31,7 +31,6 @@ import (
"k8s.io/apimachinery/pkg/runtime"
v1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
@@ -111,7 +110,7 @@ func fetchV1beta1CRD(name string, betaCRDClient apiextv1beta1client.CustomResour
// See https://github.com/kubernetes/kubernetes/issues/3030. Unsure why this is happening here and not in main Velero;
// probably has to do with List calls and Dynamic client vs typed client
// Set these all the time, since they shouldn't ever be different, anyway
betaCRD.Kind = kuberesource.CustomResourceDefinitions.Resource
betaCRD.Kind = "CustomResourceDefinition"
betaCRD.APIVersion = apiextv1beta1.SchemeGroupVersion.String()
m, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&betaCRD)

View File

@@ -75,6 +75,12 @@ func (b *PodVolumeBackupBuilder) PodName(name string) *PodVolumeBackupBuilder {
return b
}
// PodNamespace sets the name of the pod associated with this PodVolumeBackup.
func (b *PodVolumeBackupBuilder) PodNamespace(ns string) *PodVolumeBackupBuilder {
b.object.Spec.Pod.Namespace = ns
return b
}
// Volume sets the name of the volume associated with this PodVolumeBackup.
func (b *PodVolumeBackupBuilder) Volume(volume string) *PodVolumeBackupBuilder {
b.object.Spec.Volume = volume

View File

@@ -155,6 +155,10 @@ func (f *factory) KubebuilderClient() (kbclient.Client, error) {
Scheme: scheme,
})
if err != nil {
return nil, err
}
return kubebuilderClient, nil
}

View File

@@ -468,6 +468,9 @@ func (s *server) veleroResourcesExist() error {
// have restic restores run before controllers adopt the pods.
// - Replica sets go before deployments/other controllers so they can be explicitly
// restored and be adopted by controllers.
// - CAPI Clusters come before ClusterResourceSets because failing to do so means the CAPI controller-manager will panic.
// Both Clusters and ClusterResourceSets need to come before ClusterResourceSetBinding in order to properly restore workload clusters.
// See https://github.com/kubernetes-sigs/cluster-api/issues/4105
var defaultRestorePriorities = []string{
"customresourcedefinitions",
"namespaces",
@@ -487,6 +490,8 @@ var defaultRestorePriorities = []string{
// to ensure that we prioritize restoring from "apps" too, since this is how they're stored
// in the backup.
"replicasets.apps",
"clusters.cluster.x-k8s.io",
"clusterresourcesets.addons.cluster.x-k8s.io",
}
func (s *server) initRestic() error {

View File

@@ -734,6 +734,10 @@ func persistBackup(backup *pkgbackup.Request,
}
func closeAndRemoveFile(file *os.File, log logrus.FieldLogger) {
if file == nil {
log.Debug("Skipping removal of file due to nil file pointer")
return
}
if err := file.Close(); err != nil {
log.WithError(err).WithField("file", file.Name()).Error("error closing file")
}

View File

@@ -295,13 +295,6 @@ func (c *backupDeletionController) processRequest(req *velerov1api.DeleteBackupR
errs = append(errs, err.Error())
}
// Download the tarball
backupFile, err := downloadToTempFile(backup.Name, backupStore, log)
if err != nil {
return errors.Wrap(err, "error downloading backup")
}
defer closeAndRemoveFile(backupFile, c.logger)
actions, err := pluginManager.GetDeleteItemActions()
log.Debugf("%d actions before invoking actions", len(actions))
if err != nil {
@@ -309,20 +302,30 @@ func (c *backupDeletionController) processRequest(req *velerov1api.DeleteBackupR
}
// don't defer CleanupClients here, since it was already called above.
ctx := &delete.Context{
Backup: backup,
BackupReader: backupFile,
Actions: actions,
Log: c.logger,
DiscoveryHelper: c.helper,
Filesystem: filesystem.NewFileSystem(),
}
if len(actions) > 0 {
// Download the tarball
backupFile, err := downloadToTempFile(backup.Name, backupStore, log)
defer closeAndRemoveFile(backupFile, c.logger)
// Optimization: wrap in a gofunc? Would be useful for large backups with lots of objects.
// but what do we do with the error returned? We can't just swallow it as that may lead to dangling resources.
err = delete.InvokeDeleteActions(ctx)
if err != nil {
return errors.Wrap(err, "error invoking delete item actions")
if err != nil {
log.WithError(err).Errorf("Unable to download tarball for backup %s, skipping associated DeleteItemAction plugins", backup.Name)
} else {
ctx := &delete.Context{
Backup: backup,
BackupReader: backupFile,
Actions: actions,
Log: c.logger,
DiscoveryHelper: c.helper,
Filesystem: filesystem.NewFileSystem(),
}
// Optimization: wrap in a gofunc? Would be useful for large backups with lots of objects.
// but what do we do with the error returned? We can't just swallow it as that may lead to dangling resources.
err = delete.InvokeDeleteActions(ctx)
if err != nil {
return errors.Wrap(err, "error invoking delete item actions")
}
}
}
if backupStore != nil {

View File

@@ -49,6 +49,8 @@ import (
persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/volume"
)
@@ -739,6 +741,265 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
// Make sure snapshot was deleted
assert.Equal(t, 0, td.volumeSnapshotter.SnapshotsTaken.Len())
})
t.Run("backup is not downloaded when there are no DeleteItemAction plugins", func(t *testing.T) {
backup := builder.ForBackup(velerov1api.DefaultNamespace, "foo").Result()
backup.UID = "uid"
backup.Spec.StorageLocation = "primary"
td := setupBackupDeletionControllerTest(t, backup)
location := &velerov1api.BackupStorageLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: backup.Namespace,
Name: backup.Spec.StorageLocation,
},
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "objStoreProvider",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
},
},
},
}
require.NoError(t, td.fakeClient.Create(context.Background(), location))
snapshotLocation := &velerov1api.VolumeSnapshotLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: backup.Namespace,
Name: "vsl-1",
},
Spec: velerov1api.VolumeSnapshotLocationSpec{
Provider: "provider-1",
},
}
require.NoError(t, td.sharedInformers.Velero().V1().VolumeSnapshotLocations().Informer().GetStore().Add(snapshotLocation))
// Clear out req labels to make sure the controller adds them and does not
// panic when encountering a nil Labels map
// (https://github.com/vmware-tanzu/velero/issues/1546)
td.req.Labels = nil
td.client.PrependReactor("get", "backups", func(action core.Action) (bool, runtime.Object, error) {
return true, backup, nil
})
td.volumeSnapshotter.SnapshotsTaken.Insert("snap-1")
td.client.PrependReactor("patch", "deletebackuprequests", func(action core.Action) (bool, runtime.Object, error) {
return true, td.req, nil
})
td.client.PrependReactor("patch", "backups", func(action core.Action) (bool, runtime.Object, error) {
return true, backup, nil
})
snapshots := []*volume.Snapshot{
{
Spec: volume.SnapshotSpec{
Location: "vsl-1",
},
Status: volume.SnapshotStatus{
ProviderSnapshotID: "snap-1",
},
},
}
pluginManager := &pluginmocks.Manager{}
pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{}, nil)
pluginManager.On("CleanupClients")
td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }
td.backupStore.On("GetBackupVolumeSnapshots", td.req.Spec.BackupName).Return(snapshots, nil)
td.backupStore.On("DeleteBackup", td.req.Spec.BackupName).Return(nil)
err := td.controller.processRequest(td.req)
require.NoError(t, err)
td.backupStore.AssertNotCalled(t, "GetBackupContents", td.req.Spec.BackupName)
expectedActions := []core.Action{
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"metadata":{"labels":{"velero.io/backup-name":"foo"}},"status":{"phase":"InProgress"}}`),
),
core.NewGetAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"metadata":{"labels":{"velero.io/backup-uid":"uid"}}}`),
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
types.MergePatchType,
[]byte(`{"status":{"phase":"Deleting"}}`),
),
core.NewDeleteAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"status":{"phase":"Processed"}}`),
),
core.NewDeleteCollectionAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
pkgbackup.NewDeleteBackupRequestListOptions(td.req.Spec.BackupName, "uid"),
),
}
velerotest.CompareActions(t, expectedActions, td.client.Actions())
// Make sure snapshot was deleted
assert.Equal(t, 0, td.volumeSnapshotter.SnapshotsTaken.Len())
})
t.Run("backup is still deleted if downloading tarball fails for DeleteItemAction plugins", func(t *testing.T) {
backup := builder.ForBackup(velerov1api.DefaultNamespace, "foo").Result()
backup.UID = "uid"
backup.Spec.StorageLocation = "primary"
td := setupBackupDeletionControllerTest(t, backup)
location := &velerov1api.BackupStorageLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: backup.Namespace,
Name: backup.Spec.StorageLocation,
},
Spec: velerov1api.BackupStorageLocationSpec{
Provider: "objStoreProvider",
StorageType: velerov1api.StorageType{
ObjectStorage: &velerov1api.ObjectStorageLocation{
Bucket: "bucket",
},
},
},
}
require.NoError(t, td.fakeClient.Create(context.Background(), location))
snapshotLocation := &velerov1api.VolumeSnapshotLocation{
ObjectMeta: metav1.ObjectMeta{
Namespace: backup.Namespace,
Name: "vsl-1",
},
Spec: velerov1api.VolumeSnapshotLocationSpec{
Provider: "provider-1",
},
}
require.NoError(t, td.sharedInformers.Velero().V1().VolumeSnapshotLocations().Informer().GetStore().Add(snapshotLocation))
// Clear out req labels to make sure the controller adds them and does not
// panic when encountering a nil Labels map
// (https://github.com/vmware-tanzu/velero/issues/1546)
td.req.Labels = nil
td.client.PrependReactor("get", "backups", func(action core.Action) (bool, runtime.Object, error) {
return true, backup, nil
})
td.volumeSnapshotter.SnapshotsTaken.Insert("snap-1")
td.client.PrependReactor("patch", "deletebackuprequests", func(action core.Action) (bool, runtime.Object, error) {
return true, td.req, nil
})
td.client.PrependReactor("patch", "backups", func(action core.Action) (bool, runtime.Object, error) {
return true, backup, nil
})
snapshots := []*volume.Snapshot{
{
Spec: volume.SnapshotSpec{
Location: "vsl-1",
},
Status: volume.SnapshotStatus{
ProviderSnapshotID: "snap-1",
},
},
}
pluginManager := &pluginmocks.Manager{}
pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{new(mocks.DeleteItemAction)}, nil)
pluginManager.On("CleanupClients")
td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }
td.backupStore.On("GetBackupVolumeSnapshots", td.req.Spec.BackupName).Return(snapshots, nil)
td.backupStore.On("GetBackupContents", td.req.Spec.BackupName).Return(nil, fmt.Errorf("error downloading tarball"))
td.backupStore.On("DeleteBackup", td.req.Spec.BackupName).Return(nil)
err := td.controller.processRequest(td.req)
require.NoError(t, err)
expectedActions := []core.Action{
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"metadata":{"labels":{"velero.io/backup-name":"foo"}},"status":{"phase":"InProgress"}}`),
),
core.NewGetAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"metadata":{"labels":{"velero.io/backup-uid":"uid"}}}`),
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
types.MergePatchType,
[]byte(`{"status":{"phase":"Deleting"}}`),
),
core.NewDeleteAction(
velerov1api.SchemeGroupVersion.WithResource("backups"),
td.req.Namespace,
td.req.Spec.BackupName,
),
core.NewPatchAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
td.req.Name,
types.MergePatchType,
[]byte(`{"status":{"phase":"Processed"}}`),
),
core.NewDeleteCollectionAction(
velerov1api.SchemeGroupVersion.WithResource("deletebackuprequests"),
td.req.Namespace,
pkgbackup.NewDeleteBackupRequestListOptions(td.req.Spec.BackupName, "uid"),
),
}
velerotest.CompareActions(t, expectedActions, td.client.Actions())
// Make sure snapshot was deleted
assert.Equal(t, 0, td.volumeSnapshotter.SnapshotsTaken.Len())
})
}
func TestBackupDeletionControllerDeleteExpiredRequests(t *testing.T) {

View File

@@ -77,14 +77,14 @@ func (r *BackupStorageLocationReconciler) Reconcile(req ctrl.Request) (ctrl.Resu
defaultFound = true
}
backupStore, err := r.NewBackupStore(location, pluginManager, log)
if err != nil {
log.WithError(err).Error("Error getting a backup store")
if !storage.IsReadyToValidate(location.Spec.ValidationFrequency, location.Status.LastValidationTime, r.DefaultBackupLocationInfo, log) {
log.Debug("Backup location not ready to be validated")
continue
}
if !storage.IsReadyToValidate(location.Spec.ValidationFrequency, location.Status.LastValidationTime, r.DefaultBackupLocationInfo, log) {
log.Debug("Backup location not ready to be validated")
backupStore, err := r.NewBackupStore(location, pluginManager, log)
if err != nil {
log.WithError(err).Error("Error getting a backup store")
continue
}

View File

@@ -120,13 +120,13 @@ var _ = Describe("Backup Storage Location Reconciler", func() {
wantErr bool
}{
{
backupLocation: builder.ForBackupStorageLocation("ns-1", "location-1").ValidationFrequency(0).Result(),
backupLocation: builder.ForBackupStorageLocation("ns-1", "location-1").ValidationFrequency(0).LastValidationTime(time.Now()).Result(),
isValidError: nil,
expectedPhase: "",
wantErr: false,
},
{
backupLocation: builder.ForBackupStorageLocation("ns-1", "location-2").ValidationFrequency(0).Result(),
backupLocation: builder.ForBackupStorageLocation("ns-1", "location-2").ValidationFrequency(0).LastValidationTime(time.Now()).Result(),
isValidError: nil,
expectedPhase: "",
wantErr: false,

View File

@@ -1,5 +1,5 @@
/*
Copyright 2018 the Velero contributors.
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -155,6 +155,12 @@ func (c *podVolumeRestoreController) pvrHandler(obj interface{}) {
return
}
resticInitContainerIndex := getResticInitContainerIndex(pod)
if resticInitContainerIndex > 0 {
log.Warnf(`Init containers before the %s container may cause issues
if they interfere with volumes being restored: %s index %d`, restic.InitContainer, restic.InitContainer, resticInitContainerIndex)
}
log.Debug("Enqueueing")
c.enqueue(obj)
}
@@ -174,6 +180,12 @@ func (c *podVolumeRestoreController) podHandler(obj interface{}) {
return
}
resticInitContainerIndex := getResticInitContainerIndex(pod)
if resticInitContainerIndex > 0 {
log.Warnf(`Init containers before the %s container may cause issues
if they interfere with volumes being restored: %s index %d`, restic.InitContainer, restic.InitContainer, resticInitContainerIndex)
}
selector := labels.Set(map[string]string{
velerov1api.PodUIDLabel: string(pod.UID),
}).AsSelector()
@@ -208,18 +220,21 @@ func isPodOnNode(pod *corev1api.Pod, node string) bool {
}
func isResticInitContainerRunning(pod *corev1api.Pod) bool {
// no init containers, or the first one is not the velero restic one: return false
if len(pod.Spec.InitContainers) == 0 || pod.Spec.InitContainers[0].Name != restic.InitContainer {
return false
// Restic wait container can be anywhere in the list of init containers, but must be running.
i := getResticInitContainerIndex(pod)
return i >= 0 && pod.Status.InitContainerStatuses[i].State.Running != nil
}
func getResticInitContainerIndex(pod *corev1api.Pod) int {
// Restic wait container can be anywhere in the list of init containers so locate it.
for i, initContainer := range pod.Spec.InitContainers {
if initContainer.Name == restic.InitContainer {
return i
}
}
// status hasn't been created yet, or the first one is not yet running: return false
if len(pod.Status.InitContainerStatuses) == 0 || pod.Status.InitContainerStatuses[0].State.Running == nil {
return false
}
// else, it's running
return true
return -1
}
func (c *podVolumeRestoreController) processQueueItem(key string) error {

View File

@@ -1,5 +1,5 @@
/*
Copyright 2018 the Velero contributors.
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -491,7 +491,7 @@ func TestIsResticContainerRunning(t *testing.T) {
expected: false,
},
{
name: "pod with running restic init container that's not first should return false",
name: "pod with running restic init container that's not first should still work",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
@@ -522,7 +522,7 @@ func TestIsResticContainerRunning(t *testing.T) {
},
},
},
expected: false,
expected: true,
},
{
name: "pod with restic init container as first initContainer that's not running should return false",
@@ -598,3 +598,105 @@ func TestIsResticContainerRunning(t *testing.T) {
})
}
}
func TestGetResticInitContainerIndex(t *testing.T) {
tests := []struct {
name string
pod *corev1api.Pod
expected int
}{
{
name: "init container is not present return -1",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
Name: "pod-1",
},
},
expected: -1,
},
{
name: "pod with no restic init container return -1",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
Name: "pod-1",
},
Spec: corev1api.PodSpec{
InitContainers: []corev1api.Container{
{
Name: "non-restic-init",
},
},
},
},
expected: -1,
},
{
name: "pod with restic container as second initContainern should return 1",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
Name: "pod-1",
},
Spec: corev1api.PodSpec{
InitContainers: []corev1api.Container{
{
Name: "non-restic-init",
},
{
Name: restic.InitContainer,
},
},
},
},
expected: 1,
},
{
name: "pod with restic init container as first initContainer should return 0",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
Name: "pod-1",
},
Spec: corev1api.PodSpec{
InitContainers: []corev1api.Container{
{
Name: restic.InitContainer,
},
{
Name: "non-restic-init",
},
},
},
},
expected: 0,
},
{
name: "pod with restic init container as first initContainer should return 0",
pod: &corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns-1",
Name: "pod-1",
},
Spec: corev1api.PodSpec{
InitContainers: []corev1api.Container{
{
Name: restic.InitContainer,
},
{
Name: "non-restic-init",
},
},
},
},
expected: 0,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
assert.Equal(t, test.expected, getResticInitContainerIndex(test.pod))
})
}
}

View File

@@ -46,11 +46,12 @@ var (
DefaultVeleroPodCPURequest = "500m"
DefaultVeleroPodMemRequest = "128Mi"
DefaultVeleroPodCPULimit = "1000m"
DefaultVeleroPodMemLimit = "256Mi"
DefaultVeleroPodMemLimit = "512Mi"
DefaultResticPodCPURequest = "500m"
DefaultResticPodMemRequest = "512Mi"
DefaultResticPodCPULimit = "1000m"
DefaultResticPodMemLimit = "1Gi"
DefaultVeleroNamespace = "velero"
)
func labels() map[string]string {
@@ -105,8 +106,12 @@ func ServiceAccount(namespace string, annotations map[string]string) *corev1.Ser
}
func ClusterRoleBinding(namespace string) *rbacv1beta1.ClusterRoleBinding {
crbName := "velero"
if namespace != DefaultVeleroNamespace {
crbName = "velero-" + namespace
}
crb := &rbacv1beta1.ClusterRoleBinding{
ObjectMeta: objectMeta("", "velero"),
ObjectMeta: objectMeta("", crbName),
TypeMeta: metav1.TypeMeta{
Kind: "ClusterRoleBinding",
APIVersion: rbacv1beta1.SchemeGroupVersion.String(),

View File

@@ -23,7 +23,7 @@ import (
)
func TestResources(t *testing.T) {
bsl := BackupStorageLocation("velero", "test", "test", "", make(map[string]string), []byte("test"))
bsl := BackupStorageLocation(DefaultVeleroNamespace, "test", "test", "", make(map[string]string), []byte("test"))
assert.Equal(t, "velero", bsl.ObjectMeta.Namespace)
assert.Equal(t, "test", bsl.Spec.Provider)
@@ -31,7 +31,7 @@ func TestResources(t *testing.T) {
assert.Equal(t, make(map[string]string), bsl.Spec.Config)
assert.Equal(t, []byte("test"), bsl.Spec.ObjectStorage.CACert)
vsl := VolumeSnapshotLocation("velero", "test", make(map[string]string))
vsl := VolumeSnapshotLocation(DefaultVeleroNamespace, "test", make(map[string]string))
assert.Equal(t, "velero", vsl.ObjectMeta.Namespace)
assert.Equal(t, "test", vsl.Spec.Provider)
@@ -41,12 +41,19 @@ func TestResources(t *testing.T) {
assert.Equal(t, "velero", ns.Name)
crb := ClusterRoleBinding("velero")
crb := ClusterRoleBinding(DefaultVeleroNamespace)
// The CRB is a cluster-scoped resource
assert.Equal(t, "", crb.ObjectMeta.Namespace)
assert.Equal(t, "velero", crb.ObjectMeta.Name)
assert.Equal(t, "velero", crb.Subjects[0].Namespace)
sa := ServiceAccount("velero", map[string]string{"abcd": "cbd"})
customNamespaceCRB := ClusterRoleBinding("foo")
// The CRB is a cluster-scoped resource
assert.Equal(t, "", customNamespaceCRB.ObjectMeta.Namespace)
assert.Equal(t, "velero-foo", customNamespaceCRB.ObjectMeta.Name)
assert.Equal(t, "foo", customNamespaceCRB.Subjects[0].Namespace)
sa := ServiceAccount(DefaultVeleroNamespace, map[string]string{"abcd": "cbd"})
assert.Equal(t, "velero", sa.ObjectMeta.Namespace)
assert.Equal(t, "cbd", sa.ObjectMeta.Annotations["abcd"])
}

View File

@@ -23,7 +23,7 @@ import (
var (
ClusterRoleBindings = schema.GroupResource{Group: "rbac.authorization.k8s.io", Resource: "clusterrolebindings"}
ClusterRoles = schema.GroupResource{Group: "rbac.authorization.k8s.io", Resource: "clusterroles"}
CustomResourceDefinitions = schema.GroupResource{Group: "apiextensions.k8s.io", Resource: "CustomResourceDefinition"}
CustomResourceDefinitions = schema.GroupResource{Group: "apiextensions.k8s.io", Resource: "customresourcedefinitions"}
Jobs = schema.GroupResource{Group: "batch", Resource: "jobs"}
Namespaces = schema.GroupResource{Group: "", Resource: "namespaces"}
PersistentVolumeClaims = schema.GroupResource{Group: "", Resource: "persistentvolumeclaims"}

View File

@@ -95,13 +95,17 @@ func getPodSnapshotAnnotations(obj metav1.Object) map[string]string {
return res
}
func isPVBMatchPod(pvb *velerov1api.PodVolumeBackup, podName string, namespace string) bool {
return podName == pvb.Spec.Pod.Name && namespace == pvb.Spec.Pod.Namespace
}
// GetVolumeBackupsForPod returns a map, of volume name -> snapshot id,
// of the PodVolumeBackups that exist for the provided pod.
func GetVolumeBackupsForPod(podVolumeBackups []*velerov1api.PodVolumeBackup, pod metav1.Object) map[string]string {
func GetVolumeBackupsForPod(podVolumeBackups []*velerov1api.PodVolumeBackup, pod metav1.Object, sourcePodNs string) map[string]string {
volumes := make(map[string]string)
for _, pvb := range podVolumeBackups {
if pod.GetName() != pvb.Spec.Pod.Name {
if !isPVBMatchPod(pvb, pod.GetName(), sourcePodNs) {
continue
}

View File

@@ -47,78 +47,93 @@ func TestGetVolumeBackupsForPod(t *testing.T) {
podVolumeBackups []*velerov1api.PodVolumeBackup
podAnnotations map[string]string
podName string
sourcePodNs string
expected map[string]string
}{
{
name: "nil annotations",
name: "nil annotations results in no volume backups returned",
podAnnotations: nil,
expected: nil,
},
{
name: "empty annotations",
name: "empty annotations results in no volume backups returned",
podAnnotations: make(map[string]string),
expected: nil,
},
{
name: "non-empty map, no snapshot annotation",
name: "pod annotations with no snapshot annotation prefix results in no volume backups returned",
podAnnotations: map[string]string{"foo": "bar"},
expected: nil,
},
{
name: "has snapshot annotation only, no suffix",
podAnnotations: map[string]string{podAnnotationPrefix: "bar"},
expected: map[string]string{"": "bar"},
name: "pod annotation with only snapshot annotation prefix, results in volume backup with empty volume key",
podAnnotations: map[string]string{podAnnotationPrefix: "snapshotID"},
expected: map[string]string{"": "snapshotID"},
},
{
name: "has snapshot annotation only, with suffix",
podAnnotations: map[string]string{podAnnotationPrefix + "foo": "bar"},
expected: map[string]string{"foo": "bar"},
name: "pod annotation with snapshot annotation prefix results in volume backup with volume name and snapshot ID",
podAnnotations: map[string]string{podAnnotationPrefix + "volume": "snapshotID"},
expected: map[string]string{"volume": "snapshotID"},
},
{
name: "has snapshot annotation, with suffix",
podAnnotations: map[string]string{"x": "y", podAnnotationPrefix + "foo": "bar", podAnnotationPrefix + "abc": "123"},
expected: map[string]string{"foo": "bar", "abc": "123"},
name: "only pod annotations with snapshot annotation prefix are considered",
podAnnotations: map[string]string{"x": "y", podAnnotationPrefix + "volume1": "snapshot1", podAnnotationPrefix + "volume2": "snapshot2"},
expected: map[string]string{"volume1": "snapshot1", "volume2": "snapshot2"},
},
{
name: "has snapshot annotation, with suffix, and also PVBs",
name: "pod annotations are not considered if PVBs are provided",
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").SnapshotID("bar").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").SnapshotID("123").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot1").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot2").Volume("pvbtest2-abc").Result(),
},
podName: "TestPod",
sourcePodNs: "TestNS",
podAnnotations: map[string]string{"x": "y", podAnnotationPrefix + "foo": "bar", podAnnotationPrefix + "abc": "123"},
expected: map[string]string{"pvbtest1-foo": "bar", "pvbtest2-abc": "123"},
expected: map[string]string{"pvbtest1-foo": "snapshot1", "pvbtest2-abc": "snapshot2"},
},
{
name: "no snapshot annotation, but with PVBs",
name: "volume backups are returned even if no pod annotations are present",
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").SnapshotID("bar").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").SnapshotID("123").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot1").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot2").Volume("pvbtest2-abc").Result(),
},
podName: "TestPod",
expected: map[string]string{"pvbtest1-foo": "bar", "pvbtest2-abc": "123"},
podName: "TestPod",
sourcePodNs: "TestNS",
expected: map[string]string{"pvbtest1-foo": "snapshot1", "pvbtest2-abc": "snapshot2"},
},
{
name: "no snapshot annotation, but with PVBs, some of which have snapshot IDs and some of which don't",
name: "only volumes from PVBs with snapshot IDs are returned",
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").SnapshotID("bar").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").SnapshotID("123").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("TestPod").Volume("pvbtest3-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-4").PodName("TestPod").Volume("pvbtest4-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot1").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot2").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("TestPod").PodNamespace("TestNS").Volume("pvbtest3-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-4").PodName("TestPod").PodNamespace("TestNS").Volume("pvbtest4-abc").Result(),
},
podName: "TestPod",
expected: map[string]string{"pvbtest1-foo": "bar", "pvbtest2-abc": "123"},
podName: "TestPod",
sourcePodNs: "TestNS",
expected: map[string]string{"pvbtest1-foo": "snapshot1", "pvbtest2-abc": "snapshot2"},
},
{
name: "has snapshot annotation, with suffix, and with PVBs from current pod and a PVB from another pod",
name: "only volumes from PVBs for the given pod are returned",
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").SnapshotID("bar").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").SnapshotID("123").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("TestAnotherPod").SnapshotID("xyz").Volume("pvbtest3-xyz").Result(),
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot1").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot2").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("TestAnotherPod").SnapshotID("snapshot3").Volume("pvbtest3-xyz").Result(),
},
podAnnotations: map[string]string{"x": "y", podAnnotationPrefix + "foo": "bar", podAnnotationPrefix + "abc": "123"},
podName: "TestPod",
expected: map[string]string{"pvbtest1-foo": "bar", "pvbtest2-abc": "123"},
podName: "TestPod",
sourcePodNs: "TestNS",
expected: map[string]string{"pvbtest1-foo": "snapshot1", "pvbtest2-abc": "snapshot2"},
},
{
name: "only volumes from PVBs which match the pod name and source pod namespace are returned",
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("TestPod").PodNamespace("TestNS").SnapshotID("snapshot1").Volume("pvbtest1-foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("TestAnotherPod").PodNamespace("TestNS").SnapshotID("snapshot2").Volume("pvbtest2-abc").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("TestPod").PodNamespace("TestAnotherNS").SnapshotID("snapshot3").Volume("pvbtest3-xyz").Result(),
},
podName: "TestPod",
sourcePodNs: "TestNS",
expected: map[string]string{"pvbtest1-foo": "snapshot1"},
},
}
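Read together, the cases above pin down the new lookup behavior: PodVolumeBackups take precedence over the legacy pod annotations, must match both the pod name and the pod's original (source) namespace, and are skipped when they carry no snapshot ID. Below is a minimal, self-contained sketch of that behavior; it uses simplified stand-in types and an assumed `podAnnotationPrefix` value rather than the real `velerov1api` structs, so it illustrates the logic the tests encode rather than the actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed annotation prefix used by the legacy fallback; the real constant
// lives in the restic package.
const podAnnotationPrefix = "snapshot.velero.io/"

// podVolumeBackup is a simplified stand-in for velerov1api.PodVolumeBackup.
type podVolumeBackup struct {
	podName      string
	podNamespace string
	volume       string
	snapshotID   string
}

// volumeBackupsForPod mirrors the behavior the test cases above describe:
// PVBs are matched by pod name and source namespace, entries without a
// snapshot ID are skipped, and pod annotations are consulted only when no
// matching PVBs are found.
func volumeBackupsForPod(pvbs []podVolumeBackup, podName, sourcePodNs string, podAnnotations map[string]string) map[string]string {
	volumes := map[string]string{}

	for _, pvb := range pvbs {
		if pvb.podName != podName || pvb.podNamespace != sourcePodNs {
			continue // PVB belongs to a different pod or was taken in a different namespace
		}
		if pvb.snapshotID == "" {
			continue // nothing was uploaded for this volume, so there is nothing to restore
		}
		volumes[pvb.volume] = pvb.snapshotID
	}
	if len(volumes) > 0 {
		return volumes
	}

	// Legacy fallback: volume name -> snapshot ID encoded as pod annotations.
	for k, v := range podAnnotations {
		if strings.HasPrefix(k, podAnnotationPrefix) {
			volumes[strings.TrimPrefix(k, podAnnotationPrefix)] = v
		}
	}
	return volumes
}

func main() {
	pvbs := []podVolumeBackup{
		{podName: "TestPod", podNamespace: "TestNS", volume: "pvbtest1-foo", snapshotID: "snapshot1"},
		{podName: "TestPod", podNamespace: "TestAnotherNS", volume: "pvbtest3-xyz", snapshotID: "snapshot3"},
	}
	// Only the PVB recorded against the source namespace is returned.
	fmt.Println(volumeBackupsForPod(pvbs, "TestPod", "TestNS", nil)) // map[pvbtest1-foo:snapshot1]
}
```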
@@ -128,7 +143,7 @@ func TestGetVolumeBackupsForPod(t *testing.T) {
pod.Annotations = test.podAnnotations
pod.Name = test.podName
res := GetVolumeBackupsForPod(test.podVolumeBackups, pod)
res := GetVolumeBackupsForPod(test.podVolumeBackups, pod, test.sourcePodNs)
assert.Equal(t, test.expected, res)
})
}
@@ -564,6 +579,81 @@ func TestGetPodVolumesUsingRestic(t *testing.T) {
}
}
func TestIsPVBMatchPod(t *testing.T) {
testCases := []struct {
name string
pvb velerov1api.PodVolumeBackup
podName string
sourcePodNs string
expected bool
}{
{
name: "should match PVB and pod",
pvb: velerov1api.PodVolumeBackup{
Spec: velerov1api.PodVolumeBackupSpec{
Pod: corev1api.ObjectReference{
Name: "matching-pod",
Namespace: "matching-namespace",
},
},
},
podName: "matching-pod",
sourcePodNs: "matching-namespace",
expected: true,
},
{
name: "should not match PVB and pod, pod name mismatch",
pvb: velerov1api.PodVolumeBackup{
Spec: velerov1api.PodVolumeBackupSpec{
Pod: corev1api.ObjectReference{
Name: "matching-pod",
Namespace: "matching-namespace",
},
},
},
podName: "not-matching-pod",
sourcePodNs: "matching-namespace",
expected: false,
},
{
name: "should not match PVB and pod, pod namespace mismatch",
pvb: velerov1api.PodVolumeBackup{
Spec: velerov1api.PodVolumeBackupSpec{
Pod: corev1api.ObjectReference{
Name: "matching-pod",
Namespace: "matching-namespace",
},
},
},
podName: "matching-pod",
sourcePodNs: "not-matching-namespace",
expected: false,
},
{
name: "should not match PVB and pod, pod name and namespace mismatch",
pvb: velerov1api.PodVolumeBackup{
Spec: velerov1api.PodVolumeBackupSpec{
Pod: corev1api.ObjectReference{
Name: "matching-pod",
Namespace: "matching-namespace",
},
},
},
podName: "not-matching-pod",
sourcePodNs: "not-matching-namespace",
expected: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actual := isPVBMatchPod(&tc.pvb, tc.podName, tc.sourcePodNs)
assert.Equal(t, tc.expected, actual)
})
}
}
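For orientation, the matcher these cases exercise only needs to compare the PVB's recorded pod reference against the pod name and its original (pre-mapping) namespace. A minimal sketch under that assumption, using hypothetical plain-string parameters rather than the real `velerov1api.PodVolumeBackup` type:

```go
package main

import "fmt"

// matchesPod sketches the check these cases describe: a PodVolumeBackup is
// treated as belonging to a pod only when both the recorded pod name and the
// pod's source namespace (the namespace it was backed up from) match.
func matchesPod(pvbPodName, pvbPodNamespace, podName, sourcePodNs string) bool {
	return pvbPodName == podName && pvbPodNamespace == sourcePodNs
}

func main() {
	fmt.Println(matchesPod("matching-pod", "matching-namespace", "matching-pod", "matching-namespace"))     // true
	fmt.Println(matchesPod("matching-pod", "matching-namespace", "matching-pod", "not-matching-namespace")) // false
}
```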
func newFakeClient(t *testing.T, initObjs ...runtime.Object) client.Client {
err := velerov1api.AddToScheme(scheme.Scheme)
require.NoError(t, err)

View File

@@ -92,7 +92,7 @@ func newRestorer(
}
func (r *restorer) RestorePodVolumes(data RestoreData) []error {
volumesToRestore := GetVolumeBackupsForPod(data.PodVolumeBackups, data.Pod)
volumesToRestore := GetVolumeBackupsForPod(data.PodVolumeBackups, data.Pod, data.SourceNamespace)
if len(volumesToRestore) == 0 {
return nil
}

View File

@@ -47,20 +47,6 @@ func (r *pvRestorer) executePVAction(obj *unstructured.Unstructured) (*unstructu
return nil, errors.New("PersistentVolume is missing its name")
}
// It's simpler to just access the spec through the unstructured object than to convert
// to structured and back here, especially since the SetVolumeID(...) call below needs
// the unstructured representation (and does a conversion internally).
res, ok := obj.Object["spec"]
if !ok {
return nil, errors.New("spec not found")
}
spec, ok := res.(map[string]interface{})
if !ok {
return nil, errors.Errorf("spec was of type %T, expected map[string]interface{}", res)
}
delete(spec, "claimRef")
if boolptr.IsSetToFalse(r.snapshotVolumes) {
// The backup had snapshots disabled, so we can return early
return obj, nil

View File

@@ -56,19 +56,6 @@ func TestExecutePVAction_NoSnapshotRestores(t *testing.T) {
restore: builder.ForRestore(api.DefaultNamespace, "").Result(),
expectedErr: true,
},
{
name: "no spec should error",
obj: NewTestUnstructured().WithName("pv-1").Unstructured,
restore: builder.ForRestore(api.DefaultNamespace, "").Result(),
expectedErr: true,
},
{
name: "ensure spec.claimRef is deleted",
obj: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("claimRef", "someOtherField").Unstructured,
restore: builder.ForRestore(api.DefaultNamespace, "").RestorePVs(false).Result(),
backup: defaultBackup().Phase(api.BackupPhaseInProgress).Result(),
expectedRes: NewTestUnstructured().WithAnnotations("a", "b").WithName("pv-1").WithSpec("someOtherField").Unstructured,
},
{
name: "ensure spec.storageClassName is retained",
obj: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("storageClassName", "someOtherField").Unstructured,
@@ -81,7 +68,7 @@ func TestExecutePVAction_NoSnapshotRestores(t *testing.T) {
obj: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("claimRef", "storageClassName", "someOtherField").Unstructured,
restore: builder.ForRestore(api.DefaultNamespace, "").RestorePVs(true).Result(),
backup: defaultBackup().Phase(api.BackupPhaseInProgress).SnapshotVolumes(false).Result(),
expectedRes: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("storageClassName", "someOtherField").Unstructured,
expectedRes: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("claimRef", "storageClassName", "someOtherField").Unstructured,
},
{
name: "restore.spec.restorePVs=false, return early",

View File

@@ -76,6 +76,15 @@ func (a *ResticRestoreAction) Execute(input *velero.RestoreItemActionExecuteInpu
return nil, errors.Wrap(err, "unable to convert pod from runtime.Unstructured")
}
// At the point when this function is called, the namespace mapping for the restore
// has not yet been applied to `input.Item`, so we can't perform a reverse lookup in
// the namespace mapping in the restore spec. Instead, use the pod from the backup
// so that, even if the mapping is applied earlier, we still use the correct (original) namespace.
var podFromBackup corev1.Pod
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(input.ItemFromBackup.UnstructuredContent(), &podFromBackup); err != nil {
return nil, errors.Wrap(err, "unable to convert source pod from runtime.Unstructured")
}
log := a.logger.WithField("pod", kube.NamespaceAndName(&pod))
opts := label.NewListOptionsForBackup(input.Restore.Spec.BackupName)
@@ -88,7 +97,7 @@ func (a *ResticRestoreAction) Execute(input *velero.RestoreItemActionExecuteInpu
for i := range podVolumeBackupList.Items {
podVolumeBackups = append(podVolumeBackups, &podVolumeBackupList.Items[i])
}
volumeSnapshots := restic.GetVolumeBackupsForPod(podVolumeBackups, &pod)
volumeSnapshots := restic.GetVolumeBackupsForPod(podVolumeBackups, &pod, podFromBackup.Namespace)
if len(volumeSnapshots) == 0 {
log.Debug("No restic backups found for pod")
return velero.NewRestoreItemActionExecuteOutput(input.Item), nil

View File

@@ -122,6 +122,7 @@ func TestResticRestoreActionExecute(t *testing.T) {
tests := []struct {
name string
pod *corev1api.Pod
podFromBackup *corev1api.Pod
podVolumeBackups []*velerov1api.PodVolumeBackup
want *corev1api.Pod
}{
@@ -173,12 +174,14 @@ func TestResticRestoreActionExecute(t *testing.T) {
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup(veleroNs, "pvb-1").
PodName("my-pod").
PodNamespace("ns-1").
Volume("vol-1").
ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, backupName)).
SnapshotID("foo").
Result(),
builder.ForPodVolumeBackup(veleroNs, "pvb-2").
PodName("my-pod").
PodNamespace("ns-1").
Volume("vol-2").
ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, backupName)).
SnapshotID("foo").
@@ -200,6 +203,49 @@ func TestResticRestoreActionExecute(t *testing.T) {
builder.ForContainer("first-container", "").Result()).
Result(),
},
{
name: "Restoring pod in another namespace adds the restic initContainer and uses the namespace of the backup pod for matching PVBs",
pod: builder.ForPod("new-ns", "my-pod").
Volumes(
builder.ForVolume("vol-1").PersistentVolumeClaimSource("pvc-1").Result(),
builder.ForVolume("vol-2").PersistentVolumeClaimSource("pvc-2").Result(),
).
Result(),
podFromBackup: builder.ForPod("original-ns", "my-pod").
Volumes(
builder.ForVolume("vol-1").PersistentVolumeClaimSource("pvc-1").Result(),
builder.ForVolume("vol-2").PersistentVolumeClaimSource("pvc-2").Result(),
).
Result(),
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup(veleroNs, "pvb-1").
PodName("my-pod").
PodNamespace("original-ns").
Volume("vol-1").
ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, backupName)).
SnapshotID("foo").
Result(),
builder.ForPodVolumeBackup(veleroNs, "pvb-2").
PodName("my-pod").
PodNamespace("original-ns").
Volume("vol-2").
ObjectMeta(builder.WithLabels(velerov1api.BackupNameLabel, backupName)).
SnapshotID("foo").
Result(),
},
want: builder.ForPod("new-ns", "my-pod").
Volumes(
builder.ForVolume("vol-1").PersistentVolumeClaimSource("pvc-1").Result(),
builder.ForVolume("vol-2").PersistentVolumeClaimSource("pvc-2").Result(),
).
InitContainers(
newResticInitContainerBuilder(initContainerImage(defaultImageBase), "").
Resources(&resourceReqs).
SecurityContext(&securityContext).
VolumeMounts(builder.ForVolumeMount("vol-1", "/restores/vol-1").Result(), builder.ForVolumeMount("vol-2", "/restores/vol-2").Result()).
Command([]string{"/velero-restic-restore-helper"}).Result()).
Result(),
},
}
for _, tc := range tests {
@@ -212,12 +258,24 @@ func TestResticRestoreActionExecute(t *testing.T) {
require.NoError(t, err)
}
unstructuredMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pod)
unstructuredPod, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pod)
require.NoError(t, err)
// Default to using the same pod for both Item and ItemFromBackup if podFromBackup is not provided
var unstructuredPodFromBackup map[string]interface{}
if tc.podFromBackup != nil {
unstructuredPodFromBackup, err = runtime.DefaultUnstructuredConverter.ToUnstructured(tc.podFromBackup)
require.NoError(t, err)
} else {
unstructuredPodFromBackup = unstructuredPod
}
input := &velero.RestoreItemActionExecuteInput{
Item: &unstructured.Unstructured{
Object: unstructuredMap,
Object: unstructuredPod,
},
ItemFromBackup: &unstructured.Unstructured{
Object: unstructuredPodFromBackup,
},
Restore: builder.ForRestore(veleroNs, restoreName).
Backup(backupName).

View File

@@ -62,6 +62,14 @@ import (
"github.com/vmware-tanzu/velero/pkg/volume"
)
// These annotations are taken from the Kubernetes persistent volume/persistent volume claim controller.
// They cannot be directly imported because they are part of the kubernetes/kubernetes package, and importing that package is unsupported.
// Their values are well-known and slow-changing. They're duplicated here as constants to provide compile-time checking.
// Originals can be found in kubernetes/kubernetes/pkg/controller/volume/persistentvolume/util/util.go.
const KubeAnnBindCompleted = "pv.kubernetes.io/bind-completed"
const KubeAnnBoundByController = "pv.kubernetes.io/bound-by-controller"
const KubeAnnDynamicallyProvisioned = "pv.kubernetes.io/provisioned-by"
type VolumeSnapshotterGetter interface {
GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
}
@@ -882,6 +890,13 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
return warnings, errs
}
// Check to see if the claimRef.namespace field needs to be remapped, and do so if necessary.
_, err = remapClaimRefNS(ctx, obj)
if err != nil {
errs.Add(namespace, err)
return warnings, errs
}
var shouldRestoreSnapshot bool
if !shouldRenamePV {
// Check if the PV exists in the cluster before attempting to create
@@ -899,6 +914,9 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
}
if shouldRestoreSnapshot {
// reset the PV's binding status so that Kubernetes can properly associate it with the restored PVC.
obj = resetVolumeBindingInfo(obj)
// even if we're renaming the PV, obj still has the old name here, because the pvRestorer
// uses the original name to look up metadata about the snapshot.
ctx.log.Infof("Restoring persistent volume from snapshot.")
@@ -958,8 +976,9 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
default:
ctx.log.Infof("Restoring persistent volume as-is because it doesn't have a snapshot and its reclaim policy is not Delete.")
// we call the pvRestorer here to clear out the PV's claimRef, so it can be re-claimed
// when its PVC is restored.
obj = resetVolumeBindingInfo(obj)
// we call the pvRestorer here to clear out the PV's claimRef.UID, so it can be re-claimed
// when its PVC is restored and gets a new UID.
updatedObj, err := ctx.pvRestorer.executePVAction(obj)
if err != nil {
errs.Add(namespace, fmt.Errorf("error executing PVAction for %s: %v", resourceID, err))
@@ -1052,17 +1071,16 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
return warnings, errs
}
if pvc.Spec.VolumeName != "" && ctx.pvsToProvision.Has(pvc.Spec.VolumeName) {
ctx.log.Infof("Resetting PersistentVolumeClaim %s/%s for dynamic provisioning", namespace, name)
if pvc.Spec.VolumeName != "" {
// This used to happen only for restic volumes, but we now always remove this binding metadata
obj = resetVolumeBindingInfo(obj)
// use the unstructured helpers here since we're only deleting and
// the unstructured converter will add back (empty) fields for metadata
// and status that we removed earlier.
unstructured.RemoveNestedField(obj.Object, "spec", "volumeName")
annotations := obj.GetAnnotations()
delete(annotations, "pv.kubernetes.io/bind-completed")
delete(annotations, "pv.kubernetes.io/bound-by-controller")
obj.SetAnnotations(annotations)
// This is the case for restic volumes, where we need an empty volume to be created instead of restoring one.
// The assumption is that any PV in pvsToProvision doesn't have an associated snapshot.
if ctx.pvsToProvision.Has(pvc.Spec.VolumeName) {
ctx.log.Infof("Resetting PersistentVolumeClaim %s/%s for dynamic provisioning", namespace, name)
unstructured.RemoveNestedField(obj.Object, "spec", "volumeName")
}
}
if newName, ok := ctx.renamedPVs[pvc.Spec.VolumeName]; ok {
@@ -1154,7 +1172,7 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
return warnings, errs
}
if groupResource == kuberesource.Pods && len(restic.GetVolumeBackupsForPod(ctx.podVolumeBackups, obj)) > 0 {
if groupResource == kuberesource.Pods && len(restic.GetVolumeBackupsForPod(ctx.podVolumeBackups, obj, originalNamespace)) > 0 {
restorePodVolumeBackups(ctx, createdObj, originalNamespace)
}
@@ -1215,6 +1233,40 @@ func shouldRenamePV(ctx *restoreContext, obj *unstructured.Unstructured, client
return true, nil
}
// remapClaimRefNS remaps a PersistentVolume's claimRef.Namespace based on a restore's NamespaceMappings, if necessary.
// It returns true if the namespace was remapped, and false if no remapping was needed.
func remapClaimRefNS(ctx *restoreContext, obj *unstructured.Unstructured) (bool, error) {
if len(ctx.restore.Spec.NamespaceMapping) == 0 {
ctx.log.Debug("Persistent volume does not need to have the claimRef.namespace remapped because restore is not remapping any namespaces")
return false, nil
}
// Conversion to the real type here is more readable than all the error checking involved with reading each field individually.
pv := new(v1.PersistentVolume)
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, pv); err != nil {
return false, errors.Wrapf(err, "error converting persistent volume to structured")
}
if pv.Spec.ClaimRef == nil {
ctx.log.Debugf("Persistent volume does not need to have the claimRef.namepace remapped because it's not claimed")
return false, nil
}
targetNS, ok := ctx.restore.Spec.NamespaceMapping[pv.Spec.ClaimRef.Namespace]
if !ok {
ctx.log.Debugf("Persistent volume does not need to have the claimRef.namespace remapped because it's not claimed by a PVC in a namespace that's being remapped")
return false, nil
}
err := unstructured.SetNestedField(obj.Object, targetNS, "spec", "claimRef", "namespace")
if err != nil {
return false, err
}
ctx.log.Debug("Persistent volume's namespace was updated")
return true, nil
}
// restorePodVolumeBackups restores the PodVolumeBackups for the given restored pod
func restorePodVolumeBackups(ctx *restoreContext, createdObj *unstructured.Unstructured, originalNamespace string) {
if ctx.resticRestorer == nil {
@@ -1329,6 +1381,29 @@ func hasDeleteReclaimPolicy(obj map[string]interface{}) bool {
return policy == string(v1.PersistentVolumeReclaimDelete)
}
// resetVolumeBindingInfo clears out metadata from a PersistentVolume or PersistentVolumeClaim that would otherwise make it ineligible to be re-bound by Velero.
func resetVolumeBindingInfo(obj *unstructured.Unstructured) *unstructured.Unstructured {
// Clear out the claimRef UID and resourceVersion, since these values are specific to the claim the volume was previously bound to.
unstructured.RemoveNestedField(obj.Object, "spec", "claimRef", "uid")
unstructured.RemoveNestedField(obj.Object, "spec", "claimRef", "resourceVersion")
// Clear out any annotations used by the Kubernetes PV controllers to track bindings.
annotations := obj.GetAnnotations()
// Upon restore, this new PV will look like a statically provisioned, manually-bound volume rather than one bound by the controller, so remove the annotation that signals that a controller bound it.
delete(annotations, KubeAnnBindCompleted)
// Remove the annotation that signals that the PV is already bound; we want the PV(C) controller to take the two objects and bind them again.
delete(annotations, KubeAnnBoundByController)
// Remove the provisioned-by annotation which signals that the persistent volume was dynamically provisioned; it is now statically provisioned.
delete(annotations, KubeAnnDynamicallyProvisioned)
// GetAnnotations returns a copy, so we have to set them again
obj.SetAnnotations(annotations)
return obj
}
func resetMetadataAndStatus(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
res, ok := obj.Object["metadata"]
if !ok {

View File

@@ -1830,7 +1830,7 @@ func TestRestorePersistentVolumes(t *testing.T) {
},
},
{
name: "when a PV with a reclaim policy of retain has no snapshot and does not exist in-cluster, it gets restored, without its claim ref",
name: "when a PV with a reclaim policy of retain has no snapshot and does not exist in-cluster, it gets restored, with its claim ref",
restore: defaultRestore().Result(),
backup: defaultBackup().Result(),
tarball: test.NewTarWriter(t).
@@ -1849,6 +1849,7 @@ func TestRestorePersistentVolumes(t *testing.T) {
ObjectMeta(
builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1"),
).
ClaimRef("ns-1", "pvc-1").
Result(),
),
},
@@ -2096,13 +2097,12 @@ func TestRestorePersistentVolumes(t *testing.T) {
want: []*test.APIResource{
test.PVs(
builder.ForPersistentVolume("source-pv").AWSEBSVolumeID("source-volume").ClaimRef("source-ns", "pvc-1").Result(),
// note that the renamed PV is not expected to have a claimRef in this test; that would be
// added after creation by the Kubernetes PV/PVC controller when it does a bind.
builder.ForPersistentVolume("renamed-source-pv").
ObjectMeta(
builder.WithAnnotations("velero.io/original-pv-name", "source-pv"),
builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1"),
).
// the namespace for this PV's claimRef should be the one that the PVC was remapped into.
).ClaimRef("target-ns", "pvc-1").
AWSEBSVolumeID("new-volume").
Result(),
),
@@ -2161,6 +2161,7 @@ func TestRestorePersistentVolumes(t *testing.T) {
ObjectMeta(
builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1"),
).
ClaimRef("target-ns", "pvc-1").
AWSEBSVolumeID("new-volume").
Result(),
),
@@ -2221,6 +2222,7 @@ func TestRestorePersistentVolumes(t *testing.T) {
builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1"),
builder.WithAnnotations("velero.io/original-pv-name", "source-pv"),
).
ClaimRef("target-ns", "pvc-1").
AWSEBSVolumeID("new-pvname").
Result(),
),
@@ -2340,13 +2342,12 @@ func TestRestorePersistentVolumes(t *testing.T) {
want: []*test.APIResource{
test.PVs(
builder.ForPersistentVolume("source-pv").AWSEBSVolumeID("source-volume").ClaimRef("source-ns", "pvc-1").Result(),
// note that the renamed PV is not expected to have a claimRef in this test; that would be
// added after creation by the Kubernetes PV/PVC controller when it does a bind.
builder.ForPersistentVolume("volumesnapshotter-renamed-source-pv").
ObjectMeta(
builder.WithAnnotations("velero.io/original-pv-name", "source-pv"),
builder.WithLabels("velero.io/backup-name", "backup-1", "velero.io/restore-name", "restore-1"),
).
ClaimRef("target-ns", "pvc-1").
AWSEBSVolumeID("new-volume").
Result(),
),
@@ -2434,14 +2435,14 @@ func TestRestoreWithRestic(t *testing.T) {
want map[*test.APIResource][]string
}{
{
name: "a pod that exists in given backup and contains associated PVBs should have should have RestorePodVolumes called",
name: "a pod that exists in given backup and contains associated PVBs should have RestorePodVolumes called",
restore: defaultRestore().Result(),
backup: defaultBackup().Result(),
apiResources: []*test.APIResource{test.Pods()},
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-1").PodName("pod-1").SnapshotID("foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("pod-2").SnapshotID("foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("pod-4").SnapshotID("foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-2").PodName("pod-2").PodNamespace("ns-1").SnapshotID("foo").Result(),
builder.ForPodVolumeBackup("velero", "pvb-3").PodName("pod-4").PodNamespace("ns-2").SnapshotID("foo").Result(),
},
podWithPVBs: []*corev1api.Pod{
builder.ForPod("ns-1", "pod-2").
@@ -2846,3 +2847,49 @@ func (h *harness) AddItems(t *testing.T, resource *test.APIResource) {
require.NoError(t, err)
}
}
func Test_resetVolumeBindingInfo(t *testing.T) {
tests := []struct {
name string
obj *unstructured.Unstructured
expected *unstructured.Unstructured
}{
{
name: "PVs that are bound have their binding and dynamic provisioning annotations removed",
obj: NewTestUnstructured().WithMetadataField("kind", "persistentVolume").
WithName("pv-1").WithAnnotations(
KubeAnnBindCompleted,
KubeAnnBoundByController,
KubeAnnDynamicallyProvisioned,
).WithSpecField("claimRef", map[string]interface{}{
"namespace": "ns-1",
"name": "pvc-1",
"uid": "abc",
"resourceVersion": "1"}).Unstructured,
expected: NewTestUnstructured().WithMetadataField("kind", "persistentVolume").
WithName("pv-1").
WithAnnotations().
WithSpecField("claimRef", map[string]interface{}{
"namespace": "ns-1", "name": "pvc-1"}).Unstructured,
},
{
name: "PVCs that are bound have their binding annotations removed, but the volume name stays",
obj: NewTestUnstructured().WithMetadataField("kind", "persistentVolumeClaim").
WithName("pvc-1").WithAnnotations(
KubeAnnBindCompleted,
KubeAnnBoundByController,
KubeAnnDynamicallyProvisioned,
).WithSpecField("volumeName", "pv-1").Unstructured,
expected: NewTestUnstructured().WithMetadataField("kind", "persistentVolumeClaim").
WithName("pvc-1").WithAnnotations().
WithSpecField("volumeName", "pv-1").Unstructured,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
actual := resetVolumeBindingInfo(tc.obj)
assert.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -201,7 +201,10 @@ func GetResourceIncludesExcludes(helper discovery.Helper, includes, excludes []s
func(item string) string {
gvr, _, err := helper.ResourceFor(schema.ParseGroupResource(item).WithVersion(""))
if err != nil {
return ""
// If we can't resolve it, return it as-is. This prevents the generated
// includes-excludes list from including *everything*, if none of the includes
// can be resolved. ref. https://github.com/vmware-tanzu/velero/issues/2461
return item
}
gr := gvr.GroupResource()
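The reasoning in the comment above is easier to see with a toy version of the resolution step. Everything below is illustrative only: the helper names are made up, and it assumes (as the linked issue describes) that an empty includes list is treated as "include everything".

```go
package main

import "fmt"

// resolve stands in for the discovery lookup: known resources resolve to a
// fully-qualified group-resource, unknown ones (e.g. an uninstalled CRD) fail.
func resolve(item string) (string, bool) {
	known := map[string]string{
		"pods":        "pods",
		"deployments": "deployments.apps",
	}
	gr, ok := known[item]
	return gr, ok
}

// buildIncludes mimics the mapping step. With keepUnresolved=false an
// unresolvable include is dropped; with keepUnresolved=true it is kept
// verbatim, so it simply matches nothing instead of emptying the list.
func buildIncludes(items []string, keepUnresolved bool) []string {
	var out []string
	for _, item := range items {
		gr, ok := resolve(item)
		if !ok {
			if !keepUnresolved {
				continue // old behavior: the entry silently disappears
			}
			gr = item // new behavior: keep the literal, unresolvable entry
		}
		out = append(out, gr)
	}
	return out
}

func main() {
	includes := []string{"widgets.example.com"} // a resource type that is not installed

	// Old behavior: nothing survives, and an empty includes list is
	// interpreted as "include everything".
	fmt.Println(buildIncludes(includes, false)) // []

	// New behavior: the literal entry survives, matches no real resource,
	// and nothing unexpected is swept into the backup.
	fmt.Println(buildIncludes(includes, true)) // [widgets.example.com]
}
```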

View File

@@ -80,7 +80,7 @@ At installation, Velero sets default resource requests and limits for the Velero
|CPU request|500m|500m|
|Memory requests|128Mi|512Mi|
|CPU limit|1000m (1 CPU)|1000m (1 CPU)|
|Memory limit|256Mi|1024Mi|
|Memory limit|512Mi|1024Mi|
{{< /table >}}
### Install with custom resource requests and limits
@@ -111,7 +111,7 @@ Update the `spec.template.spec.containers.resources.limits` and `spec.template.s
```bash
kubectl patch deployment velero -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "256Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
```
**restic pod**

View File

@@ -80,7 +80,7 @@ At installation, Velero sets default resource requests and limits for the Velero
|CPU request|500m|500m|
|Memory requests|128Mi|512Mi|
|CPU limit|1000m (1 CPU)|1000m (1 CPU)|
|Memory limit|256Mi|1024Mi|
|Memory limit|512Mi|1024Mi|
{{< /table >}}
### Install with custom resource requests and limits
@@ -111,7 +111,7 @@ Update the `spec.template.spec.containers.resources.limits` and `spec.template.s
```bash
kubectl patch deployment velero -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "256Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
```
**restic pod**