Compare commits


4 Commits

Author SHA1 Message Date
Nolan Brubaker
4fc1c70028 v1.4.3 cherry-picks and changelogs (#3025)
* Ensure PVs and PVCs remain bound when doing a restore (#3007)

* Only remove the UID from a PV's claimRef

The UID is the only part of a claimRef that might prevent a PV from being
rebound correctly on a restore. The namespace and name within the
claimRef should be preserved in order to ensure that the PV is claimed
by the correct PVC on restore.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
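
A minimal sketch of that approach, using the apimachinery `unstructured` helpers (illustrative only, not the actual patch; the PV contents here are made up):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// A made-up unstructured PV with a bound claimRef.
	pv := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "PersistentVolume",
		"spec": map[string]interface{}{
			"claimRef": map[string]interface{}{
				"namespace": "app-ns",
				"name":      "data-pvc",
				"uid":       "1f1d7e62-old-uid",
			},
		},
	}

	// Remove only spec.claimRef.uid; claimRef.namespace and claimRef.name
	// stay intact so the PV is claimed by the correct PVC on restore.
	unstructured.RemoveNestedField(pv, "spec", "claimRef", "uid")

	fmt.Println(pv["spec"])
}
```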

* Remap PVs claimRef.namespace on relevant restores

When remapping namespaces, any included PVs need to have their claimRef
updated to point to the new namespace name in order to be bound to the
correct PVC.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
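
A sketch of what that remapping can look like (hypothetical helper, not Velero's actual code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// remapClaimRefNamespace points a PV's claimRef at the restore's new
// namespace so the PV binds to the restored PVC. Hypothetical helper.
func remapClaimRefNamespace(pv map[string]interface{}, mapping map[string]string) error {
	ns, found, err := unstructured.NestedString(pv, "spec", "claimRef", "namespace")
	if err != nil || !found {
		return err // nothing to remap
	}
	if newNS, ok := mapping[ns]; ok {
		return unstructured.SetNestedField(pv, newNS, "spec", "claimRef", "namespace")
	}
	return nil
}

func main() {
	pv := map[string]interface{}{
		"spec": map[string]interface{}{
			"claimRef": map[string]interface{}{"namespace": "old-ns", "name": "data-pvc"},
		},
	}
	if err := remapClaimRefNamespace(pv, map[string]string{"old-ns": "new-ns"}); err != nil {
		panic(err)
	}
	fmt.Println(pv["spec"]) // claimRef now points at new-ns
}
```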

* Update tests and ensure claimRef namespace remaps

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove lowercased uid field from unstructured PV

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix issues that prevented PVs from being restored

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add changelog

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Dynamically reprovision volumes without snapshots

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update test for lower case uid field

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove stray debugging print statement

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix typo, remove extra code, add tests.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* fix: rename the PV if VolumeSnapshotter has modified the PV name (#2835)

When the VolumeSnapshotter sets the PV name via SetVolumeID and the PV is
not present in the cluster, Velero does not rename the PV. This leaves the
PVC in the Lost state, since the PVC points to the old PV name while the
PV object has been renamed by the VolumeSnapshotter.

Signed-off-by: Pawan <pawan@mayadata.io>
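
One way to picture the fix (hypothetical helper and names; the real change lives in Velero's restore code):

```go
package main

import "fmt"

// trackPVRename records a rename when the VolumeSnapshotter returned a PV
// whose name differs from the backed-up one, so the restored PVC's
// spec.volumeName can be rewritten to the new name. Without this, the PVC
// stays pointed at the old name and ends up Lost.
func trackPVRename(renamedPVs map[string]string, oldName, newName string) {
	if oldName != newName {
		renamedPVs[oldName] = newName
	}
}

func main() {
	renamed := map[string]string{}
	// Pretend SetVolumeID changed the PV name during restore.
	trackPVRename(renamed, "pvc-123", "pvc-123-restored")
	fmt.Println(renamed) // map[pvc-123:pvc-123-restored]
}
```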

* adding a test case for pv rename

Signed-off-by: Pawan <pawan@mayadata.io>
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* restore proper lowercase/plural CRD resource (#2949)

* restore proper lowercase/plural CRD resource

This commit restores the proper resource string
"customresourcedefinitions" for CRD. The prior change to
"CustomResourceDefinition" was made because this was being used
in another place to populate the CRD "Kind" field in
remap_crd_version_action.go -- there, just use the correct Kind
string instead of pulling from Resource.

Signed-off-by: Scott Seago <sseago@redhat.com>
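
A small sketch of the distinction being restored here: the lowercase/plural string belongs to the resource, while `CustomResourceDefinition` is the Kind set on objects (illustrative only):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The resource string, as used for discovery and dynamic clients.
	gvr := schema.GroupVersionResource{
		Group:    "apiextensions.k8s.io",
		Version:  "v1beta1",
		Resource: "customresourcedefinitions", // lowercase/plural
	}
	// The Kind, as set on an object's manifest; use this string directly
	// rather than deriving it from the resource.
	gvk := schema.GroupVersionKind{
		Group:   "apiextensions.k8s.io",
		Version: "v1beta1",
		Kind:    "CustomResourceDefinition",
	}
	fmt.Println(gvr.String(), gvk.String())
}
```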

* add changelog

Signed-off-by: Scott Seago <sseago@redhat.com>

* Update changelogs for v1.4.3

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Pawan Prakash Sharma <pawanprakash101@gmail.com>
Co-authored-by: Scott Seago <sseago@redhat.com>
2020-10-20 14:20:24 -07:00
Nolan Brubaker
56a08a4d69 Backport GitHub Actions Docker image building to 1.4 series (#2706)
* setup ci in github actions

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* remove travis config

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Add changelog for v1.4.2

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* update build status badge on readme

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* don't error during backup when additional items returned by plugin don't exist (#2595)

* log a warning instead of erroring if additional item can't be found

Signed-off-by: Steve Kriss <krisss@vmware.com>
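
A sketch of the warn-and-skip behavior (made-up fetch helper; not the actual backup code):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// getAdditionalItem stands in for fetching an item a plugin asked to
// include in the backup. Hypothetical helper.
func getAdditionalItem(name string) error {
	if name == "missing" {
		return apierrors.NewNotFound(schema.GroupResource{Resource: "configmaps"}, name)
	}
	return nil
}

func main() {
	for _, name := range []string{"present", "missing"} {
		if err := getAdditionalItem(name); err != nil {
			if apierrors.IsNotFound(err) {
				// Warn and keep going instead of failing the whole backup;
				// the warning still shows up in the backup's warning count.
				fmt.Printf("WARNING: additional item %q not found, skipping\n", name)
				continue
			}
			panic(err)
		}
	}
}
```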

* always show backup warning/error count in get/describe

Signed-off-by: Steve Kriss <krisss@vmware.com>

* changelog

Signed-off-by: Steve Kriss <krisss@vmware.com>

* Update changelogs

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
Co-authored-by: Steve Kriss <krisss@vmware.com>
2020-07-13 12:35:26 -07:00
Nolan Brubaker
6e0469e7f3 Add changelogs for v1.4.1 (#2703)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-13 11:06:46 -07:00
Nolan Brubaker
100541d9c2 v1.4.1 cherrypicks (#2704)
* Adjust restic timeout and pod values up (#2696)

* Adjust restic timeout and pod values up

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* 🐛 Use CRD version prior to remap_crd_version backup item action (#2683)

* 🐛 preserve crd version before remapping

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* 🏃‍♂️ pass git state to build from makefile

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Add scripts for tagging Velero releases (#2592)

* Add release tools

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Document the tag-release release tool

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Make sure the upstream used is correct

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add copyright statement

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Address review feedback

* Pause to allow for cherry-picking on the release branch before pushing
  it
* Move master branch logic into an else statement
* Correct typo

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Uncomment check for dirty git working tree

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-13 11:02:42 -07:00
2488 changed files with 70335 additions and 92286 deletions


@@ -1,3 +0,0 @@
.go/
.go.std/
site/


@@ -14,11 +14,11 @@ about: Tell us about a problem you are experiencing
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
- `kubectl logs deployment/velero -n velero`
- `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
- `velero backup logs <backupname>`
- `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
- `velero restore logs <restorename>`
* `kubectl logs deployment/velero -n velero`
* `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
* `velero backup logs <backupname>`
* `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
* `velero restore logs <restorename>`
**Anything else you would like to add:**
@@ -33,12 +33,3 @@ about: Tell us about a problem you are experiencing
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
- OS (e.g. from `/etc/os-release`):
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues; you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"


@@ -23,11 +23,3 @@ about: Suggest an idea for this project
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
- OS (e.g. from `/etc/os-release`):
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues; you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "The project would be better with this feature added"
- :-1: for "This feature will not enhance the project in a meaningful way"


@@ -1,11 +0,0 @@
addReviewers: true
addAssignees: author
# Set this to the length of the reviewers list because the default was not including everyone
numberOfReviewers: 4
reviewers:
- nrb
- ashish-amarnath
- carlisia
- zubron


@@ -1,16 +0,0 @@
name: 'Auto Assign PR Reviewers'
# pull_request_target means that this will run on pull requests, but in the context of the base repo.
# This should mean PRs from forks are supported.
on:
pull_request_target:
types: [opened, reopened, synchronize]
jobs:
add-reviews:
runs-on: ubuntu-latest
steps:
- uses: kentaro-m/auto-assign-action@v1.1.1
with:
configuration-path: ".github/auto_assign.yml"
repo-token: '${{ secrets.GITHUB_TOKEN }}'


@@ -1,86 +0,0 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -


@@ -1,15 +0,0 @@
name: Pull Request Changelog Check
on: [pull_request]
jobs:
build:
name: Run Changelog Check
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}
run: ./hack/changelog-check.sh


@@ -1,20 +0,0 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
build:
name: Run CI
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Make ci
run: make ci


@@ -1,14 +0,0 @@
name: Pull Request Linter Check
on: [pull_request]
jobs:
build:
name: Run Linter Check
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Linter check
run: make lint

.github/workflows/pr.yml

@@ -0,0 +1,14 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
build:
name: Run CI
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Make ci
run: make ci


@@ -1,24 +0,0 @@
name: build-image
on:
push:
branches: [ main ]
paths:
- 'hack/build-image/Dockerfile'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Build
run: make build-image
- name: Publish container image
run: |
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
make push-build-image


@@ -1,8 +1,8 @@
name: Main CI
name: Master CI
on:
push:
branches: [ main ]
branches: [ master ]
tags:
- '*'
@@ -22,13 +22,6 @@ jobs:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up Docker Buildx
id: buildx
uses: crazy-max/ghaction-docker-buildx@v3
with:
buildx-version: latest
qemu-version: latest
- name: Build
run: make local


@@ -1,18 +0,0 @@
on:
issue_comment:
types: [created]
name: Automatic Rebase
jobs:
rebase:
name: Rebase
if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.3.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@@ -38,8 +38,13 @@ Tiltfile
.vscode
*.diff
# Hugo compiled data
site/public
site/resources
# Jekyll compiled data
site/_site
site/.sass-cache
site/.jekyll
site/.jekyll-metadata
site/.bundle
site/vendor
.ruby-version
.vs


@@ -1,8 +1,10 @@
## Current release:
* [CHANGELOG-1.5.md][15]
* [CHANGELOG-1.4.md][14]
## Development release:
* [Unreleased Changes][0]
## Older releases:
* [CHANGELOG-1.4.md][14]
* [CHANGELOG-1.3.md][13]
* [CHANGELOG-1.2.md][12]
* [CHANGELOG-1.1.md][11]
@@ -18,19 +20,18 @@
* [CHANGELOG-0.3.md][1]
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md
[14]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.3.md
[12]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.1.md
[10]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.0.md
[9]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.11.md
[8]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.10.md
[7]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.9.md
[6]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.8.md
[5]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.7.md
[4]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.6.md
[3]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.5.md
[2]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.4.md
[1]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.3.md
[0]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/unreleased
[14]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.3.md
[12]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.1.md
[10]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.0.md
[9]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.11.md
[8]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.10.md
[7]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.9.md
[6]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.8.md
[5]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.7.md
[4]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.6.md
[3]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.5.md
[2]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.4.md
[1]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.3.md
[0]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/unreleased


@@ -1,3 +1,3 @@
# Contributing
Authors are expected to follow some guidelines when submitting PRs. Please see [our documentation](https://velero.io/docs/main/code-standards/) for details.
Authors are expected to follow some guidelines when submitting PRs. Please see [our documentation](https://velero.io/docs/master/code-standards/) for details.


@@ -1,61 +0,0 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$BUILDPLATFORM golang:1.14 as builder-env
ARG GOPROXY
ARG PKG
ARG VERSION
ARG GIT_SHA
ARG GIT_TREE_STATE
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN apt-get update && apt-get install -y bzip2
FROM --platform=$BUILDPLATFORM builder-env as builder
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG PKG
ARG BIN
ARG RESTIC_VERSION
ENV GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT}
RUN mkdir -p /output/usr/bin && \
bash ./hack/download-restic.sh && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN}
FROM ubuntu:focal
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /output /
USER nobody:nogroup

Dockerfile-velero

@@ -0,0 +1,33 @@
# Copyright 2017, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ubuntu:focal
LABEL maintainer="Steve Kriss <krisss@vmware.com>"
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates wget bzip2 && \
wget --quiet https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2 && \
bunzip2 restic_0.9.6_linux_amd64.bz2 && \
mv restic_0.9.6_linux_amd64 /usr/bin/restic && \
chmod +x /usr/bin/restic && \
apt-get remove -y wget bzip2 && \
rm -rf /var/lib/apt/lists/*
ADD /bin/linux/amd64/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]

Dockerfile-velero-arm

@@ -0,0 +1,23 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm32v7/ubuntu:focal
ADD /bin/linux/arm/restic /usr/bin/restic
ADD /bin/linux/arm/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]

Dockerfile-velero-arm64

@@ -0,0 +1,23 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm64v8/ubuntu:focal
ADD /bin/linux/arm64/restic /usr/bin/restic
ADD /bin/linux/arm64/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]

Dockerfile-velero-ppc64le

@@ -0,0 +1,25 @@
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ppc64le/ubuntu:focal
LABEL maintainer="Prajyot Parab <prajyot.parab@ibm.com>"
ADD /bin/linux/ppc64le/restic /usr/bin/restic
ADD /bin/linux/ppc64le/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]


@@ -0,0 +1,23 @@
# Copyright 2018, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ubuntu:focal
LABEL maintainer="Steve Kriss <krisss@vmware.com>"
ADD /bin/linux/amd64/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -0,0 +1,21 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm32v7/ubuntu:focal
ADD /bin/linux/arm/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -0,0 +1,21 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm64v8/ubuntu:focal
ADD /bin/linux/arm64/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -0,0 +1,23 @@
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ppc64le/ubuntu:focal
LABEL maintainer="Prajyot Parab <prajyot.parab@ibm.com>"
ADD /bin/linux/ppc64le/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -1,112 +0,0 @@
# Velero Governance
This document defines the project governance for Velero.
## Overview
**Velero**, an open source project, is committed to building an open, inclusive, productive and self-governing open source community focused on building a high quality tool that enables users to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes. The community is governed by this document, which defines how the community should work together to achieve this goal.
## Code Repositories
The following code repositories are governed by the Velero community and maintained under the `vmware-tanzu` organization.
* **[Velero](https://github.com/vmware-tanzu/velero):** Main Velero codebase
* **[Helm Chart](https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero):** The Helm chart for the Velero server component
* **[Velero CSI Plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi):** This repository contains Velero plugins for snapshotting CSI backed PVCs using the CSI beta snapshot APIs
* **[Velero Plugin for vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere):** This repository contains the Velero Plugin for vSphere. This plugin is a volume snapshotter plugin that provides crash-consistent snapshots of vSphere block volumes and backup of volume data into S3 compatible storage.
* **[Velero Plugin for AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws):** This repository contains the plugins to support running Velero on AWS, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp):** This repository contains the plugins to support running Velero on GCP, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin for Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure):** This repository contains the plugins to support running Velero on Azure, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin Example](https://github.com/vmware-tanzu/velero-plugin-example):** This repository contains example plugins for Velero
## Community Roles
* **Users:** Members that engage with the Velero community via any medium (Slack, GitHub, mailing lists, etc.).
* **Contributors:** Regular contributions to projects (documentation, code reviews, responding to issues, participation in proposal discussions, contributing code, etc.).
* **Maintainers**: The Velero project leaders. They are responsible for the overall health and direction of the project; final reviewers of PRs and responsible for releases. Some Maintainers are responsible for one or more components within a project, acting as technical leads for that component. Maintainers are expected to contribute code and documentation, review PRs including ensuring quality of code, triage issues, proactively fix bugs, and perform maintenance tasks for these components.
### Maintainers
New maintainers must be nominated by an existing maintainer and must be elected by a supermajority of existing maintainers. Likewise, maintainers can be removed by a supermajority of the existing maintainers or can resign by notifying one of the maintainers.
### Supermajority
A supermajority is defined as two-thirds of members in the group.
A supermajority of [Maintainers](#maintainers) is required for certain
decisions as outlined above. A supermajority vote is equivalent to the number of votes in favor being at least twice the number of votes against. For example, if you have 5 maintainers, a supermajority vote is 4 votes. Voting on decisions can happen on the mailing list, GitHub, Slack, email, or via a voting service, when appropriate. Maintainers can either vote "agree, yes, +1", "disagree, no, -1", or "abstain". A vote passes when a supermajority is met. An abstain vote equals not voting at all.
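
A tiny sketch of that arithmetic (assumed formula, ceil(2n/3), matching the example above):

```go
package main

import "fmt"

// supermajority returns the minimum number of votes in favor needed for a
// supermajority: two-thirds of the group, rounded up.
func supermajority(groupSize int) int {
	return (2*groupSize + 2) / 3 // integer ceil(2*groupSize/3)
}

func main() {
	fmt.Println(supermajority(5)) // 4, as in the example above
}
```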
### Decision Making
Ideally, all project decisions are resolved by consensus. If impossible, any
maintainer may call a vote. Unless otherwise specified in this document, any
vote will be decided by a supermajority of maintainers.
Votes by maintainers belonging to the same company
will count as one vote; e.g., 4 maintainers employed by fictional company **Valerium** will
only have **one** combined vote. If voting members from a given company do not
agree, the company's vote is determined by a supermajority of voters from that
company. If no supermajority is achieved, the company is considered to have
abstained.
## Proposal Process
One of the most important aspects in any open source community is the concept
of proposals. Large changes to the codebase and / or new features should be
preceded by a proposal in our community repo. This process allows for all
members of the community to weigh in on the concept (including the technical
details), share their comments and ideas, and offer to help. It also ensures
that members are not duplicating work or inadvertently stepping on toes by
making large conflicting changes.
The project roadmap is defined by accepted proposals.
Proposals should cover the high-level objectives, use cases, and technical
recommendations on how to implement. In general, the community member(s)
interested in implementing the proposal should be either deeply engaged in the
proposal process or be an author of the proposal.
The proposal should be documented as a separate markdown file pushed to the root of the
`design` folder in the [Velero](https://github.com/vmware-tanzu/velero/tree/main/design)
repository via PR. The name of the file should follow the naming pattern `<short
meaningful words joined by '-'>_design.md`, e.g.:
`restore-hooks-design.md`.
Use the [Proposal Template](https://github.com/vmware-tanzu/velero/blob/main/design/_template.md) as a starting point.
### Proposal Lifecycle
The proposal PR can follow the GitHub lifecycle of the PR to indicate its status:
* **Open**: Proposal is created and under review and discussion.
* **Merged**: Proposal has been reviewed and is accepted (either by consensus or through a vote).
* **Closed**: Proposal has been reviewed and was rejected (either by consensus or through a vote).
## Lazy Consensus
To maintain velocity in a project as busy as Velero, the concept of [Lazy
Consensus](http://en.osswiki.info/concepts/lazy_consensus) is practiced. Ideas
and / or proposals should be shared by maintainers via
GitHub with the appropriate maintainer groups (e.g.,
`@vmware-tanzu/velero-maintainers`) tagged. Out of respect for other contributors,
major changes should also be accompanied by a ping on Slack or a note on the
Velero mailing list as appropriate. Authors of proposals, pull requests,
issues, etc. will allow a period of no less than five (5) working days for
comment, remaining cognizant of popular observed world holidays.
Other maintainers may chime in and request additional time for review, but
should remain cognizant of blocking progress and abstain from delaying
progress unless absolutely needed. The expectation is that blocking progress
is accompanied by a guarantee to review and respond to the relevant action(s)
(proposals, PRs, issues, etc.) in short order.
Lazy Consensus is practiced for all projects in the `Velero` org, including
the main project repository and the additional repositories.
Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.


@@ -1,28 +0,0 @@
# Velero Maintainers
[GOVERNANCE.md](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md) describes governance guidelines and maintainer responsibilities.
## Maintainers
| Maintainer | GitHub ID | Affiliation |
| --------------- | --------- | ----------- |
| Carlisia Campos | [carlisia](https://github.com/carlisia) | [VMware](https://www.github.com/vmware/) |
| Nolan Brubaker | [nrb](https://github.com/nrb) | [VMware](https://www.github.com/vmware/) |
| Ashish Amarnath | [ashish-amarnath](https://github.com/ashish-amarnath) | [VMware](https://www.github.com/vmware/) |
| Stephanie Bauman | [stephbman](https://github.com/stephbman) | [VMware](https://www.github.com/vmware/) |
| Bridget McErlean | [zubron](https://github.com/zubron) | [VMware](https://www.github.com/vmware/) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
* Andy Goldstein ([ncdc](https://github.com/ncdc))
* Steve Kriss ([skriss](https://github.com/skriss))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
| ----------------------------- | :---------------------: |
| Technical Lead | Nolan Brubaker (nrb) |
| Kubernetes CSI Liaison | Nolan Brubaker (nrb), Ashish Amarnath (ashish-amarnath) |
| Deployment | Carlisia Campos (carlisia), Carlos Tadeu Panato Junior (cpanato), JenTing Hsiao (jenting) |
| Community Management | Jonas Rosland (jonasrosland) |
| Product Management | Stephanie Bauman (stephbman) |

Makefile

@@ -1,6 +1,6 @@
# Copyright 2016 The Kubernetes Authors.
#
# Modifications Copyright 2020 the Velero contributors.
# Modifications Copyright 2017 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -23,81 +23,26 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
# directory before this Dockerfile is used and any relative path will not be valid.
BUILDER_IMAGE_DOCKERFILE_REALPATH := $(shell realpath $(BUILDER_IMAGE_DOCKERFILE))
# Build image handling. We push a build image for every changed version of
# /hack/build-image/Dockerfile. We tag the image with the short commit hash
# of the commit that changed the Dockerfile. When determining whether there
# is a build image in the registry to use, we look for one that matches the
# current "commit" for the Dockerfile; otherwise we make one.
# In the case where the Dockerfile for the build image has been overridden using
# the BUILDER_IMAGE_DOCKERFILE variable, we always force a build.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
BUILDER_IMAGE_TAG := "custom"
else
BUILDER_IMAGE_TAG := $(shell git log -1 --pretty=%h $(BUILDER_IMAGE_DOCKERFILE))
endif
BUILDER_IMAGE := $(REGISTRY)/build-image:$(BUILDER_IMAGE_TAG)
BUILDER_IMAGE_CACHED := $(shell docker images -q ${BUILDER_IMAGE} 2>/dev/null )
HUGO_IMAGE := hugo-builder
# Which architecture to build - see $(ALL_ARCH) for options.
# if the 'local' rule is being run, detect the ARCH from 'go env'
# if it wasn't specified by the caller.
local : ARCH ?= $(shell go env GOOS)-$(shell go env GOARCH)
ARCH ?= linux-amd64
VERSION ?= main
VERSION ?= master
TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
endif
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
else
BUILDX_ENABLED ?= false
endif
define BUILDX_ERROR
buildx not enabled, refusing to run this recipe
see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
endef
# The version of restic binary to be downloaded for power architecture
RESTIC_VERSION ?= 0.9.6
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 linux-ppc64le
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
CONTAINER_PLATFORMS ?= linux-amd64 linux-ppc64le linux-arm linux-arm64
MANIFEST_PLATFORMS ?= amd64 ppc64le arm arm64
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)
ifneq ($(shell git status --porcelain 2> /dev/null),)
GIT_TREE_STATE ?= dirty
else
GIT_TREE_STATE ?= clean
endif
# The default linters used by lint and local-lint
LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
GIT_DIRTY = $(shell git status --porcelain 2> /dev/null)
###
### These variables should not need tweaking.
@@ -106,7 +51,36 @@ LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
platform_temp = $(subst -, ,$(ARCH))
GOOS = $(word 1, $(platform_temp))
GOARCH = $(word 2, $(platform_temp))
GOPROXY ?= https://proxy.golang.org
# Set default base image dynamically for each arch
ifeq ($(GOARCH),amd64)
DOCKERFILE ?= Dockerfile-$(BIN)
local-arch:
@echo "local environment for amd64 is up-to-date"
endif
ifeq ($(GOARCH),arm)
DOCKERFILE ?= Dockerfile-$(BIN)-arm
local-arch:
@mkdir -p _output/bin/linux/arm/
@wget -q -O - https://github.com/restic/restic/releases/download/v$(RESTIC_VERSION)/restic_$(RESTIC_VERSION)_linux_arm.bz2 | bunzip2 > _output/bin/linux/arm/restic
@chmod a+x _output/bin/linux/arm/restic
endif
ifeq ($(GOARCH),arm64)
DOCKERFILE ?= Dockerfile-$(BIN)-arm64
local-arch:
@mkdir -p _output/bin/linux/arm64/
@wget -q -O - https://github.com/restic/restic/releases/download/v$(RESTIC_VERSION)/restic_$(RESTIC_VERSION)_linux_arm64.bz2 | bunzip2 > _output/bin/linux/arm64/restic
@chmod a+x _output/bin/linux/arm64/restic
endif
ifeq ($(GOARCH),ppc64le)
DOCKERFILE ?= Dockerfile-$(BIN)-ppc64le
local-arch:
RESTIC_VERSION=$(RESTIC_VERSION) \
./hack/get-restic-ppc64le.sh
endif
MULTIARCH_IMAGE = $(REGISTRY)/$(BIN)
IMAGE ?= $(REGISTRY)/$(BIN)-$(GOARCH)
# If you want to build all binaries, see the 'all-build' rule.
# If you want to build all containers, see the 'all-containers' rule.
@@ -119,11 +93,23 @@ build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restic-restore-helper
container-%:
@$(MAKE) --no-print-directory ARCH=$* container
@$(MAKE) --no-print-directory ARCH=$* container BIN=velero-restic-restore-helper
push-%:
@$(MAKE) --no-print-directory ARCH=$* push
@$(MAKE) --no-print-directory ARCH=$* push BIN=velero-restic-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers: container-builder-env
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restic-restore-helper
all-containers: $(addprefix container-, $(CONTAINER_PLATFORMS))
all-push: $(addprefix push-, $(CONTAINER_PLATFORMS))
all-manifests:
@$(MAKE) manifest
@$(MAKE) manifest BIN=velero-restic-restore-helper
local: build-dirs
GOOS=$(GOOS) \
@@ -132,7 +118,7 @@ local: build-dirs
PKG=$(PKG) \
BIN=$(BIN) \
GIT_SHA=$(GIT_SHA) \
GIT_TREE_STATE=$(GIT_TREE_STATE) \
GIT_DIRTY="$(GIT_DIRTY)" \
OUTPUT_DIR=$$(pwd)/_output/bin/$(GOOS)/$(GOARCH) \
./hack/build.sh
@@ -147,14 +133,16 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
PKG=$(PKG) \
BIN=$(BIN) \
GIT_SHA=$(GIT_SHA) \
GIT_TREE_STATE=$(GIT_TREE_STATE) \
GIT_DIRTY=\"$(GIT_DIRTY)\" \
OUTPUT_DIR=/output/$(GOOS)/$(GOARCH) \
./hack/build.sh'"
TTY := $(shell tty -s && echo "-t")
BUILDER_IMAGE := velero-builder
# Example: make shell CMD="date > datefile"
shell: build-dirs build-env
shell: build-dirs build-image
@# bind-mount the Velero root dir in at /github.com/vmware-tanzu/velero
@# because the Kubernetes code-generator tools require the project to
@# exist in a directory hierarchy ending like this (but *NOT* necessarily
@@ -170,41 +158,51 @@ shell: build-dirs build-env
-v "$$(pwd)/.go/std:/go/std:delegated" \
-v "$$(pwd)/.go/std/$(GOOS)/$(GOARCH):/usr/local/go/pkg/$(GOOS)_$(GOARCH)_static:delegated" \
-v "$$(pwd)/.go/go-build:/.cache/go-build:delegated" \
-v "$$(pwd)/.go/golangci-lint:/.cache/golangci-lint:delegated" \
-w /github.com/vmware-tanzu/velero \
$(BUILDER_IMAGE) \
/bin/sh $(CMD)
container-builder-env:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build \
--target=builder-env \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
-f $(VELERO_DOCKERFILE) .
DOTFILE_IMAGE = $(subst :,_,$(subst /,_,$(IMAGE))-$(VERSION))
container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build --pull \
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
-f $(VELERO_DOCKERFILE) .
all-containers:
$(MAKE) container
$(MAKE) container BIN=velero-restic-restore-helper
container: local-arch .container-$(DOTFILE_IMAGE) container-name
.container-$(DOTFILE_IMAGE): _output/bin/$(GOOS)/$(GOARCH)/$(BIN) $(DOCKERFILE)
@cp $(DOCKERFILE) _output/.dockerfile-$(BIN)-$(GOOS)-$(GOARCH)
@docker build --pull -t $(IMAGE):$(VERSION) -f _output/.dockerfile-$(BIN)-$(GOOS)-$(GOARCH) _output
@docker images -q $(IMAGE):$(VERSION) > $@
container-name:
@echo "container: $(IMAGE):$(VERSION)"
push: .push-$(DOTFILE_IMAGE) push-name
.push-$(DOTFILE_IMAGE): .container-$(DOTFILE_IMAGE)
@docker push $(IMAGE):$(VERSION)
ifeq ($(TAG_LATEST), true)
docker tag $(IMAGE):$(VERSION) $(IMAGE):latest
docker push $(IMAGE):latest
endif
@docker images -q $(IMAGE):$(VERSION) > $@
push-name:
@echo "pushed: $(IMAGE):$(VERSION)"
manifest: .manifest-$(MULTIARCH_IMAGE) manifest-name
.manifest-$(MULTIARCH_IMAGE):
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create $(MULTIARCH_IMAGE):$(VERSION) \
$(foreach arch, $(MANIFEST_PLATFORMS), $(MULTIARCH_IMAGE)-$(arch):$(VERSION))
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push --purge $(MULTIARCH_IMAGE):$(VERSION)
ifeq ($(TAG_LATEST), true)
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create $(MULTIARCH_IMAGE):latest \
$(foreach arch, $(MANIFEST_PLATFORMS), $(MULTIARCH_IMAGE)-$(arch):latest)
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push --purge $(MULTIARCH_IMAGE):latest
endif
manifest-name:
@echo "pushed: $(MULTIARCH_IMAGE):$(VERSION)"
SKIP_TESTS ?=
test: build-dirs
ifneq ($(SKIP_TESTS), 1)
@@ -221,108 +219,33 @@ ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/verify-all.sh'"
endif
lint:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS)'"
endif
local-lint:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh $(LINTERS)
endif
lint-all:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS) true'"
endif
local-lint-all:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh $(LINTERS) true
endif
update:
@$(MAKE) shell CMD="-c 'hack/update-all.sh'"
build-dirs:
@mkdir -p _output/bin/$(GOOS)/$(GOARCH)
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build .go/golangci-lint
build-env:
@# if we have overridden the value for the build-image Dockerfile,
@# force a build using that Dockerfile
@# if we detect changes in the Dockerfile, force a new build-image
@# else, if we don't have a cached image, make one
@# finally use the cached image
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden to $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
$(MAKE) build-image
else ifneq ($(shell git diff --quiet HEAD -- $(BUILDER_IMAGE_DOCKERFILE); echo $$?), 0)
@echo "Local changes detected in $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
$(MAKE) build-image
else ifneq ($(BUILDER_IMAGE_CACHED),)
@echo "Using Cached Image: $(BUILDER_IMAGE)"
else
@echo "Trying to pull build-image: $(BUILDER_IMAGE)"
docker pull -q $(BUILDER_IMAGE) || $(MAKE) build-image
endif
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build
build-image:
@# When we build a new image we just untag the old one.
@# This makes sure we don't leave the orphaned image behind.
$(eval old_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
ifeq ($(BUILDX_ENABLED), true)
@cd hack/build-image && docker buildx build --build-arg=GOPROXY=$(GOPROXY) --output=type=docker --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
else
@cd hack/build-image && docker build --build-arg=GOPROXY=$(GOPROXY) --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
endif
$(eval new_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
@if [ "$(old_id)" != "" ] && [ "$(old_id)" != "$(new_id)" ]; then \
docker rmi -f $(old_id) || true; \
fi
push-build-image:
@# this target will push the build-image; it assumes you already have the
@# docker credentials needed to accomplish this.
@# Pushing will be skipped if a custom Dockerfile was used to build the image.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden"
@echo "Skipping push of custom image"
else
docker push $(BUILDER_IMAGE)
endif
build-image-hugo:
cd site && docker build --pull -t $(HUGO_IMAGE) .
cd hack/build-image && docker build --pull -t $(BUILDER_IMAGE) .
clean:
# if we have a cached image then use it to run go clean --modcache
# this check tests whether there is an image id in the BUILDER_IMAGE_CACHED variable.
ifneq ($(strip $(BUILDER_IMAGE_CACHED)),)
$(MAKE) shell CMD="-c 'go clean --modcache'"
docker rmi -f $(BUILDER_IMAGE) || true
endif
rm -rf .container-* _output/.dockerfile-* .push-*
rm -rf .go _output
docker rmi $(HUGO_IMAGE)
docker rmi $(BUILDER_IMAGE)
.PHONY: modules
modules:
go mod tidy
.PHONY: verify-modules
verify-modules: modules
@if !(git diff --quiet HEAD -- go.sum go.mod); then \
echo "go module files are out of date, please commit the changes to go.mod and go.sum"; exit 1; \
fi
ci: verify-modules verify all test
changelog:
hack/changelog.sh
@@ -346,16 +269,38 @@ release:
GITHUB_TOKEN=$(GITHUB_TOKEN) \
RELEASE_NOTES_FILE=$(RELEASE_NOTES_FILE) \
PUBLISH=$(PUBLISH) \
./hack/release-tools/goreleaser.sh'"
./hack/goreleaser.sh'"
serve-docs: build-image-hugo
serve-docs:
docker run \
--rm \
-v "$$(pwd)/site:/srv/hugo" \
-it -p 1313:1313 \
$(HUGO_IMAGE) \
hugo server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
# Please read the documentation in the script for instructions on how to use it.
-v "$$(pwd)/site:/srv/jekyll" \
-it -p 4000:4000 \
jekyll/jekyll \
jekyll serve --livereload --incremental
# gen-docs generates a new versioned docs directory under site/docs. It follows
# this process:
# 1. Copies the contents of the most recently tagged docs directory into the new
# directory, to establish a useful baseline to diff against.
# 2. Adds all copied content from step 1 to git's staging area via 'git add'.
# 3. Replaces the contents of the new docs directory with the contents of the
# 'master' docs directory, updating any version-specific links (e.g. to a
# specific branch of the GitHub repository) to use the new version.
# 4. Copies the previous version's ToC file and runs 'git add' to establish
# a useful baseline to diff against.
# 5. Replaces the content of the new ToC file with the master ToC.
# 6. Updates site/_config.yml and site/_data/toc-mapping.yml to include entries
# for the new version.
#
# The unstaged changes in the working directory can now easily be diff'ed against the
# staged changes using 'git diff' to review all docs changes made since the previous
# tagged version. Once the unstaged changes are ready, they can be added to the
# staging area using 'git add' and then committed.
#
# To run gen-docs: "NEW_DOCS_VERSION=v1.4 VELERO_VERSION=v1.4.0 make gen-docs"
#
# **NOTE**: there are additional manual steps required to finalize the process of generating
# a new versioned docs site. The full process is documented in site/README-JEKYLL.md.
gen-docs:
@hack/release-tools/gen-docs.sh
@hack/gen-docs.sh


@@ -1,7 +0,0 @@
domain: io
repo: github.com/vmware-tanzu/velero
resources:
- group: velero
kind: BackupStorageLocation
version: v1
version: "2"


@@ -1,7 +1,6 @@
![100]
[![Build Status][1]][2] [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3811/badge)](https://bestpractices.coreinfrastructure.org/projects/3811)
[![Build Status][1]][2]
## Overview
@@ -34,8 +33,8 @@ If you are ready to jump in and test, add code, or help with documentation, foll
See [the list of releases][6] to find out about feature changes.
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[1]: https://github.com/vmware-tanzu/velero/workflows/Master%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Master+CI"
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
@@ -48,4 +47,4 @@ See [the list of releases][6] to find out about feature changes.
[29]: https://velero.io/docs/
[30]: https://velero.io/docs/troubleshooting
[31]: https://velero.io/docs/start-contributing
[100]: https://velero.io/docs/main/img/velero.png
[100]: https://velero.io/docs/master/img/velero.png


@@ -1,36 +0,0 @@
## Velero Roadmap
### About this document
This document provides a link to the [Velero Project board](https://app.zenhub.com/workspaces/velero-5c59c15e39d47b774b5864e3/board?repos=99143276,112385197,190224441,214524700,214524630,213946861) that serves as the up-to-date description of items in the release pipeline. The board has separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could conflict with a longer-term plan. You will need the ZenHub plugin to view the board.
### How to help?
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/velero/issues) or in [community meetings](https://velero.io/community/). Please open and comment on an issue if you want to provide suggestions, use cases, and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#proposal-process) in our repo.
For smaller enhancements, you can open an issue to track that initiative or feature request.
We work with and rely on community feedback to focus our efforts to improve Velero and maintain a healthy roadmap.
### Current Roadmap
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: May 2020`
|Theme|Description|Timeline|
|--|--|--|
|Restic Improvements|Introduce improvements in annotating resources for Restic backup|August 2020|
|Extensibility|Add restore hooks for enhanced recovery scenarios|August 2020|
|CSI|Continue improving the CSI snapshot capabilities and participate in the upstream K8s CSI community|Long running (dependent on CSI working group)|
|Backup/Restore|Improvements to long-running copy operations from a performance and reliability standpoint|August 2020|
|UX|Improvements to install and configuration user experience|August 2020|
|Restic Improvements|Improve the use of Restic in Velero and offer stable support|Dec 2020|
|Perf & Scale|Introduce a scalable model by using a worker pod for each backup/restore operation and improve operations|Dec 2020|
|Backup/Restore|Better backup and restore semantics for certain Kubernetes resources like stateful sets, operators|Dec 2020|
|Security|Enable the use of custom credential providers|Dec 2020|
|Self-Service & Multitenancy|Reduce friction by enabling developers to backup their namespaces via self-service. Introduce a Velero multi-tenancy model, enabling owners of namespaces to backup and restore within their access scope|Mar 2021|
|Backup/Restore|Cross availability zone or region backup and restore|Mar 2021|
|Application Consistency|Offer blueprints for backing up and restoring popular applications|May 2021|
|Backup/Restore|Data only backup and restore|May 2021|
|Backup/Restore|Introduce the ability to overwrite existing objects during a restore|May 2021|
|Backup/Restore|What-if dry run for backup and restore|May 2021|


@@ -1,128 +0,0 @@
# Security Release Process
Velero is an open source tool with a growing community devoted to safe backup and restore, disaster recovery, and data migration of Kubernetes resources and persistent volumes. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues.
## Supported Versions
The Velero project maintains the following [governance document](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md), [release document](https://github.com/vmware-tanzu/velero/blob/f42c63af1b9af445e38f78a7256b1c48ef79c10e/site/docs/main/release-instructions.md), and [support document](https://velero.io/docs/main/support-process/). Please refer to these for release and related details. Only the most recent version of Velero is supported. Each [release](https://github.com/vmware-tanzu/velero/releases) includes information about upgrading to the latest version.
## Reporting a Vulnerability - Private Disclosure Process
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
Provide a descriptive subject line and in the body of the email include the following information:
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
## When to report a vulnerability
* When you think Velero has a potential security vulnerability.
* When you suspect a potential vulnerability but you are unsure that it impacts Velero.
* When you know of or suspect a potential vulnerability on another project that is used by Velero.
## Patch, Release, and Disclosure
The VMware Security Team will respond to vulnerability reports as follows:
1. The Security Team will investigate the vulnerability and determine its effects and criticality.
2. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection.
3. The Security Team will initiate a conversation with the reporter within 3 business days.
4. If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out.
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report to public disclosure to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
## Public Disclosure Process
The Security Team publishes a [public advisory](https://github.com/vmware-tanzu/velero/security/advisories) to the Velero community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blog and other channels will assist in educating Velero users and rolling out the patched release to affected users.
The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Velero instances. Velero distributors will handle creating and publishing their own security advisories.
## Mailing lists
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
## Early Disclosure to Velero Distributors List
The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues.
## Membership Criteria
To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list, you should:
1. Be an active distributor of Velero.
2. Have a user base that is not limited to your own organization.
3. Have a publicly verifiable track record up to the present day of fixing security issues.
4. Not be a downstream or rebuild of another distributor.
5. Be a participant and active contributor in the Velero community.
6. Accept the Embargo Policy that is outlined below.
7. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution.
**The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance to the terms and conditions of the Embargo Policy.**
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
## Requesting to Join
Send new membership requests to projectvelero-distributors@googlegroups.com. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above.
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.

View File

@@ -4,4 +4,4 @@ Thanks for trying out Velero! We welcome all feedback, find all the ways to conn
- [Velero Community](https://velero.io/community/)
You can find details on the Velero maintainers' support process [here](https://velero.io/docs/main/support-process/).
You can find details on the Velero maintainers' support process [here](https://velero.io/docs/master/support-process/).

View File

@@ -69,8 +69,8 @@ carefully to ensure a successful upgrade!**
- The `Config` CRD has been replaced by `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs.
- The interface for external plugins (object/block stores, backup/restore item actions) has changed. If you have authored any custom plugins, they'll
need to be updated for v0.10.
- The [`ObjectStore.ListCommonPrefixes`](https://github.com/vmware-tanzu/velero/blob/main/pkg/cloudprovider/object_store.go#L50) signature has changed to add a `prefix` parameter.
- Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the [`Server`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/server.go#L37) interface for details.
- The [`ObjectStore.ListCommonPrefixes`](https://github.com/heptio/ark/blob/master/pkg/cloudprovider/object_store.go#L50) signature has changed to add a `prefix` parameter.
- Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the [`Server`](https://github.com/heptio/ark/blob/master/pkg/plugin/server.go#L37) interface for details.
- The organization of Ark data in object storage has changed. Existing data will need to be moved around to conform to the new layout.
### All Changes
@@ -89,7 +89,7 @@ need to be updated for v0.10.
- [ec013e6f](https://github.com/heptio/ark/commit/ec013e6f) Document upgrading plugins in the deployment
- [d6162e94](https://github.com/heptio/ark/commit/d6162e94) fix goreleaser bugs
- [a15df276](https://github.com/heptio/ark/commit/a15df276) Add correct link and change role
- [46bed015](https://github.com/heptio/ark/commit/46bed015) add 0.10 breaking changes warning to readme in main
- [46bed015](https://github.com/heptio/ark/commit/46bed015) add 0.10 breaking changes warning to readme in master
- [e3a7d6a2](https://github.com/heptio/ark/commit/e3a7d6a2) add content for issue 994
- [400911e9](https://github.com/heptio/ark/commit/400911e9) address docs issue #978
- [b818cc27](https://github.com/heptio/ark/commit/b818cc27) don't require a default provider VSL if there's only 1
@@ -247,7 +247,7 @@ need to be updated for v0.10.
- [5b89f7b6](https://github.com/heptio/ark/commit/5b89f7b6) Skip backup sync if it already exists in k8s
- [c6050845](https://github.com/heptio/ark/commit/c6050845) restore controller: switch to 'c' for receiver name
- [706ae07d](https://github.com/heptio/ark/commit/706ae07d) enable a schedule to be provided as the source for a restore
- [aea68414](https://github.com/heptio/ark/commit/aea68414) fix up Slack link in troubleshooting on main branch
- [aea68414](https://github.com/heptio/ark/commit/aea68414) fix up Slack link in troubleshooting on master branch
- [bb8e2e91](https://github.com/heptio/ark/commit/bb8e2e91) Document how to run the Ark server locally
- [dc84e591](https://github.com/heptio/ark/commit/dc84e591) Remove outdated namespace deletion content
- [23abbc9a](https://github.com/heptio/ark/commit/23abbc9a) fix paths

View File

@@ -137,7 +137,7 @@
### Highlights:
* Ark now has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic](https://github.com/restic/restic).
This provides users an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume, whether or not it has snapshot support
integrated with Ark. For more information, see the [documentation](https://github.com/vmware-tanzu/velero/blob/main/docs/restic.md).
integrated with Ark. For more information, see the [documentation](https://github.com/heptio/ark/blob/master/docs/restic.md).
* Support for Prometheus metrics has been added! View total number of backup attempts (including success or failure), total backup size in bytes, and backup
durations. More metrics coming in future releases!

View File

@@ -90,7 +90,7 @@ We fixed a large number of bugs and made some smaller usability improvements in
### All Changes
* Corrected the selfLink for Backup CR in site/docs/main/output-file-format.md (#2292, @RushinthJohn)
* Corrected the selfLink for Backup CR in site/docs/master/output-file-format.md (#2292, @RushinthJohn)
* Back up schema-less CustomResourceDefinitions as v1beta1, even if they are retrieved via the v1 endpoint. (#2264, @nrb)
* Bug fix: restic backup volume snapshot to the second location failed (#2244, @jenting)
* Added support of using PV name from volumesnapshotter('SetVolumeID') in case of PV renaming during the restore (#2216, @mynktl)

View File

@@ -1,3 +1,23 @@
## v1.4.3
### 2020-10-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.4.3
### Container Image
`velero/velero:v1.4.3`
### Documentation
https://velero.io/docs/v1.4/
### Upgrading
https://velero.io/docs/v1.4/upgrade-to-1.4/
### All Changes
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* rename the PV if VolumeSnapshotter has modified the PV name (#2835, @pawanpraka1)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
## v1.4.2
### 2020-07-13

View File

@@ -1,142 +0,0 @@
## v1.5.4
### 2021-03-31
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.4
### Container Image
`velero/velero:v1.5.4`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron)
* Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb)
## v1.5.3
### 2021-01-14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.3
### Container Image
`velero/velero:v1.5.3`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida)
* 🐛 BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath)
* Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object storage. Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron)
* 🐛 ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath)
* 🐛 Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath)
* Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi)
## v1.5.2
### 2020-10-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.2
### Container Image
`velero/velero:v1.5.2`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### All Changes
* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1)
* cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07)
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
## v1.5.1
### 2020-09-16
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.1
### Container Image
`velero/velero:v1.5.1`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### Highlights
* Auto Volume Backup Using Restic with `--default-volumes-to-restic` flag
* DeleteItemAction plugins
* Code modernization
* Restore Hooks: InitContainer Restore Hooks and Exec Restore Hooks
### All Changes
* 🏃‍♂️ add shortnames for CRDs (#2911, @ashish-amarnath)
* Use format version instead of version on `velero backup describe` since version has been deprecated (#2901, @jenting)
* fix EnableAPIGroupVersions output log format (#2882, @jenting)
* Convert ServerStatusRequest controller to kubebuilder (#2838, @carlisia)
* rename the PV if VolumeSnapshotter has modified the PV name (#2835, @pawanpraka1)
* Implement post-restore exec hooks in pod containers (#2804, @areed)
* Check for errors on restic backup command (#2863, @dymurray)
* 🐛 fix passing LDFLAGS across build stages (#2853, @ashish-amarnath)
* Feature: Invoke DeleteItemAction plugins based on backup contents when a backup is deleted. (#2815, @nrb)
* When JSON logging format is enabled, place error message at "error.message" instead of "error" for compatibility with Elasticsearch/ELK and the Elastic Common Schema (#2830, @bgagnon)
* discovery Helper support get GroupVersionResource and an APIResource from GroupVersionKind (#2764, @runzexia)
* Migrate site from Jekyll to Hugo (#2720, @tbatard)
* Add the DeleteItemAction plugin type (#2808, @nrb)
* 🐛 Manually patch the generated yaml for restore CRD as a hacky workaround (#2814, @ashish-amarnath)
* Setup crd validation github action on k8s versions (#2805, @ashish-amarnath)
* 🐛 Supply command to run restic-wait init container (#2802, @ashish-amarnath)
* Make init and exec restore hooks as optional in restore hookSpec (#2793, @ashish-amarnath)
* Implement restore hooks injecting init containers into pod spec (#2787, @ashish-amarnath)
* Pass default-volumes-to-restic flag from create schedule to backup (#2776, @ashish-amarnath)
* Enhance Backup to support backing up resources in specific orders and add --ordered-resources option to support this feature. (#2724, @phuong)
* Fix inconsistent type for the "resource" structured logging field (#2796, @bgagnon)
* Add the ability to set the allowPrivilegeEscalation flag in the securityContext for the Restic restore helper. (#2792, @doughepi)
* Add cacert flag for velero backup-location create (#2778, @jenting)
* Exclude volumes mounting secrets and configmaps from defaulting volume backups to restic (#2762, @ashish-amarnath)
* Add types to implement restore hooks (#2761, @ashish-amarnath)
* Add wait group and error channel for restore hooks to restore context. (#2755, @areed)
* Refactor image builds to use buildx for multi arch image building (#2754, @robreus)
* Add annotation key constants for restore hooks (#2750, @ashish-amarnath)
* Adds Start and CompletionTimestamp to RestoreStatus
Displays the Timestamps when issued a print or describe (#2748, @thejasbabu)
* Move pkg/backup/item_hook_handlers.go to internal/hook (#2734, @nrb)
* add metrics for restic back up operation (#2719, @ashish-amarnath)
* StorageGrid compatibility by removing explicit gzip accept header setting (#2712, @fvsqr)
* restic: add support for setting SecurityContext (runAsUser, runAsGroup) for restore (#2621, @jaygridley)
* Add backupValidationFailureTotal to metrics (#2714, @kathpeony)
* bump Kubernetes module dependencies to v0.18.4 to fix https://github.com/vmware-tanzu/velero/issues/2540 by adding code compatibility with kubernetes v1.18 (#2651, @laverya)
* Add a BSL controller to handle validation + update BSL status phase (validation removed from the server and no longer blocks when there's any invalid BSL) (#2674, @carlisia)
* updated acceptable values on cron schedule from 0-7 to 0-6 (#2676, @dthrasher)
* Improve velero download doc (#2660, @carlisia)
* Update basic-install and release-instructions documentation for Windows Chocolatey package (#2638, @adamrushuk)
* move CSI plugin out of prototype into beta (#2636, @ashish-amarnath)
* Add a new supported provider for an object storage plugin for Storj (#2635, @jessicagreben)
* Update basic-install.md documentation: Add windows cli installation option via chocolatey (#2629, @adamrushuk)
* Documentation: Update Jekyll to 4.1.0. Switch from redcarpet to kramdown for Markdown renderer (#2625, @tbatard)
* improve builder image handling so that we don't rebuild on each `make shell` (#2620, @mauilion)
* first check if there are pending changes on the build-image dockerfile; if so, build it.
* then check if there is an image in the registry; if so, pull it.
* then build an image because we don't have a cached one. (this handles the backward compat case.)
* fix make clean to clear the go mod cache before removing dirs (for containerized builds)
* Add linter checks to Makefile (#2615, @tbatard)
* add a CI check for a changelog file (#2613, @ashish-amarnath)
* implement option to back up all volumes by default with restic (#2611, @ashish-amarnath)
* When a timeout string can't be parsed, log the error as a warning instead of silently consuming the error. (#2610, @nrb)
* Azure: support using `aad-pod-identity` auth when using restic (#2602, @skriss)
* log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API (#2595, @skriss)
* when creating new backup from schedule from cli, allow backup name to be automatically generated (#2569, @cblecker)
* Convert manifests + BSL api client to kubebuilder (#2561, @carlisia)
* backup/restore: reinstantiate backup store just before uploading artifacts to ensure credentials are up-to-date (#2550, @skriss)

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@@ -1,48 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: manager-role
rules:
- apiGroups:
  - velero.io
  resources:
  - backupstoragelocations
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - velero.io
  resources:
  - backupstoragelocations/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - velero.io
  resources:
  - serverstatusrequests
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - velero.io
  resources:
  - serverstatusrequests/status
  verbs:
  - get
  - patch
  - update

View File

@@ -1,16 +0,0 @@
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  creationTimestamp: null
  labels:
    component: velero
  name: default
  namespace: velero
spec:
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
  objectStorage:
    bucket: velero
  provider: aws

View File

@@ -1,25 +0,0 @@
---
apiVersion: velero.io/v1
kind: ServerStatusRequest
metadata:
  creationTimestamp: "2020-08-21T15:34:34Z"
  generateName: velero-cli-
  generation: 1
  name: velero-cli-6wkzd
  namespace: velero
  resourceVersion: "544749"
  selfLink: /apis/velero.io/v1/namespaces/velero/serverstatusrequests/velero-cli-6wkzd
  uid: 335ea64e-1904-40ec-8106-1f2b22e9540e
spec: {}
status:
  phase: Processed
  plugins:
  - kind: ObjectStore
    name: velero.io/aws
  - kind: VolumeSnapshotter
    name: velero.io/aws
  - kind: BackupItemAction
    name: velero.io/crd-remap-version
  - kind: BackupItemAction
    name: velero.io/pod
  processedTimestamp: "2020-08-21T15:34:34Z"

View File

@@ -1,35 +1,35 @@
# Design proposal template `<replace with your proposal's title>`
# Design proposal template (replace with your proposal's title)
One to two sentences that describes the goal of this proposal.
The reader should be able to tell by the title, and the opening paragraph, if this document is relevant to them.
_Note_: The preferred style for design documents is one sentence per line.
*Do not wrap lines*.
This aids review of the document, as changes to a line are not obscured by reflowing, and it has the side effect of avoiding debate about one or two spaces after a period.
_Note_: The name of the file should follow the name pattern `<short meaningful words joined by '-'>_design.md`, e.g:
`listener-design.md`.
## Abstract
One to two sentences that describes the goal of this proposal and the problem being solved by the proposed change.
The reader should be able to tell by the title, and the opening paragraph, if this document is relevant to them.
## Background
One to two paragraphs of exposition to set the context for this proposal.
## Goals
- A short list of things which will be accomplished by implementing this proposal.
- Two things is ok.
- Three is pushing it.
- More than three goals suggests that the proposal's scope is too large.
## Non Goals
- A short list of items which are:
- a. out of scope
- b. follow-on items which are deliberately excluded from this proposal.
## Background
One to two paragraphs of exposition to set the context for this proposal.
## High-Level Design
One to two paragraphs that describe the high level changes that will be made to implement this proposal.
## Detailed Design
A detailed design describing how the changes to the product should be made.
The names of types, fields, interfaces, and methods should be agreed on here, not debated in code review.
@@ -38,16 +38,9 @@ The same applies to changes in CRDs, YAML examples, and so on.
Ideally the changes should be made in sequence so that the work required to implement this design can be done incrementally, possibly in parallel.
## Alternatives Considered
If there are alternative high level or detailed designs that were not pursued they should be called out here with a brief explanation of why they were not pursued.
## Security Considerations
If this proposal has an impact to the security of the product, its users, or data stored or transmitted via the product, they must be addressed here.
## Compatibility
A discussion of any compatibility issues that need to be considered
## Implementation
A description of the implementation, timelines, and any resources that have agreed to contribute.
## Open Issues
A discussion of issues relating to this proposal for which the author does not know the solution. This section may be omitted if there are none.

View File

@@ -1,40 +0,0 @@
## Backup Resources Order
This document proposes a solution that allows users to specify a backup order for resources of a specific resource type.
## Background
During the backup process, users may need to back up resources of a specific type in a specific order to ensure the resources are backed up properly, because these resources are related and ordering might be required to preserve the consistency the applications need to recover themselves from the backup image
(Ex: primary-secondary database pods in a cluster).
## Goals
- Enable users to specify the backup order of resources belonging to a specific resource type
## Alternatives Considered
- Use a plugin to back up a resource and all its sub-resources. For example, use a plugin for StatefulSet and back up the pods belonging to the StatefulSet in a specific order. This plugin solution is not generic and requires a plugin for each resource type.
## High-Level Design
Users will specify a map from resource type to a list of resource names (pairs separated by semicolons). Each name will be in the format "namespaceName/resourceName" to enable ordering across namespaces. Based on this map, the resources of each resource type will be sorted by the order specified in the list of resources. If a resource instance belongs to that specific type but its name is not in the order list, it will be placed behind the resources that are in the list.
### Changes to BackupSpec
Add new field to BackupSpec
type BackupSpec struct {
	...
	// OrderedResources contains a list of key-value pairs that represent the order
	// in which resources belonging to a specific resource type are backed up
	// +optional
	// +nullable
	OrderedResources map[string]string
}
### Changes to itemCollector
The getResourceItems function collects all items belonging to a specific resource type. This function will be enhanced to check the map to see whether OrderedResources has specified an order for this resource type. If such an order exists, the items are sorted in that order before being returned, as in the sketch below.
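To make the sorting step concrete, here is a minimal Go sketch of the described behavior; it is not the actual itemCollector code, and the string-based item representation and the `sortByOrder` name are simplifications invented for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// sortByOrder sorts resource names (in "namespaceName/resourceName" form) so
// that names appearing in order come first, in that order; all remaining
// items keep their original relative order behind them.
func sortByOrder(items []string, order []string) []string {
	rank := make(map[string]int, len(order))
	for i, name := range order {
		rank[name] = i
	}
	sorted := append([]string(nil), items...)
	sort.SliceStable(sorted, func(a, b int) bool {
		ra, okA := rank[sorted[a]]
		rb, okB := rank[sorted[b]]
		switch {
		case okA && okB:
			return ra < rb
		case okA:
			return true // ordered items go before unordered ones
		default:
			return false
		}
	})
	return sorted
}

func main() {
	items := []string{"ns1/pod3", "ns1/pod1", "ns1/pod2"}
	order := []string{"ns1/pod1", "ns1/pod2"}
	fmt.Println(sortByOrder(items, order)) // [ns1/pod1 ns1/pod2 ns1/pod3]
}
```

Using sort.SliceStable preserves the original relative order of items that are not named in the list, which matches the requirement that unlisted resources simply follow the listed ones.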
### Changes to velero CLI
Add a new flag "--ordered-resources" to the Velero backup create command, which takes a string of key-value pairs representing the map between a resource type and the order of the items of that resource type. Key-value pairs are separated by semicolons; items in the value are separated by commas.
Example:
>velero backup create mybackup --ordered-resources "pod=ns1/pod1,ns1/pod2;persistentvolumeclaim=n2/slavepod,ns2/primarypod"
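A hypothetical parser for this flag format might look like the following sketch; `parseOrderedResources` is an invented name, and a real implementation would plug into Velero's CLI flag handling rather than being a standalone program:

```go
package main

import (
	"fmt"
	"strings"
)

// parseOrderedResources splits the flag value into a map from resource type
// to an ordered list of "namespaceName/resourceName" entries. Pairs are
// separated by semicolons, names within a pair by commas.
func parseOrderedResources(s string) (map[string][]string, error) {
	result := map[string][]string{}
	for _, pair := range strings.Split(s, ";") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("invalid entry %q, expected resourceType=name1,name2", pair)
		}
		resourceType := strings.TrimSpace(kv[0])
		for _, name := range strings.Split(kv[1], ",") {
			result[resourceType] = append(result[resourceType], strings.TrimSpace(name))
		}
	}
	return result, nil
}

func main() {
	m, err := parseOrderedResources("pod=ns1/pod1,ns1/pod2;persistentvolumeclaim=ns2/slavepod,ns2/primarypod")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // map[persistentvolumeclaim:[ns2/slavepod ns2/primarypod] pod:[ns1/pod1 ns1/pod2]]
}
```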
## Open Issues
- In the CLI, the design proposes to use commas to separate items of a resource type and semicolons to separate key-value pairs. This follows the convention of using commas to separate items in a list (for example: --include-namespaces ns1,ns2). However, the syntax for maps in labels and annotations uses commas to separate key-value pairs, so it introduces some inconsistency.
- For pods that are managed by a Deployment or DaemonSet, this design may not work because the pods' names are randomly generated; if pods are restarted, they would have different names, so the Backup operation may not consider the restarted pods in the sorting algorithm. This problem will be addressed when we enhance the design to use regular expressions to specify the OrderedResources instead of exact matches.

View File

@@ -276,7 +276,7 @@ The value for these flags will be stored as annotations.
#### Handling CA certs
In anticipation of a new configuration implementation to handle custom CA certs (as per design doc https://github.com/vmware-tanzu/velero/blob/main/design/custom-ca-support.md), a new flag `velero storage-location create/set --cacert-file mapStringString` is proposed. It sets the configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file.
In anticipation of a new configuration implementation to handle custom CA certs (as per design doc https://github.com/vmware-tanzu/velero/blob/master/design/custom-ca-support.md), a new flag `velero storage-location create/set --cacert-file mapStringString` is proposed. It sets the configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file.
See discussion https://github.com/vmware-tanzu/velero/pull/2259#discussion_r384700723 for more clarification.
@@ -370,4 +370,4 @@ https://github.com/jpeach/contour/tree/1c575c772e9fd747fba72ae41ab99bdae7a01864/
## Security Considerations
N/A
N/A

View File

@@ -306,16 +306,16 @@ Without these objects, the provider-level snapshots cannot be located in order t
[1]: https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/
[2]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L41
[3]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L161
[4]: https://github.com/heptio/velero/blob/main/pkg/volume/snapshot.go#L21
[5]: https://github.com/heptio/velero/blob/main/pkg/apis/velero/v1/pod_volume_backup.go#L88
[4]: https://github.com/heptio/velero/blob/master/pkg/volume/snapshot.go#L21
[5]: https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/pod_volume_backup.go#L88
[6]: https://github.com/heptio/velero-csi-plugin/
[7]: https://github.com/heptio/velero/blob/main/pkg/plugin/velero/volume_snapshotter.go#L26
[8]: https://github.com/heptio/velero/blob/main/pkg/controller/backup_controller.go#L560
[9]: https://github.com/heptio/velero/blob/main/pkg/persistence/object_store.go#L46
[10]: https://github.com/heptio/velero/blob/main/pkg/apis/velero/v1/labels_annotations.go#L21
[11]: https://github.com/heptio/velero/blob/main/pkg/cmd/server/server.go#L471
[12]: https://github.com/heptio/velero/blob/main/pkg/cmd/util/output/backup_describer.go
[13]: https://github.com/heptio/velero/blob/main/pkg/cmd/util/output/backup_describer.go#L214
[7]: https://github.com/heptio/velero/blob/master/pkg/plugin/velero/volume_snapshotter.go#L26
[8]: https://github.com/heptio/velero/blob/master/pkg/controller/backup_controller.go#L560
[9]: https://github.com/heptio/velero/blob/master/pkg/persistence/object_store.go#L46
[10]: https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/labels_annotations.go#L21
[11]: https://github.com/heptio/velero/blob/master/pkg/cmd/server/server.go#L471
[12]: https://github.com/heptio/velero/blob/master/pkg/cmd/util/output/backup_describer.go
[13]: https://github.com/heptio/velero/blob/master/pkg/cmd/util/output/backup_describer.go#L214
[14]: https://github.com/kubernetes/kubernetes/blob/8ea9edbb0290e9de1e6d274e816a4002892cca6f/pkg/controller/volume/persistentvolume/util/util.go#L69
[15]: https://github.com/kubernetes/kubernetes/pull/30285
[16]: https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L237

View File

@@ -73,8 +73,8 @@ This same approach can be taken for CA bundles. The bundle can be stored in a
secret which is referenced on the BSL and written to a temp file prior to
invoking Restic.
[1](https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/repository_manager.go#L238-L245)
[2](https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/common.go#L168-L203)
[1](https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/repository_manager.go#L238-L245)
[2](https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/common.go#L168-L203)
## Detailed Design
@@ -126,7 +126,7 @@ would look like:
$ velero client config set cacert PATH
```
[1]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/velero-plugin-for-aws/object_store.go#L135
[2]: https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/command.go#L47
[3]: https://github.com/restic/restic/blob/main/internal/backend/http_transport.go#L81
[4]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/velero-plugin-for-aws/object_store.go#L154
[1]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/velero-plugin-for-aws/object_store.go#L135
[2]: https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/command.go#L47
[3]: https://github.com/restic/restic/blob/master/internal/backend/http_transport.go#L81
[4]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/velero-plugin-for-aws/object_store.go#L154

View File

@@ -1,199 +0,0 @@
# Delete Item Action Plugins
## Abstract
Velero should provide a way to delete items created during a backup, with a model and interface similar to that of BackupItemAction and RestoreItemAction plugins.
These plugins would be invoked when a backup is deleted, and would receive items from within the backup tarball.
## Background
As part of Container Storage Interface (CSI) snapshot support, Velero added a new pattern for backing up and restoring snapshots via BackupItemAction and RestoreItemAction plugins.
When others have tried to use this pattern, however, they encountered issues with deleting the resources made in their own ItemAction plugins, as Velero does not expose any sort of extension at backup deletion time.
These plugins largely seek to delete resources that exist outside of Kubernetes.
This design seeks to provide the missing extension point.
## Goals
- Provide a DeleteItemAction API for plugins to implement
- Update Velero backup deletion logic to invoke registered DeleteItemAction plugins.
## Non Goals
- Specific implementations of the DeleteItemAction API beyond test cases.
- Rollback of DeleteItemAction execution.
## High-Level Design
The DeleteItemAction plugin API will closely resemble the RestoreItemAction plugin design, in that plugins will receive the Velero `Backup` Go struct that is being deleted and a matching Kubernetes resource extracted from the backup tarball.
The Velero backup deletion process will be modified so that if there are any DeleteItemAction plugins registered, the backup tarball will be downloaded and extracted, similar to how restore logic works now.
Then, each item in the backup tarball will be iterated over to see if a DeleteItemAction plugin matches for it.
If a DeleteItemAction plugin matches, the `Backup` and relevant item will be passed to the DeleteItemAction.
The DeleteItemAction plugins will be run _first_ in the backup deletion process, before deleting snapshots from storage or `Restore`s from the Kubernetes API server.
DeleteItemAction plugins *cannot* rollback their actions.
This is because there is currently no way to recover other deleted components of a backup, such as volume/restic snapshots or other DeleteItemAction resources.
DeleteItemAction plugins will be run in alphanumeric order based on their registered names.
## Detailed Design
### New types
The `DeleteItemAction` interface is as follows:
```go
// DeleteItemAction is an actor that performs an action based on an item in a backup that is being deleted.
type DeleteItemAction interface {
	// AppliesTo returns information about which resources this action should be invoked for.
	// A DeleteItemAction's Execute function will only be invoked on items that match the returned
	// selector. A zero-valued ResourceSelector matches all resources.
	AppliesTo() (ResourceSelector, error)

	// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
	Execute(DeleteItemActionInput) error
}
```
The `DeleteItemActionInput` type is defined as follows:
```go
type DeleteItemActionInput struct {
	// Item is the item taken from the pristine backed up version of resource.
	Item runtime.Unstructured

	// Backup is the representation of the backup resource processed by Velero.
	Backup *api.Backup
}
```
Both `DeleteItemAction` and `DeleteItemActionInput` will be defined in `pkg/plugin/velero/delete_item_action.go`.
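As a rough illustration of how a plugin author might implement this interface, using the types exactly as proposed above, consider a hypothetical plugin that cleans up an external snapshot recorded in an annotation during backup; `SnapshotDeleteAction`, the `example.io/snapshot-id` annotation, and `deleteExternalSnapshot` are all invented for this sketch:

```go
package plugin

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// SnapshotDeleteAction is a hypothetical plugin that removes an external
// (non-Kubernetes) snapshot referenced by an annotation on a backed-up PVC.
type SnapshotDeleteAction struct{}

// AppliesTo restricts this action to persistent volume claims.
func (a *SnapshotDeleteAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{IncludedResources: []string{"persistentvolumeclaims"}}, nil
}

// Execute reads the snapshot ID stashed on the item during backup and deletes
// the corresponding snapshot from the external system.
func (a *SnapshotDeleteAction) Execute(input velero.DeleteItemActionInput) error {
	snapshotID, found, err := unstructured.NestedString(
		input.Item.UnstructuredContent(), "metadata", "annotations", "example.io/snapshot-id")
	if err != nil || !found {
		return err // no snapshot recorded for this item; nothing to clean up
	}
	return deleteExternalSnapshot(snapshotID)
}

// deleteExternalSnapshot stands in for a call to the external storage system.
func deleteExternalSnapshot(id string) error {
	fmt.Println("deleting external snapshot", id)
	return nil
}
```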
### Generate protobuf definitions and client/servers
In `pkg/plugin/proto`, add `DeleteItemAction.proto`.
Protobuf definitions will be necessary for:
```protobuf
message DeleteItemActionExecuteRequest {
...
}
message DeleteItemActionExecuteResponse {
...
}
message DeleteItemActionAppliesToRequest {
...
}
message DeleteItemActionAppliesToResponse {
...
}
service DeleteItemAction {
rpc AppliesTo(DeleteItemActionAppliesToRequest) returns (DeleteItemActionAppliesToResponse)
rpc Execute(DeleteItemActionExecuteRequest) returns (DeleteItemActionExecuteResponse)
}
```
Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/delete_item_action_client.go` and `pkg/plugin/framework/delete_item_action_server.go`, respectively.
These should be largely the same as the client and server implementations for `RestoreItemAction` and `BackupItemAction` plugins.
### Restartable delete plugins
Similar to `RestoreItemAction` and `BackupItemAction` plugins, restartable processes will need to be implemented.
In `pkg/plugin/clientmgmt`, add `restartable_delete_item_action.go`, creating the following unexported type:
```go
type restartableDeleteItemAction struct {
	key                 kindAndName
	sharedPluginProcess RestartableProcess
	config              map[string]string
}

// newRestartableDeleteItemAction returns a new restartableDeleteItemAction.
func newRestartableDeleteItemAction(name string, sharedPluginProcess RestartableProcess) *restartableDeleteItemAction {
	// ...
}

// getDeleteItemAction returns the delete item action for this restartableDeleteItemAction. It does *not* restart the
// plugin process.
func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAction, error) {
	// ...
}

// getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction.
func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) {
	// ...
}

// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableDeleteItemAction) AppliesTo() (velero.ResourceSelector, error) {
	// ...
}

// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableDeleteItemAction) Execute(input *velero.DeleteItemActionInput) error {
	// ...
}
```
This file will be very similar in structure to the existing restartable plugin implementations, such as `restartable_restore_item_action.go`.
### Plugin manager changes
Add the following methods to `pkg/plugin/clientmgmt/manager.go`'s `Manager` interface:
```go
type Manager interface {
	...
	// GetDeleteItemAction returns a DeleteItemAction plugin for name.
	GetDeleteItemAction(name string) (DeleteItemAction, error)

	// GetDeleteItemActions returns all DeleteItemAction plugins.
	GetDeleteItemActions() ([]DeleteItemAction, error)
}
```
The unexported `manager` type should implement both `GetDeleteItemAction` and `GetDeleteItemActions`.
Both of these methods should have the same exception for `velero.io/`-prefixed plugins that all other types do.
`GetDeleteItemAction` and `GetDeleteItemActions` will invoke the `restartableDeleteItemAction` implementations.
### Deletion controller modifications
`pkg/controller/backup_deletion_controller.go` will be updated to invoke the plugin manager.
In `processRequest`, before deleting snapshots, get any registered `DeleteItemAction` plugins.
If there are none, proceed as normal.
If there are one or more, download the backup tarball from backup storage, untar it to temporary storage, and iterate through the items, matching them to the applicable plugins, as sketched below.
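A rough sketch of that matching loop, assuming the interface exactly as proposed above, follows; `matches` is a hypothetical helper (a real implementation would compare the item's group/resource, namespace, and labels against the selector), and error handling is simplified:

```go
package controller

import (
	"log"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// matches is a hypothetical helper: a real implementation would compare the
// item's group/resource, namespace, and labels against the selector.
func matches(selector velero.ResourceSelector, item *unstructured.Unstructured) bool {
	return true
}

// runDeleteItemActions sketches the loop described above: every item from the
// extracted tarball is offered to every registered plugin whose selector matches.
func runDeleteItemActions(backup *api.Backup, items []unstructured.Unstructured, actions []velero.DeleteItemAction) {
	for i := range items {
		item := &items[i]
		for _, action := range actions {
			selector, err := action.AppliesTo()
			if err != nil {
				log.Printf("skipping plugin, AppliesTo failed: %v", err)
				continue
			}
			if !matches(selector, item) {
				continue
			}
			// DeleteItemActions cannot roll back, so a failure is logged and
			// the rest of the deletion proceeds rather than aborting.
			if err := action.Execute(velero.DeleteItemActionInput{Item: item, Backup: backup}); err != nil {
				log.Printf("DeleteItemAction failed for %s: %v", item.GetName(), err)
			}
		}
	}
}
```

Since deletion actions cannot be rolled back, the sketch logs and skips a failing plugin rather than aborting the remaining deletion work.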
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.
The `VolumeSnapshotter` interface is not generic enough to meet the requirements here, as it is specifically for taking snapshots of block devices.
## Security Considerations
By their nature, `DeleteItemAction` plugins will be deleting data, which would normally be a security concern.
However, these will only be invoked in two situations: either when a `BackupDeleteRequest` is sent via a user with the `velero` CLI or some other management system, or when a Velero `Backup` expires by going over its TTL.
Because of this, the data deletion is not a concern.
## Compatibility
In terms of backwards compatibility, this design should stay compatible with most Velero installations that are upgrading.
If no DeleteItemAction plugins are present, the backup deletion process proceeds the same way it did prior to their inclusion.
## Implementation
The implementation dependencies are, roughly, in the order as they are described in the [Detailed Design](#detailed-design) section.
## Open Issues

View File

@@ -1,82 +0,0 @@
# Deletion Plugins
Status: Alternative Proposal
## Abstract
Velero should introduce a new type of plugin that runs when a backup is deleted.
These plugins will delete any external resources associated with the backup so that they will not be left orphaned.
## Background
With the CSI plugin, Velero developers introduced a pattern of using BackupItemAction and RestoreItemAction plugins tied to PersistentVolumeClaims to create other resources to complete a backup.
In the CSI plugin case, Velero does clean up of these other resources, which are Kubernetes Custom Resources, within the core Velero server.
However, for external plugins that wish to use this same pattern, this is not a practical solution.
Velero's core cannot be extended for all possible Custom Resources, and not all external resources that get created are Kubernetes Custom Resources.
Therefore, Velero needs some mechanism that allows plugin authors who have created resources within a BackupItemAction or RestoreItemAction plugin to ensure those resources are deleted, regardless of what system those resources reside in.
## Goals
- Provide a new plugin type in Velero that is invoked when a backup is deleted.
## Non Goals
- Implementations of specific deletion plugins.
- Rollback of deletion plugin execution.
## High-Level Design
Velero will provide a new plugin type that is similar to its existing plugin architecture.
These plugins will be referred to as `DeleteAction` plugins.
`DeleteAction` plugins will receive the `Backup` CustomResource being deleted on execution.
`DeleteAction` plugins cannot prevent deletion of an item.
This is because multiple `DeleteAction` plugins can be registered, and this proposal does not include rollback and undoing of a deletion action.
Thus, if multiple `DeleteAction` plugins have already run and another then requested that the deletion of the backup be stopped, the backup that's retained would be inconsistent.
`DeleteActions` will apply to `Backup`s based on a label on the `Backup` itself.
In order to ensure that `Backup`s don't execute `DeleteAction` plugins that are not relevant to them, `DeleteAction` plugins can register an `AppliesTo` function which will define a label selector on Velero backups.
`DeleteActions` will be run in alphanumerical order by plugin name.
This order is somewhat arbitrary, but will be used to give authors and users a somewhat predictable order of events.
## Detailed Design
The `DeleteAction` plugins will implement the following Go interface, defined in `pkg/plugin/velero/deletion_action.go`:
```go
type DeleteAction interface {
	// AppliesTo will match the DeleteAction plugin against Velero Backups that it should operate against.
	AppliesTo()

	// Execute runs the custom plugin logic and may connect to external services.
	Execute(backup *api.Backup) error
}
```
The following methods would be added to the `clientmgmt.Manager` interface in `pkg/plugin/clientmgmt/manager.go`:
```
type Manager interface {
	...
	// GetDeleteActions returns the registered DeleteActions.
	// TODO: do we need to get these by name, or can we get them all?
	GetDeleteActions() ([]velero.DeleteAction, error)
	...
}
```
## Alternatives Considered
TODO
## Security Considerations
TODO
## Compatibility
Backwards compatibility should be straightforward; if there are no installed `DeleteAction` plugins, then the backup deletion process will proceed as it does today.
## Implementation
TODO
## Open Issues
In order to add a custom label to the backup, the backup must be modifiable inside of the `BackupItemAction` and `RestoreItemAction` plugins, which it currently is not. A workaround for now is for the user to apply a label to the backup at creation time, but that is not ideal.

View File

@@ -3,7 +3,7 @@
Status: Accepted
Some features may take a while to get fully implemented, and we don't necessarily want to have long-lived feature branches
A simple feature flag implementation allows code to be merged into main, but not used unless a flag is set.
A simple feature flag implementation allows code to be merged into master, but not used unless a flag is set.
## Goals

View File

@@ -1,448 +0,0 @@
# Progress reporting for backups and restores handled by volume snapshotters
Users face difficulty in knowing the progress of backup/restore operations performed by volume snapshotters. This is very similar to the issues users face in knowing the progress of restic backup/restore, such as estimating the duration of an operation or telling whether an operation is in progress or hung.
Each plugin might provide a way to know the progress, but it need not be uniform across plugins.
Even when plugins provide a way to know the progress of a backup operation, this information won't be available to the user at restore time on the destination cluster.
So, apart from issues like progress and operation status, volume snapshotters have unique problems such as:
- progress reporting not being uniform across plugins
- the backup information not being known during the restore operation
- the mechanism needing to be optional, as a few plugins may not have a way to provide progress information
This document proposes an approach for plugins to follow in reporting backup/restore progress, which users can then consult to know the progress.
## Goals
- Provide a uniform way of gaining visibility into backup/restore operations performed by volume snapshotters
## Non Goals
- Plugin implementation for this approach
## Background
(Omitted, see introduction)
## High-Level Design
### Progress of backup operation handled by volume snapshotter
Progress will be updated by volume snapshotter in VolumePluginBackup CR which is specific to that backup operation.
### Progress of restore operation handled by volume snapshotter
Progress will be updated by volume snapshotter in VolumePluginRestore CR which is specific to that restore operation.
## Detailed Design
### Approach 1
The existing `Snapshot` Go struct from the `volume` package has most of the details related to backup operations performed by volume snapshotters.
This struct also gets backed up to the backup location, but it doesn't get synced to other clusters at regular intervals.
It is currently synced only during the restore operation, and the velero CLI shows a few of its contents.
At a high level, in this approach, this struct will be converted to a CR by adding new fields (related to progress tracking) to it, and the `volume.Snapshot` struct will be removed.
Instead of backing up the Go struct, the proposal is to back up CRs to the backup location and sync them into other clusters via the backupSyncController running in each cluster.
#### VolumePluginBackup CR
There is one addition to volume.SnapshotSpec, i.e., `Provider`, to convert it to the CR's spec. Below is the updated VolumePluginBackup CR's spec:
```
type VolumePluginBackupSpec struct {
	// BackupName is the name of the Velero backup this snapshot
	// is associated with.
	BackupName string `json:"backupName"`

	// BackupUID is the UID of the Velero backup this snapshot
	// is associated with.
	BackupUID string `json:"backupUID"`

	// Location is the name of the VolumeSnapshotLocation where this snapshot is stored.
	Location string `json:"location"`

	// PersistentVolumeName is the Kubernetes name for the volume.
	PersistentVolumeName string `json:"persistentVolumeName"`

	// ProviderVolumeID is the provider's ID for the volume.
	ProviderVolumeID string `json:"providerVolumeID"`

	// Provider is the Provider field given in VolumeSnapshotLocation
	Provider string `json:"provider"`

	// VolumeType is the type of the disk/volume in the cloud provider
	// API.
	VolumeType string `json:"volumeType"`

	// VolumeAZ is the where the volume is provisioned
	// in the cloud provider.
	VolumeAZ string `json:"volumeAZ,omitempty"`

	// VolumeIOPS is the optional value of provisioned IOPS for the
	// disk/volume in the cloud provider API.
	VolumeIOPS *int64 `json:"volumeIOPS,omitempty"`
}
```
A few fields (all except the first two) are added to volume.SnapshotStatus to convert it to the CR's status. Below is the updated VolumePluginBackup CR's status:
```
type VolumePluginBackupStatus struct {
	// ProviderSnapshotID is the ID of the snapshot taken in the cloud
	// provider API of this volume.
	ProviderSnapshotID string `json:"providerSnapshotID,omitempty"`

	// Phase is the current state of the VolumeSnapshot.
	Phase SnapshotPhase `json:"phase,omitempty"`

	// PluginSpecific are a map of key-value pairs that plugin want to provide
	// to user to identify plugin properties related to this backup
	// +optional
	PluginSpecific map[string]string `json:"pluginSpecific,omitempty"`

	// Message is a message about the volume plugin's backup's status.
	// +optional
	Message string `json:"message,omitempty"`

	// StartTimestamp records the time a backup was started.
	// Separate from CreationTimestamp, since that value changes
	// on restores.
	// The server's time is used for StartTimestamps
	// +optional
	// +nullable
	StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`

	// CompletionTimestamp records the time a backup was completed.
	// Completion time is recorded even on failed backups.
	// Completion time is recorded before uploading the backup object.
	// The server's time is used for CompletionTimestamps
	// +optional
	// +nullable
	CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`

	// Progress holds the total number of bytes of the volume and the current
	// number of backed up bytes. This can be used to display progress information
	// about the backup operation.
	// +optional
	Progress VolumeOperationProgress `json:"progress,omitempty"`
}

type VolumeOperationProgress struct {
	TotalBytes int64
	BytesDone  int64
}

type VolumePluginBackup struct {
	metav1.TypeMeta `json:",inline"`

	// +optional
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// +optional
	Spec VolumePluginBackupSpec `json:"spec,omitempty"`

	// +optional
	Status VolumePluginBackupStatus `json:"status,omitempty"`
}
```
For every backup operation on a volume, Velero creates a VolumePluginBackup CR before calling the volumesnapshotter's CreateSnapshot API.
In order to identify the CR created for a particular backup of a volume, Velero adds the following labels to the CR:
- `velero.io/backup-name`, with the backup name as the value, and,
- `velero.io/pv-name`, with the volume undergoing backup as the value
Since backup names are unique, duplicates won't be an issue when identifying the CR.
Labels will be set to the value returned from the `GetValidName` function (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
If a plugin supports showing the progress of the operation it is performing, it does the following (see the sketch after this list):
- finds the VolumePluginBackup CR related to this backup operation by using the `tags` passed in the CreateSnapshot call
- updates the CR with the progress regularly
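As an illustration, a plugin could locate and update its CR using the dynamic client roughly as follows; the `volumepluginbackups` group/version/resource and the status field paths follow this proposal and do not exist in Velero today:

```go
package plugin

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// volumePluginBackupGVR assumes the CRD proposed in this document; the
// "volumepluginbackups" resource is not an existing Velero API.
var volumePluginBackupGVR = schema.GroupVersionResource{
	Group: "velero.io", Version: "v1", Resource: "volumepluginbackups",
}

// updateProgress finds the CR labeled with this backup/PV pair and records
// the bytes transferred so far in status.progress. Per this proposal, label
// values would be normalized with GetValidName; that step is omitted here.
func updateProgress(client dynamic.Interface, ns, backupName, pvName string, total, done int64) error {
	list, err := client.Resource(volumePluginBackupGVR).Namespace(ns).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "velero.io/backup-name=" + backupName + ",velero.io/pv-name=" + pvName,
	})
	if err != nil || len(list.Items) == 0 {
		return err
	}
	cr := &list.Items[0]
	if err := unstructured.SetNestedField(cr.Object, total, "status", "progress", "totalBytes"); err != nil {
		return err
	}
	if err := unstructured.SetNestedField(cr.Object, done, "status", "progress", "bytesDone"); err != nil {
		return err
	}
	_, err = client.Resource(volumePluginBackupGVR).Namespace(ns).UpdateStatus(context.TODO(), cr, metav1.UpdateOptions{})
	return err
}
```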
After the return from `CreateSnapshot` in `takePVSnapshot`, Velero currently adds `volume.Snapshot` to `backupRequest`. Instead of this, the CR will be added to `backupRequest`.
During the persistBackup call, this CR will also be backed up to the backup location.
The backupSyncController checks for any VolumePluginBackup CRs that need to be synced from the backup location, and syncs them to the cluster if needed.
VolumePluginBackup is useful as long as the backed-up data is available at the backup location. When the Backup is deleted, either manually or due to expiry, the VolumePluginBackup can also be deleted.
`processRequest` of `backupDeletionController` will delete the VolumePluginBackup before the volumesnapshotter's DeleteSnapshot is called.
#### Backward compatibility:
Currently, `volume.Snapshot` is backed up as the `<backupname>-volumesnapshots.json.gz` file in the backup location.
As the VolumePluginBackup CR is backed up instead of `volume.Snapshot`, to provide backward compatibility the CR will be backed up as the same file, i.e., the `<backupname>-volumesnapshots.json.gz` file in the backup location.
For backward compatibility on the restore side, consider the possible combinations of the Velero version on the restore side and the format of the json.gz file at the object location:
- older version of Velero, older json.gz file (backupname-volumesnapshots.json.gz)
- older version of Velero, newer json.gz file
- newer version of Velero, older json.gz file
- newer version of Velero, newer json.gz file
The first and last cases should be fine.
For the second case, the decode in `GetBackupVolumeSnapshots` on the restore side should fill only the required fields of the older version and should work.
For the third case, after decoding, metadata.name will be empty. `GetBackupVolumeSnapshots` decodes the older json.gz into the CR, which works fine.
It will be modified to return []VolumePluginBackupSpec, with the corresponding changes made in its caller.
If the decode fails in the second case during implementation, this CR needs to be backed up to a different file. For backward compatibility, newer code should then check whether the old file exists and follow the older code path if it does; if it doesn't exist, it should check for the newer file and follow the newer code path.
The `backupSyncController` on restore clusters gets the `<backupname>-volumesnapshots.json.gz` object from the backup location and decodes it into an in-memory VolumePluginBackup CR (see the sketch below). If its `metadata.name` is populated, the controller creates the CR; otherwise, it does not create the CR on the cluster (though creating it anyway could be considered).
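A minimal sketch of that decode-and-check step, assuming the VolumePluginBackup type defined earlier in this proposal is in scope:

```go
package backupsync

import (
	"compress/gzip"
	"encoding/json"
	"io"
)

// decodeVolumePluginBackups decodes the synced json.gz object into the
// VolumePluginBackup type from this proposal. An empty metadata.name signals
// the legacy volume.Snapshot format, in which case only the spec/status
// fields that overlap with the old struct are usable.
func decodeVolumePluginBackups(r io.Reader) (crs []VolumePluginBackup, newFormat bool, err error) {
	gz, err := gzip.NewReader(r)
	if err != nil {
		return nil, false, err
	}
	defer gz.Close()
	if err := json.NewDecoder(gz).Decode(&crs); err != nil {
		return nil, false, err
	}
	// If any entry carries a name, the file was written by a newer Velero
	// that backs up full CRs; otherwise treat it as the legacy format.
	for _, cr := range crs {
		if cr.Name != "" {
			return crs, true, nil
		}
	}
	return crs, false, nil
}
```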
#### VolumePluginRestore CR
```
// VolumePluginRestoreSpec is the specification for a VolumePluginRestore CR.
type VolumePluginRestoreSpec struct {
// SnapshotID is the identifier for the snapshot of the volume.
// This will be used to relate with output in 'velero describe backup'
SnapshotID string `json:"snapshotID"`
// BackupName is the name of the Velero backup from which PV will be
// created.
BackupName string `json:"backupName"`
// Provider is the Provider field given in VolumeSnapshotLocation
Provider string `json:"provider"`
// VolumeType is the type of the disk/volume in the cloud provider
// API.
VolumeType string `json:"volumeType"`
// VolumeAZ is the availability zone where the volume is provisioned
// in the cloud provider.
VolumeAZ string `json:"volumeAZ,omitempty"`
}
// VolumePluginRestoreStatus is the current status of a VolumePluginRestore CR.
type VolumePluginRestoreStatus struct {
// Phase is the current state of the VolumePluginRestore.
Phase string `json:"phase"`
// VolumeID is the PV name to which the restore was done
VolumeID string `json:"volumeID"`
// Message is a message about the volume plugin's restore's status.
// +optional
Message string `json:"message,omitempty"`
// StartTimestamp records the time a restore was started.
// Separate from CreationTimestamp, since that value changes
// on restores.
// The server's time is used for StartTimestamps
// +optional
// +nullable
StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`
// CompletionTimestamp records the time a restore was completed.
// Completion time is recorded even on failed restores.
// The server's time is used for CompletionTimestamps
// +optional
// +nullable
CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`
// Progress holds the total number of bytes of the snapshot and the current
// number of restored bytes. This can be used to display progress information
// about the restore operation.
// +optional
Progress VolumeOperationProgress `json:"progress,omitempty"`
// PluginSpecific is a map of key-value pairs that the plugin wants to
// provide to the user to identify plugin properties related to this restore
// +optional
PluginSpecific map[string]string `json:"pluginSpecific,omitempty"`
}
type VolumePluginRestore struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec VolumePluginRestoreSpec `json:"spec,omitempty"`
// +optional
Status VolumePluginRestoreStatus `json:"status,omitempty"`
}
```
For every restore operation, Velero creates a VolumePluginRestore CR before calling the volume snapshotter's CreateVolumeFromSnapshot API.
In order to identify the CR created for a particular restore of a volume, Velero adds the following labels to the CR:
- `velero.io/backup-name` with the backup name as its value,
- `velero.io/snapshot-id` with the snapshot ID that needs to be restored, and
- `velero.io/provider` with the `Provider` from the `VolumeSnapshotLocation`
Labels will be set with the value returned from the `GetValidName` function (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
The plugin will be able to identify the CR by using the snapshotID it receives as a parameter of the CreateVolumeFromSnapshot API, together with the plugin's Provider name.
It updates the progress of the restore operation regularly if the plugin supports showing progress.
Velero deletes the VolumePluginRestore CR when it handles deletion of the Restore CR. An example CR is sketched below.
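For illustration, a VolumePluginRestore CR for a single volume might look like the following; the apiVersion, names, and values are hypothetical:
```yaml
apiVersion: velero.io/v1
kind: VolumePluginRestore
metadata:
  name: restore-20201020-pv-1
  namespace: velero
  labels:
    velero.io/backup-name: nightly-backup
    velero.io/snapshot-id: snap-0f1e2d3c
    velero.io/provider: aws
spec:
  snapshotID: snap-0f1e2d3c
  backupName: nightly-backup
  provider: aws
  volumeType: gp2
  volumeAZ: us-east-1a
status:
  phase: InProgress
  volumeID: pvc-8f2a6c1d
  progress:
    totalBytes: 10737418240
    bytesDone: 5368709120
```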
### Approach 2
This approach differs from approach 1 only with respect to backup.
#### VolumePluginBackup CR
```
// VolumePluginBackupSpec is the specification for a VolumePluginBackup CR.
type VolumePluginBackupSpec struct {
// Volume is the PV name to be backed up.
Volume string `json:"volume"`
// Backup is the name of the Velero backup
Backup string `json:"backup"`
// Provider is the Provider field given in VolumeSnapshotLocation
Provider string `json:"provider"`
}
// VolumePluginBackupStatus is the current status of a VolumePluginBackup CR.
type VolumePluginBackupStatus struct {
// Phase is the current state of the VolumePluginBackup.
Phase string `json:"phase"`
// SnapshotID is the identifier for the snapshot of the volume.
// This will be used to relate with output in 'velero describe backup'
SnapshotID string `json:"snapshotID"`
// Message is a message about the volume plugin's backup's status.
// +optional
Message string `json:"message,omitempty"`
// StartTimestamp records the time a backup was started.
// Separate from CreationTimestamp, since that value changes
// on restores.
// The server's time is used for StartTimestamps
// +optional
// +nullable
StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`
// CompletionTimestamp records the time a backup was completed.
// Completion time is recorded even on failed backups.
// Completion time is recorded before uploading the backup object.
// The server's time is used for CompletionTimestamps
// +optional
// +nullable
CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`
// PluginSpecific is a map of key-value pairs that the plugin wants to
// provide to the user to identify plugin properties related to this backup
// +optional
PluginSpecific map[string]string `json:"pluginSpecific,omitempty"`
// Progress holds the total number of bytes of the volume and the current
// number of backed up bytes. This can be used to display progress information
// about the backup operation.
// +optional
Progress VolumeOperationProgress `json:"progress,omitempty"`
}
type VolumeOperationProgress struct {
TotalBytes int64
BytesDone int64
}
type VolumePluginBackup struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec VolumePluginBackupSpec `json:"spec,omitempty"`
// +optional
Status VolumePluginBackupStatus `json:"status,omitempty"`
}
```
For every backup operation on a volume, the volume snapshotter creates a VolumePluginBackup CR in the Velero namespace.
It keeps updating the progress of the operation in the CR, along with other details such as the volume name, backup name, and snapshot ID.
In order to identify the CR created for a particular backup of a volume, the volume snapshotter adds the following labels to the CR:
- `velero.io/backup-name` with the backup name as its value, and
- `velero.io/volume-name` with the name of the volume undergoing backup
Because backup names are unique, identifying the CR will not run into issues such as duplicates.
The plugin needs to sanitize the values set for the above labels; each label must be set with the value returned from the `GetValidName` function (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
Though no restrictions are placed on the CR's name, as a general practice the volume snapshotter can name the CR after the return value of CreateSnapshot. A sketch of the plugin-side creation follows.
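In this sketch, only the label sanitization via `label.GetValidName` is prescribed by the design; the namespace, naming, and client wiring are illustrative assumptions:
```
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/vmware-tanzu/velero/pkg/label"
)

func createVolumePluginBackup(ctx context.Context, c client.Client, backupName, volumeName, provider string) error {
	cr := &VolumePluginBackup{
		ObjectMeta: metav1.ObjectMeta{
			// A unique name; by convention it can later match the
			// CreateSnapshot return value, since lookup is by label.
			GenerateName: "vpb-",
			Namespace:    "velero",
			Labels: map[string]string{
				"velero.io/backup-name": label.GetValidName(backupName),
				"velero.io/volume-name": label.GetValidName(volumeName),
			},
		},
		Spec: VolumePluginBackupSpec{
			Volume:   volumeName,
			Backup:   backupName,
			Provider: provider,
		},
	}
	return c.Create(ctx, cr)
}
```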
After `CreateSnapshot` returns in `takePVSnapshot`, if a VolumePluginBackup CR exists for the particular backup of the volume, Velero adds this CR to the `backupRequest`.
During the `persistBackup` call, this CR will also be backed up to the backup location.
The `backupSyncController` checks for any VolumePluginBackup CRs that need to be synced from the backup location, and syncs them to the cluster if needed.
`processRequest` of the `backupDeletionController` will delete the VolumePluginBackup before the volume snapshotter's DeleteSnapshot is called.
Alternatively, deletion of the `VolumePluginBackup` CR can be delegated to the plugin, which can delete it using the `snapshotID` passed in the volume snapshotter's DeleteSnapshot request.
### 'core' Velero client/server required changes
- Creation of the VolumePluginBackup/VolumePluginRestore CRDs at installation time
- Persistence of VolumePluginBackup CRs towards the end of the backup operation
- As part of backup synchronization, syncing of VolumePluginBackup CRs related to the backup
- Deletion of VolumePluginBackup when the volume snapshotter's DeleteSnapshot is called
- Deletion of VolumePluginRestore as part of handling deletion of the Restore CR
- In the case of approach 1:
  - converting the `volume.Snapshot` struct to a CR and related changes
  - creation of VolumePlugin(Backup|Restore) CRs before calling the volume snapshotter's API
  - changes to `GetBackupVolumeSnapshots` and its callers for the change in return type from []volume.Snapshot to []VolumePluginBackupSpec
### Velero CLI required changes
In the 'velero describe' CLI, the relevant CRs will be fetched from the API server, and their contents, such as the backupName, the PVName (if changed due to the label size limitation), and the size of the PV snapshot, will be shown in the output.
### API Upgrade
When the CRs are upgraded, Velero can also support older API versions (until they are deprecated) to identify the CRs that need to be persisted to the backup location.
However, it can give preference to the latest supported API.
If new fields are added without changing the API version, this will not cause any problems, as these resources are intended to provide information and there is no reconciliation on them.
### Compatibility of latest plugin with older version of Velero
A plugin that supports this CR should handle the situation gracefully when the CRDs are not installed, for example by tolerating errors that occur during creation or update of the CRs.
## Limitations
Plugins that are not Kubernetes-native will not be able to implement this, as they cannot create the CRs.
## Open Questions
## Alternatives Considered
### Add another method to VolumeSnapshotter interface
The approach proposed above has the limitation that the plugin needs to be Kubernetes-native in order to create and update CRs.
Instead, a new 'Progress' method could be added to the interface; the Velero server would regularly poll this method and update the VolumePluginBackup CR on the plugin's behalf.
However, this involves a good amount of change and needs a backward-compatibility story.
As volume plugins are mostly Kubernetes-native, it is fine to go ahead with the current limitation.
### Update Backup CR
Instead of creating new CRs, plugins could directly update the status of the Backup CR. However, this deviates from the current approach of having separate CRs, like PodVolumeBackup/PodVolumeRestore, for tracking operation progress.
### Restricting on name rather than using labels
Instead of using labels to identify the CR related to a particular backup of a volume, the name of the VolumePluginBackup CR could be restricted to be the same as the value returned from CreateSnapshot.
However, this causes problems if the volume snapshotter crashes without returning the snapshot ID to Velero.
### Backing up VolumePluginBackup CR to different object
If the CR were backed up to a different object than `<backupname>-volumesnapshots.json.gz` in the backup location, the restore controller would need to follow a 'fall-back' model:
it would first check for the new kind of object and, if that doesn't exist, follow the old model. To avoid this error-prone 'fall-back' model, the VolumePluginBackup CR is backed up to the same location as `volume.Snapshot`.
## Security Considerations
Currently everything runs under the same `velero` service account so all plugins have broad access, which would include being able to modify CRs created by another plugin.


@@ -32,14 +32,14 @@ It will also update the `spec.volumeName` of the related persistent volume claim
## Detailed Design
In `pkg/restore/restore.go`, around [line 872](https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L872), Velero has special-case code for persistent volumes.
In `pkg/restore/restore.go`, around [line 872](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L872), Velero has special-case code for persistent volumes.
This code will be updated to check for the two preconditions described in the previous section.
If the preconditions are met, the object will be given a new name.
The persistent volume will also be annotated with the original name, e.g. `velero.io/original-pv-name=NAME`.
Importantly, the name change will occur **before** [line 890](https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L890), where Velero checks to see if it should restore the persistent volume.
Importantly, the name change will occur **before** [line 890](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L890), where Velero checks to see if it should restore the persistent volume.
Additionally, the old and new persistent volume names will be recorded in a new field that will be added to the `context` struct, `renamedPVs map[string]string`.
In the special-case code for persistent volume claims starting on [line 987](https://github.com/heptio/velero/blob/main/pkg/restore/restore.go#L987), Velero will check to see if the claimed persistent volume has been renamed by looking in `ctx.renamedPVs`.
In the special-case code for persistent volume claims starting on [line 987](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L987), Velero will check to see if the claimed persistent volume has been renamed by looking in `ctx.renamedPVs`.
If so, Velero will update the persistent volume claim's `spec.volumeName` to the new name.
## Alternatives Considered


@@ -1,143 +0,0 @@
# Restore Hooks
This document proposes a solution that allows a user to specify Restore Hooks, much like Backup Hooks, that can be executed during the restore process.
## Goals
- Enable custom commands to be run during a restore in order to mirror the commands that are available to the backup process.
- Provide observability into the result of commands run in restored pods.
## Non Goals
- Handling any application specific scenarios (postgres, mongo, etc)
## Background
Velero supports Backup Hooks to execute commands before and/or after a backup.
This enables a user to, among other things, prepare data to be backed up without having to freeze an in-use volume.
An example of this would be to attach an empty volume to a Postgres pod, use a backup hook to execute `pg_dump` from the data volume, and back up the volume containing the export.
The problem is that there's no easy, automated way to include a corresponding restore process.
After a restore with the example configuration above, the postgres pod will be empty, and someone will need to manually exec in and run `pg_restore`.
## High-Level Design
The Restore spec will have a `spec.hooks` section matching the same section on the Backup spec, except that no `pre` hooks can be defined, only `post`.
Annotations comparable to the annotations used during backup can also be set on pods.
For each restored pod, the Velero server will check if there are any hooks applicable to the pod.
If a restored pod has any applicable hooks, Velero will wait for the container where the hook is to be executed to reach status Running.
The Restore log will include the results of each post-restore hook and the Restore object status will incorporate the results of hooks.
The Restore log will include the results of each hook and the Restore object status will incorporate the results of hooks.
A new section at `spec.hooks.resources.initContainers` will allow for injecting initContainers into restored pods.
Annotations can be set as an alternative to defining the initContainers in the Restore object.
## Detailed Design
Post-restore hooks can be defined by annotation and/or by an array of resource hooks in the Restore spec.
The following annotations are supported:
- post.hook.restore.velero.io/container
- post.hook.restore.velero.io/command
- post.hook.restore.velero.io/on-error
- post.hook.restore.velero.io/exec-timeout
- post.hook.restore.velero.io/wait-timeout
Init restore hooks can be defined by annotation and/or in the new `initContainers` section in the Restore spec.
The initContainers schema is `pod.spec.initContainers`.
The following annotations are supported:
- init.hook.restore.velero.io/timeout
- init.hook.restore.velero.io/initContainers
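For example, both kinds of hooks could be declared on a pod via annotations like the following; the JSON-encoded values mirror the conventions used by backup hook annotations and are illustrative, not prescriptive:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    post.hook.restore.velero.io/container: postgres
    post.hook.restore.velero.io/command: '["/bin/bash", "-c", "rm /docker-entrypoint-initdb.d/dump.sql"]'
    post.hook.restore.velero.io/on-error: Continue
    post.hook.restore.velero.io/exec-timeout: 30s
    post.hook.restore.velero.io/wait-timeout: 5m
    init.hook.restore.velero.io/timeout: 120s
    init.hook.restore.velero.io/initContainers: '[{"name": "restore", "image": "postgres:12", "command": ["/bin/bash", "-c", "mv /backup/dump.sql /docker-entrypoint-initdb.d/"]}]'
spec:
  containers:
    - name: postgres
      image: postgres:12
```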
This is an example of defining hooks in the Restore spec.
```yaml
apiVersion: velero.io/v1
kind: Restore
spec:
...
hooks:
resources:
-
name: my-hook
includedNamespaces:
- '*'
excludedNamespaces:
- some-namespace
includedResources:
- pods
excludedResources: []
labelSelector:
matchLabels:
app: velero
component: server
post:
-
exec:
container: postgres
command:
- /bin/bash
- -c
- rm /docker-entrypoint-initdb.d/dump.sql
onError: Fail
timeout: 10s
readyTimeout: 60s
init:
timeout: 120s
initContainers:
- name: restore
image: postgres:12
command: ["/bin/bash", "-c", "mv /backup/dump.sql /docker-entrypoint-initdb.d/"]
volumeMounts:
- name: backup
mountPath: /backup
```
As with Backups, if an annotation is defined on a pod then no hooks from the Restore spec will be applied.
### Implementation
The types and function in pkg/backup/item_hook_handler.go will be moved to a new package (pkg/hooks) and exported so they can be used for both backups and restores.
The post-restore hooks implementation will closely follow the design of restoring pod volumes with restic.
The pkg/restore.context type will have new fields `hooksWaitGroup` and `hooksErrs` comparable to `resticWaitGroup` and `resticErr`.
The pkg/restore.context.execute function will start a goroutine for each pod with applicable hooks and then continue with restoring other items.
Each hooks goroutine will create a pkg/util/hooks.ItemHookHandler for each pod and send any error on the context.hooksErrs channel.
The ItemHookHandler already includes stdout, stderr, and other metadata in the Backup log, so the same logs will automatically be added to the Restore log (which is passed as the first argument to the ItemHookHandler.HandleHooks method).
The pkg/restore.context.execute function will wait for the hooksWaitGroup before returning.
Any errors received on context.hooksErrs will be added to errs.Velero.
One difference compared to the restic restore design is that any error on the context.hooksErrs channel will cancel the context of all hooks, since errors are only reported on this channel if the hook specified `onError: Fail`.
However, canceling the hooks goroutines will not cancel the restic goroutines.
In practice the restic goroutines will complete before the hooks since the hooks do not run until a pod is ready, but it's possible a hook will be executed and fail while a different pod is still in the pod volume restore phase.
Failed hooks with `onError: Continue` will appear in the Restore log but will not affect the status of the parent Restore.
Failed hooks with `onError: Fail` will cause the parent Restore to have status Partially Failed.
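A rough sketch of this concurrency pattern, with names simplified from the fields described above; `execPodHooks` is a hypothetical helper standing in for the ItemHookHandler plumbing:
```
import (
	"sync"

	corev1 "k8s.io/api/core/v1"
)

type restoreContext struct {
	hooksWaitGroup sync.WaitGroup
	hooksErrs      chan error
}

func (ctx *restoreContext) runHooksForPod(pod *corev1.Pod) {
	ctx.hooksWaitGroup.Add(1)
	go func() {
		defer ctx.hooksWaitGroup.Done()
		// Wait for the hook container to reach Running, exec each hook,
		// and report only `onError: Fail` failures on the channel.
		if err := execPodHooks(pod); err != nil {
			ctx.hooksErrs <- err
		}
	}()
}

func (ctx *restoreContext) waitForHooks() []error {
	go func() {
		ctx.hooksWaitGroup.Wait()
		close(ctx.hooksErrs)
	}()
	var errs []error
	for err := range ctx.hooksErrs {
		errs = append(errs, err) // merged into errs.Velero in the real flow
	}
	return errs
}
```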
If initContainers are specified for a pod, Velero will inject the containers into the beginning of the pod's initContainers list.
If a restic initContainer is also being injected, the restore initContainers will be injected directly after the restic initContainer.
The restore will use a RestoreItemAction to inject the initContainers.
Stdout and stderr of the restore initContainers will not be added to the Restore logs.
InitContainers that fail will not affect the parent Restore's status.
## Alternatives Considered
Wait for all restored Pods to report Ready, then execute the first hook in all applicable Pods simultaneously, then proceed to the next hook, etc.
That could introduce deadlock, e.g. if an API pod cannot be ready until the DB pod is restored.
Put the restore hooks on the Backup spec as a third lifecycle event named `restore` along with `pre` and `post`.
That would be confusing since `pre` and `post` would appear in the Backup log but `restore` would only be in the Restore log.
Execute restore hooks in parallel for each Pod.
That would not match the behavior of Backups.
Wait for PodStatus ready before executing the post-restore hooks in any container.
There are cases where the pod should not report itself ready until after the restore hook has run.
Include the logs from initContainers in the Restore log.
Unlike exec hooks where stdout and stderr are permanently lost if not added to the Restore log, the logs of the injected initContainers are available through the K8s API with kubectl or another client.
## Security Considerations
Stdout or stderr in the Restore log may contain sensitive information, but the same risk already exists for Backup hooks.

go.mod

@@ -4,83 +4,39 @@ go 1.14
require (
cloud.google.com/go v0.46.2 // indirect
github.com/Azure/azure-sdk-for-go v42.0.0+incompatible
github.com/Azure/go-autorest/autorest v0.9.6
github.com/Azure/go-autorest/autorest/azure/auth v0.4.2
github.com/Azure/azure-sdk-for-go v35.0.0+incompatible
github.com/Azure/go-autorest/autorest v0.9.0
github.com/Azure/go-autorest/autorest/adal v0.5.0
github.com/Azure/go-autorest/autorest/to v0.3.0
github.com/Azure/go-autorest/autorest/validation v0.2.0 // indirect
github.com/aws/aws-sdk-go v1.28.2
github.com/aws/aws-sdk-go v1.13.12
github.com/docker/spdystream v0.0.0-20170912183627-bc6354cbbc29 // indirect
github.com/evanphx/json-patch v4.5.0+incompatible
github.com/go-ini/ini v1.28.2 // indirect
github.com/gobwas/glob v0.2.3
github.com/gofrs/uuid v3.2.0+incompatible
github.com/golang/protobuf v1.4.2
github.com/golang/protobuf v1.3.2
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd
github.com/hashicorp/go-plugin v0.0.0-20190610192547-a1bc61569a26
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8 // indirect
github.com/joho/godotenv v1.3.0
github.com/kubernetes-csi/external-snapshotter/v2 v2.2.0-rc1
github.com/onsi/ginkgo v1.13.0
github.com/onsi/gomega v1.10.1
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.5.1
github.com/robfig/cron v1.1.0
github.com/kubernetes-csi/external-snapshotter/v2 v2.1.0
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v1.0.0
github.com/robfig/cron v0.0.0-20170309132418-df38d32658d8
github.com/sirupsen/logrus v1.4.2
github.com/smartystreets/goconvey v1.6.4 // indirect
github.com/spf13/afero v1.2.2
github.com/spf13/cobra v0.0.6
github.com/spf13/cobra v0.0.5
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.4.0
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7
google.golang.org/genproto v0.0.0-20200731012542-8145dea6a485 // indirect
google.golang.org/grpc v1.31.0
google.golang.org/protobuf v1.25.0 // indirect
k8s.io/api v0.18.4
k8s.io/apiextensions-apiserver v0.18.4
k8s.io/apimachinery v0.18.4
k8s.io/cli-runtime v0.18.4
k8s.io/client-go v0.18.4
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553
google.golang.org/grpc v1.26.0
k8s.io/api v0.17.4
k8s.io/apiextensions-apiserver v0.17.4
k8s.io/apimachinery v0.17.4
k8s.io/cli-runtime v0.17.4
k8s.io/client-go v0.17.4
k8s.io/klog v1.0.0
sigs.k8s.io/cluster-api v0.3.8
sigs.k8s.io/controller-runtime v0.6.1
k8s.io/utils v0.0.0-20191218082557-f07c713de883 // indirect
)
replace k8s.io/api => k8s.io/api v0.18.4
replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.18.4
replace k8s.io/apimachinery => k8s.io/apimachinery v0.18.4
replace k8s.io/apiserver => k8s.io/apiserver v0.18.4
replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.18.4
replace k8s.io/client-go => k8s.io/client-go v0.18.4
replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.18.4
replace k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.18.4
replace k8s.io/code-generator => k8s.io/code-generator v0.18.4
replace k8s.io/component-base => k8s.io/component-base v0.18.4
replace k8s.io/cri-api => k8s.io/cri-api v0.18.4
replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.18.4
replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.18.4
replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.18.4
replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.18.4
replace k8s.io/kubectl => k8s.io/kubectl v0.18.4
replace k8s.io/kubelet => k8s.io/kubelet v0.18.4
replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.18.4
replace k8s.io/metrics => k8s.io/metrics v0.18.4
replace k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.18.4
replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.18.4

go.sum

File diff suppressed because it is too large


@@ -1,415 +0,0 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
timeout: 5m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
# list of build tags, all linters use it. Default is empty list.
#build-tags:
# - mytag
# which dirs to skip: issues from them won't be reported;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but default dirs are skipped independently
# from this option's value (see skip-dirs-use-default).
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
#skip-dirs:
# - src/external_libs
# - autogenerated_by_my_lib
# default is true. Enables skipping of directories:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs-use-default: true
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
# skip-files:
# - ".*\\.my\\.go$"
# - lib/bad.go
# by default isn't set. If set we pass it to "go list -mod={option}". From "go help modules":
# If invoked with -mod=readonly, the go command is disallowed from the implicit
# automatic updating of go.mod described above. Instead, it fails when any changes
# to go.mod are needed. This setting is most useful to check that go.mod does
# not need updates, such as in a continuous integration and testing system.
# If invoked with -mod=vendor, the go command assumes that the vendor
# directory holds the correct copies of dependencies and ignores
# the dependency descriptions in go.mod.
# modules-download-mode: readonly|release|vendor
modules-download-mode: readonly
# Allow multiple parallel golangci-lint instances running.
# If false (default) - golangci-lint acquires file lock on start.
allow-parallel-runners: false
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# make issues output unique by line, default is true
uniq-by-line: true
# all available settings of specific linters
linters-settings:
dogsled:
# checks assignments with too many blank identifiers; default is 2
max-blank-identifiers: 2
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: false
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: false
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
# ignore: fmt:.*,io/ioutil:^Read.*
# path to a file containing a list of functions to exclude from checking
# see https://github.com/kisielk/errcheck#excluding-functions for details
# exclude: /path/to/file.txt
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: false
funlen:
lines: 60
statements: 40
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
nestif:
# minimal complexity of if statements to report, 5 by default
min-complexity: 4
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 3
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
# To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
# By default list of stable checks is used.
# enabled-checks:
# - rangeValCopy
# Which checks should be disabled; can't be combined with 'enabled-checks'; default is empty
# disabled-checks:
# - regexpMust
# Enable multiple checks by tags, run `GL_DEBUG=gocritic golangci-lint run` to see all tags and checks.
# Empty list by default. See https://github.com/go-critic/go-critic#usage -> section "Tags".
# enabled-tags:
# - performance
# disabled-tags:
# - experimental
settings: # settings passed to gocritic
captLocal: # must be valid enabled check name
paramsOnly: true
# rangeValCopy:
# sizeThreshold: 32
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
godot:
# check all top-level comments, not only declarations
check-all: false
godox:
# report any comments starting with keywords, this is useful for TODO or FIXME comments that
# might be left in the code accidentally and should be resolved before merging
keywords: # default keywords are TODO, BUG, and FIXME, these can be overwritten by this setting
- NOTE
- OPTIMIZE # marks code that should be optimized before merging
- HACK # marks hack-arounds that should be removed before merging
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
goimports:
# put imports beginning with prefix after 3rd-party packages;
# it's a comma-separated list of prefixes
local-prefixes: github.com/org/project
golint:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gomnd:
settings:
mnd:
# the list of enabled checks, see https://github.com/tommy-muehle/go-mnd/#checks for description.
checks: argument,case,condition,operation,return,assign
gomodguard:
allowed:
modules: # List of allowed modules
# - gopkg.in/yaml.v2
domains: # List of allowed module domains
# - golang.org
blocked:
modules: # List of blocked modules
# - github.com/uudashr/go-module: # Blocked module
# recommendations: # Recommended modules that should be used instead (Optional)
# - golang.org/x/mod
# reason: "`mod` is the official go.mod parser library." # Reason why the recommended module should be used (Optional)
versions: # List of blocked module version constraints
# - github.com/mitchellh/go-homedir: # Blocked module with version constraint
# version: "< 1.1.0" # Version constraint, see https://github.com/Masterminds/semver#basic-comparisons
# reason: "testing if blocked version constraint works." # Reason why the version constraint exists. (Optional)
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
funcs: # run `go tool vet help printf` to see available settings for `printf` analyzer
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
# enable or disable analyzers by name
enable:
- atomicalign
enable-all: false
disable:
- shadow
disable-all: false
depguard:
list-type: blacklist
include-go-root: false
packages:
- github.com/sirupsen/logrus
packages-with-error-message:
# specify an error message to output when a blacklisted package is used
- github.com/sirupsen/logrus: "logging is allowed only by logutils.Log"
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
ignore-words:
- someword
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
nolintlint:
# Enable to ensure that nolint directives are all used. Default is true.
allow-unused: false
# Disable to ensure that nolint directives don't have a leading space. Default is true.
allow-leading-space: true
# Exclude following linters from requiring an explanation. Default is [].
allow-no-explanation: []
# Enable to require an explanation of nonzero length after each nolint directive. Default is false.
require-explanation: true
# Enable to require nolint directives to mention the specific linter being suppressed. Default is false.
require-specific: true
rowserrcheck:
packages:
- github.com/jmoiron/sqlx
testpackage:
# regexp pattern to skip files
skip-regexp: (export|internal)_test\.go
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
whitespace:
multi-if: false # Enforces newlines (or comments) after every multi-line if statement
multi-func: false # Enforces newlines (or comments) after every multi-line function signature
wsl:
# If true append is only allowed to be cuddled if appending value is
# matching variables, fields or types on line above. Default is true.
strict-append: true
# Allow calls and assignments to be cuddled as long as the lines have any
# matching variables, fields or types. Default is true.
allow-assign-and-call: true
# Allow multiline assignments to be cuddled. Default is true.
allow-multiline-assign: true
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
# Allow trailing comments in ending of blocks
allow-trailing-comment: false
# Force newlines in end of case at this limit (0 = never).
force-case-trailing-whitespace: 0
# Force cuddling of err checks with err var assignment
force-err-cuddling: false
# Allow leading comments to be separated with empty liens
allow-separated-leading-comment: false
# The custom section can be used to define linter plugins to be loaded at runtime. See README doc
# for more info.
# custom:
# Each custom linter should have a unique name.
# example:
# The path to the plugin *.so. Can be absolute or local. Required for each custom linter
# path: /path/to/example.so
# The description of the linter. Optional, just for documentation purposes.
# description: This is an example usage of a plugin linter.
# Intended to point to the repo location of the linter. Optional, just for documentation purposes.
# original-url: github.com/golangci/example-linter
linters:
# enable:
# - megacheck
# - govet
# disable:
# - maligned
# - prealloc
disable-all: true
presets:
# - bugs
# - unused
fast: false
#issues:
# # List of regexps of issue texts to exclude, empty list by default.
# # But independently from this option we use default exclude patterns,
# # it can be disabled by `exclude-use-default: false`. To list all
# # excluded by default patterns execute `golangci-lint run --help`
# exclude:
# - abcdef
#
# # Excluding configuration per-path, per-linter, per-text and per-source
# exclude-rules:
# # Exclude some linters from running on tests files.
# - path: _test\.go
# linters:
# - gocyclo
# - errcheck
# - dupl
# - gosec
#
# # Exclude known linters from partially hard-vendored code,
# # which is impossible to exclude via "nolint" comments.
# - path: internal/hmac/
# text: "weak cryptographic primitive"
# linters:
# - gosec
#
# # Exclude some staticcheck messages
# - linters:
# - staticcheck
# text: "SA9003:"
#
# # Exclude lll issues for long lines with go:generate
# - linters:
# - lll
# source: "^//go:generate "
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
# Default value for this option is true.
exclude-use-default: false
# The default value is false. If set to true exclude and exclude-rules
# regular expressions become case sensitive.
exclude-case-sensitive: false
# The list of ids of default excludes to include or disable. By default it's empty.
include:
- EXC0002 # disable excluding of issues about comments from golint
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
# Show only new issues: if there are unstaged changes or untracked files,
# only those changes are analyzed, else only changes in HEAD~ are analyzed.
# It's a super-useful option for integration of golangci-lint into existing
# large codebase. It's not practical to fix all existing issues at the moment
# of integration: much better don't allow issues in new code.
# Default is false.
new: false
# Show only new issues created after git revision `REV`
new-from-rev: REV
# Show only new issues created in git patch with set file path.
new-from-patch: path/to/patch/file
severity:
# Default value is empty string.
# Set the default severity for issues. If severity rules are defined and the issues
# do not match or no severity is provided to the rule this will be the default
# severity applied. Severities should match the supported severity names of the
# selected out format.
# - Code climate: https://docs.codeclimate.com/docs/issues#issue-severity
# - Checkstyle: https://checkstyle.sourceforge.io/property_types.html#severity
# - Github: https://help.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-error-message
default-severity: error
# The default value is false.
# If set to true severity-rules regular expressions become case sensitive.
case-sensitive: false
# Default value is empty list.
# When a list of severity rules are provided, severity information will be added to lint
# issues. Severity rules have the same filtering capability as exclude rules except you
# are allowed to specify one matcher per severity rule.
# Only affects out formats that support setting severity information.
rules:
- linters:
- dupl
severity: info


@@ -14,27 +14,18 @@
FROM golang:1.14
ARG GOPROXY
ENV GO111MODULE=on
# Use a proxy for go modules to reduce the likelihood of various hosts being down and breaking the build
ENV GOPROXY=${GOPROXY}
ENV GOPROXY=https://proxy.golang.org
# get code-generation tools (for now keep in GOPATH since they're not fully modules-compatible yet)
RUN mkdir -p /go/src/k8s.io
WORKDIR /go/src/k8s.io
RUN git config --global advice.detachedHead false
RUN git clone -b v0.18.4 https://github.com/kubernetes/code-generator
RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.1/kubebuilder_2.3.1_linux_amd64.tar.gz && \
tar -zxvf kubebuilder_2.3.1_linux_amd64.tar.gz && \
mv kubebuilder_2.3.1_linux_amd64 /usr/local/kubebuilder && \
chmod +x /usr/local/kubebuilder && \
export PATH=$PATH:/usr/local/kubebuilder/bin && \
rm kubebuilder_2.3.1_linux_amd64.tar.gz
RUN git clone -b kubernetes-1.17.0 https://github.com/kubernetes/code-generator
# get controller-tools
RUN go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.3.0
RUN go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.4
# get goimports (the revision is pinned so we don't indiscriminately update, but the particular commit
# is not important)
@@ -54,11 +45,3 @@ RUN wget --quiet https://github.com/goreleaser/goreleaser/releases/download/v0.1
tar xvf goreleaser_Linux_x86_64.tar.gz && \
mv goreleaser /usr/bin/goreleaser && \
chmod +x /usr/bin/goreleaser
# get golangci-lint
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.27.0
# install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin


@@ -2,8 +2,6 @@
# Copyright 2016 The Kubernetes Authors.
#
# Modifications Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@@ -20,39 +18,40 @@ set -o errexit
set -o nounset
set -o pipefail
if [[ -z "${PKG}" ]]; then
if [ -z "${PKG}" ]; then
echo "PKG must be set"
exit 1
fi
if [[ -z "${BIN}" ]]; then
if [ -z "${BIN}" ]; then
echo "BIN must be set"
exit 1
fi
if [[ -z "${GOOS}" ]]; then
if [ -z "${GOOS}" ]; then
echo "GOOS must be set"
exit 1
fi
if [[ -z "${GOARCH}" ]]; then
if [ -z "${GOARCH}" ]; then
echo "GOARCH must be set"
exit 1
fi
if [[ -z "${VERSION}" ]]; then
if [ -z "${VERSION}" ]; then
echo "VERSION must be set"
exit 1
fi
if [[ -z "${GIT_SHA}" ]]; then
if [ -z "${GIT_SHA}" ]; then
echo "GIT_SHA must be set"
exit 1
fi
if [[ -z "${GIT_TREE_STATE}" ]]; then
echo "GIT_TREE_STATE must be set"
exit 1
fi
export CGO_ENABLED=0
if [[ -z "${GIT_DIRTY}" ]]; then
GIT_TREE_STATE=clean
else
GIT_TREE_STATE=dirty
fi
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION}"
LDFLAGS="${LDFLAGS} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA}"
LDFLAGS="${LDFLAGS} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE}"


@@ -1,40 +0,0 @@
#!/bin/bash
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set +x
if [[ -z "$CI" ]]; then
echo "This script is intended to be run only on Github Actions." >&2
exit 1
fi
CHANGELOG_PATH='changelogs/unreleased'
# https://help.github.com/en/actions/reference/events-that-trigger-workflows#pull-request-event-pull_request
# GITHUB_REF is something like "refs/pull/:prNumber/merge"
pr_number=$(echo $GITHUB_REF | cut -d / -f 3)
change_log_file="${CHANGELOG_PATH}/${pr_number}-*"
if ls ${change_log_file} 1> /dev/null 2>&1; then
echo "changelog for PR ${pr_number} exists"
exit 0
else
echo "PR ${pr_number} is missing a changelog. Please refer https://velero.io/docs/main/code-standards/#adding-a-changelog and add a changelog."
exit 1
fi


@@ -14,8 +14,8 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// This code embeds the CRD manifests in config/crd/bases in
// config/crd/crds/crds.go.
// This code embeds the CRD manifests in pkg/generated/crds/manifests in
// pkg/generated/crds/crds.go.
package main
@@ -30,7 +30,7 @@ import (
"text/template"
)
// This is relative to config/crd/crds
// This is relative to pkg/generated/crds
const goHeaderFile = "../../../hack/boilerplate.go.txt"
const tpl = `{{.GoHeader}}
@@ -96,14 +96,14 @@ func main() {
GoHeader: string(headerBytes),
}
// This is relative to config/crd/crds
manifests, err := ioutil.ReadDir("../bases")
// This is relative to pkg/generated/crds
manifests, err := ioutil.ReadDir("manifests")
if err != nil {
log.Fatalln(err)
}
for _, crd := range manifests {
file, err := os.Open("../bases/" + crd.Name())
file, err := os.Open("manifests/" + crd.Name())
if err != nil {
log.Fatalln(err)
}
@@ -120,7 +120,7 @@ func main() {
data.RawCRDs = append(data.RawCRDs, fmt.Sprintf("%q", buf.Bytes()))
}
t, err := template.New("crd").Parse(tpl)
t, err := template.New("crds").Parse(tpl)
if err != nil {
log.Fatalln(err)
}


@@ -1,6 +1,6 @@
#!/bin/bash
# Copyright 2020 the Velero contributors.
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
# limitations under the License.
# docker-push is invoked by the CI/CD system to deploy docker images to Docker Hub.
# It will build images for all commits to main and all git tags.
# It will build images for all commits to master and all git tags.
# The highest, non-prerelease semantic version will also be given the `latest` tag.
set +x
@@ -56,43 +56,34 @@ elif [[ "$triggeredBy" == "tags" ]]; then
TAG=$(echo $GITHUB_REF | cut -d / -f 3)
fi
if [[ "$BRANCH" == "main" ]]; then
if [[ "$BRANCH" == "master" ]]; then
VERSION="$BRANCH"
elif [[ ! -z "$TAG" ]]; then
# Explicitly checkout tags when building from a git tag.
# This is not needed when building from main
# This is not needed when building from master
git fetch --tags
# Calculate the latest release if there's a tag.
highest_release
VERSION="$TAG"
else
echo "We're not on main and we're not building a tag, exit early."
echo "We're not on master and we're not building a tag, exit early."
exit 0
fi
# Assume we're not tagging `latest` by default, and never on main.
# Assume we're not tagging `latest` by default, and never on master.
TAG_LATEST=false
if [[ "$BRANCH" == "main" ]]; then
echo "Building main, not tagging latest."
if [[ "$BRANCH" == "master" ]]; then
echo "Building master, not tagging latest."
elif [[ "$TAG" == "$HIGHEST" ]]; then
TAG_LATEST=true
fi
if [[ -z "$BUILDX_PLATFORMS" ]]; then
BUILDX_PLATFORMS="linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le"
fi
# Debugging info
echo "Highest tag found: $HIGHEST"
echo "BRANCH: $BRANCH"
echo "TAG: $TAG"
echo "TAG_LATEST: $TAG_LATEST"
echo "BUILDX_PLATFORMS: $BUILDX_PLATFORMS"
echo "Building and pushing container images."
VERSION="$VERSION" \
TAG_LATEST="$TAG_LATEST" \
BUILDX_PLATFORMS="$BUILDX_PLATFORMS" \
BUILDX_OUTPUT_TYPE="registry" \
make all-containers
VERSION="$VERSION" TAG_LATEST="$TAG_LATEST" make all-containers all-push all-manifests


@@ -1,54 +0,0 @@
#!/bin/bash
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
if [[ -z "${BIN}" ]]; then
echo "BIN must be set"
exit 1
fi
if [[ "${BIN}" != "velero" ]]; then
echo "${BIN} does not need the restic binary"
exit 0
fi
if [[ -z "${GOOS}" ]]; then
echo "GOOS must be set"
exit 1
fi
if [[ -z "${GOARCH}" ]]; then
echo "GOARCH must be set"
exit 1
fi
if [[ -z "${RESTIC_VERSION}" ]]; then
echo "RESTIC_VERSION must be set"
exit 1
fi
# TODO: when the new restic version is released, make ppc64le to be also downloaded from their github releases.
# This has been merged and will be applied to next release: https://github.com/restic/restic/pull/2342
if [[ "${GOARCH}" = "ppc64le" ]]; then
wget --timeout=1 --tries=5 --quiet https://oplab9.parqtec.unicamp.br/pub/ppc64el/restic/restic-${RESTIC_VERSION} -O /output/usr/bin/restic
else
wget --quiet https://github.com/restic/restic/releases/download/v${RESTIC_VERSION}/restic_${RESTIC_VERSION}_${GOOS}_${GOARCH}.bz2
bunzip2 restic_${RESTIC_VERSION}_${GOOS}_${GOARCH}.bz2
mv restic_${RESTIC_VERSION}_${GOOS}_${GOARCH} /output/usr/bin/restic
fi
chmod +x /output/usr/bin/restic

hack/gen-docs.sh Executable file

@@ -0,0 +1,108 @@
#!/bin/bash
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# gen-docs.sh is used for the "make gen-docs" target. See additional
# documentation in the Makefile.
set -o errexit
set -o nounset
set -o pipefail
# don't run if there's already a directory for the target docs version
if [[ -d site/docs/$NEW_DOCS_VERSION ]]; then
echo "ERROR: site/docs/$NEW_DOCS_VERSION already exists"
exit 1
fi
# get the alphabetically last item in site/docs to use as PREVIOUS_DOCS_VERSION
# if not explicitly specified by the user
if [[ -z "${PREVIOUS_DOCS_VERSION:-}" ]]; then
echo "PREVIOUS_DOCS_VERSION was not specified, getting the latest version"
PREVIOUS_DOCS_VERSION=$(ls -1 site/docs/ | tail -n 1)
fi
# make a copy of the previous versioned docs dir
echo "Creating copy of docs directory site/docs/$PREVIOUS_DOCS_VERSION in site/docs/$NEW_DOCS_VERSION"
cp -r site/docs/${PREVIOUS_DOCS_VERSION}/ site/docs/${NEW_DOCS_VERSION}/
# 'git add' the previous version's docs as-is so we get a useful diff when we copy the master docs in
echo "Running 'git add' for previous version's doc contents to use as a base for diff"
git add site/docs/${NEW_DOCS_VERSION}
# now copy the contents of site/docs/master into the same directory so we can get a nice
# git diff of what changed since previous version
echo "Copying site/docs/master/ to site/docs/${NEW_DOCS_VERSION}/"
rm -rf site/docs/${NEW_DOCS_VERSION}/ && cp -r site/docs/master/ site/docs/${NEW_DOCS_VERSION}/
# make a copy of the previous versioned ToC
NEW_DOCS_TOC="$(echo ${NEW_DOCS_VERSION} | tr . -)-toc"
PREVIOUS_DOCS_TOC="$(echo ${PREVIOUS_DOCS_VERSION} | tr . -)-toc"
echo "Creating copy of site/_data/$PREVIOUS_DOCS_TOC.yml at site/_data/$NEW_DOCS_TOC.yml"
cp site/_data/$PREVIOUS_DOCS_TOC.yml site/_data/$NEW_DOCS_TOC.yml
# 'git add' the previous version's ToC content as-is so we get a useful diff when we copy the master ToC in
echo "Running 'git add' for previous version's ToC to use as a base for diff"
git add site/_data/$NEW_DOCS_TOC.yml
# now copy the master ToC so we can get a nice git diff of what changed since previous version
echo "Copying site/_data/master-toc.yml to site/_data/$NEW_DOCS_TOC.yml"
rm site/_data/$NEW_DOCS_TOC.yml && cp site/_data/master-toc.yml site/_data/$NEW_DOCS_TOC.yml
# replace known version-specific links -- the sed syntax is slightly different in OS X and Linux,
# so check which OS we're running on.
if [[ $(uname) == "Darwin" ]]; then
echo "[OS X] updating version-specific links"
find site/docs/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i '' "s|https://velero.io/docs/master|https://velero.io/docs/$VELERO_VERSION|g"
find site/docs/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i '' "s|https://github.com/vmware-tanzu/velero/blob/master|https://github.com/vmware-tanzu/velero/blob/$VELERO_VERSION|g"
echo "[OS X] Updating latest version in _config.yml"
sed -i '' "s/latest: ${PREVIOUS_DOCS_VERSION}/latest: ${NEW_DOCS_VERSION}/" site/_config.yml
# newlines and lack of indentation are requirements for this sed syntax
# which is doing an append
echo "[OS X] Adding latest version to versions list in _config.yml"
sed -i '' "/- master/a\\
- ${NEW_DOCS_VERSION}
" site/_config.yml
echo "[OS X] Adding ToC mapping entry"
sed -i '' "/master: master-toc/a\\
${NEW_DOCS_VERSION}: ${NEW_DOCS_TOC}
" site/_data/toc-mapping.yml
else
echo "[Linux] updating version-specific links"
find site/docs/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i'' "s|https://velero.io/docs/master|https://velero.io/docs/$VELERO_VERSION|g"
find site/docs/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i'' "s|https://github.com/vmware-tanzu/velero/blob/master|https://github.com/vmware-tanzu/velero/blob/$VELERO_VERSION|g"
echo "[Linux] Updating latest version in _config.yml"
sed -i'' "s/latest: ${PREVIOUS_DOCS_VERSION}/latest: ${NEW_DOCS_VERSION}/" site/_config.yml
echo "[Linux] Adding latest version to versions list in _config.yml"
sed -i'' "/- master/a - ${NEW_DOCS_VERSION}" site/_config.yml
echo "[Linux] Adding ToC mapping entry"
sed -i'' "/master: master-toc/a ${NEW_DOCS_VERSION}: ${NEW_DOCS_TOC}" site/_data/toc-mapping.yml
fi
echo "Success! site/docs/$NEW_DOCS_VERSION has been created."
echo ""
echo "The next steps are:"
echo " 1. Consult site/README-JEKYLL.md for further manual steps required to finalize the new versioned docs generation."
echo " 2. Run a 'git diff' to review all changes made to the docs since the previous version."
echo " 3. Make any manual changes/corrections necessary."
echo " 4. Run 'git add' to stage all unstaged changes, then 'git commit'."

hack/get-restic-ppc64le.sh Executable file

@@ -0,0 +1,34 @@
#!/bin/bash
#
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
if [ -z "${RESTIC_VERSION}" ]; then
echo "RESTIC_VERSION must be set"
exit 1
fi
if [ ! -d "_output/bin/linux/ppc64le/" ]; then
mkdir -p _output/bin/linux/ppc64le/
fi
wget --quiet https://oplab9.parqtec.unicamp.br/pub/ppc64el/restic/restic-${RESTIC_VERSION}
mv restic-${RESTIC_VERSION} _output/bin/linux/ppc64le/restic
chmod +x _output/bin/linux/ppc64le/restic


@@ -36,16 +36,14 @@ else
export GIT_TREE_STATE=dirty
fi
# $PUBLISH must explicitly be set to 'TRUE' for goreleaser
# $PUBLISH must explicitly be set to 'true' for goreleaser
# to publish the release to GitHub.
if [[ "${PUBLISH:-}" != "TRUE" ]]; then
echo "Not set to publish"
if [[ "${PUBLISH:-}" != "true" ]]; then
goreleaser release \
--rm-dist \
--release-notes="${RELEASE_NOTES_FILE}" \
--skip-publish
else
echo "Getting ready to publish"
goreleaser release \
--rm-dist \
--release-notes="${RELEASE_NOTES_FILE}"


@@ -1,36 +0,0 @@
#!/bin/bash
#
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
LINTERS=${1:-"gosec,goconst,gofmt,goimports,unparam"}
ALL=${2:-false}
HACK_DIR=$(dirname "${BASH_SOURCE[0]}")
# Printing out cache status
golangci-lint cache status
if [[ $ALL == true ]] ; then
action=""
else
action="-n"
fi
# Enable GL_DEBUG line below for debug messages for golangci-lint
# export GL_DEBUG=loader,gocritic,env
CMD="golangci-lint run -E ${LINTERS} $action -c $HACK_DIR/../golangci.yaml"
echo "Running $CMD"
eval $CMD


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
# This script assumes that 2 environment variables are defined outside of it:
# VELERO_VERSION - a full version version string, starting with v. example: v1.4.2
# HOMEBREW_GITHUB_API_TOKEN - the GitHub API token that the brew command will use to create a PR on the user's behalf.
# Check if brew is found on the user's $PATH; exit if not.
if [ -z $(which brew) ];
then
echo "Homebrew must first be installed to use this script!"
exit 1
fi
# GitHub URL which contains the source code archive for the tagged release
URL=https://github.com/vmware-tanzu/velero/archive/$VELERO_VERSION.tar.gz
# Update brew so we're sure we have the latest Velero formula
brew update
# Invoke brew's helper function, which will run all their tests and end up opening a browser with the resulting PR.
brew bump-formula-pr velero --url=$URL
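# Example (script name hypothetical; token value elided):
#   VELERO_VERSION=v1.4.2 HOMEBREW_GITHUB_API_TOKEN=<token> ./hack/release-tools/brew-update.sh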

View File

@@ -22,13 +22,13 @@ import (
"regexp"
)
// This regex should match both our GA format (example: v1.4.3) and pre-release formats (v1.2.4-beta.2, v1.5.0-rc.1)
// This regex should match both our GA format (example: v1.4.3) and pre-release format (v1.2.4-beta.2)
// The following sub-capture groups are defined:
// major
// minor
// patch
// prerelease (this will be alpha/beta/rc followed by a ".", followed by 1 or more digits, e.g. alpha.5)
var release_regex *regexp.Regexp = regexp.MustCompile("^v(?P<major>[[:digit:]]+)\\.(?P<minor>[[:digit:]]+)\\.(?P<patch>[[:digit:]]+)(-{1}(?P<prerelease>(alpha|beta|rc)\\.[[:digit:]]+))*")
// prerelease (this will be alpha/beta followed by a ".", followed by 1 or more digits, e.g. alpha.5)
var release_regex *regexp.Regexp = regexp.MustCompile("^v(?P<major>[[:digit:]]+)\\.(?P<minor>[[:digit:]]+)\\.(?P<patch>[[:digit:]]+)(-{1}(?P<prerelease>(alpha|beta)\\.[[:digit:]]+))*")
// This small program exists because checking the VELERO_VERSION rules in bash is difficult, and difficult to test for correctness.
// Calling it with --verify will verify whether or not the VELERO_VERSION environment variable is a valid version string, without parsing for its components.
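// For example (invocation path illustrative):
//   VELERO_VERSION=v1.4.3 go run ./hack/release-tools/chk_version.go --verify        => accepted (GA)
//   VELERO_VERSION=v1.5.0-beta.2 go run ./hack/release-tools/chk_version.go --verify => accepted (pre-release)
//   VELERO_VERSION=1.4.3 go run ./hack/release-tools/chk_version.go --verify         => rejected (missing leading 'v')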

View File

@@ -1,142 +0,0 @@
#!/bin/bash
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# gen-docs.sh is used for the "make gen-docs" target. It generates a new
# versioned docs directory under site/content/docs. It follows
# the following process:
# 1. Copies the contents of the most recently tagged docs directory into the new
# directory, to establish a useful baseline to diff against.
# 2. Adds all copied content from step 1 to git's staging area via 'git add'.
# 3. Replaces the contents of the new docs directory with the contents of the
# 'main' docs directory, updating any version-specific links (e.g. to a
# specific branch of the GitHub repository) to use the new version.
# 4. Copies the previous version's ToC file and runs 'git add' to establish
# a useful baseline to diff against.
# 5. Replaces the content of the new ToC file with the main ToC.
# 6. Updates site/config.yaml and site/_data/toc-mapping.yml to include entries
# for the new version.
#
# The unstaged changes in the working directory can now easily be diff'ed against the
# staged changes using 'git diff' to review all docs changes made since the previous
# tagged version. Once the unstaged changes are ready, they can be added to the
# staging area using 'git add' and then committed.
#
# NEW_DOCS_VERSION defines the version that the docs will be tagged with
# (i.e. what's in the URL, what shows up in the version dropdown on the site).
# This should be formatted as either v1.4 (for any GA release, including minor), or v1.5.0-beta.1/v1.5.0-rc.1 (for an alpha/beta/RC).
# To run gen-docs: "VELERO_VERSION=v1.4.0 NEW_DOCS_VERSION=v1.4 PREVIOUS_DOCS_VERSION= make gen-docs"
# Note: if PREVIOUS_DOCS_VERSION is not set, the script will copy from the
# latest version.
#
# **NOTE**: there are additional manual steps required to finalize the process of generating
# a new versioned docs site. The full process is documented in site/README-HUGO.md
set -o errexit
set -o nounset
set -o pipefail
DOCS_DIRECTORY=site/content/docs
DATA_DOCS_DIRECTORY=site/data/docs
CONFIG_FILE=site/config.yaml
MAIN_BRANCH=main
# don't run if there's already a directory for the target docs version
if [[ -d $DOCS_DIRECTORY/$NEW_DOCS_VERSION ]]; then
echo "ERROR: $DOCS_DIRECTORY/$NEW_DOCS_VERSION already exists"
exit 1
fi
# get the alphabetically last item in $DOCS_DIRECTORY to use as PREVIOUS_DOCS_VERSION
# if not explicitly specified by the user
if [[ -z "${PREVIOUS_DOCS_VERSION:-}" ]]; then
echo "PREVIOUS_DOCS_VERSION was not specified, getting the latest version"
PREVIOUS_DOCS_VERSION=$(ls -1 $DOCS_DIRECTORY/ | tail -n 1)
fi
# make a copy of the previous versioned docs dir
echo "Creating copy of docs directory $DOCS_DIRECTORY/$PREVIOUS_DOCS_VERSION in $DOCS_DIRECTORY/$NEW_DOCS_VERSION"
cp -r $DOCS_DIRECTORY/${PREVIOUS_DOCS_VERSION}/ $DOCS_DIRECTORY/${NEW_DOCS_VERSION}/
# 'git add' the previous version's docs as-is so we get a useful diff when we copy the $MAIN_BRANCH docs in
echo "Running 'git add' for previous version's doc contents to use as a base for diff"
git add -f $DOCS_DIRECTORY/${NEW_DOCS_VERSION}
# now copy the contents of $DOCS_DIRECTORY/$MAIN_BRANCH into the same directory so we can get a nice
# git diff of what changed since previous version
echo "Copying $DOCS_DIRECTORY/$MAIN_BRANCH/ to $DOCS_DIRECTORY/${NEW_DOCS_VERSION}/"
rm -rf $DOCS_DIRECTORY/${NEW_DOCS_VERSION}/ && cp -r $DOCS_DIRECTORY/$MAIN_BRANCH/ $DOCS_DIRECTORY/${NEW_DOCS_VERSION}/
# make a copy of the previous versioned ToC
NEW_DOCS_TOC="$(echo ${NEW_DOCS_VERSION} | tr . -)-toc"
PREVIOUS_DOCS_TOC="$(echo ${PREVIOUS_DOCS_VERSION} | tr . -)-toc"
echo "Creating copy of $DATA_DOCS_DIRECTORY/$PREVIOUS_DOCS_TOC.yml at $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml"
cp $DATA_DOCS_DIRECTORY/$PREVIOUS_DOCS_TOC.yml $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml
# 'git add' the previous version's ToC content as-is so we get a useful diff when we copy the $MAIN_BRANCH ToC in
echo "Running 'git add' for previous version's ToC to use as a base for diff"
git add $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml
# now copy the $MAIN_BRANCH ToC so we can get a nice git diff of what changed since previous version
echo "Copying $DATA_DOCS_DIRECTORY/$MAIN_BRANCH-toc.yml to $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml"
rm $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml && cp $DATA_DOCS_DIRECTORY/$MAIN_BRANCH-toc.yml $DATA_DOCS_DIRECTORY/$NEW_DOCS_TOC.yml
# replace known version-specific links -- the sed syntax is slightly different in OS X and Linux,
# so check which OS we're running on.
if [[ $(uname) == "Darwin" ]]; then
echo "[OS X] updating version-specific links"
find $DOCS_DIRECTORY/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i '' "s|https://velero.io/docs/$MAIN_BRANCH|https://velero.io/docs/$VELERO_VERSION|g"
find $DOCS_DIRECTORY/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i '' "s|https://github.com/vmware-tanzu/velero/blob/$MAIN_BRANCH|https://github.com/vmware-tanzu/velero/blob/$VELERO_VERSION|g"
find $DOCS_DIRECTORY/${NEW_DOCS_VERSION} -type f -name "_index.md" | xargs sed -i '' "s|version: $MAIN_BRANCH|version: $NEW_DOCS_VERSION|g"
echo "[OS X] Updating latest version in $CONFIG_FILE"
sed -i '' "s/latest: ${PREVIOUS_DOCS_VERSION}/latest: ${NEW_DOCS_VERSION}/" $CONFIG_FILE
# newlines and lack of indentation are requirements for this sed syntax
# which is doing an append
echo "[OS X] Adding latest version to versions list in $CONFIG_FILE"
sed -i '' "/- $MAIN_BRANCH/a\\
\ \ \ \ - ${NEW_DOCS_VERSION}
" $CONFIG_FILE
echo "[OS X] Adding ToC mapping entry"
sed -i '' "/$MAIN_BRANCH: $MAIN_BRANCH-toc/a\\
${NEW_DOCS_VERSION}: ${NEW_DOCS_TOC}
" $DATA_DOCS_DIRECTORY/toc-mapping.yml
else
echo "[Linux] updating version-specific links"
find $DOCS_DIRECTORY/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i'' "s|https://velero.io/docs/$MAIN_BRANCH|https://velero.io/docs/$VELERO_VERSION|g"
find $DOCS_DIRECTORY/${NEW_DOCS_VERSION} -type f -name "*.md" | xargs sed -i'' "s|https://github.com/vmware-tanzu/velero/blob/$MAIN_BRANCH|https://github.com/vmware-tanzu/velero/blob/$VELERO_VERSION|g"
echo "[Linux] Updating latest version in $CONFIG_FILE"
sed -i'' "s/latest: ${PREVIOUS_DOCS_VERSION}/latest: ${NEW_DOCS_VERSION}/" $CONFIG_FILE
echo "[Linux] Adding latest version to versions list in $CONFIG_FILE"
sed -i'' "/- $MAIN_BRANCH/a - ${NEW_DOCS_VERSION}" $CONFIG_FILE
echo "[Linux] Adding ToC mapping entry"
sed -i'' "/$MAIN_BRANCH: $MAIN_BRANCH-toc/a ${NEW_DOCS_VERSION}: ${NEW_DOCS_TOC}" $DATA_DOCS_DIRECTORY/toc-mapping.yml
fi
echo "Success! $DOCS_DIRECTORY/$NEW_DOCS_VERSION has been created."
echo ""
echo "The next steps are:"
echo " 1. Consult site/README-HUGO.md for further manual steps required to finalize the new versioned docs generation."
echo " 2. Run a 'git diff' to review all changes made to the docs since the previous version."
echo " 3. Make any manual changes/corrections necessary."
echo " 4. Run 'git add' to stage all unstaged changes, then 'git commit'."

View File

@@ -15,24 +15,9 @@
# limitations under the License.
# This script will do the necessary checks and actions to create a release of Velero. It will:
# - validate that all prerequisites are met
# - verify the version string is what the user expects.
# - create a git tag
# - push the created git tag to GitHub
# - run GoReleaser
# The following variables are needed:
# - $VELERO_VERSION: defines the tag of Velero that any https://github.com/vmware-tanzu/velero/...
# links in the docs should redirect to.
# - $publish: TRUE/FALSE value where FALSE (or not including it) will indicate a dry-run, and TRUE, or simply adding 'publish',
# will tag the release with the $VELERO_VERSION and push the tag to a remote named 'upstream'.
# - $GITHUB_TOKEN: Needed to run the goreleaser process to generate a GitHub release.
# Use https://github.com/settings/tokens/new?scopes=repo if you don't already have a token.
# Regenerate an existing token: https://github.com/settings/tokens.
# You may regenerate the token for every release if you prefer.
# See https://goreleaser.com/environment/ for more details.
# This script will do the necessary checks and actions to create a release of Velero.
# It will first validate that all prerequisites are met, then verify the version string is what the user expects.
# A git tag will be created and pushed to GitHub, and GoReleaser will be invoked.
# This script is meant to be a combination of documentation and executable.
# If you have questions at any point, please stop and ask!
@@ -40,25 +25,12 @@
# Directory in which the script itself resides, so we can use it for calling programs that are in the same directory.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# Parse out the branch we're on so we can switch back to it at the end of a dry-run, where we delete the tag. Requires git v1.8.1+
upstream_branch=$(git symbolic-ref --short HEAD)
function tag_and_push() {
echo "Tagging $VELERO_VERSION"
git tag $VELERO_VERSION || true
if [[ $publish == "TRUE" ]]; then
echo "Pushing $VELERO_VERSION"
git push upstream $VELERO_VERSION
fi
echo "Tagging and pushing $VELERO_VERSION"
git tag $VELERO_VERSION
git push upstream $VELERO_VERSION
}
# Default to a dry-run mode
publish=FALSE
if [[ "$1" = "publish" ]]; then
publish=TRUE
fi
# For now, have the person doing the release pass in the VELERO_VERSION variable as an environment variable.
# In the future, we might be able to inspect git via `git describe --abbrev=0` to get a hint for it.
if [[ -z "$VELERO_VERSION" ]]; then
@@ -72,7 +44,7 @@ if [[ -z "$GITHUB_TOKEN" ]]; then
exit 1
fi
# Ensure that we have a clean working tree before we let any changes happen, especially important for cutting release branches.
if [[ -n $(git status --short) ]]; then
echo "Your git working directory is dirty! Please clean up untracked files and stash any changes before proceeding."
exit 3
@@ -91,19 +63,13 @@ printf "Based on this, the following assumptions have been made: \n"
[[ "$VELERO_PATCH" != 0 ]] && printf "*\t This is a patch release.\n"
# $VELERO_PRERELEASE gets populated by the chk_version.go script that parses and verifies the given version format
# -n is "string is non-empty"
[[ -n $VELERO_PRERELEASE ]] && printf "*\t This is a pre-release.\n"
# -z is "string is empty"
[[ -z $VELERO_PRERELEASE ]] && printf "*\t This is a GA release.\n"
if [[ $publish == "TRUE" ]]; then
echo "If this is all correct, press enter/return to proceed to TAG THE RELEASE and UPLOAD THE TAG TO GITHUB."
else
echo "If this is all correct, press enter/return to proceed to TAG THE RELEASE and PROCEED WITH THE DRY-RUN."
fi
echo "If this is all correct, press enter/return to proceed to TAG THE RELEASE and UPLOAD THE TAG TO GITHUB."
echo "Otherwise, press ctrl-c to CANCEL the process without making any changes."
read -p "Ready to continue? "
@@ -111,9 +77,8 @@ read -p "Ready to continue? "
echo "Alright, let's go."
echo "Pulling down all git tags and branches before doing any work."
git fetch upstream --tags
git fetch upstream --all --tags
# $VELERO_PATCH gets populated by the chk_version.go script that parses and verifies the given version format
# If we've got a patch release, we'll need to create a release branch for it.
if [[ "$VELERO_PATCH" > 0 ]]; then
release_branch_name=release-$VELERO_MAJOR.$VELERO_MINOR
@@ -130,38 +95,24 @@ if [[ "$VELERO_PATCH" -gt 0 ]]; then
echo "Now you'll need to cherry-pick any relevant git commits into this release branch."
echo "Either pause this script with ctrl-z, or open a new terminal window and do the cherry-picking."
if [[ $publish == "TRUE" ]]; then
read -p "Press enter when you're done cherry-picking. THIS WILL MAKE A TAG PUSH THE BRANCH TO upstream"
else
read -p "Press enter when you're done cherry-picking."
fi
read -p "Press enter when you're done cherry-picking. THIS WILL MAKE A TAG PUSH THE BRANCH TO UPSTREAM"
# TODO can/should we add a way to review the cherry-picked commits before the push?
if [[ $publish == "TRUE" ]]; then
echo "Pushing $release_branch_name to upstream remote"
git push --set-upstream upstream $release_branch_name
fi
echo "Pushing $release_branch_name to upstream remote"
git push --set-upstream upstream $release_branch_name
tag_and_push
else
echo "Checking out upstream/main."
git checkout upstream/main
echo "Checking out upstream/master."
git checkout upstream/master
tag_and_push
fi
echo "Invoking Goreleaser to create the GitHub release."
RELEASE_NOTES_FILE=changelogs/CHANGELOG-$VELERO_MAJOR.$VELERO_MINOR.md \
PUBLISH=$publish \
PUBLISH=TRUE \
make release
if [[ $publish == "FALSE" ]]; then
# Delete the local tag so we don't potentially conflict when it's re-run for real.
# This also means we won't have to just ignore existing tags in tag_and_push, which could be a problem if there's an existing tag.
echo "Dry run complete. Deleting git tag $VELERO_VERSION"
git checkout $upstream_branch
git tag -d $VELERO_VERSION
fi
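# Example invocations (script path hypothetical; before the change above, the optional
# 'publish' argument selected a real release over a dry-run):
#   VELERO_VERSION=v1.4.3 GITHUB_TOKEN=<token> ./hack/release-tools/tag-release.sh          # dry-run; local tag deleted at the end
#   VELERO_VERSION=v1.4.3 GITHUB_TOKEN=<token> ./hack/release-tools/tag-release.sh publish  # tags, pushes, and publishes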

View File

@@ -1,3 +0,0 @@
[
{ "op": "replace", "path": "/spec/validation/openAPIV3Schema/properties/spec/properties/hooks/properties/resources/items/properties/postHooks/items/properties/init/properties/initContainers/items/properties/ports/items/required", "value": [ "containerPort", "protocol"] }
]

View File

@@ -1,7 +1,6 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
# Modifications Copyright 2020 The Velero Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -24,7 +23,6 @@ export CGO_ENABLED=0
TARGETS=(
./cmd/...
./pkg/...
./internal/...
)
if [[ ${#@} -ne 0 ]]; then

View File

@@ -44,21 +44,9 @@ ${GOPATH}/src/k8s.io/code-generator/generate-groups.sh \
--output-base ../../.. \
$@
# Generate manifests e.g. CRD, RBAC etc.
controller-gen \
crd:crdVersions=v1beta1,preserveUnknownFields=false,trivialVersions=true \
rbac:roleName=manager-role \
paths=./pkg/apis/velero/v1/... \
paths=./pkg/controller/... \
output:crd:artifacts:config=config/crd/bases
crd:crdVersions=v1beta1,preserveUnknownFields=false \
output:dir=./pkg/generated/crds/manifests \
paths=./pkg/apis/velero/v1/...
# this is a super hacky workaround for https://github.com/kubernetes/kubernetes/issues/91395
# which a result of fixing the validation on CRD objects. The validation ensures the fields that are list map keys, are either marked
# as required or have default values to ensure merging of list map items work as expected.
# With "containerPort" and "protocol" being considered as x-kubernetes-list-map-keys in the container ports, and "protocol" was not
# a required field, the CRD would fail validation with errors similar to the one reported in https://github.com/kubernetes/kubernetes/issues/91395.
# once controller-gen (above) is able to generate CRDs with `protocol` as a required field, this hack can be removed.
kubectl patch -f config/crd/bases/velero.io_restores.yaml -p "$(cat hack/restore-crd-patch.json)" --type=json --local=true -o yaml > /tmp/velero.io_restores-yaml.patched
mv /tmp/velero.io_restores-yaml.patched config/crd/bases/velero.io_restores.yaml
go generate ./config/crd/crds
go generate ./pkg/generated/crds

View File

@@ -19,11 +19,11 @@ HACK_DIR=$(dirname "${BASH_SOURCE}")
${HACK_DIR}/update-generated-crd-code.sh --verify-only
# ensure no changes to generated CRDs
if ! git diff --exit-code config/crd/crds/crds.go >/dev/null; then
if ! git diff --exit-code pkg/generated/crds/crds.go >/dev/null; then
# revert changes to state before running CRD generation to stay consistent
# with code-generator `--verify-only` option which discards generated changes
git checkout config/crd
git checkout pkg/generated/crds
echo "CRD verification - failed! Generated CRDs are out-of-date, please run 'make update' and 'git add' the generated file(s)."
echo "CRD verification - failed! Generated CRDs are out-of-date, please run 'make update'."
exit 1
fi
fi

View File

@@ -1,199 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package delete
import (
"io"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/archive"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
// Context provides the necessary environment to run DeleteItemAction plugins
type Context struct {
Backup *velerov1api.Backup
BackupReader io.Reader
Actions []velero.DeleteItemAction
Filesystem filesystem.Interface
Log logrus.FieldLogger
DiscoveryHelper discovery.Helper
resolvedActions []resolvedAction
}
func InvokeDeleteActions(ctx *Context) error {
var err error
ctx.resolvedActions, err = resolveActions(ctx.Actions, ctx.DiscoveryHelper)
// No actions installed and no error means we don't have to continue;
// just do the backup deletion without worrying about plugins.
if len(ctx.resolvedActions) == 0 && err == nil {
ctx.Log.Debug("No delete item actions present, proceeding with rest of backup deletion process")
return nil
} else if err != nil {
return errors.Wrapf(err, "error resolving actions")
}
// get items out of backup tarball into a temp directory
dir, err := archive.NewExtractor(ctx.Log, ctx.Filesystem).UnzipAndExtractBackup(ctx.BackupReader)
if err != nil {
return errors.Wrapf(err, "error extracting backup")
}
defer ctx.Filesystem.RemoveAll(dir)
ctx.Log.Debugf("Downloaded and extracted the backup file to: %s", dir)
backupResources, err := archive.NewParser(ctx.Log, ctx.Filesystem).Parse(dir)
if err != nil {
return errors.Wrap(err, "error parsing backup contents")
}
processedResources := sets.NewString()
ctx.Log.Debugf("Trying to reconcile resource names with Kube API server.")
// Transform resource names based on what's canonical in the API server.
for resource := range backupResources {
gvr, _, err := ctx.DiscoveryHelper.ResourceFor(schema.ParseGroupResource(resource).WithVersion(""))
if err != nil {
return errors.Wrapf(err, "failed to resolve resource into complete group/version/resource: %v", resource)
}
groupResource := gvr.GroupResource()
// We've already seen this group/resource, so don't process it again.
if processedResources.Has(groupResource.String()) {
continue
}
// Get a list of all items that exist for this resource
resourceList := backupResources[groupResource.String()]
if resourceList == nil {
// After canonicalization from the API server, the resources may not exist in the tarball
// Skip them if that's the case.
continue
}
// Iterate over all items, grouped by namespace.
for namespace, items := range resourceList.ItemsByNamespace {
nsLog := ctx.Log.WithField("namespace", namespace)
nsLog.Info("Starting to check for items in namespace")
// Filter applicable actions based on namespace only once per namespace.
actions := ctx.getApplicableActions(groupResource, namespace)
// Process individual items from the backup
for _, item := range items {
itemPath := archive.GetItemFilePath(dir, resource, namespace, item)
// obj is the Unstructured item from the backup
obj, err := archive.Unmarshal(ctx.Filesystem, itemPath)
if err != nil {
return errors.Wrapf(err, "Could not unmarshal item: %v", item)
}
itemLog := nsLog.WithField("item", obj.GetName())
itemLog.Infof("invoking DeleteItemAction plugins")
for _, action := range actions {
if !action.selector.Matches(labels.Set(obj.GetLabels())) {
continue
}
err = action.Execute(&velero.DeleteItemActionExecuteInput{
Item: obj,
Backup: ctx.Backup,
})
// Since we want to keep looping even on errors, log them instead of just returning.
if err != nil {
itemLog.WithError(err).Error("plugin error")
}
}
}
}
}
return nil
}
// getApplicableActions takes resolved DeleteItemActions and filters them for a given group/resource and namespace.
func (ctx *Context) getApplicableActions(groupResource schema.GroupResource, namespace string) []resolvedAction {
var actions []resolvedAction
for _, action := range ctx.resolvedActions {
if !action.resourceIncludesExcludes.ShouldInclude(groupResource.String()) {
continue
}
if namespace != "" && !action.namespaceIncludesExcludes.ShouldInclude(namespace) {
continue
}
if namespace == "" && !action.namespaceIncludesExcludes.IncludeEverything() {
continue
}
actions = append(actions, action)
}
return actions
}
// resolvedActions are DeleteItemActions decorated with resource/namespace include/exclude collections, as well as label selectors for easy comparison.
type resolvedAction struct {
velero.DeleteItemAction
resourceIncludesExcludes *collections.IncludesExcludes
namespaceIncludesExcludes *collections.IncludesExcludes
selector labels.Selector
}
// resolveActions resolves the AppliesTo ResourceSelectors of DeleteItemActions plugins against the Kubernetes discovery API for fully-qualified names.
func resolveActions(actions []velero.DeleteItemAction, helper discovery.Helper) ([]resolvedAction, error) {
var resolved []resolvedAction
for _, action := range actions {
resourceSelector, err := action.AppliesTo()
if err != nil {
return nil, err
}
resources := collections.GetResourceIncludesExcludes(helper, resourceSelector.IncludedResources, resourceSelector.ExcludedResources)
namespaces := collections.NewIncludesExcludes().Includes(resourceSelector.IncludedNamespaces...).Excludes(resourceSelector.ExcludedNamespaces...)
selector := labels.Everything()
if resourceSelector.LabelSelector != "" {
if selector, err = labels.Parse(resourceSelector.LabelSelector); err != nil {
return nil, err
}
}
res := resolvedAction{
DeleteItemAction: action,
resourceIncludesExcludes: resources,
namespaceIncludesExcludes: namespaces,
selector: selector,
}
resolved = append(resolved, res)
}
return resolved, nil
}
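// For context, a minimal sketch of a DeleteItemAction plugin as seen from this
// package. The method set mirrors the recordResourcesAction test double in the
// test file below; the type name and log message are purely illustrative, and it
// assumes "k8s.io/apimachinery/pkg/api/meta" is imported.
type logDeleteAction struct {
    log logrus.FieldLogger
}

// AppliesTo returns the selector that resolveActions expands into the
// include/exclude collections consumed by getApplicableActions.
func (a *logDeleteAction) AppliesTo() (velero.ResourceSelector, error) {
    return velero.ResourceSelector{IncludedResources: []string{"persistentvolumes"}}, nil
}

// Execute is invoked once per matching item extracted from the backup tarball.
func (a *logDeleteAction) Execute(input *velero.DeleteItemActionExecuteInput) error {
    metadata, err := meta.Accessor(input.Item)
    if err != nil {
        return err
    }
    a.log.Infof("would clean up external state for %s", metadata.GetName())
    return nil
}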

View File

@@ -1,268 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package delete
import (
"context"
"io"
"sort"
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/test"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
)
func TestInvokeDeleteItemActionsRunForCorrectItems(t *testing.T) {
// Declare test-singleton objects.
fs := test.NewFakeFileSystem()
log := logrus.StandardLogger()
tests := []struct {
name string
backup *velerov1api.Backup
apiResources []*test.APIResource
tarball io.Reader
actions map[*recordResourcesAction][]string // recordResourceActions are the plugins that will capture item ids, the []string values are the ids we'll test against.
}{
{
name: "single action with no selector runs for all items",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumes", builder.ForPersistentVolume("pv-1").Result(), builder.ForPersistentVolume("pv-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction): {"ns-1/pod-1", "ns-2/pod-2", "pv-1", "pv-2"},
},
},
{
name: "single action with a resource selector for namespaced resources runs only for matching resources",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumes", builder.ForPersistentVolume("pv-1").Result(), builder.ForPersistentVolume("pv-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForResource("pods"): {"ns-1/pod-1", "ns-2/pod-2"},
},
},
{
name: "single action with a resource selector for cluster-scoped resources runs only for matching resources",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumes", builder.ForPersistentVolume("pv-1").Result(), builder.ForPersistentVolume("pv-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForResource("persistentvolumes"): {"pv-1", "pv-2"},
},
},
{
name: "single action with a namespace selector runs only for resources in that namespace",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumeclaims", builder.ForPersistentVolumeClaim("ns-1", "pvc-1").Result(), builder.ForPersistentVolumeClaim("ns-2", "pvc-2").Result()).
AddItems("persistentvolumes", builder.ForPersistentVolume("pv-1").Result(), builder.ForPersistentVolume("pv-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVCs(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForNamespace("ns-1"): {"ns-1/pod-1", "ns-1/pvc-1"},
},
},
{
name: "multiple actions, each with a different resource selector using short name, run for matching resources",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumeclaims", builder.ForPersistentVolumeClaim("ns-1", "pvc-1").Result(), builder.ForPersistentVolumeClaim("ns-2", "pvc-2").Result()).
AddItems("persistentvolumes", builder.ForPersistentVolume("pv-1").Result(), builder.ForPersistentVolume("pv-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVCs(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForResource("po"): {"ns-1/pod-1", "ns-2/pod-2"},
new(recordResourcesAction).ForResource("pv"): {"pv-1", "pv-2"},
},
},
{
name: "actions with selectors that don't match anything don't run for any resources",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").Result()).
AddItems("persistentvolumeclaims", builder.ForPersistentVolumeClaim("ns-2", "pvc-2").Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVCs(), test.PVs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForNamespace("ns-1").ForResource("persistentvolumeclaims"): nil,
new(recordResourcesAction).ForNamespace("ns-2").ForResource("pods"): nil,
},
},
{
name: "single action with label selector runs only for those items",
backup: builder.ForBackup("velero", "velero").Result(),
tarball: test.NewTarWriter(t).
AddItems("pods", builder.ForPod("ns-1", "pod-1").ObjectMeta(builder.WithLabels("app", "app1")).Result(), builder.ForPod("ns-2", "pod-2").Result()).
AddItems("persistentvolumeclaims", builder.ForPersistentVolumeClaim("ns-1", "pvc-1").Result(), builder.ForPersistentVolumeClaim("ns-2", "pvc-2").ObjectMeta(builder.WithLabels("app", "app1")).Result()).
Done(),
apiResources: []*test.APIResource{test.Pods(), test.PVCs()},
actions: map[*recordResourcesAction][]string{
new(recordResourcesAction).ForLabelSelector("app=app1"): {"ns-1/pod-1", "ns-2/pvc-2"},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
// test harness contains the fake API server/discovery client
h := newHarness(t)
for _, r := range tc.apiResources {
h.addResource(t, r)
}
// Get the plugins out of the map in order to use them.
actions := []velero.DeleteItemAction{}
for action := range tc.actions {
actions = append(actions, action)
}
c := &Context{
Backup: tc.backup,
BackupReader: tc.tarball,
Filesystem: fs,
DiscoveryHelper: h.discoveryHelper,
Actions: actions,
Log: log,
}
err := InvokeDeleteActions(c)
require.NoError(t, err)
// Compare the plugins against the ids that we wanted.
for action, want := range tc.actions {
sort.Strings(want)
sort.Strings(action.ids)
assert.Equal(t, want, action.ids)
}
})
}
}
// TODO: unify this with the test harness in pkg/restore/restore_test.go
type harness struct {
*test.APIServer
discoveryHelper discovery.Helper
}
func newHarness(t *testing.T) *harness {
t.Helper()
apiServer := test.NewAPIServer(t)
log := logrus.StandardLogger()
discoveryHelper, err := discovery.NewHelper(apiServer.DiscoveryClient, log)
require.NoError(t, err)
return &harness{
APIServer: apiServer,
discoveryHelper: discoveryHelper,
}
}
// addResource adds an APIResource and its items to a faked API server for testing.
func (h *harness) addResource(t *testing.T, resource *test.APIResource) {
t.Helper()
h.DiscoveryClient.WithAPIResource(resource)
require.NoError(t, h.discoveryHelper.Refresh())
for _, item := range resource.Items {
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(item)
require.NoError(t, err)
unstructuredObj := &unstructured.Unstructured{Object: obj}
if resource.Namespaced {
_, err = h.DynamicClient.Resource(resource.GVR()).Namespace(item.GetNamespace()).Create(context.TODO(), unstructuredObj, metav1.CreateOptions{})
} else {
_, err = h.DynamicClient.Resource(resource.GVR()).Create(context.TODO(), unstructuredObj, metav1.CreateOptions{})
}
require.NoError(t, err)
}
}
// recordResourcesAction is a delete item action that can be configured to run
// for specific resources/namespaces and simply record the items that it is
// executed for.
type recordResourcesAction struct {
selector velero.ResourceSelector
ids []string
}
func (a *recordResourcesAction) AppliesTo() (velero.ResourceSelector, error) {
return a.selector, nil
}
func (a *recordResourcesAction) Execute(input *velero.DeleteItemActionExecuteInput) error {
metadata, err := meta.Accessor(input.Item)
if err != nil {
return err
}
a.ids = append(a.ids, kubeutil.NamespaceAndName(metadata))
return nil
}
func (a *recordResourcesAction) ForResource(resource string) *recordResourcesAction {
a.selector.IncludedResources = append(a.selector.IncludedResources, resource)
return a
}
func (a *recordResourcesAction) ForNamespace(namespace string) *recordResourcesAction {
a.selector.IncludedNamespaces = append(a.selector.IncludedNamespaces, namespace)
return a
}
func (a *recordResourcesAction) ForLabelSelector(selector string) *recordResourcesAction {
a.selector.LabelSelector = selector
return a
}
func TestInvokeDeleteItemActionsWithNoPlugins(t *testing.T) {
c := &Context{
Backup: builder.ForBackup("velero", "velero").Result(),
Log: logrus.StandardLogger(),
// No other fields are set on the assumption that if 0 actions are present,
// the backup tarball and file system being empty will produce no errors.
}
err := InvokeDeleteActions(c)
require.NoError(t, err)
}

View File

@@ -1,520 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hook
import (
"encoding/json"
"fmt"
"strings"
"time"
uuid "github.com/gofrs/uuid"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
type hookPhase string
const (
PhasePre hookPhase = "pre"
PhasePost hookPhase = "post"
)
const (
// Backup hook annotations
podBackupHookContainerAnnotationKey = "hook.backup.velero.io/container"
podBackupHookCommandAnnotationKey = "hook.backup.velero.io/command"
podBackupHookOnErrorAnnotationKey = "hook.backup.velero.io/on-error"
podBackupHookTimeoutAnnotationKey = "hook.backup.velero.io/timeout"
// Restore hook annotations
podRestoreHookContainerAnnotationKey = "post.hook.restore.velero.io/container"
podRestoreHookCommandAnnotationKey = "post.hook.restore.velero.io/command"
podRestoreHookOnErrorAnnotationKey = "post.hook.restore.velero.io/on-error"
podRestoreHookTimeoutAnnotationKey = "post.hook.restore.velero.io/exec-timeout"
podRestoreHookWaitTimeoutAnnotationKey = "post.hook.restore.velero.io/wait-timeout"
podRestoreHookInitContainerImageAnnotationKey = "init.hook.restore.velero.io/container-image"
podRestoreHookInitContainerNameAnnotationKey = "init.hook.restore.velero.io/container-name"
podRestoreHookInitContainerCommandAnnotationKey = "init.hook.restore.velero.io/command"
podRestoreHookInitContainerTimeoutAnnotationKey = "init.hook.restore.velero.io/timeout"
)
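// For illustration (pod name, namespace, and command are hypothetical), a backup
// exec hook can be attached entirely through these annotations:
//   kubectl annotate pod/mypod -n myns \
//     pre.hook.backup.velero.io/container=app \
//     pre.hook.backup.velero.io/command='["/bin/sh","-c","sync"]' \
//     pre.hook.backup.velero.io/on-error=Continue \
//     pre.hook.backup.velero.io/timeout=30s
// The bare keys above, without a pre./post. prefix, are also honored as a legacy
// pre-hook form; see phasedKey and getPodExecHookFromAnnotations below.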
// ItemHookHandler invokes hooks for an item.
type ItemHookHandler interface {
// HandleHooks invokes hooks for an item. If the item is a pod and the appropriate annotations exist
// to specify a hook, that is executed. Otherwise, this looks at the backup context's Backup to
// determine if there are any hooks relevant to the item, taking into account the hook spec's
// namespaces, resources, and label selector.
HandleHooks(
log logrus.FieldLogger,
groupResource schema.GroupResource,
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
) error
}
// ItemRestoreHookHandler invokes restore hooks for an item
type ItemRestoreHookHandler interface {
HandleRestoreHooks(
log logrus.FieldLogger,
groupResource schema.GroupResource,
obj runtime.Unstructured,
rh []ResourceRestoreHook,
) (runtime.Unstructured, error)
}
// InitContainerRestoreHookHandler is the restore hook handler to add init containers to restored pods.
type InitContainerRestoreHookHandler struct{}
// HandleRestoreHooks runs the restore hooks for an item.
// If the item is a pod, then hooks are chosen to be run as follows:
// If the pod has the appropriate annotations specifying the hook action, then hooks from the annotation are run
// Otherwise, the supplied ResourceRestoreHooks are applied.
func (i *InitContainerRestoreHookHandler) HandleRestoreHooks(
log logrus.FieldLogger,
groupResource schema.GroupResource,
obj runtime.Unstructured,
resourceRestoreHooks []ResourceRestoreHook,
) (runtime.Unstructured, error) {
// We only support hooks on pods right now
if groupResource != kuberesource.Pods {
return nil, nil
}
metadata, err := meta.Accessor(obj)
if err != nil {
return nil, errors.Wrap(err, "unable to get a metadata accessor")
}
pod := new(corev1api.Pod)
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pod); err != nil {
return nil, errors.WithStack(err)
}
initContainers := []corev1api.Container{}
// If this pod had pod volumes backed up using restic, then we want the pod volumes restored prior to
// running the restore hook init containers. This allows the restore hook init containers to prepare the
// restored data to be consumed by the application container(s).
// So if there is a "restic-wait" init container already on the pod at index 0, we'll preserve that and run
// it before running any other init container.
if len(pod.Spec.InitContainers) > 0 && pod.Spec.InitContainers[0].Name == restic.InitContainer {
initContainers = append(initContainers, pod.Spec.InitContainers[0])
pod.Spec.InitContainers = pod.Spec.InitContainers[1:]
}
hooksFromAnnotations := getInitRestoreHookFromAnnotation(kube.NamespaceAndName(pod), metadata.GetAnnotations(), log)
if hooksFromAnnotations != nil {
log.Infof("Handling InitRestoreHooks from pod annotaions")
initContainers = append(initContainers, hooksFromAnnotations.InitContainers...)
} else {
log.Infof("Handling InitRestoreHooks from RestoreSpec")
// pod did not have the annotations appropriate for restore hooks
// running applicable ResourceRestoreHooks supplied.
namespace := metadata.GetNamespace()
labels := labels.Set(metadata.GetLabels())
for _, rh := range resourceRestoreHooks {
if !rh.Selector.applicableTo(groupResource, namespace, labels) {
continue
}
for _, hook := range rh.RestoreHooks {
if hook.Init != nil {
initContainers = append(initContainers, hook.Init.InitContainers...)
}
}
}
}
pod.Spec.InitContainers = append(initContainers, pod.Spec.InitContainers...)
log.Infof("Returning pod %s/%s with %d init container(s)", pod.Namespace, pod.Name, len(pod.Spec.InitContainers))
podMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&pod)
if err != nil {
return nil, errors.WithStack(err)
}
return &unstructured.Unstructured{Object: podMap}, nil
}
// DefaultItemHookHandler is the default itemHookHandler.
type DefaultItemHookHandler struct {
PodCommandExecutor podexec.PodCommandExecutor
}
func (h *DefaultItemHookHandler) HandleHooks(
log logrus.FieldLogger,
groupResource schema.GroupResource,
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
) error {
// We only support hooks on pods right now
if groupResource != kuberesource.Pods {
return nil
}
metadata, err := meta.Accessor(obj)
if err != nil {
return errors.Wrap(err, "unable to get a metadata accessor")
}
namespace := metadata.GetNamespace()
name := metadata.GetName()
// If the pod has the hook specified via annotations, that takes priority.
hookFromAnnotations := getPodExecHookFromAnnotations(metadata.GetAnnotations(), phase, log)
if phase == PhasePre && hookFromAnnotations == nil {
// See if the pod has the legacy hook annotation keys (i.e. without a phase specified)
hookFromAnnotations = getPodExecHookFromAnnotations(metadata.GetAnnotations(), "", log)
}
if hookFromAnnotations != nil {
hookLog := log.WithFields(
logrus.Fields{
"hookSource": "annotation",
"hookType": "exec",
"hookPhase": phase,
},
)
if err := h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, "<from-annotation>", hookFromAnnotations); err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hookFromAnnotations.OnError == velerov1api.HookErrorModeFail {
return err
}
}
return nil
}
labels := labels.Set(metadata.GetLabels())
// Otherwise, check for hooks defined in the backup spec.
for _, resourceHook := range resourceHooks {
if !resourceHook.Selector.applicableTo(groupResource, namespace, labels) {
continue
}
var hooks []velerov1api.BackupResourceHook
if phase == PhasePre {
hooks = resourceHook.Pre
} else {
hooks = resourceHook.Post
}
for _, hook := range hooks {
if groupResource == kuberesource.Pods {
if hook.Exec != nil {
hookLog := log.WithFields(
logrus.Fields{
"hookSource": "backupSpec",
"hookType": "exec",
"hookPhase": phase,
},
)
err := h.PodCommandExecutor.ExecutePodCommand(hookLog, obj.UnstructuredContent(), namespace, name, resourceHook.Name, hook.Exec)
if err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hook.Exec.OnError == velerov1api.HookErrorModeFail {
return err
}
}
}
}
}
}
return nil
}
func phasedKey(phase hookPhase, key string) string {
if phase != "" {
return fmt.Sprintf("%v.%v", phase, key)
}
return key
}
func getHookAnnotation(annotations map[string]string, key string, phase hookPhase) string {
return annotations[phasedKey(phase, key)]
}
// getPodExecHookFromAnnotations returns an ExecHook based on the annotations, as long as the
// 'command' annotation is present. If it is absent, this returns nil.
// If there is an error in parsing a supplied timeout, it is logged.
func getPodExecHookFromAnnotations(annotations map[string]string, phase hookPhase, log logrus.FieldLogger) *velerov1api.ExecHook {
commandValue := getHookAnnotation(annotations, podBackupHookCommandAnnotationKey, phase)
if commandValue == "" {
return nil
}
container := getHookAnnotation(annotations, podBackupHookContainerAnnotationKey, phase)
onError := velerov1api.HookErrorMode(getHookAnnotation(annotations, podBackupHookOnErrorAnnotationKey, phase))
if onError != velerov1api.HookErrorModeContinue && onError != velerov1api.HookErrorModeFail {
onError = ""
}
var timeout time.Duration
timeoutString := getHookAnnotation(annotations, podBackupHookTimeoutAnnotationKey, phase)
if timeoutString != "" {
if temp, err := time.ParseDuration(timeoutString); err == nil {
timeout = temp
} else {
log.Warn(errors.Wrapf(err, "Unable to parse provided timeout %s, using default", timeoutString))
}
}
return &velerov1api.ExecHook{
Container: container,
Command: parseStringToCommand(commandValue),
OnError: onError,
Timeout: metav1.Duration{Duration: timeout},
}
}
func parseStringToCommand(commandValue string) []string {
var command []string
// guard against an empty value so the index check below cannot panic
if commandValue == "" {
return command
}
// check for json array
if commandValue[0] == '[' {
if err := json.Unmarshal([]byte(commandValue), &command); err != nil {
command = []string{commandValue}
}
} else {
command = append(command, commandValue)
}
return command
}
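// A couple of illustrative inputs and results:
//   parseStringToCommand(`["/bin/sh","-c","echo done"]`) => []string{"/bin/sh", "-c", "echo done"}
//   parseStringToCommand(`/bin/sync`)                    => []string{"/bin/sync"}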
type ResourceHookSelector struct {
Namespaces *collections.IncludesExcludes
Resources *collections.IncludesExcludes
LabelSelector labels.Selector
}
// ResourceHook is a hook for a given resource.
type ResourceHook struct {
Name string
Selector ResourceHookSelector
Pre []velerov1api.BackupResourceHook
Post []velerov1api.BackupResourceHook
}
func (r ResourceHookSelector) applicableTo(groupResource schema.GroupResource, namespace string, labels labels.Set) bool {
if r.Namespaces != nil && !r.Namespaces.ShouldInclude(namespace) {
return false
}
if r.Resources != nil && !r.Resources.ShouldInclude(groupResource.String()) {
return false
}
if r.LabelSelector != nil && !r.LabelSelector.Matches(labels) {
return false
}
return true
}
// ResourceRestoreHook is a restore hook for a given resource.
type ResourceRestoreHook struct {
Name string
Selector ResourceHookSelector
RestoreHooks []velerov1api.RestoreResourceHook
}
func getInitRestoreHookFromAnnotation(podName string, annotations map[string]string, log logrus.FieldLogger) *velerov1api.InitRestoreHook {
containerImage := annotations[podRestoreHookInitContainerImageAnnotationKey]
containerName := annotations[podRestoreHookInitContainerNameAnnotationKey]
command := annotations[podRestoreHookInitContainerCommandAnnotationKey]
if containerImage == "" {
log.Infof("Pod %s has no %s annotation, no initRestoreHook in annotation", podName, podRestoreHookInitContainerImageAnnotationKey)
return nil
}
if command == "" {
log.Infof("RestoreHook init contianer for pod %s is using container's default entrypoint", podName, containerImage)
}
if containerName == "" {
uid, err := uuid.NewV4()
uuidStr := "deadfeed"
if err != nil {
log.Errorf("Failed to generate UUID for container name")
} else {
uuidStr = strings.Split(uid.String(), "-")[0]
}
containerName = fmt.Sprintf("velero-restore-init-%s", uuidStr)
log.Infof("Pod %s has no %s annotation, using generated name %s for initContainer", podName, podRestoreHookInitContainerNameAnnotationKey, containerName)
}
return &velerov1api.InitRestoreHook{
InitContainers: []corev1api.Container{
{
Image: containerImage,
Name: containerName,
Command: parseStringToCommand(command),
},
},
}
}
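// Illustrative annotations (all values hypothetical) that yield a restore init container:
//   init.hook.restore.velero.io/container-image: busybox:1.32
//   init.hook.restore.velero.io/container-name: restore-prep
//   init.hook.restore.velero.io/command: ["/bin/sh","-c","chown -R 1000 /data"]
// Omitting container-name falls back to a generated velero-restore-init-<uuid-prefix> name.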
// GetRestoreHooksFromSpec returns a list of ResourceRestoreHooks from the restore Spec.
func GetRestoreHooksFromSpec(hooksSpec *velerov1api.RestoreHooks) ([]ResourceRestoreHook, error) {
if hooksSpec == nil {
return []ResourceRestoreHook{}, nil
}
restoreHooks := make([]ResourceRestoreHook, 0, len(hooksSpec.Resources))
for _, rs := range hooksSpec.Resources {
rh := ResourceRestoreHook{
Name: rs.Name,
Selector: ResourceHookSelector{
Namespaces: collections.NewIncludesExcludes().Includes(rs.IncludedNamespaces...).Excludes(rs.ExcludedNamespaces...),
// these hooks are applicable only to pods.
// TODO: resolve the pods resource via discovery?
Resources: collections.NewIncludesExcludes().Includes(kuberesource.Pods.Resource),
},
// TODO does this work for ExecRestoreHook as well?
RestoreHooks: rs.PostHooks,
}
if rs.LabelSelector != nil {
ls, err := metav1.LabelSelectorAsSelector(rs.LabelSelector)
if err != nil {
return nil, errors.WithStack(err)
}
rh.Selector.LabelSelector = ls
}
restoreHooks = append(restoreHooks, rh)
}
return restoreHooks, nil
}
// getPodExecRestoreHookFromAnnotations returns an ExecRestoreHook based on restore annotations, as
// long as the 'command' annotation is present. If it is absent, this returns nil.
func getPodExecRestoreHookFromAnnotations(annotations map[string]string, log logrus.FieldLogger) *velerov1api.ExecRestoreHook {
commandValue := annotations[podRestoreHookCommandAnnotationKey]
if commandValue == "" {
return nil
}
container := annotations[podRestoreHookContainerAnnotationKey]
onError := velerov1api.HookErrorMode(annotations[podRestoreHookOnErrorAnnotationKey])
if onError != velerov1api.HookErrorModeContinue && onError != velerov1api.HookErrorModeFail {
onError = ""
}
var execTimeout time.Duration
execTimeoutString := annotations[podRestoreHookTimeoutAnnotationKey]
if execTimeoutString != "" {
if temp, err := time.ParseDuration(execTimeoutString); err == nil {
execTimeout = temp
} else {
log.Warn(errors.Wrapf(err, "Unable to parse exec timeout %s, ignoring", execTimeoutString))
}
}
var waitTimeout time.Duration
waitTimeoutString := annotations[podRestoreHookWaitTimeoutAnnotationKey]
if waitTimeoutString != "" {
if temp, err := time.ParseDuration(waitTimeoutString); err == nil {
waitTimeout = temp
} else {
log.Warn(errors.Wrapf(err, "Unable to parse wait timeout %s, ignoring", waitTimeoutString))
}
}
return &velerov1api.ExecRestoreHook{
Container: container,
Command: parseStringToCommand(commandValue),
OnError: onError,
ExecTimeout: metav1.Duration{Duration: execTimeout},
WaitTimeout: metav1.Duration{Duration: waitTimeout},
}
}
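// Illustrative annotations (all values hypothetical) for a post-restore exec hook:
//   post.hook.restore.velero.io/container: app
//   post.hook.restore.velero.io/command: ["/bin/sh","-c","/scripts/fixup.sh"]
//   post.hook.restore.velero.io/exec-timeout: 45s
//   post.hook.restore.velero.io/wait-timeout: 5m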
type PodExecRestoreHook struct {
HookName string
HookSource string
Hook velerov1api.ExecRestoreHook
executed bool
}
// GroupRestoreExecHooks returns a list of hooks to be executed in a pod grouped by
// container name. If an exec hook is defined in annotation that is used, else applicable exec
// hooks from the restore resource are accumulated.
func GroupRestoreExecHooks(
resourceRestoreHooks []ResourceRestoreHook,
pod *corev1api.Pod,
log logrus.FieldLogger,
) (map[string][]PodExecRestoreHook, error) {
byContainer := map[string][]PodExecRestoreHook{}
if pod == nil || len(pod.Spec.Containers) == 0 {
return byContainer, nil
}
metadata, err := meta.Accessor(pod)
if err != nil {
return nil, errors.WithStack(err)
}
hookFromAnnotation := getPodExecRestoreHookFromAnnotations(metadata.GetAnnotations(), log)
if hookFromAnnotation != nil {
// default to first container in pod if unset
if hookFromAnnotation.Container == "" {
hookFromAnnotation.Container = pod.Spec.Containers[0].Name
}
byContainer[hookFromAnnotation.Container] = []PodExecRestoreHook{
{
HookName: "<from-annotation>",
HookSource: "annotation",
Hook: *hookFromAnnotation,
},
}
return byContainer, nil
}
// No hook found on pod's annotations so check for applicable hooks from the restore spec
labels := metadata.GetLabels()
namespace := metadata.GetNamespace()
for _, rrh := range resourceRestoreHooks {
if !rrh.Selector.applicableTo(kuberesource.Pods, namespace, labels) {
continue
}
for _, rh := range rrh.RestoreHooks {
if rh.Exec == nil {
continue
}
named := PodExecRestoreHook{
HookName: rrh.Name,
Hook: *rh.Exec,
HookSource: "backupSpec",
}
// default to first container in pod if unset, without mutating resource restore hook
if named.Hook.Container == "" {
named.Hook.Container = pod.Spec.Containers[0].Name
}
byContainer[named.Hook.Container] = append(byContainer[named.Hook.Container], named)
}
}
return byContainer, nil
}

File diff suppressed because it is too large.

View File

@@ -1,275 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hook
import (
"context"
"fmt"
"time"
"github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/cache"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
type WaitExecHookHandler interface {
HandleHooks(
ctx context.Context,
log logrus.FieldLogger,
pod *v1.Pod,
byContainer map[string][]PodExecRestoreHook,
) []error
}
type ListWatchFactory interface {
NewListWatch(namespace string, selector fields.Selector) cache.ListerWatcher
}
type DefaultListWatchFactory struct {
PodsGetter cache.Getter
}
func (d *DefaultListWatchFactory) NewListWatch(namespace string, selector fields.Selector) cache.ListerWatcher {
return cache.NewListWatchFromClient(d.PodsGetter, "pods", namespace, selector)
}
var _ ListWatchFactory = &DefaultListWatchFactory{}
type DefaultWaitExecHookHandler struct {
ListWatchFactory ListWatchFactory
PodCommandExecutor podexec.PodCommandExecutor
}
var _ WaitExecHookHandler = &DefaultWaitExecHookHandler{}
func (e *DefaultWaitExecHookHandler) HandleHooks(
ctx context.Context,
log logrus.FieldLogger,
pod *v1.Pod,
byContainer map[string][]PodExecRestoreHook,
) []error {
if pod == nil {
return nil
}
// If hooks are defined for a container that does not exist in the pod, log a warning and discard
// those hooks to avoid waiting for a container that will never become ready. After that, if
// there are no hooks left to be executed, return immediately.
for containerName := range byContainer {
if !podHasContainer(pod, containerName) {
log.Warningf("Pod %s does not have container %s: discarding post-restore exec hooks", kube.NamespaceAndName(pod), containerName)
delete(byContainer, containerName)
}
}
if len(byContainer) == 0 {
return nil
}
// Every hook in every container can have its own wait timeout. Rather than setting up separate
// contexts for each, find the largest wait timeout for any hook that should be executed in
// the pod and watch the pod for up to that long. Before executing any hook in a container,
// check if that hook has a timeout and skip execution if expired.
ctx, cancel := context.WithCancel(ctx)
maxWait := maxHookWait(byContainer)
// If no hook has a wait timeout then this function will continue waiting for containers to
// become ready until the shared hook context is canceled.
if maxWait > 0 {
ctx, cancel = context.WithTimeout(ctx, maxWait)
}
waitStart := time.Now()
var errors []error
// The first time this handler is called after a container starts running it will execute all
// pending hooks for that container. Subsequent invocations of this handler will never execute
// hooks in that container. It uses the byContainer map to keep track of which containers have
// not yet been observed to be running. It relies on the Informer not to be called concurrently.
// When a container is observed running and its hooks are executed, the container is deleted
// from the byContainer map. When the map is empty the watch is ended.
handler := func(newObj interface{}) {
newPod, ok := newObj.(*v1.Pod)
if !ok {
return
}
podLog := log.WithFields(
logrus.Fields{
"pod": kube.NamespaceAndName(newPod),
},
)
if newPod.Status.Phase == v1.PodSucceeded || newPod.Status.Phase == v1.PodFailed {
err := fmt.Errorf("Pod entered phase %s before some post-restore exec hooks ran", newPod.Status.Phase)
podLog.Warning(err)
cancel()
return
}
for containerName, hooks := range byContainer {
if !isContainerRunning(newPod, containerName) {
podLog.Infof("Container %s is not running: post-restore hooks will not yet be executed", containerName)
continue
}
podMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(newPod)
if err != nil {
podLog.WithError(err).Error("error unstructuring pod")
cancel()
return
}
// Sequentially run all hooks for the ready container. The container's hooks are not
// removed from the byContainer map until all have completed so that if one fails
// remaining unexecuted hooks can be handled by the outer function.
for i, hook := range hooks {
// This indicates to the outer function not to handle this hook as unexecuted in
// case of terminating before deleting this container's slice of hooks from the
// byContainer map.
byContainer[containerName][i].executed = true
hookLog := podLog.WithFields(
logrus.Fields{
"hookSource": hook.HookSource,
"hookType": "exec",
"hookPhase": "post",
},
)
// Check the individual hook's wait timeout is not expired
if hook.Hook.WaitTimeout.Duration != 0 && time.Since(waitStart) > hook.Hook.WaitTimeout.Duration {
err := fmt.Errorf("Hook %s in container %s expired before executing", hook.HookName, hook.Hook.Container)
hookLog.Error(err)
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
cancel()
return
}
}
eh := &velerov1api.ExecHook{
Container: hook.Hook.Container,
Command: hook.Hook.Command,
OnError: hook.Hook.OnError,
Timeout: hook.Hook.ExecTimeout,
}
if err := e.PodCommandExecutor.ExecutePodCommand(hookLog, podMap, pod.Namespace, pod.Name, hook.HookName, eh); err != nil {
hookLog.WithError(err).Error("Error executing hook")
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
cancel()
return
}
}
}
delete(byContainer, containerName)
}
if len(byContainer) == 0 {
cancel()
}
}
selector := fields.OneTermEqualSelector("metadata.name", pod.Name)
lw := e.ListWatchFactory.NewListWatch(pod.Namespace, selector)
_, podWatcher := cache.NewInformer(lw, pod, 0, cache.ResourceEventHandlerFuncs{
AddFunc: handler,
UpdateFunc: func(_, newObj interface{}) {
handler(newObj)
},
DeleteFunc: func(obj interface{}) {
err := fmt.Errorf("Pod %s deleted before all hooks were executed", kube.NamespaceAndName(pod))
log.Error(err)
cancel()
},
})
podWatcher.Run(ctx.Done())
// There are some cases where this function could return with unexecuted hooks: the pod may
// be deleted, a hook with OnError mode Fail could fail, or it may timeout waiting for
// containers to become ready.
// Each unexecuted hook is logged as an error but only hooks with OnError mode Fail return
// an error from this function.
for _, hooks := range byContainer {
for _, hook := range hooks {
if hook.executed {
continue
}
err := fmt.Errorf("Hook %s in container %s in pod %s not executed: %v", hook.HookName, hook.Hook.Container, kube.NamespaceAndName(pod), ctx.Err())
hookLog := log.WithFields(
logrus.Fields{
"hookSource": hook.HookSource,
"hookType": "exec",
"hookPhase": "post",
},
)
hookLog.Error(err)
if hook.Hook.OnError == velerov1api.HookErrorModeFail {
errors = append(errors, err)
}
}
}
return errors
}
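// Illustrative usage sketch, not part of this change: the restore flow is
// assumed to construct the handler and then block until every post-restore
// exec hook has run, failed, or timed out. Names other than the exported
// types above are hypothetical.
//
//	h := &DefaultWaitExecHookHandler{
//		PodCommandExecutor: podCommandExecutor,
//		ListWatchFactory:   listWatchFactory,
//	}
//	errs := h.HandleHooks(ctx, log, pod, byContainer)
//	// errs contains only failures from hooks with OnError mode Fail.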
func podHasContainer(pod *v1.Pod, containerName string) bool {
if pod == nil {
return false
}
for _, c := range pod.Spec.Containers {
if c.Name == containerName {
return true
}
}
return false
}
func isContainerRunning(pod *v1.Pod, containerName string) bool {
if pod == nil {
return false
}
for _, cs := range pod.Status.ContainerStatuses {
if cs.Name != containerName {
continue
}
return cs.State.Running != nil
}
return false
}
// maxHookWait returns 0 to mean wait indefinitely. Any hook without a wait timeout will cause this
// function to return 0.
func maxHookWait(byContainer map[string][]PodExecRestoreHook) time.Duration {
var maxWait time.Duration
for _, hooks := range byContainer {
for _, hook := range hooks {
if hook.Hook.WaitTimeout.Duration <= 0 {
return 0
}
if hook.Hook.WaitTimeout.Duration > maxWait {
maxWait = hook.Hook.WaitTimeout.Duration
}
}
}
return maxWait
}
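// Worked example for maxHookWait (hypothetical values, for illustration only):
//
//	byContainer := map[string][]PodExecRestoreHook{
//		"app": {
//			{Hook: velerov1api.ExecRestoreHook{WaitTimeout: metav1.Duration{Duration: time.Minute}}},
//			{Hook: velerov1api.ExecRestoreHook{WaitTimeout: metav1.Duration{Duration: time.Hour}}},
//		},
//	}
//	maxHookWait(byContainer) // time.Hour; any timeout <= 0 would make this 0 (wait indefinitely)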

@@ -1,949 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hook
import (
"context"
"testing"
"time"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/cache"
fcache "k8s.io/client-go/tools/cache/testing"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
type fakeListWatchFactory struct {
lw cache.ListerWatcher
}
func (f *fakeListWatchFactory) NewListWatch(ns string, selector fields.Selector) cache.ListerWatcher {
return f.lw
}
var _ ListWatchFactory = &fakeListWatchFactory{}
func TestWaitExecHandleHooks(t *testing.T) {
type change struct {
// delta to wait since last change applied or pod added
wait time.Duration
updated *v1.Pod
}
type expectedExecution struct {
hook *velerov1api.ExecHook
name string
error error
pod *v1.Pod
}
tests := []struct {
name string
// Used as argument to HandleHooks and first state added to ListerWatcher
initialPod *v1.Pod
groupResource string
byContainer map[string][]PodExecRestoreHook
expectedExecutions []expectedExecution
expectedErrors []error
// changes represents the states of the pod over time. It can be used to test a container
// becoming ready at some point after it is first observed by the controller.
changes []change
sharedHooksContextTimeout time.Duration
}{
{
name: "should return no error when hook from annotation executes successfully",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{time.Minute},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "<from-annotation>",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
expectedErrors: nil,
},
{
name: "should return an error when hook from annotation fails with on error mode fail",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeFail),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{time.Minute},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "<from-annotation>",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
Timeout: metav1.Duration{time.Second},
},
error: errors.New("pod hook error"),
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeFail),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
expectedErrors: []error{errors.New("pod hook error")},
},
{
name: "should return no error when hook from annotation fails with on error mode continue",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{time.Minute},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "<from-annotation>",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{time.Second},
},
error: errors.New("pod hook error"),
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
expectedErrors: nil,
},
{
name: "should return no error when hook from annotation executes after 10ms wait for container to start",
initialPod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
groupResource: "pods",
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "<from-annotation>",
HookSource: "annotation",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{time.Minute},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "<from-annotation>",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
Timeout: metav1.Duration{time.Second},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("2")).
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
expectedErrors: nil,
changes: []change{
{
wait: 10 * time.Millisecond,
updated: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithAnnotations(
podRestoreHookCommandAnnotationKey, "/usr/bin/foo",
podRestoreHookContainerAnnotationKey, "container1",
podRestoreHookOnErrorAnnotationKey, string(velerov1api.HookErrorModeContinue),
podRestoreHookTimeoutAnnotationKey, "1s",
podRestoreHookWaitTimeoutAnnotationKey, "1m",
)).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
},
{
name: "should return no error when hook from spec executes successfully",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
expectedErrors: nil,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "my-hook-1",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
},
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("1")).
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
},
{
name: "should return no error when spec hook with wait timeout expires with OnError mode Continue",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
expectedErrors: nil,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
WaitTimeout: metav1.Duration{time.Millisecond},
},
},
},
},
expectedExecutions: []expectedExecution{},
},
{
name: "should return an error when spec hook with wait timeout expires with OnError mode Fail",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
expectedErrors: []error{errors.New("Hook my-hook-1 in container container1 in pod default/my-pod not executed: context deadline exceeded")},
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
WaitTimeout: metav1.Duration{time.Millisecond},
},
},
},
},
expectedExecutions: []expectedExecution{},
},
{
name: "should return an error when shared hooks context is canceled before spec hook with OnError mode Fail executes",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
expectedErrors: []error{errors.New("Hook my-hook-1 in container container1 in pod default/my-pod not executed: context deadline exceeded")},
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeFail,
},
},
},
},
expectedExecutions: []expectedExecution{},
sharedHooksContextTimeout: time.Millisecond,
},
{
name: "should return no error when shared hooks context is canceled before spec hook with OnError mode Continue executes",
expectedErrors: nil,
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
OnError: velerov1api.HookErrorModeContinue,
},
},
},
},
expectedExecutions: []expectedExecution{},
sharedHooksContextTimeout: time.Millisecond,
},
{
name: "shoudl return no error with 2 spec hooks in 2 different containers, 1st container starts running after 10ms, 2nd container after 20ms, both succeed",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
// initially both are waiting
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
expectedErrors: nil,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
},
},
},
"container2": {
{
HookName: "my-hook-1",
HookSource: "backupSpec",
Hook: velerov1api.ExecRestoreHook{
Container: "container2",
Command: []string{"/usr/bin/bar"},
},
},
},
},
expectedExecutions: []expectedExecution{
{
name: "my-hook-1",
hook: &velerov1api.ExecHook{
Container: "container1",
Command: []string{"/usr/bin/foo"},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("2")).
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
// container 2 is still waiting when the first hook executes in container1
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
},
{
name: "my-hook-1",
hook: &velerov1api.ExecHook{
Container: "container2",
Command: []string{"/usr/bin/bar"},
},
error: nil,
pod: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("3")).
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
changes: []change{
// 1st modification: container1 starts running, resourceVersion 2, container2 still waiting
{
wait: 10 * time.Millisecond,
updated: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("2")).
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
},
// 2nd modification: container2 starts running, resourceVersion 3
{
wait: 10 * time.Millisecond,
updated: builder.ForPod("default", "my-pod").
ObjectMeta(builder.WithResourceVersion("3")).
Containers(&v1.Container{
Name: "container1",
}).
Containers(&v1.Container{
Name: "container2",
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
ContainerStatuses(&v1.ContainerStatus{
Name: "container2",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
source := fcache.NewFakeControllerSource()
go func() {
// This is the state of the pod that will be seen by the AddFunc handler.
source.Add(test.initialPod)
// Changes holds the versions of the pod over time. Each of these states
// will be seen by the UpdateFunc handler.
for _, change := range test.changes {
time.Sleep(change.wait)
source.Modify(change.updated)
}
}()
podCommandExecutor := &velerotest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
h := &DefaultWaitExecHookHandler{
PodCommandExecutor: podCommandExecutor,
ListWatchFactory: &fakeListWatchFactory{source},
}
for _, e := range test.expectedExecutions {
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(e.pod)
assert.Nil(t, err)
podCommandExecutor.On("ExecutePodCommand", mock.Anything, obj, e.pod.Namespace, e.pod.Name, e.name, e.hook).Return(e.error)
}
ctx := context.Background()
if test.sharedHooksContextTimeout > 0 {
ctx, _ = context.WithTimeout(ctx, test.sharedHooksContextTimeout)
}
errs := h.HandleHooks(ctx, velerotest.NewLogger(), test.initialPod, test.byContainer)
require.Len(t, errs, len(test.expectedErrors))
for i, ee := range test.expectedErrors {
assert.EqualError(t, errs[i], ee.Error())
}
})
}
}
func TestPodHasContainer(t *testing.T) {
tests := []struct {
name string
pod *v1.Pod
container string
expect bool
}{
{
name: "has container",
expect: true,
container: "container1",
pod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container1",
}).
Result(),
},
{
name: "does not have container",
expect: false,
container: "container1",
pod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{
Name: "container2",
}).
Result(),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
actual := podHasContainer(test.pod, test.container)
assert.Equal(t, test.expect, actual)
})
}
}
func TestIsContainerRunning(t *testing.T) {
tests := []struct {
name string
pod *v1.Pod
container string
expect bool
}{
{
name: "should return true when running",
container: "container1",
expect: true,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
{
name: "should return false when no state is set",
container: "container1",
expect: false,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{},
}).
Result(),
},
{
name: "should return false when waiting",
container: "container1",
expect: false,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{},
},
}).
Result(),
},
{
name: "should return true when running and first container is terminated",
container: "container1",
expect: true,
pod: builder.ForPod("default", "my-pod").
ContainerStatuses(&v1.ContainerStatus{
Name: "container0",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{},
},
},
&v1.ContainerStatus{
Name: "container1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
}).
Result(),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
actual := isContainerRunning(test.pod, test.container)
assert.Equal(t, test.expect, actual)
})
}
}
func TestMaxHookWait(t *testing.T) {
tests := []struct {
name string
byContainer map[string][]PodExecRestoreHook
expect time.Duration
}{
{
name: "should return 0 for nil map",
byContainer: nil,
expect: 0,
},
{
name: "should return 0 if all hooks are 0 or negative",
expect: 0,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
Hook: velerov1api.ExecRestoreHook{
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{0},
},
},
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{-1},
},
},
},
},
},
{
name: "should return biggest wait timeout from multiple hooks in multiple containers",
expect: time.Hour,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{time.Second},
},
},
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{time.Second},
},
},
},
"container2": {
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{time.Hour},
},
},
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{time.Minute},
},
},
},
},
},
{
name: "should return 0 if any hook does not have a wait timeout",
expect: 0,
byContainer: map[string][]PodExecRestoreHook{
"container1": {
{
Hook: velerov1api.ExecRestoreHook{
ExecTimeout: metav1.Duration{time.Second},
WaitTimeout: metav1.Duration{time.Second},
},
},
{
Hook: velerov1api.ExecRestoreHook{
WaitTimeout: metav1.Duration{0},
},
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
actual := maxHookWait(test.byContainer)
assert.Equal(t, test.expect, actual)
})
}
}

@@ -1,96 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package storage
import (
"context"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
// DefaultBackupLocationInfo holds server default backup storage location information
type DefaultBackupLocationInfo struct {
// StorageLocation is the name of the backup storage location designated as the default
StorageLocation string
// StoreValidationFrequency is the default validation frequency for any backup storage location
StoreValidationFrequency time.Duration
}
// IsReadyToValidate calculates if a given backup storage location is ready to be validated.
//
// Rules:
// Users can choose a validation frequency per location. This overrides the server's default value.
// To skip/stop validation, set the frequency to zero.
// This will always return "true" for the first attempt at validating a location, regardless of its validation frequency setting
// Otherwise, it returns "ready" only when NOW is equal to or after the next validation time
// (next validation time: last validation time + validation frequency)
func IsReadyToValidate(bslValidationFrequency *metav1.Duration, lastValidationTime *metav1.Time, defaultLocationInfo DefaultBackupLocationInfo, log logrus.FieldLogger) bool {
validationFrequency := defaultLocationInfo.StoreValidationFrequency
// If the bsl validation frequency is not specifically set, skip this block and continue, using the server's default
if bslValidationFrequency != nil {
validationFrequency = bslValidationFrequency.Duration
}
if validationFrequency < 0 {
log.Debugf("Validation period must be non-negative, changing from %d to %d", validationFrequency, defaultLocationInfo.StoreValidationFrequency)
validationFrequency = defaultLocationInfo.StoreValidationFrequency
}
lastValidation := lastValidationTime
if lastValidation == nil {
// Regardless of validation frequency, we want to validate all BSLs at least once.
return true
}
if validationFrequency == 0 {
// Validation was disabled so return false.
log.Debug("Validation period for this backup location is set to 0, skipping validation")
return false
}
// We want to validate the BSL only if the configured validation frequency/interval has elapsed.
nextValidation := lastValidation.Add(validationFrequency) // next validation time: last validation time + validation frequency
if time.Now().UTC().Before(nextValidation) { // ready only when NOW is equal to or after the next validation time
return false
}
return true
}
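// Illustrative usage sketch, not part of this diff: a location controller is
// assumed to gate each validation pass on this check and to record the time of
// the last successful validation. The location and defaultInfo variables are
// hypothetical.
//
//	if IsReadyToValidate(location.Spec.ValidationFrequency, location.Status.LastValidationTime, defaultInfo, log) {
//		// validate the backup store, then update location.Status.LastValidationTime
//	}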
// ListBackupStorageLocations verifies if there are any backup storage locations.
// An error is returned either if fetching the items fails or if no items exist,
// since the system cannot function without at least one backup storage location.
func ListBackupStorageLocations(ctx context.Context, kbClient client.Client, namespace string) (velerov1api.BackupStorageLocationList, error) {
var locations velerov1api.BackupStorageLocationList
if err := kbClient.List(ctx, &locations, &client.ListOptions{
Namespace: namespace,
}); err != nil {
return velerov1api.BackupStorageLocationList{}, err
}
if len(locations.Items) == 0 {
return velerov1api.BackupStorageLocationList{}, errors.New("no backup storage locations found")
}
return locations, nil
}
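// Illustrative usage sketch (assumed caller, not from this diff): because an
// empty list is surfaced as an error, callers only need a single error check.
//
//	locations, err := ListBackupStorageLocations(ctx, kbClient, namespace)
//	if err != nil {
//		return err // list call failed or no locations exist
//	}
//	for i := range locations.Items {
//		// process locations.Items[i]
//	}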

@@ -1,176 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package storage
import (
"context"
"testing"
"time"
. "github.com/onsi/gomega"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/scheme"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
func TestIsReadyToValidate(t *testing.T) {
tests := []struct {
name string
bslValidationFrequency *metav1.Duration
lastValidationTime *metav1.Time
defaultLocationInfo DefaultBackupLocationInfo
ready bool
}{
{
name: "validate when true when validation frequency is zero and lastValidationTime is nil",
bslValidationFrequency: &metav1.Duration{Duration: 0},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: true,
},
{
name: "don't validate when false when validation is disabled and lastValidationTime is not nil",
bslValidationFrequency: &metav1.Duration{Duration: 0},
lastValidationTime: &metav1.Time{Time: time.Now()},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: false,
},
{
name: "validate as per location setting, as that takes precedence, and always if it has never been validated before regardless of the frequency setting",
bslValidationFrequency: &metav1.Duration{Duration: 1 * time.Hour},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: true,
},
{
name: "don't validate as per location setting, as it is set to zero and that takes precedence",
bslValidationFrequency: &metav1.Duration{Duration: 0},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 1,
},
lastValidationTime: &metav1.Time{Time: time.Now()},
ready: false,
},
{
name: "validate as per default setting when location setting is not set",
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 1,
},
ready: true,
},
{
name: "don't validate when default setting is set to zero and the location setting is not set",
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
lastValidationTime: &metav1.Time{Time: time.Now()},
ready: false,
},
{
name: "don't validate when now is before the NEXT validation time (validation frequency + last validation time)",
bslValidationFrequency: &metav1.Duration{Duration: 1 * time.Second},
lastValidationTime: &metav1.Time{Time: time.Now()},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: false,
},
{
name: "validate when now is equal to the NEXT validation time (validation frequency + last validation time)",
bslValidationFrequency: &metav1.Duration{Duration: 1 * time.Second},
lastValidationTime: &metav1.Time{Time: time.Now().Add(-1 * time.Second)},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: true,
},
{
name: "validate when now is after the NEXT validation time (validation frequency + last validation time)",
bslValidationFrequency: &metav1.Duration{Duration: 1 * time.Second},
lastValidationTime: &metav1.Time{Time: time.Now().Add(-2 * time.Second)},
defaultLocationInfo: DefaultBackupLocationInfo{
StoreValidationFrequency: 0,
},
ready: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
g := NewWithT(t)
log := velerotest.NewLogger()
actual := IsReadyToValidate(tt.bslValidationFrequency, tt.lastValidationTime, tt.defaultLocationInfo, log)
g.Expect(actual).To(BeIdenticalTo(tt.ready))
})
}
}
func TestListBackupStorageLocations(t *testing.T) {
tests := []struct {
name string
backupLocations *velerov1api.BackupStorageLocationList
expectError bool
}{
{
name: "1 existing location does not return an error",
backupLocations: &velerov1api.BackupStorageLocationList{
Items: []velerov1api.BackupStorageLocation{
*builder.ForBackupStorageLocation("ns-1", "location-1").Result(),
},
},
expectError: false,
},
{
name: "multiple existing location does not return an error",
backupLocations: &velerov1api.BackupStorageLocationList{
Items: []velerov1api.BackupStorageLocation{
*builder.ForBackupStorageLocation("ns-1", "location-1").Result(),
*builder.ForBackupStorageLocation("ns-1", "location-2").Result(),
*builder.ForBackupStorageLocation("ns-1", "location-3").Result(),
},
},
expectError: false,
},
{
name: "no existing locations returns an error",
backupLocations: &velerov1api.BackupStorageLocationList{},
expectError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
g := NewWithT(t)
client := fake.NewFakeClientWithScheme(scheme.Scheme, tt.backupLocations)
if tt.expectError {
_, err := ListBackupStorageLocations(context.Background(), client, "ns-1")
g.Expect(err).NotTo(BeNil())
} else {
_, err := ListBackupStorageLocations(context.Background(), client, "ns-1")
g.Expect(err).To(BeNil())
}
})
}
}

@@ -1,54 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// TODO(2.0) After converting all controllers to controller-runtime,
// the functions in this file will no longer be needed and should be removed.
package managercontroller
import (
"context"
"sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/vmware-tanzu/velero/pkg/controller"
)
// Runnable will turn a "regular" runnable component (such as a controller)
// into a controller-runtime Runnable
func Runnable(p controller.Interface, numWorkers int) manager.Runnable {
f := func(stop <-chan struct{}) error {
// Create a cancel context for handling the stop signal.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// If a signal is received on the stop channel, cancel the
// context. This will propagate the cancel into the p.Run
// function below.
go func() {
select {
case <-stop:
cancel()
case <-ctx.Done():
}
}()
// This is a blocking call that either completes
// or is cancellable on receiving a stop signal.
return p.Run(ctx, numWorkers)
}
return manager.RunnableFunc(f)
}
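// Illustrative usage sketch, not part of this change: registering a legacy
// controller with a controller-runtime manager. The mgr, backupController, and
// numWorkers variables are hypothetical.
//
//	if err := mgr.Add(managercontroller.Runnable(backupController, numWorkers)); err != nil {
//		return err
//	}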

@@ -1,73 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package velero
import (
"context"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/clock"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/buildinfo"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
)
// ServerStatus holds information for retrieving installed
// plugins and for updating the ServerStatusRequest timestamp.
type ServerStatus struct {
PluginRegistry PluginLister
Clock clock.Clock
}
// PatchStatusProcessed patches status fields, including loading the plugin info, and updates
// the ServerStatusRequest.Status.Phase to ServerStatusRequestPhaseProcessed.
func (s *ServerStatus) PatchStatusProcessed(kbClient client.Client, req *velerov1api.ServerStatusRequest, ctx context.Context) error {
statusPatch := client.MergeFrom(req.DeepCopyObject())
req.Status.ServerVersion = buildinfo.Version
req.Status.Phase = velerov1api.ServerStatusRequestPhaseProcessed
req.Status.ProcessedTimestamp = &metav1.Time{Time: s.Clock.Now()}
req.Status.Plugins = getInstalledPluginInfo(s.PluginRegistry)
if err := kbClient.Status().Patch(ctx, req, statusPatch); err != nil {
return errors.WithStack(err)
}
return nil
}
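// Illustrative usage sketch (assumed wiring, not from this diff): a
// ServerStatusRequest controller would process a pending request roughly like
// this, using the real clock in production.
//
//	status := &ServerStatus{PluginRegistry: pluginRegistry, Clock: clock.RealClock{}}
//	if err := status.PatchStatusProcessed(kbClient, req, ctx); err != nil {
//		// requeue the request or log the failure
//	}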
type PluginLister interface {
// List returns all PluginIdentifiers for kind.
List(kind framework.PluginKind) []framework.PluginIdentifier
}
func getInstalledPluginInfo(pluginLister PluginLister) []velerov1api.PluginInfo {
var plugins []velerov1api.PluginInfo
for _, v := range framework.AllPluginKinds() {
list := pluginLister.List(v)
for _, plugin := range list {
pluginInfo := velerov1api.PluginInfo{
Name: plugin.Name,
Kind: plugin.Kind.String(),
}
plugins = append(plugins, pluginInfo)
}
}
return plugins
}

@@ -1,202 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package velero
import (
"context"
"sort"
"testing"
"time"
. "github.com/onsi/gomega"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/util/clock"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/buildinfo"
"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/scheme"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
)
func statusRequestBuilder(resourceVersion string) *builder.ServerStatusRequestBuilder {
return builder.ForServerStatusRequest(velerov1api.DefaultNamespace, "sr-1", resourceVersion)
}
func TestPatchStatusProcessed(t *testing.T) {
// now will be used to set the fake clock's time; capture
// it here so it can be referenced in the test case defs.
now, err := time.Parse(time.RFC1123, time.RFC1123)
require.NoError(t, err)
now = now.Local()
buildinfo.Version = "test-version-val"
tests := []struct {
name string
req *velerov1api.ServerStatusRequest
reqPluginLister *fakePluginLister
expected *velerov1api.ServerStatusRequest
}{
{
name: "server status request with empty phase gets processed",
req: statusRequestBuilder("0").Result(),
reqPluginLister: &fakePluginLister{
plugins: []framework.PluginIdentifier{
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
},
},
expected: statusRequestBuilder("1").
ServerVersion(buildinfo.Version).
Phase(velerov1api.ServerStatusRequestPhaseProcessed).
ProcessedTimestamp(now).
Plugins([]velerov1api.PluginInfo{
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
}).
Result(),
},
{
name: "server status request with phase=New gets processed",
req: statusRequestBuilder("0").
Phase(velerov1api.ServerStatusRequestPhaseNew).
Result(),
reqPluginLister: &fakePluginLister{
plugins: []framework.PluginIdentifier{
{
Name: "velero.io/aws",
Kind: "ObjectStore",
},
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
},
},
expected: statusRequestBuilder("1").
ServerVersion(buildinfo.Version).
Phase(velerov1api.ServerStatusRequestPhaseProcessed).
ProcessedTimestamp(now).
Plugins([]velerov1api.PluginInfo{
{
Name: "velero.io/aws",
Kind: "ObjectStore",
},
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
}).
Result(),
},
{
name: "server status request with phase=Processed gets processed",
req: statusRequestBuilder("0").
Phase(velerov1api.ServerStatusRequestPhaseProcessed).
Result(),
reqPluginLister: &fakePluginLister{
plugins: []framework.PluginIdentifier{
{
Name: "velero.io/aws",
Kind: "ObjectStore",
},
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
},
},
expected: statusRequestBuilder("1").
ServerVersion(buildinfo.Version).
Phase(velerov1api.ServerStatusRequestPhaseProcessed).
ProcessedTimestamp(now).
Plugins([]velerov1api.PluginInfo{
{
Name: "velero.io/aws",
Kind: "ObjectStore",
},
{
Name: "custom.io/myown",
Kind: "VolumeSnapshotter",
},
}).
Result(),
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
g := NewWithT(t)
serverStatusInfo := ServerStatus{
PluginRegistry: tc.reqPluginLister,
Clock: clock.NewFakeClock(now),
}
kbClient := fake.NewFakeClientWithScheme(scheme.Scheme, tc.req)
err := serverStatusInfo.PatchStatusProcessed(kbClient, tc.req, context.Background())
assert.Nil(t, err)
key := client.ObjectKey{Name: tc.req.Name, Namespace: tc.req.Namespace}
instance := &velerov1api.ServerStatusRequest{}
err = kbClient.Get(context.Background(), key, instance)
if tc.expected == nil {
g.Expect(apierrors.IsNotFound(err)).To(BeTrue())
} else {
sortPluginsByKindAndName(tc.expected.Status.Plugins)
sortPluginsByKindAndName(instance.Status.Plugins)
g.Expect(instance.Status.Plugins).To(BeEquivalentTo(tc.expected.Status.Plugins))
g.Expect(instance).To(BeEquivalentTo(tc.expected))
g.Expect(err).To(BeNil())
}
})
}
}
func sortPluginsByKindAndName(plugins []velerov1api.PluginInfo) {
sort.Slice(plugins, func(i, j int) bool {
if plugins[i].Kind != plugins[j].Kind {
return plugins[i].Kind < plugins[j].Kind
}
return plugins[i].Name < plugins[j].Name
})
}
type fakePluginLister struct {
plugins []framework.PluginIdentifier
}
func (l *fakePluginLister) List(kind framework.PluginKind) []framework.PluginIdentifier {
var plugins []framework.PluginIdentifier
for _, plugin := range l.plugins {
if plugin.Kind == kind {
plugins = append(plugins, plugin)
}
}
return plugins
}

@@ -1,12 +1,4 @@
[build]
base = "site/"
command = "hugo --gc --minify --baseURL $BASE_URL"
publish = "site/public"
[context.production.environment]
HUGO_VERSION = "0.73.0"
BASE_URL = "https://velero.io"
[context.deploy-preview.environment]
HUGO_VERSION = "0.73.0"
BASE_URL = "https://deploy-preview-2720--velero.netlify.app"
command = "jekyll build"
publish = "site/_site"

@@ -1,5 +1,5 @@
/*
Copyright 2020 the Velero contributors.
Copyright 2017, 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -82,19 +82,6 @@ type BackupSpec struct {
// VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
// +optional
VolumeSnapshotLocations []string `json:"volumeSnapshotLocations,omitempty"`
// DefaultVolumesToRestic specifies whether restic should be used to take a
// backup of all pod volumes by default.
// +optional
// +nullable
DefaultVolumesToRestic *bool `json:"defaultVolumesToRestic,omitempty"`
// OrderedResources specifies the backup order of resources of specific Kind.
// The map key is the Kind name and the value is a list of resource names separated by commas.
// Each resource name has format "namespace/resourcename". For cluster resources, simply use "resourcename".
// +optional
// +nullable
OrderedResources map[string]string `json:"orderedResources,omitempty"`
}
// BackupHooks contains custom behaviors that should be executed at different phases of the backup.

@@ -0,0 +1,147 @@
/*
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
)
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackupStorageLocation is a location where Velero stores backup objects.
type BackupStorageLocation struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec BackupStorageLocationSpec `json:"spec,omitempty"`
// +optional
Status BackupStorageLocationStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackupStorageLocationList is a list of BackupStorageLocations.
type BackupStorageLocationList struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackupStorageLocation `json:"items"`
}
// StorageType represents the type of storage that a backup location uses.
// ObjectStorage must be non-nil, since it is currently the only supported StorageType.
type StorageType struct {
ObjectStorage *ObjectStorageLocation `json:"objectStorage"`
}
// ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
type ObjectStorageLocation struct {
// Bucket is the bucket to use for object storage.
Bucket string `json:"bucket"`
// Prefix is the path inside a bucket to use for Velero storage. Optional.
// +optional
Prefix string `json:"prefix,omitempty"`
// CACert defines a CA bundle to use when verifying TLS connections to the provider.
// +optional
CACert []byte `json:"caCert,omitempty"`
}
// BackupStorageLocationSpec defines the specification for a Velero BackupStorageLocation.
type BackupStorageLocationSpec struct {
// Provider is the provider of the backup storage.
Provider string `json:"provider"`
// Config is for provider-specific configuration fields.
// +optional
Config map[string]string `json:"config,omitempty"`
StorageType `json:",inline"`
// AccessMode defines the permissions for the backup storage location.
// +optional
AccessMode BackupStorageLocationAccessMode `json:"accessMode,omitempty"`
// BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync.
// +optional
// +nullable
BackupSyncPeriod *metav1.Duration `json:"backupSyncPeriod,omitempty"`
}
// BackupStorageLocationPhase is the lifecycle phase of a Velero BackupStorageLocation.
// +kubebuilder:validation:Enum=Available;Unavailable
type BackupStorageLocationPhase string
const (
// BackupStorageLocationPhaseAvailable means the location is available to read and write from.
BackupStorageLocationPhaseAvailable BackupStorageLocationPhase = "Available"
// BackupStorageLocationPhaseUnavailable means the location is unavailable to read and write from.
BackupStorageLocationPhaseUnavailable BackupStorageLocationPhase = "Unavailable"
)
// BackupStorageLocationAccessMode represents the permissions for a BackupStorageLocation.
// +kubebuilder:validation:Enum=ReadOnly;ReadWrite
type BackupStorageLocationAccessMode string
const (
// BackupStorageLocationAccessModeReadOnly represents read-only access to a BackupStorageLocation.
BackupStorageLocationAccessModeReadOnly BackupStorageLocationAccessMode = "ReadOnly"
// BackupStorageLocationAccessModeReadWrite represents read and write access to a BackupStorageLocation.
BackupStorageLocationAccessModeReadWrite BackupStorageLocationAccessMode = "ReadWrite"
)
// TODO(2.0): remove the AccessMode field from BackupStorageLocationStatus.
// TODO(2.0): remove the LastSyncedRevision field from BackupStorageLocationStatus.
// BackupStorageLocationStatus describes the current status of a Velero BackupStorageLocation.
type BackupStorageLocationStatus struct {
// Phase is the current state of the BackupStorageLocation.
// +optional
Phase BackupStorageLocationPhase `json:"phase,omitempty"`
// LastSyncedTime is the last time the contents of the location were synced into
// the cluster.
// +optional
// +nullable
LastSyncedTime *metav1.Time `json:"lastSyncedTime,omitempty"`
// LastSyncedRevision is the value of the `metadata/revision` file in the backup
// storage location the last time the BSL's contents were synced into the cluster.
//
// Deprecated: this field is no longer updated or used for detecting changes to
// the location's contents and will be removed entirely in v2.0.
// +optional
LastSyncedRevision types.UID `json:"lastSyncedRevision,omitempty"`
// AccessMode is an unused field.
//
// Deprecated: there is now an AccessMode field on the Spec and this field
// will be removed entirely as of v2.0.
// +optional
AccessMode BackupStorageLocationAccessMode `json:"accessMode,omitempty"`
}

@@ -1,166 +0,0 @@
/*
Copyright 2017, 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
)
// BackupStorageLocationSpec defines the desired state of a Velero BackupStorageLocation
type BackupStorageLocationSpec struct {
// Provider is the provider of the backup storage.
Provider string `json:"provider"`
// Config is for provider-specific configuration fields.
// +optional
Config map[string]string `json:"config,omitempty"`
StorageType `json:",inline"`
// AccessMode defines the permissions for the backup storage location.
// +optional
AccessMode BackupStorageLocationAccessMode `json:"accessMode,omitempty"`
// BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync.
// +optional
// +nullable
BackupSyncPeriod *metav1.Duration `json:"backupSyncPeriod,omitempty"`
// ValidationFrequency defines how frequently to validate the corresponding object storage. A value of 0 disables validation.
// +optional
// +nullable
ValidationFrequency *metav1.Duration `json:"validationFrequency,omitempty"`
}
// BackupStorageLocationStatus defines the observed state of BackupStorageLocation
type BackupStorageLocationStatus struct {
// Phase is the current state of the BackupStorageLocation.
// +optional
Phase BackupStorageLocationPhase `json:"phase,omitempty"`
// LastSyncedTime is the last time the contents of the location were synced into
// the cluster.
// +optional
// +nullable
LastSyncedTime *metav1.Time `json:"lastSyncedTime,omitempty"`
// LastValidationTime is the last time the backup store location was validated.
// +optional
// +nullable
LastValidationTime *metav1.Time `json:"lastValidationTime,omitempty"`
// LastSyncedRevision is the value of the `metadata/revision` file in the backup
// storage location the last time the BSL's contents were synced into the cluster.
//
// Deprecated: this field is no longer updated or used for detecting changes to
// the location's contents and will be removed entirely in v2.0.
// +optional
LastSyncedRevision types.UID `json:"lastSyncedRevision,omitempty"`
// AccessMode is an unused field.
//
// Deprecated: there is now an AccessMode field on the Spec and this field
// will be removed entirely as of v2.0.
// +optional
AccessMode BackupStorageLocationAccessMode `json:"accessMode,omitempty"`
}
// TODO(2.0) After converting all resources to use the controller-runtime client,
// the genclient and k8s:deepcopy markers will no longer be needed and should be removed.
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=bsl
// +kubebuilder:object:generate=true
// +kubebuilder:storageversion
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase",description="Backup Storage Location status such as Available/Unavailable"
// +kubebuilder:printcolumn:name="Last Validated",type="date",JSONPath=".status.lastValidationTime",description="LastValidationTime is the last time the backup store location was validated"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// BackupStorageLocation is a location where Velero stores backup objects
type BackupStorageLocation struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec BackupStorageLocationSpec `json:"spec,omitempty"`
Status BackupStorageLocationStatus `json:"status,omitempty"`
}
// TODO(2.0) After converting all resources to use the controller-runtime client,
// the k8s:deepcopy marker will no longer be needed and should be removed.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:rbac:groups=velero.io,resources=backupstoragelocations,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=velero.io,resources=backupstoragelocations/status,verbs=get;update;patch
// BackupStorageLocationList contains a list of BackupStorageLocation
type BackupStorageLocationList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackupStorageLocation `json:"items"`
}
// StorageType represents the type of storage that a backup location uses.
// ObjectStorage must be non-nil, since it is currently the only supported StorageType.
type StorageType struct {
ObjectStorage *ObjectStorageLocation `json:"objectStorage"`
}
// ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
type ObjectStorageLocation struct {
// Bucket is the bucket to use for object storage.
Bucket string `json:"bucket"`
// Prefix is the path inside a bucket to use for Velero storage. Optional.
// +optional
Prefix string `json:"prefix,omitempty"`
// CACert defines a CA bundle to use when verifying TLS connections to the provider.
// +optional
CACert []byte `json:"caCert,omitempty"`
}
// BackupStorageLocationPhase is the lifecycle phase of a Velero BackupStorageLocation.
// +kubebuilder:validation:Enum=Available;Unavailable
// +kubebuilder:default=Unavailable
type BackupStorageLocationPhase string
const (
// BackupStorageLocationPhaseAvailable means the location is available to read and write from.
BackupStorageLocationPhaseAvailable BackupStorageLocationPhase = "Available"
// BackupStorageLocationPhaseUnavailable means the location is unavailable to read and write from.
BackupStorageLocationPhaseUnavailable BackupStorageLocationPhase = "Unavailable"
)
// BackupStorageLocationAccessMode represents the permissions for a BackupStorageLocation.
// +kubebuilder:validation:Enum=ReadOnly;ReadWrite
type BackupStorageLocationAccessMode string
const (
// BackupStorageLocationAccessModeReadOnly represents read-only access to a BackupStorageLocation.
BackupStorageLocationAccessModeReadOnly BackupStorageLocationAccessMode = "ReadOnly"
// BackupStorageLocationAccessModeReadWrite represents read and write access to a BackupStorageLocation.
BackupStorageLocationAccessModeReadWrite BackupStorageLocationAccessMode = "ReadWrite"
)
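// Illustrative sketch, not part of this diff: a controller might use the
// spec-level AccessMode (the field referenced by the deprecation note above)
// to refuse writes to read-only locations.
func isReadOnly(loc *BackupStorageLocation) bool {
	return loc.Spec.AccessMode == BackupStorageLocationAccessModeReadOnly
}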
// TODO(2.0): remove the AccessMode field from BackupStorageLocationStatus.
// TODO(2.0): remove the LastSyncedRevision field from BackupStorageLocationStatus.


@@ -1,36 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1 contains API Schema definitions for the velero v1 API group
// +kubebuilder:object:generate=true
// +groupName=velero.io
package v1
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
// SchemeGroupVersion is the group version used to register these objects
SchemeGroupVersion = schema.GroupVersion{Group: "velero.io", Version: "v1"}
// SchemeBuilder is used to add Go types to the GroupVersionKind scheme
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)


@@ -22,6 +22,20 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
// SchemeBuilder collects the scheme builder functions for the Velero API
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
// AddToScheme applies the SchemeBuilder functions to a specified scheme
AddToScheme = SchemeBuilder.AddToScheme
)
// GroupName is the group name for the Velero API
const GroupName = "velero.io"
// SchemeGroupVersion is the GroupVersion for the Velero API
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
// Resource gets a Velero GroupResource for a specified resource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
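}
// Illustrative sketch, not part of this diff: typical use of this file's
// API is to register the Velero types into a scheme and to build
// GroupResource values for error reporting.
func exampleRegister() error {
	scheme := runtime.NewScheme()
	if err := AddToScheme(scheme); err != nil {
		return err
	}
	gr := Resource("backups") // {Group: "velero.io", Resource: "backups"}
	_ = gr
	return nil
}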


@@ -16,10 +16,7 @@ limitations under the License.
package v1
import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// RestoreSpec defines the specification for a Velero restore.
type RestoreSpec struct {
@@ -83,100 +80,6 @@ type RestoreSpec struct {
// +optional
// +nullable
IncludeClusterResources *bool `json:"includeClusterResources,omitempty"`
// Hooks represent custom behaviors that should be executed during or post restore.
// +optional
Hooks RestoreHooks `json:"hooks,omitempty"`
}
// RestoreHooks contains custom behaviors that should be executed during or post restore.
type RestoreHooks struct {
Resources []RestoreResourceHookSpec `json:"resources,omitempty"`
}
// RestoreResourceHookSpec defines one or more RestoreResourceHooks that should be executed based on
// the rules defined for namespaces, resources, and label selector.
type RestoreResourceHookSpec struct {
// Name is the name of this hook.
Name string `json:"name"`
// IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
// to all namespaces.
// +optional
// +nullable
IncludedNamespaces []string `json:"includedNamespaces,omitempty"`
// ExcludedNamespaces specifies the namespaces to which this hook spec does not apply.
// +optional
// +nullable
ExcludedNamespaces []string `json:"excludedNamespaces,omitempty"`
// IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
// to all resources.
// +optional
// +nullable
IncludedResources []string `json:"includedResources,omitempty"`
// ExcludedResources specifies the resources to which this hook spec does not apply.
// +optional
// +nullable
ExcludedResources []string `json:"excludedResources,omitempty"`
// LabelSelector, if specified, filters the resources to which this hook spec applies.
// +optional
// +nullable
LabelSelector *metav1.LabelSelector `json:"labelSelector,omitempty"`
// PostHooks is a list of RestoreResourceHooks to execute during and after restoring a resource.
// +optional
PostHooks []RestoreResourceHook `json:"postHooks,omitempty"`
}
// RestoreResourceHook defines a restore hook for a resource.
type RestoreResourceHook struct {
// Exec defines an exec restore hook.
Exec *ExecRestoreHook `json:"exec,omitempty"`
// Init defines an init restore hook.
Init *InitRestoreHook `json:"init,omitempty"`
}
// ExecRestoreHook is a hook that uses the pod exec API to execute a command inside a container in a pod.
type ExecRestoreHook struct {
// Container is the container in the pod where the command should be executed. If not specified,
// the pod's first container is used.
// +optional
Container string `json:"container,omitempty"`
// Command is the command and arguments to execute from within a container after a pod has been restored.
// +kubebuilder:validation:MinItems=1
Command []string `json:"command"`
// OnError specifies how Velero should behave if it encounters an error executing this hook.
// +optional
OnError HookErrorMode `json:"onError,omitempty"`
// ExecTimeout defines the maximum amount of time Velero should wait for the hook to complete before
// considering the execution a failure.
// +optional
ExecTimeout metav1.Duration `json:"execTimeout,omitempty"`
// WaitTimeout defines the maximum amount of time Velero should wait for the container to be Ready
// before attempting to run the command.
// +optional
WaitTimeout metav1.Duration `json:"waitTimeout,omitempty"`
}
// InitRestoreHook is a hook that adds an init container to a PodSpec to run commands before the
// workload pod is able to start.
type InitRestoreHook struct {
// InitContainers is a list of init containers to be added to a pod during its restore.
// +optional
InitContainers []v1.Container `json:"initContainers"`
// Timeout defines the maximum amount of time Velero should wait for the initContainers to complete.
// +optional
Timeout metav1.Duration `json:"timeout,omitempty"`
}
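// Illustrative sketch, not part of this diff: wiring the hook types above
// into a RestoreHooks value. The hook name, namespace, container, and
// command are hypothetical; HookErrorModeContinue is the existing
// continue-on-error mode defined alongside the backup hook types.
func exampleRestoreHooks() RestoreHooks {
	return RestoreHooks{
		Resources: []RestoreResourceHookSpec{{
			Name:               "reload-config",
			IncludedNamespaces: []string{"app"},
			PostHooks: []RestoreResourceHook{{
				Exec: &ExecRestoreHook{
					Container: "app",
					Command:   []string{"/bin/sh", "-c", "kill -HUP 1"},
					OnError:   HookErrorModeContinue,
				},
			}},
		}},
	}
}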
// RestorePhase is a string representation of the lifecycle phase
@@ -234,19 +137,6 @@ type RestoreStatus struct {
// FailureReason is an error that caused the entire restore to fail.
// +optional
FailureReason string `json:"failureReason,omitempty"`
// StartTimestamp records the time the restore operation was started.
// The server's time is used for StartTimestamps.
// +optional
// +nullable
StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`
// CompletionTimestamp records the time the restore operation was completed.
// Completion time is recorded even on failed restores.
// The server's time is used for CompletionTimestamps.
// +optional
// +nullable
CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`
}
// +genclient


@@ -16,12 +16,7 @@ limitations under the License.
package v1
import (
"fmt"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// ScheduleSpec defines the specification for a Velero schedule
type ScheduleSpec struct {
@@ -100,8 +95,3 @@ type ScheduleList struct {
Items []Schedule `json:"items"`
}
// TimestampedName returns the default backup name format based on the schedule
func (s *Schedule) TimestampedName(timestamp time.Time) string {
return fmt.Sprintf("%s-%s", s.Name, timestamp.Format("20060102150405"))
}
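// Illustrative sketch, not part of this diff: for a Schedule named "daily",
// TimestampedName produces default backup names like "daily-20201020150405".
func exampleTimestampedName() string {
	s := Schedule{ObjectMeta: metav1.ObjectMeta{Name: "daily"}}
	return s.TimestampedName(time.Date(2020, 10, 20, 15, 4, 5, 0, time.UTC))
}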


@@ -0,0 +1,94 @@
/*
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ServerStatusRequest is a request to access current status information about
// the Velero server.
type ServerStatusRequest struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec ServerStatusRequestSpec `json:"spec,omitempty"`
// +optional
Status ServerStatusRequestStatus `json:"status,omitempty"`
}
// ServerStatusRequestSpec is the specification for a ServerStatusRequest.
type ServerStatusRequestSpec struct {
}
// ServerStatusRequestPhase represents the lifecycle phase of a ServerStatusRequest.
// +kubebuilder:validation:Enum=New;Processed
type ServerStatusRequestPhase string
const (
// ServerStatusRequestPhaseNew means the ServerStatusRequest has not been processed yet.
ServerStatusRequestPhaseNew ServerStatusRequestPhase = "New"
// ServerStatusRequestPhaseProcessed means the ServerStatusRequest has been processed.
ServerStatusRequestPhaseProcessed ServerStatusRequestPhase = "Processed"
)
// PluginInfo contains attributes of a Velero plugin
type PluginInfo struct {
Name string `json:"name"`
Kind string `json:"kind"`
}
// ServerStatusRequestStatus is the current status of a ServerStatusRequest.
type ServerStatusRequestStatus struct {
// Phase is the current lifecycle phase of the ServerStatusRequest.
// +optional
Phase ServerStatusRequestPhase `json:"phase,omitempty"`
// ProcessedTimestamp is when the ServerStatusRequest was processed
// by the ServerStatusRequestController.
// +optional
// +nullable
ProcessedTimestamp *metav1.Time `json:"processedTimestamp,omitempty"`
// ServerVersion is the Velero server version.
// +optional
ServerVersion string `json:"serverVersion,omitempty"`
// Plugins lists information about the plugins running on the Velero server
// +optional
// +nullable
Plugins []PluginInfo `json:"plugins,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ServerStatusRequestList is a list of ServerStatusRequests.
type ServerStatusRequestList struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ListMeta `json:"metadata,omitempty"`
Items []ServerStatusRequest `json:"items"`
}
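// Illustrative sketch, not part of this diff: clients typically create an
// empty ServerStatusRequest and poll its status until it has been processed.
func isProcessed(req *ServerStatusRequest) bool {
	return req.Status.Phase == ServerStatusRequestPhaseProcessed
}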


@@ -1,106 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// TODO(2.0) After converting all resources to use the runtime-controller client,
// the genclient and k8s:deepcopy markers will no longer be needed and should be removed.
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=ssr
// +kubebuilder:object:generate=true
// +kubebuilder:storageversion
// +kubebuilder:subresource:status
// ServerStatusRequest is a request to access current status information about
// the Velero server.
type ServerStatusRequest struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec ServerStatusRequestSpec `json:"spec,omitempty"`
// +optional
Status ServerStatusRequestStatus `json:"status,omitempty"`
}
// ServerStatusRequestSpec is the specification for a ServerStatusRequest.
type ServerStatusRequestSpec struct {
}
// ServerStatusRequestPhase represents the lifecycle phase of a ServerStatusRequest.
// +kubebuilder:validation:Enum=New;Processed
type ServerStatusRequestPhase string
const (
// ServerStatusRequestPhaseNew means the ServerStatusRequest has not been processed yet.
ServerStatusRequestPhaseNew ServerStatusRequestPhase = "New"
// ServerStatusRequestPhaseProcessed means the ServerStatusRequest has been processed.
ServerStatusRequestPhaseProcessed ServerStatusRequestPhase = "Processed"
)
// PluginInfo contains attributes of a Velero plugin
type PluginInfo struct {
Name string `json:"name"`
Kind string `json:"kind"`
}
// ServerStatusRequestStatus is the current status of a ServerStatusRequest.
type ServerStatusRequestStatus struct {
// Phase is the current lifecycle phase of the ServerStatusRequest.
// +optional
Phase ServerStatusRequestPhase `json:"phase,omitempty"`
// ProcessedTimestamp is when the ServerStatusRequest was processed
// by the ServerStatusRequestController.
// +optional
// +nullable
ProcessedTimestamp *metav1.Time `json:"processedTimestamp,omitempty"`
// ServerVersion is the Velero server version.
// +optional
ServerVersion string `json:"serverVersion,omitempty"`
// Plugins lists information about the plugins running on the Velero server
// +optional
// +nullable
Plugins []PluginInfo `json:"plugins,omitempty"`
}
// TODO(2.0) After converting all resources to use the runtime-controller client,
// the k8s:deepcopy marker will no longer be needed and should be removed.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:rbac:groups=velero.io,resources=serverstatusrequests,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=velero.io,resources=serverstatusrequests/status,verbs=get;update;patch
// ServerStatusRequestList is a list of ServerStatusRequests.
type ServerStatusRequestList struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ListMeta `json:"metadata,omitempty"`
Items []ServerStatusRequest `json:"items"`
}


@@ -21,7 +21,6 @@ limitations under the License.
package v1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
@@ -247,18 +246,6 @@ func (in *BackupSpec) DeepCopyInto(out *BackupSpec) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.DefaultVolumesToRestic != nil {
in, out := &in.DefaultVolumesToRestic, &out.DefaultVolumesToRestic
*out = new(bool)
**out = **in
}
if in.OrderedResources != nil {
in, out := &in.OrderedResources, &out.OrderedResources
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
return
}
@@ -387,11 +374,6 @@ func (in *BackupStorageLocationSpec) DeepCopyInto(out *BackupStorageLocationSpec
*out = new(metav1.Duration)
**out = **in
}
if in.ValidationFrequency != nil {
in, out := &in.ValidationFrequency, &out.ValidationFrequency
*out = new(metav1.Duration)
**out = **in
}
return
}
@@ -412,10 +394,6 @@ func (in *BackupStorageLocationStatus) DeepCopyInto(out *BackupStorageLocationSt
in, out := &in.LastSyncedTime, &out.LastSyncedTime
*out = (*in).DeepCopy()
}
if in.LastValidationTime != nil {
in, out := &in.LastValidationTime, &out.LastValidationTime
*out = (*in).DeepCopy()
}
return
}
@@ -663,53 +641,6 @@ func (in *ExecHook) DeepCopy() *ExecHook {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExecRestoreHook) DeepCopyInto(out *ExecRestoreHook) {
*out = *in
if in.Command != nil {
in, out := &in.Command, &out.Command
*out = make([]string, len(*in))
copy(*out, *in)
}
out.ExecTimeout = in.ExecTimeout
out.WaitTimeout = in.WaitTimeout
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecRestoreHook.
func (in *ExecRestoreHook) DeepCopy() *ExecRestoreHook {
if in == nil {
return nil
}
out := new(ExecRestoreHook)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InitRestoreHook) DeepCopyInto(out *InitRestoreHook) {
*out = *in
if in.InitContainers != nil {
in, out := &in.InitContainers, &out.InitContainers
*out = make([]corev1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.Timeout = in.Timeout
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InitRestoreHook.
func (in *InitRestoreHook) DeepCopy() *InitRestoreHook {
if in == nil {
return nil
}
out := new(InitRestoreHook)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectStorageLocation) DeepCopyInto(out *ObjectStorageLocation) {
*out = *in
@@ -1102,29 +1033,6 @@ func (in *Restore) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreHooks) DeepCopyInto(out *RestoreHooks) {
*out = *in
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = make([]RestoreResourceHookSpec, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreHooks.
func (in *RestoreHooks) DeepCopy() *RestoreHooks {
if in == nil {
return nil
}
out := new(RestoreHooks)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreList) DeepCopyInto(out *RestoreList) {
*out = *in
@@ -1158,80 +1066,6 @@ func (in *RestoreList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreResourceHook) DeepCopyInto(out *RestoreResourceHook) {
*out = *in
if in.Exec != nil {
in, out := &in.Exec, &out.Exec
*out = new(ExecRestoreHook)
(*in).DeepCopyInto(*out)
}
if in.Init != nil {
in, out := &in.Init, &out.Init
*out = new(InitRestoreHook)
(*in).DeepCopyInto(*out)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreResourceHook.
func (in *RestoreResourceHook) DeepCopy() *RestoreResourceHook {
if in == nil {
return nil
}
out := new(RestoreResourceHook)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreResourceHookSpec) DeepCopyInto(out *RestoreResourceHookSpec) {
*out = *in
if in.IncludedNamespaces != nil {
in, out := &in.IncludedNamespaces, &out.IncludedNamespaces
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ExcludedNamespaces != nil {
in, out := &in.ExcludedNamespaces, &out.ExcludedNamespaces
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.IncludedResources != nil {
in, out := &in.IncludedResources, &out.IncludedResources
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ExcludedResources != nil {
in, out := &in.ExcludedResources, &out.ExcludedResources
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.LabelSelector != nil {
in, out := &in.LabelSelector, &out.LabelSelector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
if in.PostHooks != nil {
in, out := &in.PostHooks, &out.PostHooks
*out = make([]RestoreResourceHook, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreResourceHookSpec.
func (in *RestoreResourceHookSpec) DeepCopy() *RestoreResourceHookSpec {
if in == nil {
return nil
}
out := new(RestoreResourceHookSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreSpec) DeepCopyInto(out *RestoreSpec) {
*out = *in
@@ -1277,7 +1111,6 @@ func (in *RestoreSpec) DeepCopyInto(out *RestoreSpec) {
*out = new(bool)
**out = **in
}
in.Hooks.DeepCopyInto(&out.Hooks)
return
}
@@ -1299,14 +1132,6 @@ func (in *RestoreStatus) DeepCopyInto(out *RestoreStatus) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.StartTimestamp != nil {
in, out := &in.StartTimestamp, &out.StartTimestamp
*out = (*in).DeepCopy()
}
if in.CompletionTimestamp != nil {
in, out := &in.CompletionTimestamp, &out.CompletionTimestamp
*out = (*in).DeepCopy()
}
return
}


@@ -1,55 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package archive
import (
"encoding/json"
"path/filepath"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
// GetItemFilePath returns the path to an item's file within an extracted Velero backup archive.
func GetItemFilePath(rootDir, groupResource, namespace, name string) string {
switch namespace {
case "":
return filepath.Join(rootDir, velerov1api.ResourcesDir, groupResource, velerov1api.ClusterScopedDir, name+".json")
default:
return filepath.Join(rootDir, velerov1api.ResourcesDir, groupResource, velerov1api.NamespaceScopedDir, namespace, name+".json")
}
}
// Unmarshal reads the specified file, unmarshals the JSON contained within it
// and returns an Unstructured object.
func Unmarshal(fs filesystem.Interface, filePath string) (*unstructured.Unstructured, error) {
var obj unstructured.Unstructured
bytes, err := fs.ReadFile(filePath)
if err != nil {
return nil, err
}
err = json.Unmarshal(bytes, &obj)
if err != nil {
return nil, err
}
return &obj, nil
}
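// Illustrative sketch, not part of this diff: combining the two helpers
// above to read a cluster-scoped item from an extracted backup. The resource
// and name arguments are hypothetical.
func readClusterItem(fs filesystem.Interface, rootDir, resource, name string) (*unstructured.Unstructured, error) {
	return Unmarshal(fs, GetItemFilePath(rootDir, resource, "", name))
}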


@@ -1,31 +0,0 @@
/*
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package archive
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestGetItemFilePath(t *testing.T) {
res := GetItemFilePath("root", "resource", "", "item")
assert.Equal(t, "root/resources/resource/cluster/item.json", res)
res = GetItemFilePath("root", "resource", "namespace", "item")
assert.Equal(t, "root/resources/resource/namespaces/namespace/item.json", res)
}

Some files were not shown because too many files have changed in this diff.