Compare commits


7 Commits

Author          SHA1        Message                                                               Date
Hoang, Phuong   80828f727e  Add missing files                                                     2021-11-07 12:22:01 -05:00
Hoang, Phuong   de360a4b31  Simplify implementation.                                              2021-11-07 12:22:01 -05:00
Hoang, Phuong   3cbd7976bd  Simplify implementation                                               2021-11-07 12:22:01 -05:00
Hoang, Phuong   79d1616ecb  Reformat and add version 2 to mock manager.                           2021-11-07 12:22:01 -05:00
Hoang, Phuong   f40f0d4e5b  Change to new implementation                                          2021-11-07 12:22:01 -05:00
Hoang, Phuong   5c707d20c1  Refactor code to make it easier to add future version.               2021-11-07 12:22:01 -05:00
Hoang, Phuong   b059030666  Initial implementation of plugin version v2: Adding context object   2021-11-07 12:22:01 -05:00
1405 changed files with 24724 additions and 142787 deletions


@@ -5,18 +5,15 @@ about: Tell us about a problem you are experiencing
---
**What steps did you take and what happened:**
<!--A clear and concise description of what the bug is, and what commands you ran.-->
[A clear and concise description of what the bug is, and what commands you ran.]
**What did you expect to happen:**
**The following information will help us better understand what's going on**:
_If you are using velero v1.7.0+:_
Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle and attach it to this issue. For more options, please refer to `velero debug --help`
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
_If you are using earlier versions:_
Please provide the output of the following commands (Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
- `kubectl logs deployment/velero -n velero`
- `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
- `velero backup logs <backupname>`
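For illustration, a hypothetical support-bundle run using the v1.7.0+ flow described above (backup and restore names are placeholders):

    # Generate a support bundle covering a specific backup and restore
    velero debug --backup my-backup --restore my-restore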
@@ -25,7 +22,7 @@ Please provide the output of the following commands (Pasting long output into a
**Anything else you would like to add:**
<!--Miscellaneous information that will assist in solving the issue.-->
[Miscellaneous information that will assist in solving the issue.]
**Environment:**


@@ -5,15 +5,15 @@ about: Suggest an idea for this project
---
**Describe the problem/challenge you have**
<!--A description of the current limitation/problem/challenge that you are experiencing.-->
[A description of the current limitation/problem/challenge that you are experiencing.]
**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen.-->
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
<!--Miscellaneous information that will assist in solving the issue.-->
[Miscellaneous information that will assist in solving the issue.]
**Environment:**


@@ -9,19 +9,15 @@ reviewers:
groups:
maintainers:
- zubron
- dsu-igeek
- jenting
- sseago
- reasonerjt
- ywk253100
- blackpiglet
- qiuming-best
- shubham-pampattiwar
- Lyndon-Li
tech-writer:
- sseago
- reasonerjt
- ywk253100
- Lyndon-Li
- a-mccarthy
files:
'site/**':


@@ -1,12 +0,0 @@
version: 2
updates:
# Dependencies listed in go.mod
- package-ecosystem: "gomod"
directory: "/" # Location of package manifests
schedule:
interval: "weekly"
labels:
- "kind/changelog-not-required"
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-major", "version-update:semver-minor", "version-update:semver-patch"]


@@ -9,5 +9,5 @@ Fixes #(issue)
# Please indicate you've done the following:
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required` as a comment on this pull request.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required`.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.

.github/stale.yml (new file)

@@ -0,0 +1,38 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- Epic
- Area/CLI
- Area/Cloud/AWS
- Area/Cloud/Azure
- Area/Cloud/GCP
- Area/Cloud/vSphere
- Area/CSI
- Area/Design
- Area/Documentation
- Area/Plugins
- Enhancement/User
- kind/tech-debt
- Needs investigation
- P0 - Hair on fire
- P1 - Important
- P2 - Long-term important
- P3 - Wouldn't it be nice if...
- Product Requirements
- Restic - GA
- Restic
- release-blocker
- Security
# Label to use when marking an issue as stale
staleLabel: staled
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Closing the stale issue.


@@ -12,9 +12,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.20.10'
go-version: 1.16
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
@@ -49,7 +49,7 @@ jobs:
run: |
make local
# Check the common CLI against all Kubernetes versions
# Check the common CLI against all kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
@@ -57,13 +57,14 @@ jobs:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
- 1.23.6
- 1.24.2
- 1.25.3
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
@@ -81,7 +82,7 @@ jobs:
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
version: "v0.17.0"
version: "v0.11.1"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |


@@ -12,9 +12,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.20.10'
go-version: 1.16
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
@@ -53,26 +53,28 @@ jobs:
run: |
IMAGE=velero VERSION=pr-test make container
docker save velero:pr-test -o ./velero.tar
# Run E2E test against all Kubernetes versions on kind
# Run E2E test against all kubernetes versions on kind
run-e2e-test:
needs: build
runs-on: ubuntu-latest
strategy:
matrix:
k8s:
- 1.19.16
- 1.20.15
- 1.21.12
- 1.22.9
- 1.23.6
- 1.24.0
- 1.25.3
# doesn't cover 1.15, as 1.15 doesn't support "apiextensions.k8s.io/v1", which is needed for this case
#- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.20.10'
go-version: 1.16
id: go
- name: Check out the code
uses: actions/checkout@v2
@@ -81,7 +83,7 @@ jobs:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
- uses: engineerd/setup-kind@v0.5.0
with:
version: "v0.17.0"
version: "v0.11.1"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Fetch built CLI
id: cli-cache
@@ -113,22 +115,10 @@ jobs:
aws_access_key_id=minio
aws_secret_access_key=minio123
EOF
# Match kubectl version to k8s server version
curl -LO https://dl.k8s.io/release/v${{ matrix.k8s }}/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
GOPATH=~/go CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential ADDITIONAL_BSL_BUCKET=additional-bucket \
GINKGO_FOCUS='Basic\]\[ClusterResource' VELERO_IMAGE=velero:pr-test \
make -C test/e2e run
timeout-minutes: 30
- name: Upload debug bundle
if: ${{ failure() }}
uses: actions/upload-artifact@v2
with:
name: DebugBundle
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*
GINKGO_FOCUS=Basic VELERO_IMAGE=velero:pr-test \
make -C test/e2e run

.github/workflows/milestoned-issues.yml (new file)

@@ -0,0 +1,18 @@
name: Add issues with a milestone to the milestone's board
on:
issues:
types: [milestoned]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
# Do NOT add PRs to the board, as that's duplication. Their corresponding issue should be on the board.
if: ${{ !github.event.issue.pull_request }}
project: "${{ github.event.issue.milestone.title }}"
column: "To Do"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -1,36 +0,0 @@
name: Trivy Nightly Scan
on:
schedule:
- cron: '0 2 * * *' # run at 2 AM UTC
jobs:
nightly-scan:
name: Trivy nightly scan
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# maintain the versions of Velero that need security scanning
versions: [main]
# list of images that need scan
images: [velero, velero-restore-helper]
permissions:
security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'docker.io/velero/${{ matrix.images }}:${{ matrix.versions }}'
severity: 'CRITICAL,HIGH,MEDIUM'
format: 'template'
template: '@/contrib/sarif.tpl'
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: 'trivy-results.sarif'


@@ -0,0 +1,15 @@
name: Move new issues into Triage
on:
issues:
types: [opened]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
project: "Velero Support Board"
column: "New"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -1,9 +1,5 @@
name: Pull Request Changelog Check
# By setting `on: [pull_request]`, the action is triggered when a PR is opened, synchronized, or reopened.
# Add labeled and unlabeled events too.
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
on: [pull_request]
jobs:
build:


@@ -4,13 +4,11 @@ jobs:
build:
name: Run CI
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.20.10'
go-version: 1.16
id: go
- name: Check out the code
uses: actions/checkout@v2
@@ -23,10 +21,3 @@ jobs:
${{ runner.os }}-go-
- name: Make ci
run: make ci
- name: Upload test coverage
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
fail_ci_if_error: true


@@ -14,44 +14,7 @@ jobs:
uses: codespell-project/actions-codespell@master
with:
# ignore the config/.../crd.go file as it's generated binary data that is edited elsewhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go,./config/crd/v2alpha1/crds/crds.go,./go.sum,./LICENSE
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go
ignore_words_list: iam,aks,ist,bridget,ue
check_filenames: true
check_hidden: true
- name: Velero.io word list check
shell: bash {0}
run: |
IGNORE_COMMENT="Velero.io word list : ignore"
FILES_TO_CHECK=$(find . -type f \
! -path "./.git/*" \
! -path "./site/content/docs/v*" \
! -path "./changelogs/CHANGELOG-*" \
! -path "./.github/workflows/pr-codespell.yml" \
! -path "./site/static/fonts/Metropolis/Open Font License.md" \
! -regex '.*\.\(png\|jpg\|woff\|ttf\|gif\|ico\|svg\)'
)
function check_word_in_files() {
local word=$1
xargs grep -Iinr "$word" <<< "$FILES_TO_CHECK" | \
grep -v "$IGNORE_COMMENT" | \
grep -i --color=always "$word" && \
EXIT_STATUS=1
}
function check_word_case_sensitive_in_files() {
local word=$1
xargs grep -Inr "$word" <<< "$FILES_TO_CHECK" | \
grep -v "$IGNORE_COMMENT" | \
grep --color=always "$word" && \
EXIT_STATUS=1
}
EXIT_STATUS=0
check_word_case_sensitive_in_files ' kubernetes '
check_word_in_files 'on-premise\b'
check_word_in_files 'back-up'
check_word_in_files 'plug-in'
check_word_in_files 'whitelist'
check_word_in_files 'blacklist'
exit $EXIT_STATUS
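For illustration, a minimal local run of the same word-list check (hypothetical file set; assumes GNU grep):

    # Flag any use of "plug-in" outside lines carrying the ignore comment
    FILES_TO_CHECK=$(find site/content/docs/main -type f -name '*.md')
    xargs grep -Iinr 'plug-in' <<< "$FILES_TO_CHECK" | grep -v "Velero.io word list : ignore"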


@@ -1,37 +0,0 @@
name: build Velero containers on Dockerfile change
on:
pull_request:
branches:
- 'main'
- 'release-**'
paths:
- 'Dockerfile'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
name: Checkout
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
# Although this action also calls docker-push.sh, it is not triggered
# by push, so BRANCH and TAG are empty by default. docker-push.sh will
# only build Velero image without pushing.
- name: Make Velero container without pushing to registry.
if: github.repository == 'vmware-tanzu/velero'
run: |
./hack/docker-push.sh


@@ -1,29 +0,0 @@
name: Verify goreleaser change
on:
pull_request:
branches:
- 'main'
- 'release-**'
paths:
- '.goreleaser.yml'
- 'hack/release-tools/goreleaser.sh'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
name: Checkout
- name: Verify .goreleaser.yml and try a dryrun release.
if: github.repository == 'vmware-tanzu/velero'
run: |
CHANGELOG=$(ls changelogs | sort -V -r | head -n 1)
GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }} \
REGISTRY=velero \
RELEASE_NOTES_FILE=changelogs/$CHANGELOG \
PUBLISH=false \
make release
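The CHANGELOG line above picks the highest-versioned changelog file by a reverse version sort; for illustration (hypothetical directory listing):

    ls changelogs | sort -V -r | head -n 1   # e.g. prints CHANGELOG-1.7.md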


@@ -12,16 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
# The default value is "1", which fetches only a single commit. If we merge a PR without squash or rebase,
# there are at least two commits: the first one is the merge commit and the second one is the real commit
# that contains the changes.
# As we use the Dockerfile's commit ID as the tag of the build-image, fetching only 1 commit causes the merge
# commit ID to be the tag.
# When running make commands locally, the local git repository usually contains all commits, so the Dockerfile's
# commit ID is the second one. This mismatches the images in Docker Hub.
fetch-depth: 2
- uses: actions/checkout@master
- name: Build
run: make build-image
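For illustration of the fetch-depth comment above: with two commits fetched, the commit that actually touched the Dockerfile (rather than the merge commit at HEAD) can be resolved (hypothetical usage; the real tag derivation lives in the Makefile):

    git log -1 --format=%H -- Dockerfile   # last commit that changed the Dockerfile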


@@ -2,9 +2,7 @@ name: Main CI
on:
push:
branches:
- 'main'
- 'release-**'
branches: [ main ]
tags:
- '*'
@@ -16,24 +14,13 @@ jobs:
steps:
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.20.10'
go-version: 1.16
id: go
- uses: actions/checkout@v3
# Fix issue of setup-gcloud
- run: |
sudo apt-get install python2.7
export CLOUDSDK_PYTHON="/usr/bin/python2"
- uses: google-github-actions/setup-gcloud@v0
with:
version: '285.0.0'
service_account_key: ${{ secrets.GCS_SA_KEY }}
export_default_credentials: true
- run: gcloud info
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up QEMU
id: qemu
@@ -48,56 +35,14 @@ jobs:
version: latest
- name: Build
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
run: make local
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Use the JSON key in secret to login gcr.io
- uses: 'docker/login-action@v2'
with:
registry: 'gcr.io' # or REGION.docker.pkg.dev
username: '_json_key'
password: '${{ secrets.GCR_SA_KEY }}'
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker image prune -a --force
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
VERSION=$(./hack/docker-push.sh | grep 'VERSION:' | awk -F: '{print $2}' | xargs)
# Upload Velero image package to GCS
source hack/ci/build_util.sh
BIN=velero
RESTORE_HELPER_BIN=velero-restore-helper
GCS_BUCKET=velero-builds
VELERO_IMAGE=${BIN}-${VERSION}
VELERO_RESTORE_HELPER_IMAGE=${RESTORE_HELPER_BIN}-${VERSION}
VELERO_IMAGE_FILE=${VELERO_IMAGE}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_FILE=${VELERO_RESTORE_HELPER_IMAGE}.tar.gz
VELERO_IMAGE_BACKUP_FILE=${VELERO_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE=${VELERO_RESTORE_HELPER_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
cp ${VELERO_IMAGE_FILE} ${VELERO_IMAGE_BACKUP_FILE}
cp ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE}
uploader ${VELERO_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_IMAGE_BACKUP_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE} ${GCS_BUCKET}
./hack/docker-push.sh
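For illustration, how the VERSION line above parses the script output (assuming docker-push.sh prints a line like "VERSION: v1.7.0"):

    echo "VERSION: v1.7.0" | grep 'VERSION:' | awk -F: '{print $2}' | xargs   # prints v1.7.0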


@@ -1,7 +1,8 @@
name: "Close stale issues and PRs"
on:
schedule:
- cron: "30 1 * * *" # Every day at 1:30 UTC
# First of every month
- cron: "30 1 * * *"
jobs:
stale:
@@ -10,14 +11,14 @@ jobs:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 14 days with no activity."
days-before-issue-stale: 60
days-before-issue-close: 14
stale-issue-label: staled
stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 5 days with no activity."
days-before-issue-stale: 30
days-before-issue-close: 5
# Disable stale PRs for now; they can remain open.
days-before-pr-stale: -1
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security"
# Only make issues stale if they have these labels. Comma separated.
only-labels: "Needs info,Duplicate"

.gitignore

@@ -38,7 +38,6 @@ _testmain.go
# Hugo compiled data
site/public
site/resources
site/.hugo_build.lock
.vs
@@ -47,10 +46,5 @@ _tiltbuild
tilt-resources/tilt-settings.json
tilt-resources/velero_v1_backupstoragelocation.yaml
tilt-resources/deployment.yaml
tilt-resources/node-agent.yaml
tilt-resources/restic.yaml
tilt-resources/cloud
# test generated files
test/e2e/report.xml
coverage.out
__debug_bin*


@@ -14,7 +14,7 @@
dist: _output
builds:
- main: ./cmd/velero/velero.go
- main: ./cmd/velero/main.go
env:
- CGO_ENABLED=0
goos:
@@ -27,9 +27,11 @@ builds:
- arm64
- ppc64le
ignore:
# don't build arm for darwin and arm/arm64 for windows
# don't build arm/arm64 for darwin or windows
- goos: darwin
goarch: arm
- goos: darwin
goarch: arm64
- goos: darwin
goarch: ppc64le
- goos: windows
@@ -46,9 +48,6 @@ archives:
files:
- LICENSE
- examples/**/*
# Add the setting to resolve the DEPRECATED warning. Actually, Velero's case is not affected by the rlcp behavior change.
# https://github.com/orgs/goreleaser/discussions/3659#discussioncomment-4587257
rlcp: true
checksum:
name_template: 'CHECKSUM'
release:
@@ -57,10 +56,3 @@ release:
name: velero
draft: true
prerelease: auto
git:
# What should be used to sort tags when gathering the current and previous
# tags if there are more than one tag in the same commit.
#
# Default: `-version:refname`
tag_sort: -version:creatordate


@@ -3,7 +3,6 @@
If you're using Velero and want to add your organization to this list,
[follow these directions][1]!
<a href="https://www.pitsdatarecovery.net/" border="0" target="_blank"><img alt="pitsdatarecovery.net" src="site/static/img/adopters/PITSGlobalDataRecoveryServices.svg" height="50"></a>
<a href="https://www.bitgo.com" border="0" target="_blank"><img alt="bitgo.com" src="site/static/img/adopters/BitGo.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.nirmata.com" border="0" target="_blank"><img alt="nirmata.com" src="site/static/img/adopters/nirmata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://kyma-project.io/" border="0" target="_blank"><img alt="kyma-project.io" src="site/static/img/adopters/kyma.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
@@ -15,17 +14,17 @@ If you're using Velero and want to add your organization to this list,
<a href="https://sighup.io/" border="0" target="_blank"><img alt="sighup.io" src="site/static/img/adopters/sighup.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/static/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
## Success Stories
Below is a list of adopters of Velero in **production environments** that have
publicly shared the details of how they use it.
**[BitGo][20]**
BitGo uses Velero backup and restore capabilities to seamlessly provision and scale fullnode statefulsets on the fly as well as having it serve an integral piece for our Kubernetes disaster-recovery story.
BitGo uses Velero backup and restore capabilities to seamlessly provision and scale fullnode statefulsets on the fly as well as having it serve an integral piece for our kubernetes disaster-recovery story.
**[Bugsnag][30]**
We use Velero for managing backups of an internal instance of our on-premise clustered solution. We also recommend our users of [on-premise Bugsnag installations](https://www.bugsnag.com/on-premise) use Velero for [managing their own backups](https://docs.bugsnag.com/on-premise/clustered/backup-restore/). <!-- Velero.io word list : ignore -->
We use Velero for managing backups of an internal instance of our on-premise clustered solution. We also recommend our users of [on-premise Bugsnag installations][31] use Velero for [managing their own backups][32].
**[Banzai Cloud][60]**
[Banzai Cloud Pipeline][61] is a Kubernetes-based microservices platform that integrates services needed for Day-1 and Day-2 operations along with first-class support both for on-prem and hybrid multi-cloud deployments. We use Velero to periodically [backup and restore these clusters in case of disasters][62].
@@ -41,9 +40,7 @@ We have integrated our [solution with Velero][11] to provide our customers with
Kyma [integrates with Velero][41] to effortlessly back up and restore Kyma clusters with all its resources. Velero capabilities allow Kyma users to define and run manual and scheduled backups in order to successfully handle a disaster-recovery scenario.
**[Red Hat][50]**
Red Hat has developed 2 operators for the OpenShift platform:
- [Migration Toolkit for Containers][51] (Crane): This operator uses [Velero and Restic][52] to drive the migration of applications between OpenShift clusters.
- [OADP (OpenShift API for Data Protection) Operator][53]: This operator sets up and installs Velero on the OpenShift platform, allowing users to backup and restore applications.
Red Hat has developed the [Cluster Application Migration Tool][51] which uses [Velero and Restic][52] to drive the migration of applications between OpenShift clusters.
**[Dell EMC][70]**
For Kubernetes environments, [PowerProtect Data Manager][71] leverages the Container Storage Interface (CSI) framework to take snapshots to back up the persistent data or the data that the application creates e.g. databases. [Dell EMC leverages Velero][72] to backup the namespace configuration files (also known as Namespace meta data) for enterprise grade data protection.
@@ -59,11 +56,8 @@ MayaData is a large user of Velero as well as a contributor. MayaData offers a D
Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to periodically backup and restore our clusters for disaster recovery. Velero is also a core software building block to provide namespace cloning capabilities, a feature that allows our users cloning staging environments into their personal development namespace for providing production-like development environments.
**[Replicated][100]**<br>
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br>
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a simple, scalable, cloud-native solution providing data protection and disaster recovery as a service. This solution is built using Kubernetes for protecting Kubernetes clusters.<br>
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
@@ -83,6 +77,8 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[20]: https://bitgo.com
[30]: https://bugsnag.com
[31]: https://www.bugsnag.com/on-premise
[32]: https://docs.bugsnag.com/on-premise/clustered/backup-restore/
[40]: https://kyma-project.io
[41]: https://kyma-project.io/docs/components/backup/#overview-overview
@@ -90,7 +86,6 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[50]: https://redhat.com
[51]: https://github.com/fusor/mig-operator
[52]: https://github.com/fusor/mig-operator/blob/master/docs/usage/2.md
[53]: https://github.com/openshift/oadp-operator
[60]: https://banzaicloud.com
[61]: https://banzaicloud.com/products/pipeline/
@@ -115,6 +110,3 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[100]: https://www.replicated.com
[101]: https://kots.io
[102]: https://kots.io/kotsadm/snapshots/overview/
[103]: https://cloudcasa.io/
[104]: https://www.catalogicsoftware.com/


@@ -1,11 +1,7 @@
## Current release:
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.7.md][17]
## Older releases:
* [CHANGELOG-1.10.md][20]
* [CHANGELOG-1.9.md][19]
* [CHANGELOG-1.8.md][18]
* [CHANGELOG-1.7.md][17]
* [CHANGELOG-1.6.md][16]
* [CHANGELOG-1.5.md][15]
* [CHANGELOG-1.4.md][14]
@@ -24,10 +20,6 @@
* [CHANGELOG-0.3.md][1]
[21]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.11.md
[20]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.10.md
[19]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.9.md
[18]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.8.md
[17]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.7.md
[16]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.6.md
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md


@@ -11,72 +11,50 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.20.10-bullseye as velero-builder
FROM --platform=$BUILDPLATFORM golang:1.16 as builder-env
ARG GOPROXY
ARG BIN
ARG PKG
ARG VERSION
ARG REGISTRY
ARG GIT_SHA
ARG GIT_TREE_STATE
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG REGISTRY
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE} -X ${PKG}/pkg/buildinfo.ImageRegistry=${REGISTRY}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
RUN apt-get update && apt-get install -y bzip2
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.20.10-bullseye as restic-builder
FROM --platform=$BUILDPLATFORM builder-env as builder
ARG BIN
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG PKG
ARG BIN
ARG RESTIC_VERSION
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
ENV GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT}
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
bash ./hack/download-restic.sh && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN}
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:0.2.11
FROM gcr.io/distroless/base-debian10:nonroot
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
COPY --from=velero-builder /output /
COPY --from=builder /output /
COPY --from=restic-builder /output /
USER cnb:cnb
USER nonroot:nonroot
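For illustration, the `cut -c2-` in the build steps above strips the leading character of the target variant (e.g. the "v" in "v7") so GOARM receives a bare number:

    echo "v7" | cut -c2-   # prints 7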


@@ -4,16 +4,14 @@
## Maintainers
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|-------------------------------------------|
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [Kasten](https://github.com/kastenhq/) |
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Maintainer | GitHub ID | Affiliation |
| --------------- | --------- | ----------- |
| Bridget McErlean | [zubron](https://github.com/zubron) | [VMware](https://www.github.com/vmware/) |
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [VMware](https://www.github.com/vmware/) |
| JenTing Hsiao | [jenting](https://github.com/jenting) | [SUSE](https://github.com/SUSE/)
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift)
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/)
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -23,17 +21,14 @@
* Nolan Brubaker ([nrb](https://github.com/nrb))
* Ashish Amarnath ([ashish-amarnath](https://github.com/ashish-amarnath))
* Carlisia Thompson ([carlisia](https://github.com/carlisia))
* Bridget McErlean ([zubron](https://github.com/zubron))
* JenTing Hsiao ([jenting](https://github.com/jenting))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Architect | Dave Smith-Uchida [dsu-igeek](https://github.com/dsu-igeek) |
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |
| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |
| Feature Area | Lead |
| ----------------------------- | :---------------------: |
| Architect | Dave Smith-Uchida (dsu-igeek) |
| Technical Lead | Daniel Jiang (reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | JenTing Hsiao (jenting) |
| Community Management | Jonas Rosland (jonasrosland) |
| Product Management | Eleanor Millman (eleanor-millman) |


@@ -22,11 +22,9 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
GCR_REGISTRY ?= gcr.io/velero-gcp
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
GCR_IMAGE ?= $(GCR_REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
@@ -68,10 +66,8 @@ TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION) $(GCR_IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION)
endif
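For illustration, a hypothetical invocation exercising the TAG_LATEST conditional above to tag the image with both the version and latest:

    make container TAG_LATEST=true VERSION=v1.7.0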
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
@@ -86,9 +82,9 @@ see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-v
endef
# The version of restic binary to be downloaded
RESTIC_VERSION ?= 0.15.0
RESTIC_VERSION ?= 0.12.1
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 linux-ppc64le
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
@@ -100,6 +96,9 @@ else
GIT_TREE_STATE ?= clean
endif
# The default linters used by lint and local-lint
LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
###
### These variables should not need tweaking.
###
@@ -113,20 +112,19 @@ GOPROXY ?= https://proxy.golang.org
# If you want to build all containers, see the 'all-containers' rule.
all:
@$(MAKE) build
@$(MAKE) build BIN=velero-restore-helper
@$(MAKE) build BIN=velero-restic-restore-helper
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restore-helper
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restic-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers:
all-containers: container-builder-env
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restore-helper
@$(MAKE) --no-print-directory container BIN=velero-restic-restore-helper
local: build-dirs
# Add DEBUG=1 to enable debug locally
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
VERSION=$(VERSION) \
@@ -164,7 +162,6 @@ shell: build-dirs build-env
@# under $GOPATH).
@docker run \
-e GOFLAGS \
-e GOPROXY \
-i $(TTY) \
--rm \
-u $$(id -u):$$(id -g) \
@@ -179,6 +176,20 @@ shell: build-dirs build-env
$(BUILDER_IMAGE) \
/bin/sh $(CMD)
container-builder-env:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build \
--target=builder-env \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
-f $(VELERO_DOCKERFILE) .
container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
@@ -187,8 +198,6 @@ endif
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
$(addprefix -t , $(GCR_IMAGE_TAGS)) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
@@ -198,12 +207,6 @@ endif
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
-f $(VELERO_DOCKERFILE) .
@echo "container: $(IMAGE):$(VERSION)"
ifeq ($(BUILDX_OUTPUT_TYPE)_$(REGISTRY), registry_velero)
docker pull $(IMAGE):$(VERSION)
rm -f $(BIN)-$(VERSION).tar
docker save $(IMAGE):$(VERSION) -o $(BIN)-$(VERSION).tar
gzip -f $(BIN)-$(VERSION).tar
endif
SKIP_TESTS ?=
test: build-dirs
@@ -223,21 +226,27 @@ endif
lint:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh'"
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS)'"
endif
local-lint:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh
@hack/lint.sh $(LINTERS)
endif
lint-all:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS) true'"
endif
local-lint-all:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh $(LINTERS) true
endif
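For illustration, the LINTERS variable above can be overridden to run a narrower linter set locally (hypothetical invocation):

    make local-lint LINTERS="gofmt,goimports"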
update:
@$(MAKE) shell CMD="-c 'hack/update-all.sh'"
# update-crd is for development purposes only; it is faster than update, so it is a shortcut when you want to generate only CRD changes
update-crd:
@$(MAKE) shell CMD="-c 'hack/update-3generated-crd-code.sh'"
build-dirs:
@mkdir -p _output/bin/$(GOOS)/$(GOARCH)
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build .go/golangci-lint
@@ -329,9 +338,9 @@ changelog:
# PUBLISH=false \
# make release
#
# To run the release, which will publish a *DRAFT* GitHub release in github.com/vmware-tanzu/velero
# To run the release, which will publish a *DRAFT* GitHub release in github.com/vmware-tanzu/velero
# (you still need to review/publish the GitHub release manually):
# GITHUB_TOKEN=your-github-token \
# GITHUB_TOKEN=your-github-token \
# RELEASE_NOTES_FILE=changelogs/CHANGELOG-1.2.md \
# PUBLISH=true \
# make release
@@ -349,8 +358,8 @@ serve-docs: build-image-hugo
-v "$$(pwd)/site:/srv/hugo" \
-it -p 1313:1313 \
$(HUGO_IMAGE) \
server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
hugo server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
# Please read the documentation in the script for instructions on how to use it.
gen-docs:
@hack/release-tools/gen-docs.sh
@@ -358,10 +367,3 @@ gen-docs:
.PHONY: test-e2e
test-e2e: local
$(MAKE) -e VERSION=$(VERSION) -C test/e2e run
.PHONY: test-perf
test-perf: local
$(MAKE) -e VERSION=$(VERSION) -C test/perf run
go-generate:
go generate ./pkg/...

PROJECT (new file)

@@ -0,0 +1,7 @@
domain: io
repo: github.com/vmware-tanzu/velero
resources:
- group: velero
kind: BackupStorageLocation
version: v1
version: "2"


@@ -1,13 +1,11 @@
![100]
[![Build Status][1]][2] [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3811/badge)](https://bestpractices.coreinfrastructure.org/projects/3811)
![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/vmware-tanzu/velero)
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises.
Velero lets you:
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
@@ -20,7 +18,7 @@ Velero consists of:
## Documentation
[The documentation][29] provides a getting started guide and information about building from source, architecture, extending Velero and more.
[The documentation][29] provides a getting started guide and information about building from source, architecture, extending Velero, and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
@@ -36,26 +34,6 @@ If you are ready to jump in and test, add code, or help with documentation, foll
See [the list of releases][6] to find out about feature changes.
### Velero compatibility matrix
The following is a list of the supported Kubernetes versions for each Velero version.
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|----------------------------------------|
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.27.6 and 1.28.0 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
| 1.10 | 1.18-latest | 1.22.5, 1.23.8, 1.24.6 and 1.25.1 |
| 1.9 | 1.18-latest | 1.20.5, 1.21.2, 1.22.5, 1.23, and 1.24 |
| 1.8 | 1.18-latest | |
Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version. If you have a question about test coverage before v1.9, please reach out in the [#velero-users](https://kubernetes.slack.com/archives/C6VCGP4MT) Slack channel.
If you are interested in using a different version of Kubernetes with a given Velero version, we'd recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.
For each release, Velero maintainers run tests to ensure the upgrade path from the n-2 minor release works. For example, before the release of v1.10.x, the tests verify that backups created by v1.9.x and v1.8.x can be restored using the build to be tagged as v1.10.x.
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[4]: https://github.com/vmware-tanzu/velero/issues


@@ -1 +1,42 @@
# Please go to the [Velero Wiki](https://github.com/vmware-tanzu/velero/wiki/) to see our latest roadmap, archived roadmaps and roadmap guidance.
## Velero Roadmap
### About this document
This document provides a link to the [Velero Project boards](https://github.com/vmware-tanzu/velero/projects) that serve as the up-to-date description of items in the release pipeline. The release boards have separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could conflict with a longer-term plan.
### How to help?
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/velero/issues) or in [community meetings](https://velero.io/community/). Please open and comment on an issue if you want to provide suggestions, use cases, and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#proposal-process) in our repo.
For smaller enhancements, you can open an issue to track that initiative or feature request.
We work with and rely on community feedback to focus our efforts to improve Velero and maintain a healthy roadmap.
### Current Roadmap
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: October 2021`
#### 1.8.0 Roadmap (to be delivered January/February 2022)
|Issue|Description|Timeline|Notes|
|---|---|---|---|
|[4108](https://github.com/vmware-tanzu/velero/issues/4108), [4109](https://github.com/vmware-tanzu/velero/issues/4109)|Solution for CSI - Azure and AWS|2022 H1|Currently, Velero plugins for AWS and Azure cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3229](https://github.com/vmware-tanzu/velero/issues/3229),[4112](https://github.com/vmware-tanzu/velero/issues/4112)|Moving data mover functionality from the Velero Plugin for vSphere into Velero proper|2022 H1|This work is a precursor to decoupling the Astrolabe snapshotting infrastructure.|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|2022 H1|Finishing up the work done in the 1.7 timeframe. The data mover work depends on this.|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|Test dual stack mode|2022 H1|We already tested IPv6, but we want to confirm that dual stack mode works as well.|
|[2082](https://github.com/vmware-tanzu/velero/issues/2082)|Delete Backup CRs on removing target location. |2022 H1||
|[3516](https://github.com/vmware-tanzu/velero/issues/3516)|Restore issue with MutatingWebhookConfiguration v1beta1 API version|2022 H1||
|[2308](https://github.com/vmware-tanzu/velero/issues/2308)|Restoring nodePort service that has nodePort preservation always fails if service already exists in the namespace|2022 H1||
|[4115](https://github.com/vmware-tanzu/velero/issues/4115)|Support for multiple set of credentials for VolumeSnapshotLocations|2022 H1||
|[1980](https://github.com/vmware-tanzu/velero/issues/1980)|Velero triggers backup immediately for scheduled backups|2022 H1||
|[4067](https://github.com/vmware-tanzu/velero/issues/4067)|Pre and post backup and restore hooks|2022 H1||
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for vSphere|2022 H1|AWS and Azure have been completed already.|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|2022 H1||
|[4231](https://github.com/vmware-tanzu/velero/issues/4231)|Technical health (prioritizing giving developers confidence and saving developers time)|2022 H1|More automated tests (especially the pre-release manual tests) and more automation of the running of tests.|
|[4110](https://github.com/vmware-tanzu/velero/issues/4110)|Solution for CSI - GCP|2022 H1|Currently, the Velero plugin for GCP cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for restic|2022 H1|AWS and Azure have been completed already.|
|[3454](https://github.com/vmware-tanzu/velero/issues/3454),[4134](https://github.com/vmware-tanzu/velero/issues/4134),[4135](https://github.com/vmware-tanzu/velero/issues/4135)|Kubebuilder tech debt|2022 H1||
|[4111](https://github.com/vmware-tanzu/velero/issues/4111)|Ignore items returned by ItemSnapshotter.AlsoHandles during backup|2022 H1|This will enable backup of complex objects, because we can then tell Velero to ignore things that were already backed up when Velero was previously called recursively.|
Other work may make it into the 1.8 release, but this is the work that will be prioritized first.


@@ -7,19 +7,17 @@ k8s_yaml([
'config/crd/v1/bases/velero.io_downloadrequests.yaml',
'config/crd/v1/bases/velero.io_podvolumebackups.yaml',
'config/crd/v1/bases/velero.io_podvolumerestores.yaml',
'config/crd/v1/bases/velero.io_backuprepositories.yaml',
'config/crd/v1/bases/velero.io_resticrepositories.yaml',
'config/crd/v1/bases/velero.io_restores.yaml',
'config/crd/v1/bases/velero.io_schedules.yaml',
'config/crd/v1/bases/velero.io_serverstatusrequests.yaml',
'config/crd/v1/bases/velero.io_volumesnapshotlocations.yaml',
'config/crd/v2alpha1/bases/velero.io_datauploads.yaml',
'config/crd/v2alpha1/bases/velero.io_datadownloads.yaml',
])
# default values
settings = {
"default_registry": "docker.io/velero",
"use_node_agent": False,
"enable_restic": False,
"enable_debug": False,
"debug_continue_on_start": True, # Continue the velero process by default when in debug mode
"create_backup_locations": False,
@@ -36,9 +34,9 @@ k8s_yaml(kustomize('tilt-resources'))
k8s_yaml('tilt-resources/deployment.yaml')
if settings.get("enable_debug"):
k8s_resource('velero', port_forwards = '2345')
# TODO: Need to figure out how to apply port forwards for all node-agent pods
if settings.get("use_node_agent"):
k8s_yaml('tilt-resources/node-agent.yaml')
# TODO: Need to figure out how to apply port forwards for all restic pods
if settings.get("enable_restic"):
k8s_yaml('tilt-resources/restic.yaml')
if settings.get("create_backup_locations"):
k8s_yaml('tilt-resources/velero_v1_backupstoragelocation.yaml')
if settings.get("setup-minio"):
@@ -52,7 +50,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.20.10 as tilt-helper
FROM golang:1.16.6 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
@@ -62,9 +60,9 @@ RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com
additional_docker_helper_commands = """
# Install delve to allow debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
RUN go get github.com/go-delve/delve/cmd/dlv
RUN wget -qO- https://dl.k8s.io/v1.25.2/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://dl.k8s.io/v1.19.2/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://get.docker.com | sh
"""
@@ -105,12 +103,12 @@ local_resource(
local_resource(
"restic_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=linux GOARCH=amd64 GOARM="" RESTIC_VERSION=0.13.1 OUTPUT_DIR=_tiltbuild/restic ./hack/build-restic.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 RESTIC_VERSION=0.12.0 OUTPUT_DIR=_tiltbuild/restic ./hack/download-restic.sh',
)
# Note: we need a distro with a bash shell to exec into the Velero container
tilt_dockerfile_header = """
FROM ubuntu:22.04 as tilt
FROM ubuntu:focal as tilt
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -qq -y ca-certificates tzdata && rm -rf /var/lib/apt/lists/*
@@ -218,7 +216,7 @@ def enable_provider(provider):
# Note: we need a distro with a shell to do a copy of the plugin binary
tilt_dockerfile_header = """
FROM ubuntu:22.04 as tilt
FROM ubuntu:focal as tilt
WORKDIR /
COPY --from=tilt-helper /start.sh .
COPY --from=tilt-helper /restart.sh .


@@ -1,6 +1,6 @@
# Velero Assets
This folder contains logo images for Velero in gray (for light backgrounds) and white (for dark backgrounds like black t-shirts or dark mode!) horizontal and stacked… in .eps and .svg.
This folder contains logo images for Velero in gray (for light backgrounds) and white (for dark backgrounds like black tshirts or dark mode!) horizontal and stacked… in .eps and .svg.
## Some general guidelines for usage


@@ -154,7 +154,7 @@
* Skip completed jobs and pods when restoring (#463, @nrb)
* Set namespace correctly when syncing backups from object storage (#472, @skriss)
* When building on macOS, bind-mount volumes with delegated config (#478, @skriss)
* Add replica sets and daemonsets to cohabiting resources so they're not backed up twice (#482 #485, @skriss)
* Add replica sets and daemonsets to cohabitating resources so they're not backed up twice (#482 #485, @skriss)
* Shut down the Ark server gracefully on SIGINT/SIGTERM (#483, @skriss)
* Only back up resources that support GET and DELETE in addition to LIST and CREATE (#486, @nrb)
* Show a better error message when trying to get an incomplete restore's logs (#496, @nrb)


@@ -1,190 +0,0 @@
## v1.10.0
### 2022-11-23
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.10.0
### Container Image
`velero/velero:v1.10.0`
### Documentation
https://velero.io/docs/v1.10/
### Upgrading
https://velero.io/docs/v1.10/upgrade-to-1.10/
### Highlights
#### Unified Repository and Kopia integration
In this release, we introduced the Unified Repository architecture to build a data path where data movers and the backup repository are decoupled and a unified backup repository could serve various data movement activities.
In this release, we also deeply integrated Velero with Kopia: specifically, Kopia's uploader modules are isolated as a generic file system uploader, and Kopia's repository modules are encapsulated as the unified backup repository.
For more information, refer to the [design document](https://github.com/vmware-tanzu/velero/blob/v1.10.0/design/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md).
#### File system backup refactor
Velero's file system backup (a.k.a. pod volume backup, formerly restic backup) is refactored as the first user of the Unified Repository architecture. Specifically, we added a new path, the Kopia path, besides the existing Restic path. While the Restic path is still available and set as the default, you can opt in to the Kopia path by specifying the `uploader-type` parameter at installation time. Meanwhile, you are free to restore from existing backups under either path; Velero dynamically switches to the correct path to process the restore.
Because of the new path, we renamed some modules and parameters; refer to the Breaking Changes section for more details.
For more information, visit the [file system backup document](https://velero.io/docs/v1.10/file-system-backup/) and [v1.10 upgrade guide document](https://velero.io/docs/v1.10/upgrade-to-1.10/).
Meanwhile, we've created a performance guide for both the Restic path and the Kopia path, which helps you choose between the two paths and provides best practices for configuring them under different scenarios. Please note that the results in the guide are based on our testing environments; you may get different results when testing in your own ones. For more information, visit the [performance guide document](https://velero.io/docs/v1.10/performance-guidance/).
#### Plugin versioning V1 refactor
In this release, Velero moves the BackupItemAction, RestoreItemAction and VolumeSnapshotter plugins to version v1. This allows future plugin changes that do not maintain backward compatibility, in preparation for various complex tasks, for example, data movement tasks.
For more information, refer to the [plugin versioning design document](https://github.com/vmware-tanzu/velero/blob/v1.10.0/design/plugin-versioning.md).
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Add credentials to volume snapshot locations
In this release, we added dedicated credential options to volume snapshot locations, so that you can specify credentials per volume snapshot location just as you can for backup storage locations.
For more information, please visit the [locations document](https://velero.io/docs/v1.10/locations/).
#### CSI snapshot enhancements
In this release we made several changes to enhance the robustness of CSI snapshot procedures, for example, protection code for error handling, and a mechanism to skip exclusion checks so that CSI snapshots work with various backup resource filters.
#### Backup schedule pause/unpause
In this release, Velero supports pausing/unpausing a backup schedule during or after its creation. Specifically:
* At creation time, you can specify the `paused` flag to the `velero schedule create` command; if so, you will create a paused schedule that will not run until it is unpaused
* After creation, you can run the `velero schedule pause` or `velero schedule unpause` command to pause/unpause a schedule, with the semantics sketched below
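A minimal sketch of the pause semantics described above. Velero's Schedule CR does expose a spec-level paused flag, but the struct and function here are illustrative, not the actual controller code.
```go
package main

import (
	"fmt"
	"time"
)

// Schedule is a pared-down stand-in for Velero's Schedule CR.
type Schedule struct {
	Name    string
	Paused  bool
	NextRun time.Time
}

// reconcile shows the semantics the pause feature implies: a paused schedule
// is skipped entirely, so no backup is created until it is unpaused.
func reconcile(s *Schedule, now time.Time) {
	if s.Paused {
		fmt.Printf("schedule %s is paused, skipping\n", s.Name)
		return
	}
	if !now.Before(s.NextRun) {
		fmt.Printf("schedule %s is due, creating backup\n", s.Name)
	}
}

func main() {
	now := time.Now()
	reconcile(&Schedule{Name: "daily", Paused: true, NextRun: now}, now)
	reconcile(&Schedule{Name: "weekly", Paused: false, NextRun: now}, now)
}
```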
#### Runtime and dependencies
In order to fix CVEs, we changed Velero's runtime and dependencies as follows:
* Bump go runtime to v1.18.8
* Bump some core dependent libraries to newer versions
* Compile Restic (v0.13.1) with go 1.18.8 instead of packaging the official binary
#### Breaking changes
Due to the file system backup refactor, the following module and parameter names have changed in this release:
* `restic` daemonset is renamed to `node-agent`
* `resticRepository` CR is renamed to `backupRepository`
* `velero restic repo` command is renamed to `velero repo`
* `velero-restic-credentials` secret is renamed to `velero-repo-credentials`
* `default-volumes-to-restic` parameter is renamed to `default-volumes-to-fs-backup`
* `restic-timeout` parameter is renamed to `fs-backup-timeout`
* `default-restic-prune-frequency` parameter is renamed to `default-repo-maintain-frequency`
#### Upgrade
Due to the major changes to file system backup, the old upgrade steps are no longer suitable. For the new upgrade steps, visit the [v1.10 upgrade guide document](https://velero.io/docs/v1.10/upgrade-to-1.10/).
#### Limitations/Known issues
In this release, the Kopia backup repository (and so the Kopia path of file system backup) doesn't support self-signed certificates for S3-compatible storage. To track this problem, refer to this [Velero issue](https://github.com/vmware-tanzu/velero/issues/5123) or [Kopia issue](https://github.com/kopia/kopia/issues/1443).
Due to code changes in Velero, some corresponding changes are required in the vSphere plugin, without which its functionality may be impacted. Therefore, if you are using the vSphere plugin in your workflow, please hold off on the upgrade until issue [#485](https://github.com/vmware-tanzu/velero-plugin-for-vsphere/issues/485) is fixed in the vSphere plugin.
### All changes
* Restore ClusterBootstrap before Cluster, otherwise a new default ClusterBootstrap object is created for the cluster (#5616, @ywk253100)
* Add compile restic binary for CVE fix (#5574, @qiuming-best)
* Fix controller problematic log output (#5572, @qiuming-best)
* Enhance the restore priorities list to support specifying the low prioritized resources that need to be restored in the last (#5535, @ywk253100)
* fix restic backup progress error (#5534, @qiuming-best)
* fix restic backup failure with self-signed certification backend storage (#5526, @qiuming-best)
* Add credential store in backup deletion controller to support VSL credential. (#5521, @blackpiglet)
* Fix issue 5505: the pod volume backups/restores except the first one fail under the kopia path if "AZURE_CLOUD_NAME" is specified (#5512, @Lyndon-Li)
* After Pod Volume Backup/Restore refactor, remove all the unreasonable appearance of "restic" word from documents (#5499, @Lyndon-Li)
* Refactor Pod Volume Backup/Restore doc to match the new behavior (#5484, @Lyndon-Li)
* Remove redundancy code block left by #5388. (#5483, @blackpiglet)
* Issue fix 5477: create the common way to support S3 compatible object storages that work for both Restic and Kopia; Keep the resticRepoPrefix parameter for compatibility (#5478, @Lyndon-Li)
* Update the k8s.io dependencies to 0.24.0.
This also required an update to github.com/bombsimon/logrusr/v3.
Removed the `WithClusterName` method
as it is a "legacy field that was
always cleared by the system and never used" as per upstream k8s
https://github.com/kubernetes/apimachinery/blob/release-1.24/pkg/apis/meta/v1/types.go#L257-L259 (#5471, @kcboyle)
* Add v1.10 velero upgrade doc (#5468, @qiuming-best)
* Upgrade velero docker image to use go 1.18 and upgrade golangci-lint to 1.45.0 (#5459, @Lyndon-Li)
* Add VolumeSnapshot client back. (#5449, @blackpiglet)
* Change subcommand `velero restic repo` to `velero repo` (#5446, @allenxu404)
* Remove irrational "Restic" names in Velero code after the PVBR refactor (#5444, @Lyndon-Li)
* moved RIA execute input/output structs back to velero package (#5441, @sseago)
* Rename Velero pod volume restore init helper from "velero-restic-restore-helper" to "velero-restore-helper" (#5432, @Lyndon-Li)
* Skip the exclusion check for additional resources returned by BIA (#5429, @reasonerjt)
* Change B/R describe CLI to support Kopia (#5412, @allenxu404)
* Add nil check before execution of csi snapshot delete (#5401, @shubham-pampattiwar)
* update velero using klog to version v2.9.0 (#5396, @blackpiglet)
* Fix Test_prepareBackupRequest_BackupStorageLocation UT failure. (#5394, @blackpiglet)
* Rename Velero daemonset from "restic" to "node-agent" (#5390, @Lyndon-Li)
* Add some corner cases checking for CSI snapshot in backup controller. (#5388, @blackpiglet)
* Fix issue 5386: Velero provides a full URL as the S3Url while the underlying minio client only accepts the host part of the URL as the endpoint, and the schema should be specified separately. (#5387, @Lyndon-Li)
* Fix restore error with flag namespace-mappings (#5377, @qiuming-best)
* Pod Volume Backup/Restore Refactor: Rename parameters in CRDs and commands to remove "Restic" word (#5370, @Lyndon-Li)
* Added backupController's UT to test the prepareBackupRequest() method BackupStorageLocation processing logic (#5362, @niulechuan)
* Fix a repoEnsurer problem introduced by the refactor - The repoEnsurer didn't check "" state of BackupRepository, as a result, the function GetBackupRepository always returns without an error even though ensureReady is specified. (#5359, @Lyndon-Li)
* Add E2E test for schedule backup (#5355, @danfengliu)
* Add useOwnerReferencesInBackup field doc for schedule. (#5353, @cleverhu)
* Clarify the help message for the default value of parameter --snapshot-volumes, when it's not set. (#5350, @blackpiglet)
* Fix restore cmd extraflag overwrite bug (#5347, @qiuming-best)
* Resolve gopkg.in/yaml.v3 vulnerabilities by upgrading gopkg.in/yaml.v3 to v3.0.1 (#5344, @kaovilai)
* Increase ensure restic repository timeout to 5m (#5335, @shubham-pampattiwar)
* Add opt-in and opt-out PersistentVolume backup to E2E tests (#5331, @danfengliu)
* Cancel downloadRequest when timeout without downloadURL (#5329, @kaovilai)
* Fix PVB finds wrong parent snapshot (#5322, @qiuming-best)
* Fix issue 4874 and 4752: check the daemonset pod is running in the node where the workload pod resides before running the PVB for the pod (#5319, @Lyndon-Li)
* plugin versioning v1 refactor for VolumeSnapshotter (#5318, @sseago)
* Change the status of restore to completed from partially failed when restore empty backup (#5314, @allenxu404)
* RestoreItemAction v1 refactoring for plugin api versioning (#5312, @sseago)
* Refactor the repoEnsurer code to use controller runtime client and wrap some common BackupRepository operations to share with other modules (#5308, @Lyndon-Li)
* Remove snapshot related lister, informer and client from backup controller. (#5299, @jxun)
* Remove github.com/apex/log logger. (#5297, @blackpiglet)
* change CSISnapshotTimeout from pointer to normal variables. (#5294, @cleverhu)
* Optimize code for restore exists resources. (#5293, @cleverhu)
* Add more detailed comments for labels columns. (#5291, @cleverhu)
* Add backup status checking in schedule controller. (#5283, @blackpiglet)
* Add changes for problems/enhancements found during smoking test for Kopia pod volume backup/restore (#5282, @Lyndon-Li)
* Support pause/unpause schedules (#5279, @ywk253100)
* plugin/clientmgmt refactoring for BackupItemAction v1 (#5271, @sseago)
* Don't move velero v1 plugins to new proto dir (#5263, @sseago)
* Fill gaps for Kopia path of PVBR: integrate Repo Manager with Unified Repo; pass UploaderType to PVBR backupper and restorer; pass RepositoryType to BackupRepository controller and Repo Ensurer (#5259, @Lyndon-Li)
* Add csiSnapshotTimeout for describe backup (#5252, @cleverhu)
* equip gc controller with configurable frequency (#5248, @allenxu404)
* Fix nil pointer panic when restoring StatefulSets (#5247, @divolgin)
* Controller refactor code modifications. (#5241, @jxun)
* Fix edge cases for already exists resources (#5239, @shubham-pampattiwar)
* Check for empty ns list before checking nslist[0] (#5236, @sseago)
* Remove reference to non-existent doc (#5234, @reasonerjt)
* Add changes for Kopia Integration: Kopia Lib - method implementation. Add changes to write Kopia Repository logs to Velero log (#5233, @Lyndon-Li)
* Add changes for Kopia Integration: Kopia Lib - initialize Kopia repo (#5231, @Lyndon-Li)
* Uploader Implementation: Kopia backup and restore (#5221, @qiuming-best)
* Migrate backup sync controller from code-generator to kubebuilder. (#5218, @jxun)
* check vsc null pointer (#5217, @lilongfeng0902)
* Refactor GCController with kubebuilder (#5215, @allenxu404)
* Uploader Implementation: Restic backup and restore (#5214, @qiuming-best)
* Add parameter "uploader-type" to velero server (#5212, @reasonerjt)
* Add annotation "pv.kubernetes.io/migrated-to" for CSI checking. (#5181, @jxun)
* Add changes for Kopia Integration: Unified Repository Provider - method implementation (#5179, @Lyndon-Li)
* Treat namespaces with exclude label as excludedNamespaces
Related issue: #2413 (#5178, @allenxu404)
* Reduce CRD size. (#5174, @jxun)
* Fix restic backups to multiple backup storage locations bug (#5172, @qiuming-best)
* Add changes for Kopia Integration: Unified Repository Provider - Repo Password (#5167, @Lyndon-Li)
* Skip registering "crd-remap-version" plugin when feature flag "EnableAPIGroupVersions" is set (#5165, @reasonerjt)
* Kopia uploader integration on shim progress uploader module (#5163, @qiuming-best)
* Add labeled and unlabeled events for PR changelog check action. (#5157, @jxun)
* VolumeSnapshotLocation refactor with kubebuilder. (#5148, @jxun)
* Delay CA file deletion in PVB controller. (#5145, @jxun)
* This commit splits the pkg/restic package into several packages to support Kopia integration works (#5143, @ywk253100)
* Kopia Integration: Add the Unified Repository Interface definition. Kopia Integration: Add the changes for Unified Repository storage config. Related Issues; #5076, #5080 (#5142, @Lyndon-Li)
* Update the CRD for kopia integration (#5135, @reasonerjt)
* Let "make shell xxx" respect GOPROXY (#5128, @reasonerjt)
* Modify BackupStoreGetter to avoid BSL spec changes (#5122, @sseago)
* Dump stack trace when the plugin server handles panic (#5110, @reasonerjt)
* Make CSI snapshot creation timeout configurable. (#5104, @jxun)
* Fix bsl validation bug: the BSL is validated continually and doesn't respect the validation period configured (#5101, @ywk253100)
* Exclude "csinodes.storage.k8s.io" and "volumeattachments.storage.k8s.io" from restore by default. (#5064, @jxun)
* Move 'velero.io/exclude-from-backup' label string to const (#5053, @niulechuan)
* Modify Github actions. (#5052, @jxun)
* Fix typo in doc, in https://velero.io/docs/main/restore-reference/ "Restore order" section, "Mamespace" should be "Namespace". (#5051, @niulechuan)
* Delete opened issues triage action. (#5041, @jxun)
* When spec.RestoreStatus is empty, don't restore status (#5008, @sseago)
* Added DownloadTargetKindCSIBackupVolumeSnapshots for retrieving the signed URL to download only the `<backup name>`-csi-volumesnapshots.json.gz and DownloadTargetKindCSIBackupVolumeSnapshotContents to download only `<backup name>`-csi-volumesnapshotcontents.json.gz in the DownloadRequest CR structure. These files are already present in the backup layout. (#4980, @anshulahuja98)
* Refactor BackupItemAction proto and related code to backupitemaction/v1 package. This is part of implementation of the plugin version design https://github.com/vmware-tanzu/velero/blob/main/design/plugin-versioning.md (#4943, @phuongatemc)
* Unified Repository Design (#4926, @Lyndon-Li)
* Add credentials to volume snapshot locations (#4864, @sseago)

View File

@@ -1,126 +0,0 @@
## v1.11
### 2023-04-07
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.11.0
### Container Image
`velero/velero:v1.11.0`
### Documentation
https://velero.io/docs/v1.11/
### Upgrading
https://velero.io/docs/v1.11/upgrade-to-1.11/
### Highlights
#### BackupItemAction v2
This feature implements BackupItemAction v2. BIA v2 adds two new methods, Progress() and Cancel(), and modifies the Execute() return value.
The API change is needed to facilitate long-running BackupItemAction plugin actions that may not be complete when the Execute() method returns. This allows long-running BackupItemAction plugin actions to continue in the background while Velero moves on to the next plugin or the next item.
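A hedged sketch of the contract this describes. Every name and signature here is an assumption for illustration; the real Velero API takes Kubernetes objects and backup CRs as inputs rather than raw bytes.
```go
// Package biav2 sketches the v2 BackupItemAction shape described above.
package biav2

// OperationProgress carries the polling result for a long-running action.
type OperationProgress struct {
	Completed      bool
	NCompleted     int64
	NTotal         int64
	OperationUnits string
}

type BackupItemActionV2 interface {
	// Execute may start work that outlives the call; the returned
	// operationID identifies it for later Progress/Cancel calls.
	Execute(item []byte) (updatedItem []byte, operationID string, err error)
	// Progress reports how far the background operation has advanced,
	// letting Velero move on to other items while this one finishes.
	Progress(operationID string) (OperationProgress, error)
	// Cancel asks the plugin to abandon the background operation.
	Cancel(operationID string) error
}
```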
#### RestoreItemAction v2
This feature implements RestoreItemAction v2. RIA v2 adds three new methods, Progress(), Cancel(), and AreAdditionalItemsReady(), and modifies the RestoreItemActionExecuteOutput() structure in the RIA return value.
The Progress() and Cancel() methods are needed to facilitate long-running RestoreItemAction plugin actions that may not be complete when the Execute() method returns. This allows long-running RestoreItemAction plugin actions to continue in the background while Velero moves on to the next plugin or the next item. The AreAdditionalItemsReady() method is needed to allow plugins to tell Velero to wait until the returned additional items have been restored and are ready for use in the cluster before restoring the current item.
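And a corresponding hedged sketch for RIA v2; names and signatures are again assumptions, not Velero's exact API.
```go
// Package riav2 sketches the v2 RestoreItemAction shape described above.
package riav2

type OperationProgress struct {
	Completed          bool
	NCompleted, NTotal int64
}

type RestoreItemActionV2 interface {
	Execute(item []byte) (updatedItem []byte, operationID string, err error)
	Progress(operationID string) (OperationProgress, error)
	Cancel(operationID string) error
	// AreAdditionalItemsReady lets a plugin tell Velero to wait until the
	// additional items it returned are restored and usable in the cluster
	// before the current item is restored.
	AreAdditionalItemsReady(additionalItems []string) (bool, error)
}
```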
#### Plugin Progress Monitoring
This is intended as a replacement for the previously-approved Upload Progress Monitoring design ([Upload Progress Monitoring](https://github.com/vmware-tanzu/velero/blob/main/design/upload-progress.md)) to expand the supported use cases beyond snapshot upload to include what was previously called Async Backup/Restore Item Actions.
#### Flexible resource policy that can filter volumes to skip in the backup
This feature provides a flexible policy to filter volumes out of a backup without requiring you to patch any labels or annotations onto the pods or volumes. The policy is configured as a Kubernetes ConfigMap maintained by the users themselves, and it can be extended to more scenarios in the future. Currently, the policy can rule volumes out of a backup based on the CSI driver, NFS settings, volume size, and StorageClass settings. Please refer to the [policy API design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md#api-design) for the policy's ConfigMap format. It is not guaranteed to work with unofficial third-party plugins, as they may not follow the existing backup workflow code logic of Velero.
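As a rough Go rendering of the filter dimensions listed above. The real policy is declared in a ConfigMap whose schema is given in the linked design doc; the field names and matching rules below are assumptions for illustration only.
```go
package main

import "fmt"

// skipConditions mirrors the filter dimensions described above (CSI driver,
// NFS, volume size, storage class); the real ConfigMap schema differs.
type skipConditions struct {
	CSIDriver    string
	NFS          bool
	MaxSizeGiB   int
	StorageClass string
}

type volume struct {
	CSIDriver    string
	NFS          bool
	SizeGiB      int
	StorageClass string
}

// skip reports whether every configured condition matches the volume, in
// which case the policy rules the volume out of the backup.
func (c skipConditions) skip(v volume) bool {
	if c.CSIDriver != "" && v.CSIDriver != c.CSIDriver {
		return false
	}
	if c.NFS && !v.NFS {
		return false
	}
	if c.MaxSizeGiB > 0 && v.SizeGiB > c.MaxSizeGiB {
		return false
	}
	if c.StorageClass != "" && v.StorageClass != c.StorageClass {
		return false
	}
	return true
}

func main() {
	policy := skipConditions{StorageClass: "scratch", MaxSizeGiB: 100}
	fmt.Println(policy.skip(volume{StorageClass: "scratch", SizeGiB: 10})) // true: skipped
	fmt.Println(policy.skip(volume{StorageClass: "gold", SizeGiB: 10}))    // false: backed up
}
```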
#### Resource Filters that can distinguish cluster scope and namespace scope resources
This feature adds four new resource filters for backup. The new filters are separated into cluster scope and namespace scope. Before this feature, Velero could not filter cluster-scoped resources precisely. This feature provides that ability and refactors the existing resource filter parameters.
#### Add a parameter for setting the Velero server connection with the k8s API server's timeout
In Velero, some code paths need to communicate with the Kubernetes API server. Before v1.11, these code paths used hard-coded timeout settings. This feature adds a `resource-timeout` parameter to the velero server binary to make the timeout configurable.
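The pattern this parameter enables looks roughly like the following. The flag name comes from the text above, but the helper is illustrative rather than Velero code.
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// getWithTimeout wraps an API-server call in a configurable timeout instead
// of a hard-coded one, which is the idea behind the resource-timeout flag.
func getWithTimeout(resourceTimeout time.Duration, get func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(context.Background(), resourceTimeout)
	defer cancel()
	return get(ctx)
}

func main() {
	err := getWithTimeout(2*time.Second, func(ctx context.Context) error {
		select {
		case <-time.After(5 * time.Second): // simulate a slow API server
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println(err) // context deadline exceeded
}
```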
#### Add resource list in the output of the restore describe command
Before this feature, a Velero restore didn't report a list of restored resources the way a Velero backup does, so it was not convenient for users to learn what was restored. This feature adds the resource list along with the handling result for each resource (created, updated, failed, or skipped).
#### Refactor controllers with controller-runtime
In v1.11, the Backup controller and Restore controller are refactored with controller-runtime. As of v1.11, all Velero controllers use the controller-runtime framework.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.19.8.
* Bump several dependent libraries to new versions.
* Compile Restic (v0.15.0) with Golang v1.19.8 instead of packaging the official binary.
### Breaking changes
* The Velero CSI plugin now determines whether to restore a volume's data from snapshots based on the restore's restorePVs setting. Before v1.11, the CSI plugin didn't check the restorePVs parameter setting.
### Limitations/Known issues
* The flexible resource policy that can filter volumes out of the backup is not guaranteed to work with unofficial third-party plugins, because those plugins may not follow the existing backup workflow code logic of Velero. The ConfigMap used as the policy is supposed to be maintained by users.
### All Changes
* Modify new scope resource filters name. (#6089, @blackpiglet)
* Make Velero not exits when EnableCSI is on and CSI snapshot not installed (#6062, @blackpiglet)
* Restore Services before Clusters (#6057, @ywk253100)
* Fixed backup deletion bug related to async operations (#6041, @sseago)
* Update Golang version to v1.19 for branch main. (#6039, @blackpiglet)
* Fix issue #5972, don't assume errorField as error type when dealing with logger.WithError (#6028, @Lyndon-Li)
* distinguish between New and InProgress operations (#6012, @sseago)
* Modify golangci.yaml file. Resolve found lint issues. (#6008, @blackpiglet)
* Remove Reference of itemsnapshotter (#5997, @reasonerjt)
* minor fixes for backup_operations_controller (#5996, @sseago)
* RIAv2 async operations controller work (#5993, @sseago)
* Follow-on fixes for BIAv2 controller work (#5971, @sseago)
* Refactor backup controller based on the controller-runtime framework. (#5969, @qiuming-best)
* Fix client wait problem after async operation change, velero backup/restore --wait should check a full list of the terminal status (#5964, @Lyndon-Li)
* Fix issue #5935, refactor the logics for backup/restore persistent log, so as to remove the contest to gzip writer (#5956, @Lyndon-Li)
* Switch the base image to distroless/base-nossl-debian11 to reduce the CVE triage efforts (#5939, @ywk253100)
* Wait for additional items to be ready before restoring current item (#5933, @sseago)
* Add configurable server setting for default timeouts (#5926, @eemcmullan)
* Add warning/error result to cmd `velero backup describe` (#5916, @allenxu404)
* Fix Dependabot alerts. Use 1.18 and 1.19 golang instead of patch image in dockerfile. Add release-1.10 and release-1.9 in Trivy daily scan. (#5911, @blackpiglet)
* Update client-go to v0.25.6 (#5907, @kaovilai)
* Limit the concurrent number for backup's VolumeSnapshot operation. (#5900, @blackpiglet)
* Fix goreleaser issue for resolving tags and updated it's version. (#5899, @anshulahuja98)
* This is to fix issue 5881, enhance the PVB tracker in two modes, Track and Taken (#5894, @Lyndon-Li)
* Add labels for velero installed namespace to support PSA. (#5873, @blackpiglet)
* Add restored resource list in the restore describe command (#5867, @ywk253100)
* Add a json output to cmd velero backup describe (#5865, @allenxu404)
* Make restore controller adopting the controller-runtime framework. (#5864, @blackpiglet)
* Replace k8s.io/apimachinery/pkg/util/clock with k8s.io/utils/clock (#5859, @hezhizhen)
* Restore finalizer and managedFields of metadata during the restoration (#5853, @ywk253100)
* BIAv2 async operations controller work (#5849, @sseago)
* Add secret restore item action to handle service account token secret (#5843, @ywk253100)
* Add new resource filters can separate cluster and namespace scope resources. (#5838, @blackpiglet)
* Correct PVB/PVR Failed Phase patching during startup (#5828, @kaovilai)
* bump up golang net to fix CVE-2022-41721 (#5812, @Lyndon-Li)
* Update CRD descriptions for SnapshotVolumes and restorePVs (#5807, @shubham-pampattiwar)
* Add mapped selected-node existence check (#5806, @blackpiglet)
* Add option "--service-account-name" to install cmd (#5802, @reasonerjt)
* Enable staticcheck linter. (#5788, @blackpiglet)
* Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best)
* Bump up Restic version to 0.15.0 (#5784, @qiuming-best)
* Add File system backup related metrics to Grafana dashboard
- Add metrics backup_warning_total for record of total warnings
- Add metrics backup_last_status for record of last status of the backup (#5779, @allenxu404)
* Design for Handling backup of volumes by resources filters (#5773, @qiuming-best)
* Add PR container build action, which will not push image. Add GOARM parameter. (#5771, @blackpiglet)
* Fix issue 5458, track pod volume backup until the CR is submitted in case it is skipped half way (#5769, @Lyndon-Li)
* Fix issue 5226, invalidate the related backup repositories whenever the backup storage info change in BSL (#5768, @Lyndon-Li)
* Add Restic builder in Dockerfile, and keep the used built Golang image version in accordance with upstream Restic. (#5764, @blackpiglet)
* Fix issue 5043, after the restore pod is scheduled, check if the node-agent pod is running in the same node. (#5760, @Lyndon-Li)
* Remove restore controller's redundant client. (#5759, @blackpiglet)
* Define itemoperations.json format and update DownloadRequest API (#5752, @sseago)
* Add Trivy nightly scan. (#5740, @jxun)
* Fix issue 5696, check if the repo is still openable before running the prune and forget operation, if not, try to reconnect the repo (#5715, @Lyndon-Li)
* Fix error with Restic backup empty volumes (#5713, @qiuming-best)
* new backup and restore phases to support async plugin operations:
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed (#5710, @sseago)
* Prevent nil panic on exec restore hooks (#5675, @dymurray)
* Fix CVEs scanned by trivy (#5653, @qiuming-best)
* Publish backupresults json to enhance error info during backups. (#5576, @anshulahuja98)
* RestoreItemAction v2 API implementation (#5569, @sseago)
* add new RestoreItemAction of "velero.io/change-image-name" to handle the issue mentioned at #5519 (#5540, @wenterjoy)
* BackupItemAction v2 API implementation (#5442, @sseago)
* Proposal to separate resource filter into cluster scope and namespace scope (#5333, @blackpiglet)

View File

@@ -1,262 +0,0 @@
## v1.12.4
### 2024-02-26
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.4
### Container Image
`velero/velero:v1.12.4`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### All changes
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7397, @kaovilai)
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck. (#7337, @kaovilai)
* Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7298, @ywk253100)
* Add description markers for dataupload and datadownload CRDs (#7042, @shubham-pampattiwar)
## v1.12.3
### 2024-01-09
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.3
### Container Image
`velero/velero:v1.12.3`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### All changes
* Fix issue #7244. By the end of the upload, check the outstanding incomplete snapshots and delete them by calling ApplyRetentionPolicy (#7247, @Lyndon-Li)
* Fix issue #7189, data mover generic restore - don't assume the first volume as the restore volume (#7203, @Lyndon-Li)
* Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content. (#7202, @blackpiglet)
* Node agent restart enhancement (#7130, @qiuming-best)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7283, @Lyndon-Li)
## v1.12.2
### 2023-11-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.2
### Container Image
`velero/velero:v1.12.2`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### All changes
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7114, @Lyndon-Li)
* Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7111, @kaovilai)
* Cherry-pick #6917 - Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#7049, @27149chen)
* Bump up Velero base image to latest patch release (#7110, @allenxu404)
* Fix the node-agent missing metrics-address defines. (#7098, @yanggangtony)
* Fix issue #7094, fallback to full backup if previous snapshot is not found (#7097, @Lyndon-Li)
* Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7087, @blackpiglet)
* Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7060, @Lyndon-Li)
* Truncate the credential file to avoid the change of secret content messing it up (#7058, @ywk253100)
* restore: Use warning when Create IsAlreadyExist and Get error (#7054, @kaovilai)
* Read information from the credential specified by BSL (#7033, @ywk253100)
* Fix issue 6913: Velero Built-in Datamover: Backup stucks in phase WaitingForPluginOperations when Node Agent pod gets restarted (#7025, @shubham-pampattiwar)
* Fix unified repository (kopia) s3 credentials profile selection (#6997, @kaovilai)
## v1.12.1
### 2023-10-20
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.1
### Container Image
`velero/velero:v1.12.1`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### Highlights
#### Data Mover Adds Support for Block Mode Volumes
For PersistentVolumes with volumeMode set to Block, the volumes are mounted in pods as raw block devices. In 1.12.1, Velero CSI snapshot data movement supports backing up and restoring this kind of volume on Linux-based Kubernetes clusters.
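The distinguishing check is the volume's volumeMode. Here is a minimal standalone sketch, mirroring the core/v1 enum rather than importing it:
```go
package main

import "fmt"

// PersistentVolumeMode mirrors the Kubernetes core/v1 enum; checking it is
// how a backup tool can tell a raw block device from a filesystem volume.
type PersistentVolumeMode string

const (
	PersistentVolumeBlock      PersistentVolumeMode = "Block"
	PersistentVolumeFilesystem PersistentVolumeMode = "Filesystem"
)

func isBlockMode(mode *PersistentVolumeMode) bool {
	return mode != nil && *mode == PersistentVolumeBlock
}

func main() {
	block := PersistentVolumeBlock
	fmt.Println(isBlockMode(&block)) // true
	fmt.Println(isBlockMode(nil))    // false: an unset mode defaults to Filesystem
}
```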
#### New Parameter in Installation to Enable Data Mover
The `velero install` sub-command now includes a new parameter, `--default-snapshot-move-data`, which configures the Velero server to move data by default for all snapshots supporting data movement. This feature is useful for users who always want to use VBDM for backups instead of plain CSI, as they no longer need to specify the `--snapshot-move-data` flag for each individual backup.
#### Velero Base Image change
The base image previously used by Velero was `distroless`, which contains several CVEs that cannot be addressed quickly. As a result, Velero now uses a `paketobuildpacks` image starting from this version.
### Limitations/Known issues
* The data mover's support for block mode volumes is currently only applicable to Linux environments.
### All changes
* Import auth provider plugins (#6970, @0x113)
* Perf improvements for existing resource restore (#6948, @sseago)
* Retry failed create when using generateName (#6943, @sseago)
* Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6940, @Lyndon-Li)
* Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6935, @Lyndon-Li)
* Replace the base image with paketobuildpacks image (#6934, @ywk253100)
* Add support for block volumes with Kopia (#6897, @dzaninovic)
* Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6886, @Lyndon-Li)
* Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid are deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6713, @kaovilai)
* Add `orLabelSelectors` for backup, restore commands (#6881, @nilesh-akhade)
* Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies to unnecessary repository packages like kopia, azure, etc. (#6877, @Lyndon-Li)
* Fix issue #6786, always delete VSC regardless of the deletion policy (#6873, @Lyndon-Li)
* Fix #6988, always get region from BSL if it is not empty (#6991, @Lyndon-Li)
* Add both non-Windows version and Windows version code for PVC block mode logic. (#6986, @blackpiglet)
## v1.12
### 2023-08-18
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.0
### Container Image
`velero/velero:v1.12.0`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### Highlights
#### CSI Snapshot Data Movement
CSI Snapshot Data Movement refers to backing up CSI snapshot data from the volatile and limited production environment into durable, heterogeneous, and scalable backup storage in a consistent manner, and restoring the data to volumes in the original or an alternative environment.
CSI Snapshot Data Movement is useful in the following scenarios:
* For on-premises users, the storage usually doesn't support durable snapshots, so it is impossible, less efficient, or cost-ineffective to keep volume snapshots in the storage. This feature helps to move the snapshot data to storage with lower cost and larger scale for long-term preservation.
* For public cloud users, this feature helps fulfill a multi-cloud strategy. It allows users to back up volume snapshots from one cloud provider and preserve or restore the data to another cloud provider. Users are then free to move their business data across cloud providers with Velero backup and restore.
CSI Snapshot Data Movement is built according to the Volume Snapshot Data Movement design ([Volume Snapshot Data Movement design](https://github.com/vmware-tanzu/velero/blob/main/design/volume-snapshot-data-movement/volume-snapshot-data-movement.md)). Additionally, guidance on how to use the feature can be found in the Volume Snapshot Data Movement doc([Volume Snapshot Data Movement doc](https://velero.io/docs/v1.12/csi-snapshot-data-movement)).
#### Resource Modifiers
In many use cases, customers often need to substitute specific values in Kubernetes resources during the restoration process, such as changing the namespace or the storage class.
To address this need, Resource Modifiers (also known as JSON Substitutions) offer a generic solution in the restore workflow. It allows the user to define filters for specific resources and then specify a JSON patch (operator, path, value) to apply to the resource. This feature simplifies the process of making substitutions without requiring the implementation of a new RestoreItemAction plugin. More design details can be found in Resource Modifiers design ([Resource Modifiers design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/json-substitution-action-design.md)). For instructions on how to use the feature, please refer to Resource Modifiers doc([Resource Modifiers doc](https://velero.io/docs/v1.12/restore-resource-modifiers)).
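For example, a single (operator, path, value) substitution is an RFC 6902 JSON Patch operation. The sketch below applies one with the evanphx/json-patch library; whether Velero uses this exact library internally is an assumption, but the patch semantics are what the feature exposes.
```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// A PVC manifest fragment standing in for a resource matched by a
	// Resource Modifier filter.
	original := []byte(`{"spec":{"storageClassName":"standard"}}`)

	// One (operator, path, value) substitution expressed as JSON Patch.
	patch, err := jsonpatch.DecodePatch([]byte(
		`[{"op":"replace","path":"/spec/storageClassName","value":"premium"}]`))
	if err != nil {
		panic(err)
	}

	modified, err := patch.Apply(original)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(modified)) // {"spec":{"storageClassName":"premium"}}
}
```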
#### Multiple VolumeSnapshotClasses
Prior to version 1.12, the Velero CSI plugin would choose the VolumeSnapshotClass in the cluster based on matching driver names and the presence of the "velero.io/csi-volumesnapshot-class" label. However, this approach proved inadequate for many user scenarios.
With the introduction of version 1.12, Velero now offers support for multiple VolumeSnapshotClasses in the CSI Plugin, enabling users to select a specific class for a particular backup. More design details can be found in Multiple VolumeSnapshotClasses design ([Multiple VolumeSnapshotClasses design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/multiple-csi-volumesnapshotclass-support.md)). For instructions on how to use the feature, please refer to Multiple VolumeSnapshotClasses doc ([Multiple VolumeSnapshotClasses doc](https://velero.io/docs/v1.12/csi/#implementation-choices)).
#### Restore Finalizer
Before v1.12, the restore controller would only delete restore resources but wouldn't delete restore data from the backup storage location when the command `velero restore delete` was executed. The only time Velero deleted restore data from the backup storage location was when the associated backup was deleted.
In this version, Velero introduces a finalizer that ensures the cleanup of all associated data for restores when running the command `velero restore delete`.
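A minimal sketch of the finalizer flow this implies. The struct, function, and finalizer name are assumptions; Kubernetes itself provides the "object persists until its finalizers are empty" behavior.
```go
package main

import "fmt"

// restore is a pared-down stand-in for the Restore CR.
type restore struct {
	Name       string
	Finalizers []string
	Deleting   bool
}

const cleanupFinalizer = "restores.velero.io/external-resources-finalizer" // illustrative name

// reconcileDeletion holds deletion until external cleanup succeeds, then
// removes the finalizer so the API server can delete the CR.
func reconcileDeletion(r *restore, deleteExternalData func(string) error) error {
	if !r.Deleting {
		return nil
	}
	if err := deleteExternalData(r.Name); err != nil {
		return err // requeue; the finalizer keeps the object alive
	}
	kept := r.Finalizers[:0]
	for _, f := range r.Finalizers {
		if f != cleanupFinalizer {
			kept = append(kept, f)
		}
	}
	r.Finalizers = kept
	return nil
}

func main() {
	r := &restore{Name: "r1", Finalizers: []string{cleanupFinalizer}, Deleting: true}
	_ = reconcileDeletion(r, func(name string) error { return nil })
	fmt.Println(len(r.Finalizers) == 0) // true: the object can now be deleted
}
```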
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.20.7.
* Bump several dependent libraries to new versions.
* Bump Kopia to v0.13.
### Breaking changes
* Prior to v1.12, the parameter `uploader-type` for Velero installation had a default value of "restic". However, starting from this version, the default value has been changed to "kopia". This means that Velero will now use Kopia as the default path for file system backup.
* The ways of setting the CSI snapshot time have changed in v1.12. First, the sync waiting time for creating a snapshot handle in the CSI plugin is changed from a fixed 10 minutes to backup.Spec.CSISnapshotTimeout. Second, the async waiting time for the VolumeSnapshot and VolumeSnapshotContent status to turn `ReadyToUse` in an operation uses the operation's timeout, whose default value is 4 hours.
* As of [Velero helm chart v4.0.0](https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-4.0.0), the chart supports multiple BSLs and VSLs, and the BSL and VSL configuration has changed from a map into a slice; [this breaking change](https://github.com/vmware-tanzu/helm-charts/pull/413) is not backward compatible. So it would be best to change the BSL and VSL configuration into slices before the upgrade.
* Prior to v1.12, deleting the Velero namespace would easily remove all the resources within it. However, with the introduction of finalizers attached to the Velero CR including `restore`, `dataupload`, and `datadownload` in this version, directly deleting Velero namespace may get stuck indefinitely because the pods responsible for handling the finalizers might be deleted before the resources attached to the finalizers. To avoid this issue, please use the command `velero uninstall` to delete all the Velero resources or ensure that you handle the finalizer appropriately before deleting the Velero namespace.
### Limitations/Known issues
* The Azure plugin supports the Azure AD Workload Identity method, but it only works for Velero native snapshots; it cannot support filesystem backup or snapshot data mover scenarios.
* File system backup under the Kopia path and CSI Snapshot Data Movement backup fail to back up files larger than 2GiB due to issue https://github.com/vmware-tanzu/velero/issues/6668.
### All Changes
* Fixes #6498. Get resource client again after restore actions in case resource's gv is changed. This is an improvement of pr #6499, to support group changes. A group change usually happens in a restore plugin which is used for resource conversion: convert a resource from a not supported gv to a supported gv (#6634, @27149chen)
* Add API support for volMode block, only error for now. (#6608, @shawn-hurley)
* Fix how the AWS credentials are obtained from configuration (#6598, @aws_creds)
* Add performance E2E test (#6569, @qiuming-best)
* Non default s3 credential profiles work on Unified Repository Provider (kopia) (#6558, @kaovilai)
* Fix issue #6571, fix the problem for restore item operation to set the errors correctly so that they can be recorded by Velero restore and then reflect the correct status for Velero restore. (#6594, @Lyndon-Li)
* Fix issue 6575, flush the repo after delete the snapshot, otherwise, the changes(deleting repo snapshot) cannot be committed to the repo. (#6587, @Lyndon-Li)
* Delete moved snapshots when the backup is deleted (#6547, @reasonerjt)
* check if restore crd exist before operating restores (#6544, @allenxu404)
* Remove PVC's selector in backup's PVC action. (#6481, @blackpiglet)
* Delete the expired deletebackuprequests that are stuck in "InProgress" (#6476, @reasonerjt)
* Fix issue #6534, reset PVB CR's StorageLocation to the latest one during backup sync as same as the backup CR. Also fix similar problem with DataUploadResult for data mover restore. (#6533, @Lyndon-Li)
* Fix issue #6519. Restrict the client manager of node-agent server to include only Velero resources from the server's namespace, otherwise, the controllers will try to reconcile CRs from all the installed Velero namespaces. (#6523, @Lyndon-Li)
* Track the skipped PVC and print the summary in backup log (#6496, @reasonerjt)
* Add restore finalizer to clean up external resources (#6479, @allenxu404)
* fix: Typos and add more spell checking rules to CI (#6415, @mateusoliveira43)
* Add missing CompletionTimestamp and metrics when restore moved into terminal phase in restoreOperationsReconciler (#6397, @Nutrymaco)
* Add support for resource Modifications in the restore flow. Also known as JSON Substitutions. (#6452, @anshulahuja98)
* Remove dependency of the legacy client code from pkg/cmd directory part 2 (#6497, @blackpiglet)
* Add data upload and download metrics (#6493, @allenxu404)
* Fix issue 6490, If a backup/restore has multiple async operations and one operation fails while others are still in-progress, when all the operations finish, the backup/restore will be set as Completed falsely (#6491, @Lyndon-Li)
* Velero Plugins no longer need kopia indirect dependency in their go.mod (#6484, @kaovilai)
* Remove dependency of the legacy client code from pkg/cmd directory (#6469, @blackpiglet)
* Add support for OpenStack CSI drivers topology keys (#6464, @openstack-csi-topology-keys)
* Add exit code log and possible memory shortage warning log for Restic command failure. (#6459, @blackpiglet)
* Modify DownloadRequest controller logic (#6433, @blackpiglet)
* Add data download controller for data mover (#6436, @qiuming-best)
* Fix hook filter display issue for backup describer (#6434, @allenxu404)
* Retrieve DataUpload into backup result ConfigMap during volume snapshot restore. (#6410, @blackpiglet)
* Design to add support for Multiple VolumeSnapshotClasses in CSI Plugin. (#5774, @anshulahuja98)
* Clarify the deletion frequency for gc controller (#6414, @allenxu404)
* Add unit tests for pkg/archive (#6396, @allenxu404)
* Add UT for pkg/discovery (#6394, @qiuming-best)
* Add UT for pkg/util (#6368, @Lyndon-Li)
* Add the code for data mover restore expose (#6357, @Lyndon-Li)
* Restore Endpoints before Services (#6315, @ywk253100)
* Add warning message for volume snapshotter in data mover case. (#6377, @blackpiglet)
* Add unit test for pkg/uploader (#6374, @qiuming-best)
* Change kopia as the default path of PVB (#6370, @Lyndon-Li)
* Do not persist VolumeSnapshot and VolumeSnapshotContent for snapshot DataMover case. (#6366, @blackpiglet)
* Add data mover related options in CLI (#6365, @ywk253100)
* Add dataupload controller (#6337, @qiuming-best)
* Add UT cases for pkg/podvolume (#6336, @Lyndon-Li)
* Remove Wait VolumeSnapshot to ReadyToUse logic. (#6327, @blackpiglet)
* Enhance the code because of #6297, the return value of GetBucketRegion is not recorded, as a result, when it fails, we have no way to get the cause (#6326, @Lyndon-Li)
* Skip updating status when CRDs are restored (#6325, @reasonerjt)
* Include namespaces needed by namespaced-scope resources in backup. (#6320, @blackpiglet)
* Update metrics when backup failed with validation error (#6318, @ywk253100)
* Add the code for data mover backup expose (#6308, @Lyndon-Li)
* Fix a PVR issue for generic data path -- the namespace remap was not honored, and enhance the code for better error handling (#6303, @Lyndon-Li)
* Add default values for defaultItemOperationTimeout and itemOperationSyncFrequency in velero CLI (#6298, @shubham-pampattiwar)
* Add UT cases for pkg/repository (#6296, @Lyndon-Li)
* Fix issue #5875. Since Kopia has supported IAM, Velero should not require static credentials all the time (#6283, @Lyndon-Li)
* Fixed a bug where status.progress is not getting updated for backups. (#6276, @kkothule)
* Add code change for async generic data path that is used by both PVB/PVR and data mover (#6226, @Lyndon-Li)
* Add data mover CRD under v2alpha1, include DataUpload CRD and DataDownload CRD (#6176, @Lyndon-Li)
* Remove any dataSource or dataSourceRef fields from PVCs in PVC BIA for cases of
prior PVC restores with CSI (#6111, @eemcmullan)
* Add the design for Volume Snapshot Data Movement (#5968, @Lyndon-Li)
* Fix issue #5123, Kopia repository supports self-cert CA for S3 compatible storage. (#6268, @Lyndon-Li)
* Bump up Kopia to v0.13 (#6248, @Lyndon-Li)
* log volumes to backup to help debug why `IsPodRunning` is called. (#6232, @kaovilai)
* Enable errcheck linter and resolve found issues (#6208, @blackpiglet)
* Enable more linters, and remove mal-functioned milestoned issue action. (#6194, @blackpiglet)
* Enable stylecheck linter and resolve found issues. (#6185, @blackpiglet)
* Fix issue #6182. If pod is not running, don't treat it as an error, let it go and leave a warning. (#6184, @Lyndon-Li)
* Enable staticcheck and resolve found issues (#6183, @blackpiglet)
* Enable linter revive and resolve found errors: part 2 (#6177, @blackpiglet)
* Enable linter revive and resolve found errors: part 1 (#6173, @blackpiglet)
* Fix usestdlibvars and whitespace linters issues. (#6162, @blackpiglet)
* Update Golang to v1.20 for main. (#6158, @blackpiglet)
* Make GetPluginConfig accessible from other packages. (#6151, @tkaovila)
* Ignore not found error during patching managedFields (#6136, @ywk253100)
* Fix the goreleaser issues and add a new goreleaser action (#6109, @blackpiglet)
* Add CSI snapshot data movement doc (#6793, @Lyndon-Li)
* Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen)
* Fix #6752: add namespace exclude check. (#6762, @blackpiglet)
* Update restore controller logic for restore deletion (#6761, @ywk253100)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6758, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6688, @27149chen)
* This pr made some improvements in Resource Modifiers:1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen)

View File

@@ -1,110 +0,0 @@
## v1.8.0
### 2022-01-14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.8.0
### Container Image
`velero/velero:v1.8.0`
### Documentation
https://velero.io/docs/v1.8
### Upgrading
https://velero.io/docs/v1.8/upgrade-to-1.8/
### Highlights
#### Velero plugins now support handling volumes created by the CSI drivers of cloud providers
The v1.4 releases of the Velero plugins for AWS, Azure and GCP support snapshotting and restoring persistent volumes provisioned by a CSI driver via the APIs of the cloud providers. With this enhancement, users can back up and restore persistent volumes on these cloud providers without using the Velero CSI plugin. The CSI plugin remains beta, and the feature flag `EnableCSI` is disabled by default.
For the plugin versions and the CSI drivers they respectively support, please see the table:
| Plugin | Version | CSI Driver |
| --- | ----------- | ---------- |
| velero-plugin-for-aws | v1.4.0 | ebs.csi.aws.com |
| velero-plugin-for-microsoft-azure | v1.4.0 | disk.csi.azure.com |
| velero-plugin-for-gcp | v1.4.0 | pd.csi.storage.gke.io |
#### IPv6 dual stack support
We've verified the functionality of Velero on IPv6 dual stack by successfully running the E2E tests in an IPv6 dual-stack environment.
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Enhancements to E2E test cases
More test cases have been added to the E2E test suite to improve the release health.
#### Respect the cron setting of scheduled backup
The creation time is now taken into account when calculating the next run for a scheduled backup.
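For instance, with the robfig/cron library (using it here is an assumption about the mechanism), the next run can be computed from the schedule's creation time rather than from "now":
```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	// Parse a standard cron expression: daily at 01:00.
	sched, err := cron.ParseStandard("0 1 * * *")
	if err != nil {
		panic(err)
	}
	// Compute the next run relative to the schedule's creation time.
	created := time.Date(2022, 1, 10, 14, 30, 0, 0, time.UTC)
	fmt.Println(sched.Next(created)) // 2022-01-11 01:00:00 +0000 UTC
}
```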
#### Deleting BSLs also cleans up related resources
When a Backup Storage Location (BSL) is deleted, backup and Restic repository resources will also be deleted.
#### Breaking changes
Starting in v1.8, Velero will only support Kubernetes v1 CRD meaning that Velero v1.8+ will only run on Kubernetes v1.16+. Before upgrading, make sure you are running a supported Kubernetes version. For more information, see our [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix).
#### Upload Progress Monitoring and Item Snapshotter
The Item Snapshotter plugin API was merged. This will support both upload progress monitoring and the planned Data Mover. Upload progress monitoring PRs are in progress for 1.9.
### All changes
* E2E test on ssr object with controller namespace mix-ups (#4521, @mqiu)
* Check whether the volume is provisioned by CSI driver or not by the annotation as well (#4513, @ywk253100)
* Initialize the labels field of `velero backup-location create` option to avoid #4484 (#4491, @ywk253100)
* Fix e2e 2500 namespaces scale test timeout problem (#4480, @mqiu)
* Add backup deletion e2e test (#4401, @danfengliu)
* Return the error when getting backup store in backup deletion controller (#4465, @reasonerjt)
* Ignore the provided port is already allocated error when restoring the LoadBalancer service (#4462, @ywk253100)
* Revert #4423 migrate backup sync controller to kubebuilder. (#4457, @jxun)
* Add rbac and annotation test cases (#4455, @mqiu)
* remove --crds-version in velero install command. (#4446, @jxun)
* Upgrade e2e test vsphere plugin (#4440, @mqiu)
* Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu)
* Limit backup namespaces on test resource filtering cases (#4437, @mqiu)
* Bump up Go to 1.17 (#4431, @reasonerjt)
* Added `<backup name>`-itemsnapshots.json.gz to the backup format. This file exists
when item snapshots are taken and contains an array of volume.Itemsnapshots
containing the information about the snapshots. This will not be used unless
upload progress monitoring and item snapshots are enabled and an ItemSnapshot
plugin is used to take snapshots.
Also added DownloadTargetKindBackupItemSnapshots for retrieving the signed URL to download only the `<backup name>`-itemsnapshots.json.gz part of a backup for use by
`velero backup describe`. (#4429, @dsmithuchida)
* Migrate backup sync controller from code-generator to kubebuilder. (#4423, @jxun)
* Added UploadProgressFeature flag to enable Upload Progress Monitoring and Item
Snapshotters. (#4416, @dsmithuchida)
* Added BackupWithResolvers and RestoreWithResolvers calls. Will eventually replace Backup and Restore methods.
Adds ItemSnapshotters to Backup and Restore workflows. (#4410, @dsu)
* Build for darwin-arm64 (#4409, @epk)
* Add resource filtering test cases (#4404, @mqiu)
* Fix the issue that the backup cannot be deleted after the application uninstalled (#4398, @ywk253100)
* Add restoreactionitem plugin to handle admission webhook configurations (#4397, @reasonerjt)
* Keep the annotation "pv.kubernetes.io/provisioned-by" when restoring PVs (#4391, @ywk253100)
* Adjust structure of e2e test codes (#4386, @mqiu)
* feat: migrate velero controller from kubebuilder v2 to v3
From Velero v1.8, apiextensions.k8s.io/v1beta1 is no longer supported,
which means only CRD of apiextensions.k8s.io/v1 is supported,
and the supported Kubernetes version is updated to v1.16 and later. (#4382, @jxun)
* Delete backups and Restic repos associated with deleted BSL(s) (#4377, @codegold79)
* Add the key for GKE zone for AZ collection (#4376, @reasonerjt)
* Fix statefulsets volumeClaimTemplates storageClassName when use Changing PV/PVC Storage Classes (#4375, @Box-Cube)
* Fix snapshot e2e test issue of jsonpath (#4372, @danfengliu)
* Modify the timestamp in the name of a backup generated from schedule to use UTC. (#4353, @jxun)
* Read Availability zone from nodeAffinity requirements (#4350, @reasonerjt)
* Use factory.Namespace() to replace hardcoded velero namespace (#4346, @half-life666)
* Return the error if velero failed to detect S3 region for restic repo (#4343, @reasonerjt)
* Add init log option for velero controller-runtime manager. (#4341, @jxun)
* Ignore the `provided port is already allocated` error when restoring the `NodePort` service (#4336, @ywk253100)
* Fixed an issue with the `backup-location create` command where the BSL Credential field would be set to an invalid empty SecretKeySelector when no credential details were provided. (#4322, @zubron)
* fix buggy pager func (#4306, @alaypatel07)
* Don't create a backup immediately after creating a schedule (#4281, @ywk253100)
* Fix CVE-2020-29652 and CVE-2020-26160 (#4274, @ywk253100)
* Refine tag-release.sh to align with change in release process (#4185, @reasonerjt)
* Fix plugins incompatible issue in upgrade test (#4141, @danfengliu)
* Verify group before treating resource as cohabiting (#4126, @sseago)
* Added ItemSnapshotter plugin definition and plugin framework - addresses #3533.
Part of the Upload Progress enhancement (#3533) (#4077, @dsmithuchida)
* Add upgrade test in E2E test (#4058, @danfengliu)
* Handle namespace mapping for PVs without snapshots on restore (#3708, @sseago)

View File

@@ -1,104 +0,0 @@
## v1.9.0
### 2022-06-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.9.0
### Container Image
`velero/velero:v1.9.0`
### Documentation
https://velero.io/docs/v1.9/
### Upgrading
https://velero.io/docs/v1.9/upgrade-to-1.9/
### Highlights
#### Improvement to the CSI plugin
- Bump up to the CSI volume snapshot v1 API
- No VolumeSnapshot will be left in the source namespace of the workload
- Report metrics for CSI snapshots
For more improvements, please refer to [CSI plugin improvement](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+label%3A%22CSI+plugin+-+GA+-+phase1%22+is%3Aclosed)
With these improvements, we'll provide official support for CSI snapshots on AKS/EKS clusters (with CSI plugin v0.3.0).
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Optionally restore status on selected resources
Options are added to the CLI and Restore spec to control the group of resources whose status will be restored.
#### ExistingResourcePolicy in the restore API
Users can choose to overwrite or patch the existing resources during restore by setting this policy.
#### Upgrade integrated Restic version and add skip TLS validation in Restic command
The integrated Restic version has been upgraded, which resolves some of the CVEs, and skipping TLS validation is now supported in Restic backup/restore.
#### Breaking changes
With the API bumped up to v1 in the CSI plugin, the v0.3.0 CSI plugin will only work on Kubernetes v1.20+.
### All changes
* restic: add full support for setting SecurityContext for restore init container from configMap. (#4084, @MatthieuFin)
* Add metrics backup_items_total and backup_items_errors (#4296, @tobiasgiese)
* Convert PodVolumeBackup controller to the Kubebuilder framework (#4436, @fgold)
* Skip not mounted volumes when backing up (#4497, @dkeven)
* Update doc for v1.8 (#4517, @reasonerjt)
* Fix bug to make the restic prune frequency configurable (#4518, @ywk253100)
* Add E2E test of backups sync from BSL (#4545, @mqiu)
* Fix: OrderedResources in Schedules (#4550, @dbrekau)
* Skip volumes of non-running pods when backing up (#4584, @bynare)
* E2E SSR test add retry mechanism and logs (#4591, @mqiu)
* Add pushing image to GCR in github workflow to facilitate some environments that have rate limitation to docker hub, e.g. vSphere. (#4623, @jxun)
* Add existingResourcePolicy to Restore API (#4628, @shubham-pampattiwar)
* Fix E2E backup namespaces test (#4634, @qiuming-best)
* Update image used by E2E test to gcr.io (#4639, @jxun)
* Add multiple label selector support to Velero Backup and Restore APIs (#4650, @shubham-pampattiwar)
* Convert Pod Volume Restore resource/controller to the Kubebuilder framework (#4655, @ywk253100)
* Update --use-owner-references-in-backup description in velero command line. (#4660, @jxun)
* Avoid overwriting hook's exec.container parameter when running pod command executor. (#4661, @jxun)
* Support regional pv for GKE (#4680, @jxun)
* Bypass the remap CRD version plugin when v1beta1 CRD is not supported (#4686, @reasonerjt)
* Add GINKGO_SKIP to support skip specific case in e2e test. (#4692, @jxun)
* Add --pod-labels flag to velero install (#4694, @j4m3s-s)
* Enable coverage in test.sh and upload to codecov (#4704, @reasonerjt)
* Mark the BSL as "Unavailable" when gets any error and add a new field "Message" to the status to record the error message (#4719, @ywk253100)
* Support multiple skip option for E2E test (#4725, @jxun)
* Add PriorityClass to the AdditionalItems of Backup's PodAction and Restore's PodAction plugin to backup and restore PriorityClass if it is used by a Pod. (#4740, @phuongatemc)
* Insert all restore errors and warnings into restore log. (#4743, @sseago)
* Refactor schedule controller with kubebuilder (#4748, @ywk253100)
* Garbage collector now adds labels to backups that failed to delete for BSLNotFound, BSLCannotGet, BSLReadOnly reasons. (#4757, @kaovilai)
* Skip podvolumerestore creation when restore excludes pv/pvc (#4769, @half-life666)
* Add parameter for e2e test to support modify kibishii install path. (#4778, @jxun)
* Ensure the restore hook applied to new namespace based on the mapping (#4779, @reasonerjt)
* Add ability to restore status on selected resources (#4785, @RafaeLeal)
* Do not take snapshot for PV to avoid duplicated snapshotting, when CSI feature is enabled. (#4797, @jxun)
* Bump up to v1 API for CSI snapshot (#4800, @reasonerjt)
* fix: delete empty backups (#4817, @yuvalman)
* Add CSI VolumeSnapshot related metrics. (#4818, @jxun)
* Fix default-backup-ttl not work (#4831, @qiuming-best)
* Make the vsc created by backup sync controller deletable (#4832, @reasonerjt)
* Make in-progress backup/restore as failed when doing the reconcile to avoid hanging in in-progress status (#4833, @ywk253100)
* Use controller-gen to generate the deep copy methods for objects (#4838, @ywk253100)
* Update integrated Restic version and add insecureSkipTLSVerify for Restic CLI. (#4839, @jxun)
* Modify CSI VolumeSnapshot metric related code. (#4854, @jxun)
* Refactor backup deletion controller based on kubebuilder (#4855, @reasonerjt)
* Remove VolumeSnapshots created during backup when CSI feature is enabled. (#4858, @jxun)
* Convert Restic Repository resource/controller to the Kubebuilder framework (#4859, @qiuming-best)
* Add ClusterClasses to the restore priority list (#4866, @reasonerjt)
* Cleanup the .velero folder after restic done (#4872, @big-appled)
* Delete orphan CSI snapshots in backup sync controller (#4887, @reasonerjt)
* Wait for VolumeSnapshots to become ready in parallel. (#4889, @jxun)
* Continue rather than return for a non-matching restore action label (#4890, @sseago)
* Mark in-progress PVBs/PVRs as failed when the restic controller restarts to avoid hanging backups/restores (#4893, @ywk253100)
* Refactor BSL controller with periodical enqueue source (#4894, @jxun)
* Make garbage collection for expired backups configurable (#4897, @ywk253100)
* Bump up the version of distroless to base-debian11 (#4898, @ywk253100)
* Add schedule ordered resources E2E test (#4913, @qiuming-best)
* Make the `velero completion zsh` command output usable with the `source` command. (#4914, @jxun)
* Enhance the map flag to support parsing input values that contain entry delimiters (#4920, @ywk253100)
* Fix E2E test [Backups][Deletion][Restic] on GCP. (#4968, @jxun)
* Disable status as sub resource in CRDs (#4972, @ywk253100)
* Add more information when failing to get a path or snapshot in restic backup and restore. (#4988, @jxun)
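A minimal sketch of the multiple-label-selector support added in #4650, assuming the `orLabelSelectors` field shown in the CRD diffs below; the namespace and label values are illustrative only:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: or-selector-backup
  namespace: velero
spec:
  includedNamespaces:
  - app
  orLabelSelectors:
  - matchLabels:
      app: frontend
  - matchLabels:
      app: backend

Per the field description, the `orLabelSelectors` entries are joined with the OR operator, and the field cannot be combined with `labelSelector` in the same backup request.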

View File

@@ -0,0 +1 @@
Add upgrade test in E2E test

View File

@@ -0,0 +1 @@
Verify group before treating resource as cohabitating

View File

@@ -0,0 +1 @@
Fix plugins incompatible issue in upgrade test

View File

@@ -0,0 +1 @@
Refine tag-release.sh to align with change in release process

View File

@@ -0,0 +1 @@
Fix CVE-2020-29652 and CVE-2020-26160

View File

@@ -0,0 +1 @@
Don't create a backup immediately after creating a schedule

View File

@@ -1,27 +0,0 @@
package main

import (
	"fmt"
	"os"
	"time"
)

const (
	// workingModePause indicates it is for general purpose to hold the pod under running state
	workingModePause = "pause"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "ERROR: at least one argument must be provided, the working mode")
		os.Exit(1)
	}

	switch os.Args[1] {
	case workingModePause:
		time.Sleep(time.Duration(1<<63 - 1))
	default:
		fmt.Fprintln(os.Stderr, "ERROR: wrong working mode provided")
		os.Exit(1)
	}
}
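For context on the helper deleted above: time.Duration(1<<63 - 1) is the maximum int64 expressed in nanoseconds, roughly 292 years, the common Go idiom for "sleep forever" in a pause process. A standalone sketch of the idiom:

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// math.MaxInt64 nanoseconds is the longest representable time.Duration:
	// about 292 years, effectively "sleep forever" for a pause process.
	fmt.Println(time.Duration(math.MaxInt64)) // prints 2562047h47m16.854775807s
	time.Sleep(time.Duration(math.MaxInt64))
}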

View File

@@ -18,6 +18,7 @@ package main
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
@@ -33,16 +34,12 @@ func main() {
defer ticker.Stop()
for {
<-ticker.C
if done() {
fmt.Println("All restic restores are done")
err := removeFolder()
if err != nil {
fmt.Println(err)
} else {
fmt.Println("Done cleanup .velero folder")
select {
case <-ticker.C:
if done() {
fmt.Println("All restic restores are done")
return
}
return
}
}
}
@@ -51,7 +48,7 @@ func main() {
// within the .velero/ subdirectory whose name is equal to os.Args[1], or
// false otherwise
func done() bool {
children, err := os.ReadDir("/restores")
children, err := ioutil.ReadDir("/restores")
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR reading /restores directory: %s\n", err)
return false
@@ -78,28 +75,3 @@ func done() bool {
return true
}
// remove .velero folder
func removeFolder() error {
	children, err := os.ReadDir("/restores")
	if err != nil {
		return err
	}

	for _, child := range children {
		if !child.IsDir() {
			fmt.Printf("%s is not a directory, skipping.\n", child.Name())
			continue
		}

		donePath := filepath.Join("/restores", child.Name(), ".velero")

		err = os.RemoveAll(donePath)
		if err != nil {
			return err
		}

		fmt.Printf("Deleted %s", donePath)
	}

	return nil
}
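Piecing the hunks above together, a sketch of the marker-file check that done() performs; the per-volume layout is inferred from the doc comment ("a file within the .velero/ subdirectory whose name is equal to os.Args[1]"), so treat this as an approximation rather than the exact upstream code:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// done returns true when every volume directory under /restores contains
// a marker file at .velero/<os.Args[1]>, i.e. every restic restore finished.
func done() bool {
	children, err := ioutil.ReadDir("/restores")
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR reading /restores directory: %s\n", err)
		return false
	}
	for _, child := range children {
		if !child.IsDir() {
			fmt.Printf("%s is not a directory, skipping.\n", child.Name())
			continue
		}
		doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
		if _, err := os.Stat(doneFile); os.IsNotExist(err) {
			return false
		} else if err != nil {
			fmt.Fprintf(os.Stderr, "ERROR looking for %s: %s\n", doneFile, err)
			return false
		}
	}
	return true
}

func main() {
	// Expects the restore UID as os.Args[1], matching the helper above.
	fmt.Println(done())
}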

View File

@@ -20,7 +20,7 @@ import (
"os"
"path/filepath"
"k8s.io/klog/v2"
"k8s.io/klog"
"github.com/vmware-tanzu/velero/pkg/cmd"
"github.com/vmware-tanzu/velero/pkg/cmd/velero"

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: backups.velero.io
spec:
group: velero.io
@@ -35,46 +37,10 @@ spec:
spec:
description: BackupSpec defines the specification for a Velero backup.
properties:
csiSnapshotTimeout:
description: CSISnapshotTimeout specifies the time used to wait for
CSI VolumeSnapshot status turns to ReadyToUse during creation, before
returning error as timeout. The default value is 10 minutes.
type: string
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
defaultVolumesToFsBackup:
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: "DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default. \n Deprecated:
this field is no longer used and will be removed entirely in future.
Use DefaultVolumesToFsBackup instead."
nullable: true
description: DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default.
type: boolean
excludedClusterScopedResources:
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*", all
cluster-scoped resource types are excluded. The default value is
empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*", all
namespace-scoped resource types are excluded. The default value
is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces that
are not included in the backup.
@@ -178,7 +144,6 @@ spec:
contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
name:
description: Name is the name of this hook.
type: string
@@ -281,23 +246,6 @@ spec:
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*", all
cluster-scoped resource types are included. The default value is
empty, which means only related cluster-scoped resources are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
@@ -312,11 +260,6 @@ spec:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations. The default value is
1 hour.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil, all
@@ -364,7 +307,6 @@ spec:
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
metadata:
properties:
labels:
@@ -372,101 +314,18 @@ spec:
type: string
type: object
type: object
orLabelSelectors:
description: OrLabelSelectors is list of metav1.LabelSelector to filter
with when adding individual objects to the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in backup request, only one of
them can be used.
items:
description: A label selector is a label query over a set of resources.
The result of matchLabels and matchExpressions are ANDed. An empty
label selector matches all objects. A null label selector matches
no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
nullable: true
type: array
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value is
a list of object names separated by commas. Each resource name has
format "namespace/objectname". For cluster resources, simply use
"objectname".
of specific Kind. The map key is the Kind name and value is a list
of resource names separated by commas. Each resource name has format
"namespace/resourcename". For cluster resources, simply use "resourcename".
nullable: true
type: object
resourcePolicy:
description: ResourcePolicy specifies the referenced resource policies
that backup should follow
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in
the core API group. For any other third-party types, APIGroup
is required.
type: string
kind:
description: Kind is the type of resource being referenced
type: string
name:
description: Name is the name of resource being referenced
type: string
required:
- kind
- name
type: object
x-kubernetes-map-type: atomic
snapshotMoveData:
description: SnapshotMoveData specifies whether snapshot data should
be moved
nullable: true
type: boolean
snapshotVolumes:
description: SnapshotVolumes specifies whether to take snapshots of
any PV's referenced in the set of objects included in the Backup.
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the Backup.
nullable: true
type: boolean
storageLocation:
@@ -487,20 +346,6 @@ spec:
status:
description: BackupStatus captures the current status of a Velero backup.
properties:
backupItemOperationsAttempted:
description: BackupItemOperationsAttempted is the total number of
attempted async BackupItemAction operations for this backup.
type: integer
backupItemOperationsCompleted:
description: BackupItemOperationsCompleted is the total number of
successfully completed async BackupItemAction operations for this
backup.
type: integer
backupItemOperationsFailed:
description: BackupItemOperationsFailed is the total number of async
BackupItemAction operations for this backup which ended with an
error.
type: integer
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
@@ -509,14 +354,6 @@ spec:
format: date-time
nullable: true
type: string
csiVolumeSnapshotsAttempted:
description: CSIVolumeSnapshotsAttempted is the total number of attempted
CSI VolumeSnapshots for this backup.
type: integer
csiVolumeSnapshotsCompleted:
description: CSIVolumeSnapshotsCompleted is the total number of successfully
completed CSI VolumeSnapshots for this backup.
type: integer
errors:
description: Errors is a count of all error messages that were generated
during execution of the backup. The actual errors are in the backup's
@@ -527,10 +364,6 @@ spec:
format: date-time
nullable: true
type: string
failureReason:
description: FailureReason is an error that caused the entire backup
to fail.
type: string
formatVersion:
description: FormatVersion is the backup format version, including
major, minor, and patch version.
@@ -541,10 +374,6 @@ spec:
- New
- FailedValidation
- InProgress
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
- Finalizing
- FinalizingPartiallyFailed
- Completed
- PartiallyFailed
- Failed
@@ -603,3 +432,9 @@ spec:
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
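A sketch of the newer OrderedResources semantics described in the diff above (the map key is the plural resource name, values are comma-separated object names, with namespaced objects written as "namespace/objectname"); all names are illustrative:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: ordered-backup
  namespace: velero
spec:
  includedNamespaces:
  - app
  orderedResources:
    pods: app/db-0,app/api-0
    persistentvolumeclaims: app/db-data
    persistentvolumes: pv-db-data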

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: backupstoragelocations.velero.io
spec:
group: velero.io
@@ -90,7 +92,6 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
default:
description: Default indicates this location is the default backup
storage location.
@@ -157,10 +158,6 @@ spec:
format: date-time
nullable: true
type: string
message:
description: Message is a message about the backup storage location's
status.
type: string
phase:
description: Phase is the current state of the BackupStorageLocation.
enum:
@@ -171,4 +168,11 @@ spec:
type: object
served: true
storage: true
subresources: {}
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: deletebackuprequests.velero.io
spec:
group: velero.io
@@ -14,16 +16,7 @@ spec:
singular: deletebackuprequest
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: The name of the backup to be deleted
jsonPath: .spec.backupName
name: BackupName
type: string
- description: The status of the deletion request
jsonPath: .status.phase
name: Status
type: string
name: v1
- name: v1
schema:
openAPIV3Schema:
description: DeleteBackupRequest is a request to delete one or more backups.
@@ -70,4 +63,9 @@ spec:
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: downloadrequests.velero.io
spec:
group: velero.io
@@ -44,18 +46,12 @@ spec:
- BackupLog
- BackupContents
- BackupVolumeSnapshots
- BackupItemOperations
- BackupResourceList
- BackupResults
- RestoreLog
- RestoreResults
- RestoreResourceList
- RestoreItemOperations
- CSIBackupVolumeSnapshots
- CSIBackupVolumeSnapshotContents
type: string
name:
description: Name is the name of the Kubernetes resource with
description: Name is the name of the kubernetes resource with
which the file is associated.
type: string
required:
@@ -88,3 +84,11 @@ spec:
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
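A sketch of a DownloadRequest using one of the target kinds listed above; the backup name is illustrative:

apiVersion: velero.io/v1
kind: DownloadRequest
metadata:
  name: my-backup-log
  namespace: velero
spec:
  target:
    kind: BackupLog
    name: my-backup

Once processed, the controller fills status.downloadURL with a pre-signed URL for the requested file.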

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: podvolumebackups.velero.io
spec:
group: velero.io
@@ -14,44 +16,7 @@ spec:
singular: podvolumebackup
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Pod Volume Backup status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Time when this backup was started
jsonPath: .status.startTimestamp
name: Created
type: date
- description: Namespace of the pod containing the volume to be backed up
jsonPath: .spec.pod.namespace
name: Namespace
type: string
- description: Name of the pod containing the volume to be backed up
jsonPath: .spec.pod.name
name: Pod
type: string
- description: Name of the volume to be backed up
jsonPath: .spec.volume
name: Volume
type: string
- description: Backup repository identifier for this backup
jsonPath: .spec.repoIdentifier
name: Repository ID
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Name of the Backup Storage Location where this backup should be
stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
properties:
@@ -72,7 +37,7 @@ spec:
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
location where the restic repository is stored.
type: string
node:
description: Node is the name of the node that the Pod is running
@@ -115,9 +80,8 @@ spec:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
x-kubernetes-map-type: atomic
repoIdentifier:
description: RepoIdentifier is the backup repository identifier.
description: RepoIdentifier is the restic repository identifier.
type: string
tags:
additionalProperties:
@@ -125,14 +89,6 @@ spec:
description: Tags are a map of key-value pairs that should be applied
to the volume backup as tags.
type: object
uploaderType:
description: UploaderType is the type of the uploader to handle the
data transfer.
enum:
- kopia
- restic
- ""
type: string
volume:
description: Volume is the name of the volume within the Pod to be
backed up.
@@ -197,4 +153,9 @@ spec:
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: podvolumerestores.velero.io
spec:
group: velero.io
@@ -14,41 +16,7 @@ spec:
singular: podvolumerestore
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Namespace of the pod containing the volume to be restored
jsonPath: .spec.pod.namespace
name: Namespace
type: string
- description: Name of the pod containing the volume to be restored
jsonPath: .spec.pod.name
name: Pod
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Name of the volume to be restored
jsonPath: .spec.volume
name: Volume
type: string
- description: Pod Volume Restore status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.totalBytes
name: TotalBytes
type: integer
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.bytesDone
name: BytesDone
type: integer
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
properties:
@@ -69,7 +37,7 @@ spec:
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
location where the restic repository is stored.
type: string
pod:
description: Pod is a reference to the pod containing the volume to
@@ -108,25 +76,12 @@ spec:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
x-kubernetes-map-type: atomic
repoIdentifier:
description: RepoIdentifier is the backup repository identifier.
description: RepoIdentifier is the restic repository identifier.
type: string
snapshotID:
description: SnapshotID is the ID of the volume snapshot to be restored.
type: string
sourceNamespace:
description: SourceNamespace is the original namespace for namespace
mapping.
type: string
uploaderType:
description: UploaderType is the type of the uploader to handle the
data transfer.
enum:
- kopia
- restic
- ""
type: string
volume:
description: Volume is the name of the volume within the Pod to be
restored.
@@ -136,7 +91,6 @@ spec:
- pod
- repoIdentifier
- snapshotID
- sourceNamespace
- volume
type: object
status:
@@ -182,4 +136,9 @@ spec:
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,27 +1,22 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
name: backuprepositories.velero.io
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: resticrepositories.velero.io
spec:
group: velero.io
names:
kind: BackupRepository
listKind: BackupRepositoryList
plural: backuprepositories
singular: backuprepository
kind: ResticRepository
listKind: ResticRepositoryList
plural: resticrepositories
singular: resticrepository
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
- jsonPath: .spec.repositoryType
name: Repository Type
type: string
name: v1
- name: v1
schema:
openAPIV3Schema:
properties:
@@ -38,7 +33,7 @@ spec:
metadata:
type: object
spec:
description: BackupRepositorySpec is the specification for a BackupRepository.
description: ResticRepositorySpec is the specification for a ResticRepository.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the BackupStorageLocation
@@ -48,19 +43,12 @@ spec:
description: MaintenanceFrequency is how often maintenance should
be run.
type: string
repositoryType:
description: RepositoryType indicates the type of the backend repository
enum:
- kopia
- restic
- ""
type: string
resticIdentifier:
description: ResticIdentifier is the full restic-compatible string
for identifying this repository.
type: string
volumeNamespace:
description: VolumeNamespace is the namespace this backup repository
description: VolumeNamespace is the namespace this restic repository
contains pod volume backups for.
type: string
required:
@@ -70,7 +58,7 @@ spec:
- volumeNamespace
type: object
status:
description: BackupRepositoryStatus is the current status of a BackupRepository.
description: ResticRepositoryStatus is the current status of a ResticRepository.
properties:
lastMaintenanceTime:
description: LastMaintenanceTime is the last time maintenance was
@@ -80,10 +68,10 @@ spec:
type: string
message:
description: Message is a message about the current status of the
BackupRepository.
ResticRepository.
type: string
phase:
description: Phase is the current state of the BackupRepository.
description: Phase is the current state of the ResticRepository.
enum:
- New
- Ready
@@ -93,4 +81,9 @@ spec:
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: schedules.velero.io
spec:
group: velero.io
@@ -14,26 +16,7 @@ spec:
singular: schedule
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Status of the schedule
jsonPath: .status.phase
name: Status
type: string
- description: A Cron expression defining when to run the Backup
jsonPath: .spec.schedule
name: Schedule
type: string
- description: The last time a Backup was run for this schedule
jsonPath: .status.lastBackup
name: LastBackup
type: date
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
- jsonPath: .spec.paused
name: Paused
type: boolean
name: v1
- name: v1
schema:
openAPIV3Schema:
description: Schedule is a Velero resource that represents a pre-scheduled
@@ -54,9 +37,6 @@ spec:
spec:
description: ScheduleSpec defines the specification for a Velero schedule
properties:
paused:
description: Paused specifies whether the schedule is paused or not
type: boolean
schedule:
description: Schedule is a Cron expression defining when to run the
Backup.
@@ -65,46 +45,10 @@ spec:
description: Template is the definition of the Backup to be run on
the provided schedule
properties:
csiSnapshotTimeout:
description: CSISnapshotTimeout specifies the time used to wait
for CSI VolumeSnapshot status turns to ReadyToUse during creation,
before returning error as timeout. The default value is 10 minutes.
type: string
datamover:
description: DataMover specifies the data mover to be used by
the backup. If DataMover is "" or "velero", the built-in data
mover will be used.
type: string
defaultVolumesToFsBackup:
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: "DefaultVolumesToRestic specifies whether restic
should be used to take a backup of all pod volumes by default.
\n Deprecated: this field is no longer used and will be removed
entirely in future. Use DefaultVolumesToFsBackup instead."
nullable: true
description: DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default.
type: boolean
excludedClusterScopedResources:
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*",
all cluster-scoped resource types are excluded. The default
value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*",
all namespace-scoped resource types are excluded. The default
value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces
that are not included in the backup.
@@ -209,7 +153,6 @@ spec:
requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
name:
description: Name is the name of this hook.
type: string
@@ -316,24 +259,6 @@ spec:
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*",
all cluster-scoped resource types are included. The default
value is empty, which means only related cluster-scoped resources
are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names
to include objects from. If empty, all namespaces are included.
@@ -348,11 +273,6 @@ spec:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations. The default value
is 1 hour.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter
with when adding individual objects to the backup. If empty
@@ -400,7 +320,6 @@ spec:
"value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
metadata:
properties:
labels:
@@ -408,100 +327,18 @@ spec:
type: string
type: object
type: object
orLabelSelectors:
description: OrLabelSelectors is list of metav1.LabelSelector
to filter with when adding individual objects to the backup.
If multiple provided they will be joined by the OR operator.
LabelSelector as well as OrLabelSelectors cannot co-exist in
backup request, only one of them can be used.
items:
description: A label selector is a label query over a set of
resources. The result of matchLabels and matchExpressions
are ANDed. An empty label selector matches all objects. A
null label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that relates
the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists or
DoesNotExist, the values array must be empty. This
array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is
"key", the operator is "In", and the values array contains
only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
nullable: true
type: array
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value
is a list of object names separated by commas. Each resource
name has format "namespace/objectname". For cluster resources,
simply use "objectname".
of specific Kind. The map key is the Kind name and value is
a list of resource names separated by commas. Each resource
name has format "namespace/resourcename". For cluster resources,
simply use "resourcename".
nullable: true
type: object
resourcePolicy:
description: ResourcePolicy specifies the referenced resource
policies that backup should follow
properties:
apiGroup:
description: APIGroup is the group for the resource being
referenced. If APIGroup is not specified, the specified
Kind must be in the core API group. For any other third-party
types, APIGroup is required.
type: string
kind:
description: Kind is the type of resource being referenced
type: string
name:
description: Name is the name of resource being referenced
type: string
required:
- kind
- name
type: object
x-kubernetes-map-type: atomic
snapshotMoveData:
description: SnapshotMoveData specifies whether snapshot data
should be moved
nullable: true
type: boolean
snapshotVolumes:
description: SnapshotVolumes specifies whether to take snapshots
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the
Backup.
nullable: true
@@ -556,4 +393,9 @@ spec:
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
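A sketch of a Schedule combining the fields shown above, including the `paused` field that only the newer side of this diff defines; the cron expression and TTL are illustrative:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly
  namespace: velero
spec:
  schedule: "0 1 * * *"
  paused: false
  template:
    includedNamespaces:
    - app
    ttl: 720h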

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: serverstatusrequests.velero.io
spec:
group: velero.io
@@ -75,3 +77,11 @@ spec:
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,9 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: volumesnapshotlocations.velero.io
spec:
group: velero.io
@@ -11,8 +13,6 @@ spec:
kind: VolumeSnapshotLocation
listKind: VolumeSnapshotLocationList
plural: volumesnapshotlocations
shortNames:
- vsl
singular: volumesnapshotlocation
scope: Namespaced
versions:
@@ -43,25 +43,6 @@ spec:
type: string
description: Config is for provider-specific configuration fields.
type: object
credential:
description: Credential contains the credential information intended
to be used with this location
properties:
key:
description: The key of the secret to select from. Must be a
valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
provider:
description: Provider is the provider of the volume storage.
type: string
@@ -83,3 +64,9 @@ spec:
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
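A sketch of a VolumeSnapshotLocation using the newer schema above, including the location-scoped `credential` that the older CRD lacks; the provider, region, and secret names are illustrative:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-east-1
  credential:
    name: cloud-credentials
    key: cloud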

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,439 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: backups.velero.io
spec:
group: velero.io
names:
kind: Backup
listKind: BackupList
plural: backups
singular: backup
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
description: Backup is a Velero resource that represents the capture of Kubernetes
cluster state at a point in time (API objects and associated volume state).
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BackupSpec defines the specification for a Velero backup.
properties:
defaultVolumesToRestic:
description: DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default.
type: boolean
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces that are
not included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources is a slice of resource names that are
not included in the backup.
items:
type: string
nullable: true
type: array
hooks:
description: Hooks represent custom behaviors that should be executed
at different phases of the backup.
properties:
resources:
description: Resources are hooks that should be executed when backing
up individual instances of a resource.
items:
description: BackupResourceHookSpec defines one or more BackupResourceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces to
which this hook spec does not apply.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources specifies the resources to
which this hook spec does not apply.
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces specifies the namespaces to
which this hook spec applies. If empty, it applies to all
namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to all
resources.
items:
type: string
nullable: true
type: array
labelSelector:
description: LabelSelector, if specified, filters the resources
to which this hook spec applies.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists or
DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is
"key", the operator is "In", and the values array contains
only "value". The requirements are ANDed.
type: object
type: object
name:
description: Name is the name of this hook.
type: string
post:
description: PostHooks is a list of BackupResourceHooks to
execute after storing the item in the backup. These are
executed after all "additional items" from item actions
are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
exec:
description: Exec defines an exec hook.
properties:
command:
description: Command is the command and arguments
to execute.
items:
type: string
minItems: 1
type: array
container:
description: Container is the container in the pod
where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero should
behave if it encounters an error executing this
hook.
enum:
- Continue
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
type: string
required:
- command
type: object
required:
- exec
type: object
type: array
pre:
description: PreHooks is a list of BackupResourceHooks to
execute prior to storing the item in the backup. These are
executed before any "additional items" from item actions
are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
exec:
description: Exec defines an exec hook.
properties:
command:
description: Command is the command and arguments
to execute.
items:
type: string
minItems: 1
type: array
container:
description: Container is the container in the pod
where the command should be executed. If not specified,
the pod's first container is used.
type: string
onError:
description: OnError specifies how Velero should
behave if it encounters an error executing this
hook.
enum:
- Continue
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
type: string
required:
- command
type: object
required:
- exec
type: object
type: array
required:
- name
type: object
nullable: true
type: array
type: object
includeClusterResources:
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil, all
objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains
values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship to a
set of values. Valid operators are In, NotIn, Exists and
DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator
is In or NotIn, the values array must be non-empty. If the
operator is Exists or DoesNotExist, the values array must
be empty. This array is replaced during a strategic merge
patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator is
"In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
metadata:
properties:
labels:
additionalProperties:
type: string
type: object
type: object
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a list
of resource names separated by commas. Each resource name has format
"namespace/resourcename". For cluster resources, simply use "resourcename".
nullable: true
type: object
snapshotVolumes:
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the Backup.
nullable: true
type: boolean
storageLocation:
description: StorageLocation is a string containing the name of a BackupStorageLocation
where the backup should be stored.
type: string
ttl:
description: TTL is a time.Duration-parseable string describing how
long the Backup should be retained for.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations
associated with this backup.
items:
type: string
type: array
type: object
status:
description: BackupStatus captures the current status of a Velero backup.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
errors:
description: Errors is a count of all error messages that were generated
during execution of the backup. The actual errors are in the backup's
log file in object storage.
type: integer
expiration:
description: Expiration is when this Backup is eligible for garbage-collection.
format: date-time
nullable: true
type: string
formatVersion:
description: FormatVersion is the backup format version, including major,
minor, and patch version.
type: string
phase:
description: Phase is the current state of the Backup.
enum:
- New
- FailedValidation
- InProgress
- Completed
- PartiallyFailed
- Failed
- Deleting
type: string
progress:
description: Progress contains information about the backup's execution
progress. Note that this information is best-effort only -- if Velero
fails to update it during a backup for any reason, it may be inaccurate/stale.
nullable: true
properties:
itemsBackedUp:
description: ItemsBackedUp is the number of items that have actually
been written to the backup tarball so far.
type: integer
totalItems:
description: TotalItems is the total number of items to be backed
up. This number may change throughout the execution of the backup
due to plugins that return additional related items to back up,
the velero.io/exclude-from-backup label, and various other filters
that happen as items are processed.
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a backup was started. Separate
from CreationTimestamp, since that value changes on restores. The
server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
validationErrors:
description: ValidationErrors is a slice of all validation errors (if
applicable).
items:
type: string
nullable: true
type: array
version:
description: 'Version is the backup format major version. Deprecated:
Please see FormatVersion'
type: integer
volumeSnapshotsAttempted:
description: VolumeSnapshotsAttempted is the total number of attempted
volume snapshots for this backup.
type: integer
volumeSnapshotsCompleted:
description: VolumeSnapshotsCompleted is the total number of successfully
completed volume snapshots for this backup.
type: integer
warnings:
description: Warnings is a count of all warning messages that were generated
during execution of the backup. The actual warnings are in the backup's
log file in object storage.
type: integer
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
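The hook schema spelled out in the v1beta1 CRD above maps to specs like the following sketch; the container name and script paths are hypothetical:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hooked-backup
  namespace: velero
spec:
  includedNamespaces:
  - app
  hooks:
    resources:
    - name: quiesce-db
      labelSelector:
        matchLabels:
          app: database
      pre:
      - exec:
          container: db
          command:
          - /bin/sh
          - -c
          - /scripts/freeze.sh
          onError: Fail
          timeout: 30s
      post:
      - exec:
          container: db
          command:
          - /bin/sh
          - -c
          - /scripts/unfreeze.sh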

View File

@@ -0,0 +1,179 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: backupstoragelocations.velero.io
spec:
additionalPrinterColumns:
- JSONPath: .status.phase
description: Backup Storage Location status such as Available/Unavailable
name: Phase
type: string
- JSONPath: .status.lastValidationTime
description: LastValidationTime is the last time the backup store location was
validated
name: Last Validated
type: date
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
- JSONPath: .spec.default
description: Default backup storage location
name: Default
type: boolean
group: velero.io
names:
kind: BackupStorageLocation
listKind: BackupStorageLocationList
plural: backupstoragelocations
shortNames:
- bsl
singular: backupstoragelocation
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: BackupStorageLocation is a location where Velero stores backup
objects
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BackupStorageLocationSpec defines the desired state of a Velero
BackupStorageLocation
properties:
accessMode:
description: AccessMode defines the permissions for the backup storage
location.
enum:
- ReadOnly
- ReadWrite
type: string
backupSyncPeriod:
description: BackupSyncPeriod defines how frequently to sync backup
API objects from object storage. A value of 0 disables sync.
nullable: true
type: string
config:
additionalProperties:
type: string
description: Config is for provider-specific configuration fields.
type: object
credential:
description: Credential contains the credential information intended
to be used with this location
properties:
key:
description: The key of the secret to select from. Must be a valid
secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
default:
description: Default indicates this location is the default backup storage
location.
type: boolean
objectStorage:
description: ObjectStorageLocation specifies the settings necessary
to connect to a provider's object storage.
properties:
bucket:
description: Bucket is the bucket to use for object storage.
type: string
caCert:
description: CACert defines a CA bundle to use when verifying TLS
connections to the provider.
format: byte
type: string
prefix:
description: Prefix is the path inside a bucket to use for Velero
storage. Optional.
type: string
required:
- bucket
type: object
provider:
description: Provider is the provider of the backup storage.
type: string
validationFrequency:
description: ValidationFrequency defines how frequently to validate
the corresponding object storage. A value of 0 disables validation.
nullable: true
type: string
required:
- objectStorage
- provider
type: object
status:
description: BackupStorageLocationStatus defines the observed state of BackupStorageLocation
properties:
accessMode:
description: "AccessMode is an unused field. \n Deprecated: there is
now an AccessMode field on the Spec and this field will be removed
entirely as of v2.0."
enum:
- ReadOnly
- ReadWrite
type: string
lastSyncedRevision:
description: "LastSyncedRevision is the value of the `metadata/revision`
file in the backup storage location the last time the BSL's contents
were synced into the cluster. \n Deprecated: this field is no longer
updated or used for detecting changes to the location's contents and
will be removed entirely in v2.0."
type: string
lastSyncedTime:
description: LastSyncedTime is the last time the contents of the location
were synced into the cluster.
format: date-time
nullable: true
type: string
lastValidationTime:
description: LastValidationTime is the last time the backup store location
was validated against the cluster.
format: date-time
nullable: true
type: string
phase:
description: Phase is the current state of the BackupStorageLocation.
enum:
- Available
- Unavailable
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
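A sketch of a BackupStorageLocation matching the v1beta1 schema above; the bucket, prefix, and secret names are illustrative:

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  default: true
  backupSyncPeriod: 2m
  objectStorage:
    bucket: my-velero-bucket
    prefix: cluster-a
  credential:
    name: cloud-credentials
    key: cloud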

View File

@@ -0,0 +1,73 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: deletebackuprequests.velero.io
spec:
group: velero.io
names:
kind: DeleteBackupRequest
listKind: DeleteBackupRequestList
plural: deletebackuprequests
singular: deletebackuprequest
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
description: DeleteBackupRequest is a request to delete one or more backups.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DeleteBackupRequestSpec is the specification for which backups
to delete.
properties:
backupName:
type: string
required:
- backupName
type: object
status:
description: DeleteBackupRequestStatus is the current status of a DeleteBackupRequest.
properties:
errors:
description: Errors contains any errors that were encountered during
the deletion process.
items:
type: string
nullable: true
type: array
phase:
description: Phase is the current state of the DeleteBackupRequest.
enum:
- New
- InProgress
- Processed
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
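Per the spec above, a DeleteBackupRequest only needs the backup name; a minimal sketch:

apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: delete-my-backup
  namespace: velero
spec:
  backupName: my-backup

The controller reports progress through status.phase (New, InProgress, Processed) and collects any deletion errors in status.errors.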

View File

@@ -0,0 +1,96 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: downloadrequests.velero.io
spec:
group: velero.io
names:
kind: DownloadRequest
listKind: DownloadRequestList
plural: downloadrequests
singular: downloadrequest
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: DownloadRequest is a request to download an artifact from backup
object storage, such as a backup log file.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DownloadRequestSpec is the specification for a download request.
properties:
target:
description: Target is what to download (e.g. logs for a backup).
properties:
kind:
description: Kind is the type of file to download.
enum:
- BackupLog
- BackupContents
- BackupVolumeSnapshots
- BackupResourceList
- RestoreLog
- RestoreResults
type: string
name:
description: Name is the name of the kubernetes resource with which
the file is associated.
type: string
required:
- kind
- name
type: object
required:
- target
type: object
status:
description: DownloadRequestStatus is the current status of a DownloadRequest.
properties:
downloadURL:
description: DownloadURL contains the pre-signed URL for the target
file.
type: string
expiration:
description: Expiration is when this DownloadRequest expires and can
be deleted by the system.
format: date-time
nullable: true
type: string
phase:
description: Phase is the current state of the DownloadRequest.
enum:
- New
- Processed
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -0,0 +1,162 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: podvolumebackups.velero.io
spec:
group: velero.io
names:
kind: PodVolumeBackup
listKind: PodVolumeBackupList
plural: podvolumebackups
singular: podvolumebackup
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: PodVolumeBackupSpec is the specification for a PodVolumeBackup.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the restic repository is stored.
type: string
node:
description: Node is the name of the node that the Pod is running on.
type: string
pod:
description: Pod is a reference to the pod containing the volume to
be backed up.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an
entire object, this string should contain a valid JSON/Go field
access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen only
to have some well-defined way of referencing a part of an object.
TODO: this design is not final and this field is subject to change
in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is
made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
repoIdentifier:
description: RepoIdentifier is the restic repository identifier.
type: string
tags:
additionalProperties:
type: string
description: Tags are a map of key-value pairs that should be applied
to the volume backup as tags.
type: object
volume:
description: Volume is the name of the volume within the Pod to be backed
up.
type: string
required:
- backupStorageLocation
- node
- pod
- repoIdentifier
- volume
type: object
status:
description: PodVolumeBackupStatus is the current status of a PodVolumeBackup.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
message:
description: Message is a message about the pod volume backup's status.
type: string
path:
description: Path is the full path within the controller pod being backed
up.
type: string
phase:
description: Phase is the current state of the PodVolumeBackup.
enum:
- New
- InProgress
- Completed
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
properties:
bytesDone:
format: int64
type: integer
totalBytes:
format: int64
type: integer
type: object
snapshotID:
description: SnapshotID is the identifier for the snapshot of the pod
volume.
type: string
startTimestamp:
description: StartTimestamp records the time a backup was started. Separate
from CreationTimestamp, since that value changes on restores. The
server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
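
As a sketch under the schema above, a PodVolumeBackup with all required fields might look like this (pod, node, and repository identifier are illustrative):

```yaml
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  name: example-pvb
  namespace: velero
spec:
  backupStorageLocation: default
  node: worker-node-1
  pod:
    kind: Pod
    name: my-app-0
    namespace: my-app
  repoIdentifier: s3:s3-us-west-2.amazonaws.com/my-bucket/restic/my-app   # hypothetical restic repo
  volume: data   # volume name within the pod spec
```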


@@ -0,0 +1,145 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: podvolumerestores.velero.io
spec:
group: velero.io
names:
kind: PodVolumeRestore
listKind: PodVolumeRestoreList
plural: podvolumerestores
singular: podvolumerestore
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: PodVolumeRestoreSpec is the specification for a PodVolumeRestore.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the restic repository is stored.
type: string
pod:
description: Pod is a reference to the pod containing the volume to
be restored.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an
entire object, this string should contain a valid JSON/Go field
access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen only
to have some well-defined way of referencing a part of an object.
TODO: this design is not final and this field is subject to change
in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is
made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
repoIdentifier:
description: RepoIdentifier is the restic repository identifier.
type: string
snapshotID:
description: SnapshotID is the ID of the volume snapshot to be restored.
type: string
volume:
description: Volume is the name of the volume within the Pod to be restored.
type: string
required:
- backupStorageLocation
- pod
- repoIdentifier
- snapshotID
- volume
type: object
status:
description: PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
message:
description: Message is a message about the pod volume restore's status.
type: string
phase:
description: Phase is the current state of the PodVolumeRestore.
enum:
- New
- InProgress
- Completed
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
properties:
bytesDone:
format: int64
type: integer
totalBytes:
format: int64
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
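
A PodVolumeRestore is shaped much like a PodVolumeBackup, with the addition of the required `snapshotID` to restore from; a sketch with illustrative names:

```yaml
apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  name: example-pvr
  namespace: velero
spec:
  backupStorageLocation: default
  pod:
    kind: Pod
    name: my-app-0
    namespace: my-app
  repoIdentifier: s3:s3-us-west-2.amazonaws.com/my-bucket/restic/my-app   # hypothetical restic repo
  snapshotID: 1a2b3c4d   # hypothetical restic snapshot ID
  volume: data
```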


@@ -0,0 +1,89 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: resticrepositories.velero.io
spec:
group: velero.io
names:
kind: ResticRepository
listKind: ResticRepositoryList
plural: resticrepositories
singular: resticrepository
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ResticRepositorySpec is the specification for a ResticRepository.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the BackupStorageLocation
that should contain this repository.
type: string
maintenanceFrequency:
description: MaintenanceFrequency is how often maintenance should be
run.
type: string
resticIdentifier:
description: ResticIdentifier is the full restic-compatible string for
identifying this repository.
type: string
volumeNamespace:
description: VolumeNamespace is the namespace this restic repository
contains pod volume backups for.
type: string
required:
- backupStorageLocation
- maintenanceFrequency
- resticIdentifier
- volumeNamespace
type: object
status:
description: ResticRepositoryStatus is the current status of a ResticRepository.
properties:
lastMaintenanceTime:
description: LastMaintenanceTime is the last time maintenance was run.
format: date-time
nullable: true
type: string
message:
description: Message is a message about the current status of the ResticRepository.
type: string
phase:
description: Phase is the current state of the ResticRepository.
enum:
- New
- Ready
- NotReady
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
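
All four spec fields are required, so a minimal ResticRepository might look like the following sketch (names and the weekly maintenance interval are illustrative):

```yaml
apiVersion: velero.io/v1
kind: ResticRepository
metadata:
  name: my-app-default-x1y2z
  namespace: velero
spec:
  backupStorageLocation: default
  maintenanceFrequency: 168h0m0s   # maintenance interval; a duration string
  resticIdentifier: s3:s3-us-west-2.amazonaws.com/my-bucket/restic/my-app
  volumeNamespace: my-app
```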

File diff suppressed because it is too large.


@@ -0,0 +1,401 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: schedules.velero.io
spec:
group: velero.io
names:
kind: Schedule
listKind: ScheduleList
plural: schedules
singular: schedule
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
description: Schedule is a Velero resource that represents a pre-scheduled or
periodic Backup that should be run.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ScheduleSpec defines the specification for a Velero schedule
properties:
schedule:
description: Schedule is a Cron expression defining when to run the
Backup.
type: string
template:
description: Template is the definition of the Backup to be run on the
provided schedule
properties:
defaultVolumesToRestic:
description: DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default.
type: boolean
excludedNamespaces:
description: ExcludedNamespaces contains a list of namespaces that
are not included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources is a slice of resource names that
are not included in the backup.
items:
type: string
nullable: true
type: array
hooks:
description: Hooks represent custom behaviors that should be executed
at different phases of the backup.
properties:
resources:
description: Resources are hooks that should be executed when
backing up individual instances of a resource.
items:
description: BackupResourceHookSpec defines one or more BackupResourceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
to which this hook spec does not apply.
items:
type: string
nullable: true
type: array
excludedResources:
description: ExcludedResources specifies the resources
to which this hook spec does not apply.
items:
type: string
nullable: true
type: array
includedNamespaces:
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources specifies the resources
to which this hook spec applies. If empty, it applies
to all resources.
items:
type: string
nullable: true
type: array
labelSelector:
description: LabelSelector, if specified, filters the
resources to which this hook spec applies.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector
that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values.
If the operator is In or NotIn, the values
array must be non-empty. If the operator is
Exists or DoesNotExist, the values array must
be empty. This array is replaced during a
strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
type: object
type: object
name:
description: Name is the name of this hook.
type: string
post:
description: PostHooks is a list of BackupResourceHooks
to execute after storing the item in the backup. These
are executed after all "additional items" from item
actions are processed.
items:
description: BackupResourceHook defines a hook for a
resource.
properties:
exec:
description: Exec defines an exec hook.
properties:
command:
description: Command is the command and arguments
to execute.
items:
type: string
minItems: 1
type: array
container:
description: Container is the container in the
pod where the command should be executed.
If not specified, the pod's first container
is used.
type: string
onError:
description: OnError specifies how Velero should
behave if it encounters an error executing
this hook.
enum:
- Continue
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to
complete before considering the execution
a failure.
type: string
required:
- command
type: object
required:
- exec
type: object
type: array
pre:
description: PreHooks is a list of BackupResourceHooks
to execute prior to storing the item in the backup.
These are executed before any "additional items" from
item actions are processed.
items:
description: BackupResourceHook defines a hook for a
resource.
properties:
exec:
description: Exec defines an exec hook.
properties:
command:
description: Command is the command and arguments
to execute.
items:
type: string
minItems: 1
type: array
container:
description: Container is the container in the
pod where the command should be executed.
If not specified, the pod's first container
is used.
type: string
onError:
description: OnError specifies how Velero should
behave if it encounters an error executing
this hook.
enum:
- Continue
- Fail
type: string
timeout:
description: Timeout defines the maximum amount
of time Velero should wait for the hook to
complete before considering the execution
a failure.
type: string
required:
- command
type: object
required:
- exec
type: object
type: array
required:
- name
type: object
nullable: true
type: array
type: object
includeClusterResources:
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedNamespaces:
description: IncludedNamespaces is a slice of namespace names to
include objects from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil,
all objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
metadata:
properties:
labels:
additionalProperties:
type: string
type: object
type: object
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a
list of resource names separated by commas. Each resource name
has format "namespace/resourcename". For cluster resources, simply
use "resourcename".
nullable: true
type: object
snapshotVolumes:
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the Backup.
nullable: true
type: boolean
storageLocation:
description: StorageLocation is a string containing the name of
a BackupStorageLocation where the backup should be stored.
type: string
ttl:
description: TTL is a time.Duration-parseable string describing
how long the Backup should be retained for.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.
items:
type: string
type: array
type: object
useOwnerReferencesInBackup:
description: UseOwnerReferencesBackup specifies whether to use OwnerReferences
on backups created by this Schedule.
nullable: true
type: boolean
required:
- schedule
- template
type: object
status:
description: ScheduleStatus captures the current state of a Velero schedule
properties:
lastBackup:
description: LastBackup is the last time a Backup was run for this Schedule
schedule
format: date-time
nullable: true
type: string
phase:
description: Phase is the current phase of the Schedule
enum:
- New
- Enabled
- FailedValidation
type: string
validationErrors:
description: ValidationErrors is a slice of all validation errors (if
applicable)
items:
type: string
type: array
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
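
Putting the schema to work, a minimal Schedule needs only the cron expression and a backup template; the sketch below uses an illustrative namespace, cadence, and TTL:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 3 * * *"   # every day at 03:00
  template:
    includedNamespaces:
    - my-app
    snapshotVolumes: true
    ttl: 720h   # retain each backup for 30 days
```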


@@ -0,0 +1,89 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: serverstatusrequests.velero.io
spec:
group: velero.io
names:
kind: ServerStatusRequest
listKind: ServerStatusRequestList
plural: serverstatusrequests
shortNames:
- ssr
singular: serverstatusrequest
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: ServerStatusRequest is a request to access current status information
about the Velero server.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ServerStatusRequestSpec is the specification for a ServerStatusRequest.
type: object
status:
description: ServerStatusRequestStatus is the current status of a ServerStatusRequest.
properties:
phase:
description: Phase is the current lifecycle phase of the ServerStatusRequest.
enum:
- New
- Processed
type: string
plugins:
description: Plugins list information about the plugins running on the
Velero server
items:
description: PluginInfo contains attributes of a Velero plugin
properties:
kind:
type: string
name:
type: string
required:
- kind
- name
type: object
nullable: true
type: array
processedTimestamp:
description: ProcessedTimestamp is when the ServerStatusRequest was
processed by the ServerStatusRequestController.
format: date-time
nullable: true
type: string
serverVersion:
description: ServerVersion is the Velero server version.
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
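
Because `spec` defines no required properties, the smallest valid ServerStatusRequest is essentially empty; a sketch:

```yaml
apiVersion: velero.io/v1
kind: ServerStatusRequest
metadata:
  name: example-ssr   # hypothetical name
  namespace: velero
spec: {}
```

The server then populates `status.serverVersion` and `status.plugins` and moves `status.phase` to `Processed`.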


@@ -0,0 +1,74 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: volumesnapshotlocations.velero.io
spec:
group: velero.io
names:
kind: VolumeSnapshotLocation
listKind: VolumeSnapshotLocationList
plural: volumesnapshotlocations
singular: volumesnapshotlocation
preserveUnknownFields: false
scope: Namespaced
validation:
openAPIV3Schema:
description: VolumeSnapshotLocation is a location where Velero stores volume
snapshots.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: VolumeSnapshotLocationSpec defines the specification for a
Velero VolumeSnapshotLocation.
properties:
config:
additionalProperties:
type: string
description: Config is for provider-specific configuration fields.
type: object
provider:
description: Provider is the provider of the volume storage.
type: string
required:
- provider
type: object
status:
description: VolumeSnapshotLocationStatus describes the current status of
a Velero VolumeSnapshotLocation.
properties:
phase:
description: VolumeSnapshotLocationPhase is the lifecycle phase of a
Velero VolumeSnapshotLocation.
enum:
- Available
- Unavailable
type: string
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
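
A sketch of a VolumeSnapshotLocation under this schema (the provider and config keys are illustrative; `config` is provider-specific):

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2   # hypothetical provider-specific key
```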

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
// Package crds embeds the controller-tools generated CRD manifests
package crds
//go:generate go run ../../../../hack/crd-gen/v1/main.go
//go:generate go run ../../../../hack/crd-gen/v1beta1/main.go


@@ -1,178 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
name: datadownloads.velero.io
spec:
group: velero.io
names:
kind: DataDownload
listKind: DataDownloadList
plural: datadownloads
singular: datadownload
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: DataDownload status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Time duration since this DataDownload was started
jsonPath: .status.startTimestamp
name: Started
type: date
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Name of the Backup Storage Location where the backup data is stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- description: Time duration since this DataDownload was created
jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the DataDownload is processed
jsonPath: .status.node
name: Node
type: string
name: v2alpha1
schema:
openAPIV3Schema:
description: DataDownload acts as the protocol between data mover plugins
and data mover controller for the datamover restore operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataDownloadSpec is the specification for a DataDownload.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: Cancel indicates request to cancel the ongoing DataDownload.
It can be set when the DataDownload is in InProgress phase
type: boolean
dataMoverConfig:
additionalProperties:
type: string
description: DataMoverConfig is for data-mover-specific configuration
fields.
type: object
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
operationTimeout:
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
type: string
snapshotID:
description: SnapshotID is the ID of the Velero backup snapshot to
be restored from.
type: string
sourceNamespace:
description: SourceNamespace is the original namespace where the volume
is backed up from. It may be different from SourcePVC's namespace
if namespace is remapped during restore.
type: string
targetVolume:
description: TargetVolume is the information of the target PVC and
PV.
properties:
namespace:
description: Namespace is the target namespace
type: string
pv:
description: PV is the name of the target PV that is created by
Velero restore
type: string
pvc:
description: PVC is the name of the target PVC that is created
by Velero restore
type: string
required:
- namespace
- pv
- pvc
type: object
required:
- backupStorageLocation
- operationTimeout
- snapshotID
- sourceNamespace
- targetVolume
type: object
status:
description: DataDownloadStatus is the current status of a DataDownload.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
message:
description: Message is a message about the DataDownload's status.
type: string
node:
description: Node is name of the node where the DataDownload is processed.
type: string
phase:
description: Phase is the current state of the DataDownload.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
properties:
bytesDone:
format: int64
type: integer
totalBytes:
format: int64
type: integer
type: object
startTimestamp:
description: StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
type: object
type: object
served: true
storage: true
subresources: {}
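
A sketch of a DataDownload satisfying the required fields above (the snapshot ID, timeout, and target PV/PVC names are illustrative):

```yaml
apiVersion: velero.io/v2alpha1
kind: DataDownload
metadata:
  name: example-dd
  namespace: velero
spec:
  backupStorageLocation: default
  operationTimeout: 10m
  snapshotID: 1a2b3c4d        # hypothetical snapshot in the backup repository
  sourceNamespace: my-app     # namespace the volume was originally backed up from
  targetVolume:
    namespace: my-app
    pv: pvc-0123abcd          # PV created by the Velero restore
    pvc: data-my-app-0        # PVC created by the Velero restore
```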


@@ -1,202 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.12.0
name: datauploads.velero.io
spec:
group: velero.io
names:
kind: DataUpload
listKind: DataUploadList
plural: datauploads
singular: dataupload
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: DataUpload status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Time duration since this DataUpload was started
jsonPath: .status.startTimestamp
name: Started
type: date
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Name of the Backup Storage Location where this backup should be
stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- description: Time duration since this DataUpload was created
jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the DataUpload is processed
jsonPath: .status.node
name: Node
type: string
name: v2alpha1
schema:
openAPIV3Schema:
description: DataUpload acts as the protocol between data mover plugins and
data mover controller for the datamover backup operation
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataUploadSpec is the specification for a DataUpload.
properties:
backupStorageLocation:
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: Cancel indicates request to cancel the ongoing DataUpload.
It can be set when the DataUpload is in InProgress phase
type: boolean
csiSnapshot:
description: If SnapshotType is CSI, CSISnapshot provides the information
of the CSI snapshot.
nullable: true
properties:
snapshotClass:
description: SnapshotClass is the name of the snapshot class that
the volume snapshot is created with
type: string
storageClass:
description: StorageClass is the name of the storage class of
the PVC that the volume snapshot is created from
type: string
volumeSnapshot:
description: VolumeSnapshot is the name of the volume snapshot
to be backed up
type: string
required:
- storageClass
- volumeSnapshot
type: object
dataMoverConfig:
additionalProperties:
type: string
description: DataMoverConfig is for data-mover-specific configuration
fields.
nullable: true
type: object
datamover:
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
operationTimeout:
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
type: string
snapshotType:
description: SnapshotType is the type of the snapshot to be backed
up.
type: string
sourceNamespace:
description: SourceNamespace is the original namespace where the volume
is backed up from. It is the same namespace for SourcePVC and CSI
namespaced objects.
type: string
sourcePVC:
description: SourcePVC is the name of the PVC which the snapshot is
taken for.
type: string
required:
- backupStorageLocation
- operationTimeout
- snapshotType
- sourceNamespace
- sourcePVC
type: object
status:
description: DataUploadStatus is the current status of a DataUpload.
properties:
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
dataMoverResult:
additionalProperties:
type: string
description: DataMoverResult stores data-mover-specific information
as a result of the DataUpload.
nullable: true
type: object
message:
description: Message is a message about the DataUpload's status.
type: string
node:
description: Node is name of the node where the DataUpload is processed.
type: string
path:
description: Path is the full path of the snapshot volume being backed
up.
type: string
phase:
description: Phase is the current state of the DataUpload.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string
progress:
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
properties:
bytesDone:
format: int64
type: integer
totalBytes:
format: int64
type: integer
type: object
snapshotID:
description: SnapshotID is the identifier for the snapshot in the
backup repository.
type: string
startTimestamp:
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
type: object
type: object
served: true
storage: true
subresources: {}
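
And a matching DataUpload sketch for a CSI snapshot (the PVC, snapshot, and storage class names are illustrative):

```yaml
apiVersion: velero.io/v2alpha1
kind: DataUpload
metadata:
  name: example-du
  namespace: velero
spec:
  backupStorageLocation: default
  operationTimeout: 10m
  snapshotType: CSI
  sourceNamespace: my-app
  sourcePVC: data-my-app-0
  csiSnapshot:
    volumeSnapshot: velero-data-my-app-0-x1y2z   # hypothetical VolumeSnapshot name
    storageClass: standard
```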

File diff suppressed because one or more lines are too long


@@ -1,67 +1,11 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: velero-perms
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- get
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- velero.io
resources:
- backuprepositories
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backuprepositories/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- backups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
@@ -82,66 +26,6 @@ rules:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datadownloads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datadownloads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datauploads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datauploads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- deletebackuprequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- deletebackuprequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
@@ -162,86 +46,6 @@ rules:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumebackups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumebackups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumerestores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumerestores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- restores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- restores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- schedules
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- schedules/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
@@ -262,15 +66,3 @@ rules:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- volumesnapshotlocations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch


@@ -1,40 +0,0 @@
# Delete Backup and Restic Repo Resources when BSL is Deleted
## Abstract
Issue #2082 requested that the command `velero backup-location delete <bsl name>` (implemented in Velero 1.6 with #3073) also delete the following:
- associated Velero backups (to be clear, these are custom Kubernetes resources called "backups" that are stored in the API server)
- associated Restic repositories (custom Kubernetes resources called "resticrepositories")
This design doc explains how the request will be implemented.
## Background
When a BSL resource is deleted from its Velero namespace, the associated custom Kubernetes resources, backups and Restic repositories, can no longer be used.
It makes sense to clean those resources up when a BSL is deleted.
## Goals
Update the `velero backup-location delete <bsl name>` command to delete associated backup and Restic repository resources in the same Velero namespace.
## Non Goals
[It was suggested](https://github.com/vmware-tanzu/velero/issues/2082#issuecomment-827951311) to fix bug #2697 alongside this issue.
However, I think that should be fixed separately because although it is similar (restore objects are not being deleted), it is also quite different.
One is a command feature update (this issue), the other is a bug fix, and each affects different parts of the code base.
## High-Level Design
Update the `velero backup-location delete <bsl name>` command to do the following:
- find, in the Velero namespace from which the BSL was deleted, the associated backup resources and Restic repositories, called "backups.velero.io" and "resticrepositories.velero.io" respectively
- delete the resources found
The above logic will be added to [where BSLs are deleted](https://github.com/vmware-tanzu/velero/blob/main/pkg/cmd/cli/backuplocation/delete.go).
## Alternatives Considered
I had considered deleting the backup files (the ones in JSON format and tarballs) in the BSL itself.
However, a standard use case is to back up a cluster and then restore into a new cluster.
Deleting the backup storage location resource in either cluster is not expected to remove the backup files from the backup storage location itself, and should not do so.


@@ -505,11 +505,9 @@ spec:
- BackupResourceList
- RestoreLog
- RestoreResults
- CSIBackupVolumeSnapshots
- CSIBackupVolumeSnapshotContents
type: string
name:
description: Name is the name of the Kubernetes resource with
description: Name is the name of the kubernetes resource with
which the file is associated.
type: string
required:


@@ -57,7 +57,7 @@ spec:
- emptyDir: {}
name: scratch
---
apiVersion: rbac.authorization.k8s.io/v1
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:


@@ -5,22 +5,22 @@ metadata:
creationTimestamp: null
labels:
component: velero
name: node-agent
name: restic
namespace: velero
spec:
selector:
matchLabels:
name: node-agent
name: restic
template:
metadata:
creationTimestamp: null
labels:
component: velero
name: node-agent
name: restic
spec:
containers:
- args:
- node-agent
- restic
- server
command:
- /velero
@@ -43,15 +43,12 @@ spec:
value: /credentials/cloud
image: velero/velero:latest
imagePullPolicy: Always
name: node-agent
name: restic
resources: {}
volumeMounts:
- mountPath: /host_pods
mountPropagation: HostToContainer
name: host-pods
- mountPath: /var/lib/kubelet/plugins
mountPropagation: HostToContainer
name: host-plugins
- mountPath: /scratch
name: scratch
- mountPath: /credentials
@@ -63,9 +60,6 @@ spec:
- hostPath:
path: /var/lib/kubelet/pods
name: host-pods
- hostPath:
path: /var/lib/kubelet/plugins
name: host-plugins
- emptyDir: {}
name: scratch
- name: cloud-credentials

(Binary image file removed in this diff; previously 75 KiB.)

@@ -2,17 +2,17 @@
This document proposes a solution that allows the user to specify a backup order for resources of a specific resource type.
## Background
During the backup process, the user may need to back up resources of a specific type in some specific order to ensure the resources are backed up properly, because these resources are related and ordering might be required to preserve the consistency for the apps to recover themselves from the backup image
During the backup process, the user may need to back up resources of a specific type in some specific order to ensure the resources are backed up properly, because these resources are related and ordering might be required to preserve the consistency for the apps to recover themselves <EFBFBD>from the backup image
(Ex: primary-secondary database pods in a cluster).
## Goals
- Enable the user to specify an order for backup resources belonging to a specific resource type
- Enable the user to specify an order for back up resources belonging to a specific resource type
## Alternatives Considered
- Use a plugin to back up a resource and all its sub-resources. For example, use a plugin for StatefulSet and back up the pods belonging to the StatefulSet in a specific order. This plugin solution is not generic and requires a plugin for each resource type.
## High-Level Design
The user will specify a map from resource type to a list of resource names (separated by semicolons). Each name will be in the format "namespaceName/resourceName" to enable ordering across namespaces. Based on this map, the resources of each resource type will be sorted in the order specified in the list of resources. If a resource instance belongs to that specific type but its name is not in the order list, it will be put behind the resources that are in the list.
The user will specify a map from resource type to a list of resource names (separated by semicolons). Each name will be in the format "namespaceName/resourceName" to enable ordering accross namespaces. Based on this map, the resources of each resource type will be sorted in the order specified in the list of resources. If a resource instance belongs to that specific type but its name is not in the order list, it will be put behind the resources that are in the list.
### Changes to BackupSpec
Add new field to BackupSpec
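As a sketch, the resulting spec carries a map from resource type to an ordered, comma-separated list of names (the `orderedResources` field name matches the Schedule CRD shown earlier in this diff; all resource names are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mybackup
  namespace: velero
spec:
  orderedResources:
    pod: ns1/pod1,ns1/pod2
    persistentvolumeclaim: ns2/slavepod,ns2/primarypod
```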
@@ -36,5 +36,5 @@ Example:
>velero backup create mybackup --ordered-resources "pod=ns1/pod1,ns1/pod2;persistentvolumeclaim=n2/slavepod,ns2/primarypod"
## Open Issues
- In the CLI, the design proposes to use commas to separate items of a resource type and semicolon to separate key-value pairs. This follows the convention of using commas to separate items in a list (For example: --include-namespaces ns1,ns2). However, the syntax for map in labels and annotations use commas to separate key-value pairs. So it introduces some inconsistency.
- In the CLI, the design proposes to use commas to separate items of a resource type and semicolon to separate key-value pairs. This follows the convention of using commas to separate items in a list (For example: --include-namespaces ns1,ns2). However, the syntax for map in labels and annotations use commas to seperate key-value pairs. So it introduces some inconsistency.
- For pods that are managed by a Deployment or DaemonSet, this design may not work because pod names are randomly generated; if pods are restarted, they will have different names, so the backup operation may not consider the restarted pods in the sorting algorithm. This problem will be addressed when we enhance the design to use regular expressions to specify the OrderedResources instead of exact matches.


@@ -1,103 +0,0 @@
# Design for BackupItemAction v2 API
## Abstract
This design includes the changes to the BackupItemAction (BIA) API design as required by the [Item Action Progress Monitoring](general-progress-monitoring.md) feature.
The BIA v2 interface will have two new methods, and the Execute() return signature will be modified.
If there are any additional BIA API changes that are needed in the same Velero release cycle as this change, those can be added here as well.
## Background
This API change is needed to facilitate long-running plugin actions that may not be complete when the Execute() method returns.
It is an optional feature, so plugins which don't need this feature can simply return an empty operation ID and the new methods can be no-ops.
This will allow long-running plugin actions to continue in the background while Velero moves on to the next plugin, the next item, etc.
## Goals
- Allow for BIA Execute() to optionally initiate a long-running operation and report on operation status.
## Non Goals
- Allowing velero control over when the long-running operation begins.
## High-Level Design
As per the [Plugin Versioning](plugin-versioning.md) design, a new BIAv2 plugin `.proto` file will be created to define the GRPC interface.
v2 go files will also be created in `plugin/clientmgmt/backupitemaction` and `plugin/framework/backupitemaction`, and a new PluginKind will be created.
The Velero backup process will be modified to reference v2 plugins instead of v1 plugins.
An adapter will be created so that any existing BIA v1 plugin can be executed as a v2 plugin when executing a backup.
## Detailed Design
### proto changes (compiled into golang by protoc)
The v2 BackupItemAction.proto will be like the current v1 version with the following changes:
ExecuteResponse gets a new field:
```
message ExecuteResponse {
bytes item = 1;
repeated generated.ResourceIdentifier additionalItems = 2;
string operationID = 3;
repeated generated.ResourceIdentifier itemsToUpdate = 4;
}
```
The BackupItemAction service gets two new rpc methods:
```
service BackupItemAction {
rpc AppliesTo(BackupItemActionAppliesToRequest) returns (BackupItemActionAppliesToResponse);
rpc Execute(ExecuteRequest) returns (ExecuteResponse);
rpc Progress(BackupItemActionProgressRequest) returns (BackupItemActionProgressResponse);
rpc Cancel(BackupItemActionCancelRequest) returns (google.protobuf.Empty);
}
```
To support these new rpc methods, we define new request/response message types:
```
message BackupItemActionProgressRequest {
string plugin = 1;
string operationID = 2;
bytes backup = 3;
}
message BackupItemActionProgressResponse {
generated.OperationProgress progress = 1;
}
message BackupItemActionCancelRequest {
string plugin = 1;
string operationID = 2;
bytes backup = 3;
}
```
One new shared message type will be added, as this will also be needed for v2 RestoreItemAction and VolumeSnapshotter:
```
message OperationProgress {
bool completed = 1;
string err = 2;
int64 nCompleted = 3;
int64 nTotal = 4;
string operationUnits = 5;
string description = 6;
google.protobuf.Timestamp started = 7;
google.protobuf.Timestamp updated = 8;
}
```
In addition to the two new rpc methods added to the BackupItemAction interface, there is also a new `Name()` method. This one is only actually used internally by Velero to get the name that the plugin was registered with, but it still must be defined in a plugin which implements BackupItemActionV2 in order to implement the interface. It doesn't really matter what it returns, though, as this particular method is not delegated to the plugin via RPC calls. The new (and modified) interface methods for `BackupItemAction` are as follows:
```
type BackupItemAction interface {
...
Name() string
...
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error)
Cancel(operationID string, backup *api.Backup) error
...
}
```
A new PluginKind, `BackupItemActionV2`, will be created, and the backup process will be modified to use this plugin kind.
See [Plugin Versioning](plugin-versioning.md) for more details on implementation plans, including v1 adapters, etc.
## Compatibility
The included v1 adapter will allow any existing BackupItemAction plugin to work as expected, with an empty operation ID returned from Execute() and no-op Progress() and Cancel() methods.
## Implementation
This will be implemented during the Velero 1.11 development cycle.


@@ -1,402 +0,0 @@
# Proposal to add resource filters for backup that can distinguish whether a resource is cluster-scoped or namespace-scoped
- [Proposal to add resource filters for backup that can distinguish whether a resource is cluster-scoped or namespace-scoped](#proposal-to-add-resource-filters-for-backup-can-distinguish-whether-resource-is-cluster-scoped-or-namespace-scoped)
- [Abstract](#abstract)
- [Background](#background)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [High-Level Design](#high-level-design)
- [Parameters Rules](#parameters-rules)
- [Using scenarios:](#using-scenarios)
- [no namespace-scoped resources + some cluster-scoped resources](#no-namespace-scoped-resources--some-cluster-scoped-resources)
- [no namespace-scoped resources + all cluster-scoped resources](#no-namespace-scoped-resources--all-cluster-scoped-resources)
- [some namespace-scoped resources + no cluster-scoped resources](#some-namespace-scoped-resources--no-cluster-scoped-resources)
- [scenario 1](#scenario-1)
- [scenario 2](#scenario-2)
- [scenario 3](#scenario-3)
- [scenario 4](#scenario-4)
- [some namespace-scoped resources + only related cluster-scoped resources](#some-namespace-scoped-resources--only-related-cluster-scoped-resources)
- [scenario 1](#scenario-1-1)
- [scenario 2](#scenario-2-1)
- [scenario 3](#scenario-3-1)
- [some namespace-scoped resources + some additional cluster-scoped resources](#some-namespace-scoped-resources--some-additional-cluster-scoped-resources)
- [scenario 1](#scenario-1-2)
- [scenario 2](#scenario-2-2)
- [scenario 3](#scenario-3-2)
- [scenario 4](#scenario-4-1)
- [some namespace-scoped resources + all cluster-scoped resources](#some-namespace-scoped-resources--all-cluster-scoped-resources)
- [scenario 1](#scenario-1-3)
- [scenario 2](#scenario-2-3)
- [scenario 3](#scenario-3-3)
- [all namespace-scoped resources + no cluster-scoped resources](#all-namespace-scoped-resources--no-cluster-scoped-resources)
- [all namespace-scoped resources + some additional cluster-scoped resources](#all-namespace-scoped-resources--some-additional-cluster-scoped-resources)
- [all namespace-scoped resources + all cluster-scoped resources](#all-namespace-scoped-resources--all-cluster-scoped-resources)
- [describe command change](#describe-command-change)
- [Detailed Design](#detailed-design)
- [Alternatives Considered](#alternatives-considered)
- [Security Considerations](#security-considerations)
- [Compatibility](#compatibility)
- [Implementation](#implementation)
- [Open Issues](#open-issues)
## Abstract
The current filters (IncludedResources/ExcludedResources + the IncludeClusterResources flag) are not enough for some special cases, e.g. all namespace-scoped resources plus only certain kinds of cluster-scoped resources, or all namespace-scoped resources with certain cluster-scoped resources excluded.
This proposal adds a new group of resource filtering parameters that can distinguish between cluster-scoped and namespace-scoped resources.
## Background
There are two sets of resource filters for Velero: `IncludedNamespaces/ExcludedNamespaces` and `IncludedResources/ExcludedResources`.
`IncludedResources` means only the resource types specified in the parameter are included. Both cluster-scoped and namespace-scoped resources are currently handled by this parameter.
Kubernetes resources are divided into cluster-scoped and namespace-scoped.
As a result, it's hard to include all resources in one group while including only specified resources in the other group.
## Goals
- Enable Velero to support more complicated namespace-scoped and cluster-scoped resource filtering scenarios in backup.
## Non Goals
- Enrich the resource filtering rules, for example, advanced PV filtering and filtering by resource names.
## High-Level Design
Four new parameters are added into command `velero backup create`: `--include-cluster-scoped-resources`, `--exclude-cluster-scoped-resources`, `--include-namespace-scoped-resources` and `--exclude-namespace-scoped-resources`.
`--include-cluster-scoped-resources` and `--exclude-cluster-scoped-resources` are used to filter cluster-scoped resources included or excluded in backup per resource type.
`--include-namespace-scoped-resources` and `--exclude-namespace-scoped-resources` are used to filter namespace-scoped resources included or excluded in backup per resource type.
Restore, and other code paths that also use resource filtering, will be handled in future releases.
### Parameters Rules
* `--include-cluster-scoped-resources`, `--include-namespace-scoped-resources`, `--exclude-cluster-scoped-resources` and `--exclude-namespace-scoped-resources` accept `*` or a comma-separated string as valid values. Each element of the comma-separated string should be a Kubernetes resource name in the format `resource.group`, such as `storageclasses.storage.k8s.io`.
* `--include-cluster-scoped-resources`, `--include-namespace-scoped-resources`, `--exclude-cluster-scoped-resources` and `--exclude-namespace-scoped-resources` are mutually exclusive with the `--include-cluster-resources`, `--include-resources` and `--exclude-resources` parameters. If both sets of parameters are provided, a validation failure should be returned.
* `--include-cluster-scoped-resources` and `--exclude-cluster-scoped-resources` should only contain cluster-scoped resource type names. If namespace-scoped resource type names are included, they are ignored.
* If there are conflicts between the resource type lists specified by `--include-cluster-scoped-resources` and `--exclude-cluster-scoped-resources`, the `--exclude-cluster-scoped-resources` parameter has higher priority.
* `--include-namespace-scoped-resources` and `--exclude-namespace-scoped-resources` should only contain namespace-scoped resource type names. If cluster-scoped resource type names are included, they are ignored.
* If there are conflicts between the resource type lists specified by `--include-namespace-scoped-resources` and `--exclude-namespace-scoped-resources`, the `--exclude-namespace-scoped-resources` parameter has higher priority.
* If `--include-namespace-scoped-resources` is not present, it means all namespace-scoped resources are included per resource type.
* If neither `--include-cluster-scoped-resources` nor `--exclude-cluster-scoped-resources` is present, no additional cluster-scoped resource types are included, just as when the existing `--include-cluster-resources` parameter is left unset. Cluster-scoped resources that are related to namespace-scoped resources, i.e. those returned in the AdditionalItems array of a namespace-scoped resource's BackupItemAction result, are still included in the backup by default. Taking PVC backup as an example: the PVC is namespace-scoped, its PV is cluster-scoped, and the PVC's BIA will include the related PV in the backup too.
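To make the precedence concrete, here is a minimal sketch of how the exclude-over-include rule could be evaluated for one scope. The `scopedFilter` type is illustrative only, not the actual Velero implementation (e.g. it does not model the "related resources" behavior or the per-scope default values):
``` go
package main

import "fmt"

// scopedFilter models one include/exclude pair, e.g. the
// cluster-scoped or the namespace-scoped parameters.
type scopedFilter struct {
	includes []string // "*" matches every resource type
	excludes []string // excludes always win over includes
}

// shouldInclude applies the precedence rules above: an exclude match
// always rules the resource type out, regardless of the include list.
func (f scopedFilter) shouldInclude(resource string) bool {
	for _, e := range f.excludes {
		if e == "*" || e == resource {
			return false
		}
	}
	for _, i := range f.includes {
		if i == "*" || i == resource {
			return true
		}
	}
	return false // not explicitly included
}

func main() {
	clusterScoped := scopedFilter{
		includes: []string{"*"},
		excludes: []string{"storageclasses.storage.k8s.io"},
	}
	fmt.Println(clusterScoped.shouldInclude("persistentvolumes"))             // true
	fmt.Println(clusterScoped.shouldInclude("storageclasses.storage.k8s.io")) // false
}
```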
### Using scenarios:
Please note: if a scenario gives an example using the old filtering parameters (`--include-cluster-resources`, `--include-resources` and `--exclude-resources`), the old parameters also work for that case. If no old-parameter example is given, the old parameters don't work for that scenario; only the new parameters (`--include-cluster-scoped-resources`, `--include-namespace-scoped-resources`, `--exclude-cluster-scoped-resources` and `--exclude-namespace-scoped-resources`) do.
#### no namespace-scoped resources + some cluster-scoped resources
The following command backs up no namespace-scoped resources and some cluster-scoped resources.
``` bash
velero backup create <backup-name>
--exclude-namespace-scoped-resources=*
--include-cluster-scoped-resources=storageclass
```
#### no namespace-scoped resources + all cluster-scoped resources
The following command backs up no namespace-scoped resources and all cluster-scoped resources.
``` bash
velero backup create <backup-name>
--exclude-namespace-scoped-resources=*
--include-cluster-scoped-resources=*
```
#### some namespace-scoped resources + no cluster-scoped resources
##### scenario 1
The following commands back up all resources in namespaces default and kube-system, and no cluster-scoped resources.
Example of new parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--exclude-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-cluster-resources=false
```
##### scenario 2
The following commands back up PVC, Deployment, Service, Endpoint, Pod and ReplicaSet resources in all namespaces, and no cluster-scoped resources. Although each PVC's related PV would normally be included, no cluster-scoped resources are included, so the PVs are ruled out too.
Example of new parameters:
``` bash
velero backup create <backup-name>
--include-namespace-scoped-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--exclude-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--include-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--include-cluster-resources=false
```
##### scenario 3
The following commands back up PVC, Deployment, Service, Endpoint, Pod and ReplicaSet resources in namespaces default and kube-system, and no cluster-scoped resources. Although each PVC's related PV would normally be included, no cluster-scoped resources are included, so the PVs are ruled out too.
Example of new parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-namespace-scoped-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--exclude-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--include-cluster-resources=false
```
##### scenario 4
The following commands back up all resources except Ingress type resources in all namespaces, and no cluster-scoped resources.
Example of new parameters:
``` bash
velero backup create <backup-name>
--exclude-namespace-scoped-resources=ingress
--exclude-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--exclude-resources=ingress
--include-cluster-resources=false
```
#### some namespace-scoped resources + only related cluster-scoped resources
##### scenario 1
This backs up all resources in namespaces default and kube-system, and related cluster-scoped resources.
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
```
##### scenario 2
This backs up pods and configmaps in namespaces default and kube-system, and related cluster-scoped resources.
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-namespace-scoped-resources=pods,configmaps
```
##### scenario 3
This backs up all resources except Ingress type resources in all namespaces, and related cluster-scoped resources.
Example of new parameters:
``` bash
velero backup create <backup-name>
--exclude-namespace-scoped-resources=ingress
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--exclude-resources=ingress
```
#### some namespace-scoped resources + some additional cluster-scoped resources
##### scenario 1
This backs up all resources in namespaces default and kube-system, and related cluster-scoped resources, plus all StorageClass resources.
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-cluster-scoped-resources=storageclass
```
##### scenario 2
This backs up PVC, Deployment, Service, Endpoint, Pod and ReplicaSet resources in all namespaces, and related cluster-scoped resources, plus all StorageClass resources and each PVC's related PV.
``` bash
velero backup create <backup-name>
--include-namespace-scoped-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--include-cluster-scoped-resources=storageclass
```
##### scenario 3
This backs up PVC, Deployment, Service, Endpoint, Pod and ReplicaSet resources in the default and kube-system namespaces, and related cluster-scoped resources, plus all StorageClass resources and each PVC's related PV.
``` bash
velero backup create <backup-name>
--include-namespace-scoped-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--include-namespaces=default,kube-system
--include-cluster-scoped-resources=storageclass
```
##### scenario 4
This backs up PVC, Deployment, Service, Endpoint, Pod and ReplicaSet resources in the default and kube-system namespaces, and related cluster-scoped resources, plus all cluster-scoped resources except StorageClass type resources.
``` bash
velero backup create <backup-name>
--include-namespace-scoped-resources=persistentvolumeclaim,deployment,service,endpoint,pod,replicaset
--include-namespaces=default,kube-system
--exclude-cluster-scoped-resources=storageclass
```
#### some namespace-scoped resources + all cluster-scoped resources
##### scenario 1
The following commands back up all resources in namespaces default and kube-system, and all cluster-scoped resources.
Example of new parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-cluster-resources=true
```
##### scenario 2
This backs up Deployment, Service, Endpoint, Pod and ReplicaSet resources in all namespaces, and all cluster-scoped resources.
``` bash
velero backup create <backup-name>
--include-namespace-scoped-resources=deployment,service,endpoint,pod,replicaset
--include-cluster-scoped-resources=*
```
##### scenario 3
This backs up Deployment, Service, Endpoint, Pod and ReplicaSet resources in the default and kube-system namespaces, and all cluster-scoped resources.
``` bash
velero backup create <backup-name>
--include-namespaces=default,kube-system
--include-namespace-scoped-resources=deployment,service,endpoint,pod,replicaset
--include-cluster-scoped-resources=*
```
#### all namespace-scoped resources + no cluster-scoped resources
The following commands both back up all namespace-scoped resources and no cluster-scoped resources.
Example of new parameters:
``` bash
velero backup create <backup-name>
--exclude-cluster-scoped-resources=*
```
Example of old parameters:
``` bash
velero backup create <backup-name>
--include-cluster-resources=false
```
#### all namespace-scoped resources + some additional cluster-scoped resources
This command backs up all namespace-scoped resources and related cluster-scoped resources, plus all PersistentVolume resources.
``` bash
velero backup create <backup-name>
--include-namespaces=*
--include-cluster-scoped-resources=persistentvolume
```
#### all namespace-scoped resources + all cluster-scoped resources
The following commands have the same meaning: back up all namespace-scoped resources and all cluster-scoped resources.
``` bash
velero backup create <backup-name>
--include-cluster-scoped-resources=*
```
``` bash
velero backup create <backup-name>
--include-cluster-resources=true
```
#### describe command change
In the `velero backup describe` command output, the four new parameters should be shown too.
``` bash
velero backup describe <backup-name>
......
Namespaces:
Included: ns2
Excluded: <none>
Resources:
Included cluster-scoped: StorageClass,PersistentVolume
Excluded cluster-scoped: <none>
Included namespace-scoped: default
Excluded namespace-scoped: <none>
......
```
**Note:** the `velero restore` command doesn't support these four new parameters in Velero v1.11, but `velero schedule` supports them through the backup specification.
## Detailed Design
After adding `IncludedNamespaceScopedResources`, `ExcludedNamespaceScopedResources`, `IncludedClusterScopedResources` and `ExcludedClusterScopedResources`, the `BackupSpec` looks like:
``` go
type BackupSpec struct {
......
// IncludedResources is a slice of resource names to include
// in the backup. If empty, all resources are included.
// +optional
// +nullable
IncludedResources []string `json:"includedResources,omitempty"`
// ExcludedResources is a slice of resource names that are not
// included in the backup.
// +optional
// +nullable
ExcludedResources []string `json:"excludedResources,omitempty"`
// IncludeClusterResources specifies whether cluster-scoped resources
// should be included for consideration in the backup.
// +optional
// +nullable
IncludeClusterResources *bool `json:"includeClusterResources,omitempty"`
// IncludedClusterScopedResources is a slice of cluster-scoped
// resource type names to include in the backup.
// If set to "*", all cluster scope resource types are included.
// The default value is empty, which means only related cluster
// scope resources are included.
// +optional
// +nullable
IncludedClusterScopedResources []string `json:"includedClusterScopedResources,omitempty"`
// ExcludedClusterScopedResources is a slice of cluster-scoped
// resource type names to exclude from the backup.
// If set to "*", all cluster scope resource types are excluded.
// +optional
// +nullable
ExcludedClusterScopedResources []string `json:"excludedClusterScopedResources,omitempty"`
// IncludedNamespaceScopedResources is a slice of namespace-scoped
// resource type names to include in the backup.
// The default value is "*".
// +optional
// +nullable
IncludedNamespaceScopedResources []string `json:"includedNamespaceScopedResources,omitempty"`
// ExcludedNamespaceScopedResources is a slice of namespace-scoped
// resource type names to exclude from the backup.
// If set to "*", all namespace scope resource types are excluded.
// +optional
// +nullable
ExcludedNamespaceScopedResources []string `json:"excludedNamespaceScopedResources,omitempty"`
......
}
```
## Alternatives Considered
Proposal from Jibu Data [Issue 5120](https://github.com/vmware-tanzu/velero/issues/5120#issue-1304534563)
## Security Considerations
No security impact.
## Compatibility
The four new parameters cannot be mixed with existing resource filter parameters: `IncludedResources`, `ExcludedResources` and `IncludeClusterResources`.
If the new and old parameters both appear on the command line, or are both specified in the backup spec, the command and the backup validation should fail.
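As a sketch of this mutual-exclusion check (field names follow the `BackupSpec` above; the helper and its wiring into Velero's actual validation path are hypothetical):
``` go
package main

import (
	"errors"
	"fmt"
)

// Trimmed-down copy of the BackupSpec fields relevant to this check
// (see the Detailed Design section above).
type backupSpec struct {
	IncludedResources                []string
	ExcludedResources                []string
	IncludeClusterResources          *bool
	IncludedClusterScopedResources   []string
	ExcludedClusterScopedResources   []string
	IncludedNamespaceScopedResources []string
	ExcludedNamespaceScopedResources []string
}

// validateFilterParams rejects specs that mix the old and new filter sets.
func validateFilterParams(spec backupSpec) error {
	oldSet := len(spec.IncludedResources) > 0 ||
		len(spec.ExcludedResources) > 0 ||
		spec.IncludeClusterResources != nil
	newSet := len(spec.IncludedClusterScopedResources) > 0 ||
		len(spec.ExcludedClusterScopedResources) > 0 ||
		len(spec.IncludedNamespaceScopedResources) > 0 ||
		len(spec.ExcludedNamespaceScopedResources) > 0
	if oldSet && newSet {
		return errors.New("old and new resource filter parameters are mutually exclusive")
	}
	return nil
}

func main() {
	t := true
	bad := backupSpec{
		IncludeClusterResources:        &t,
		IncludedClusterScopedResources: []string{"*"},
	}
	fmt.Println(validateFilterParams(bad)) // prints the validation error
}
```
Keeping the check at the spec level means the same rule covers both CLI-created backups and backups created through schedule specs.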
## Implementation
This change should be included into Velero v1.11.
New parameters will coexist with `IncludedResources`, `ExcludedResources` and `IncludeClusterResources`.
The plan is to deprecate `IncludedResources`, `ExcludedResources` and `IncludeClusterResources` in future releases, though this remains open to the community's feedback.
## Open Issues
`LabelSelector/OrLabelSelectors` apply to namespace-scoped resources.
It may be reasonable to make them also work on cluster-scoped resources.
An issue has been created to track this topic: [resource label selector not work for cluster-scoped resources](https://github.com/vmware-tanzu/velero/issues/5787).
View File
@@ -304,8 +304,8 @@ Without these objects, the provider-level snapshots cannot be located in order t
[1]: https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/
[2]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/client/apis/volumesnapshot/v1/types.go#L42
[3]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/client/apis/volumesnapshot/v1/types.go#L262
[2]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L41
[3]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L161
[4]: https://github.com/heptio/velero/blob/main/pkg/volume/snapshot.go#L21
[5]: https://github.com/heptio/velero/blob/main/pkg/apis/velero/v1/pod_volume_backup.go#L88
[6]: https://github.com/heptio/velero-csi-plugin/
View File
@@ -175,7 +175,7 @@ If there are one or more, download the backup tarball from backup storage, untar
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementers to individually download the backup tarball themselves.
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.
View File
@@ -1,262 +0,0 @@
# Add support for `ExistingResourcePolicy` to restore API
## Abstract
Velero currently does not support any restore policy on Kubernetes resources that are already present in-cluster. Velero skips over the restore of the resource if it already exists in the namespace/cluster irrespective of whether the resource present in the restore is the same or different from the one present on the cluster. It is desired that Velero gives the option to the user to decide whether or not the resource in backup should overwrite the one present in the cluster.
## Background
As of today, Velero will skip over the restoration of resources that already exist in the cluster. The current workflow followed by Velero is (using a backed-up `service` as an example):
- Velero tries to attempt restore of the `service`
- Fetches the `service` from the cluster
- If the `service` exists then:
- Checks whether the `service` instance in the cluster is equal to the `service` instance present in backup
- If not equal then skips the restore of the `service` and adds a restore warning (except for [ServiceAccount objects](https://github.com/vmware-tanzu/velero/blob/574baeb3c920f97b47985ec3957debdc70bcd5f8/pkg/restore/restore.go#L1246))
- If equal then skips the restore of the `service` and mentions that the restore of resource `service` is skipped in logs
It is desired to add the ability to specify whether the in-cluster instance of a resource such as `service` should be overwritten with the one present in the backup during the restore process.
Related issue: https://github.com/vmware-tanzu/velero/issues/4066
## Goals
- Add support for `ExistingResourcePolicy` to restore API for Kubernetes resources.
## Non Goals
- Change existing restore workflow for `ServiceAccount` objects
- Add support for `ExistingResourcePolicy` as `recreate` for Kubernetes resources. (Future scope feature)
## Unrelated Proposals (Completely different functionalities than the one proposed in the design)
- Add support for `ExistingResourcePolicy` to restore API for Non-Kubernetes resources.
- Add support for `ExistingResourcePolicy` to restore API for `PersistentVolume` data.
### Use-cases/Scenarios
### A. Production Cluster - Backup Cluster:
Let's say you have a Backup Cluster that is identical to the Production Cluster. After some operations/usage/time the Production Cluster changes: there might be new deployments, and some secrets might have been updated. This means the Backup Cluster is no longer identical to the Production Cluster. In order to keep the Backup Cluster up to date/identical to the Production Cluster with respect to Kubernetes resources (except PV data), we would like to use Velero to schedule new backups, which would in turn help us update the Backup Cluster via Velero restore.
Reference: https://github.com/vmware-tanzu/velero/issues/4066#issuecomment-954320686
### B. Help identify resource delta:
Here delta resources mean the resources restored by a previous backup, but they are no longer in the latest backup. Let's follow a sequence of steps to understand this scenario:
- Consider two clusters: Cluster A has 3 resources - P1, P2 and P3.
- Create a Backup1 from Cluster A which has P1, P2 and P3.
- Perform restore on a new Cluster B using Backup1.
- Now, let's say in Cluster A resource P1 gets deleted and resource P2 gets updated.
- Create a new Backup2 with the new state of Cluster A, keep in mind Backup1 has P1, P2 and P3 while Backup2 has P2' and P3.
- So the delta here is (|Cluster B - Backup2|): delete P1 and update P2.
- During Restore time we would want the Restore to help us identify this resource delta.
Reference: https://github.com/vmware-tanzu/velero/pull/4613#issuecomment-1027260446
## High-Level Design
### Approach 1: Add a new spec field `existingResourcePolicy` to the Restore API
In this approach we do *not* change existing Velero behavior: if the resource to restore is equal to the one in the cluster, nothing is done, following current Velero behavior. For resources that already exist in the cluster but are not equal to the resource in the backup (other than Service Accounts), we add a new optional spec field `existingResourcePolicy` which can have the following values:
1. `none`: This is the existing behavior; if Velero encounters a resource that already exists in the cluster, we simply skip restoration.
2. `update`: This option would provide the following behavior.
   - Unchanged resources: Velero would update the backup/restore labels on the unchanged resources; if the label patch fails, Velero adds a restore error.
   - Changed resources: Velero will first try to patch the changed resource. Now if the patch:
     - succeeds: Then the in-cluster resource gets updated with the labels as well as the resource diff
     - fails: Velero adds a restore warning and tries to just update the backup/restore labels on the resource; if the label patch also fails then we add a restore error.
3. `recreate`: If resource already exists, then Velero will delete it and recreate the resource.
*Note:* The `recreate` option is a non-goal for this enhancement proposal, but it is considered as a future scope.
Another thing to highlight is that Velero will not delete any resources under any of the policy options proposed in this design, but Velero will patch resources under the `update` policy option.
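A rough sketch of the `update` flow described above; the `patcher` interface and all names here are stand-ins, not Velero's actual client code:
``` go
package main

import "fmt"

// patcher stands in for whatever client Velero would use to apply
// patches in-cluster; it is not the real implementation.
type patcher interface {
	PatchResource(name string, diff []byte) error
	PatchLabels(name string, labels map[string]string) error
}

// applyUpdatePolicy follows the `update` behaviour described above:
// try the full patch first; on failure record a restore warning and
// fall back to patching only the backup/restore labels; if that also
// fails, record a restore error.
func applyUpdatePolicy(p patcher, name string, diff []byte, labels map[string]string) (warnings, errs []string) {
	if err := p.PatchResource(name, diff); err != nil {
		warnings = append(warnings, fmt.Sprintf("could not patch %s: %v", name, err))
		if err := p.PatchLabels(name, labels); err != nil {
			errs = append(errs, fmt.Sprintf("could not patch labels on %s: %v", name, err))
		}
	}
	return warnings, errs
}

// failingPatcher simulates a resource whose full patch is rejected.
type failingPatcher struct{}

func (failingPatcher) PatchResource(string, []byte) error          { return fmt.Errorf("conflict") }
func (failingPatcher) PatchLabels(string, map[string]string) error { return nil }

func main() {
	warnings, errs := applyUpdatePolicy(failingPatcher{}, "svc/my-app", nil,
		map[string]string{"velero.io/restore-name": "r1"})
	fmt.Println(warnings, errs) // one warning, no errors
}
```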
Example:
A. The following Restore will execute the `existingResourcePolicy` restore type `none` for the `services` and `deployments` present in the `velero-protection` namespace.
```
Kind: Restore
includeNamespaces: velero-protection
includeResources:
- services
- deployments
existingResourcePolicy: none
```
B. The following Restore will execute the `existingResourcePolicy` restore type `update` for the `secrets` and `daemonsets` present in the `gdpr-application` namespace.
```
Kind: Restore
includeNamespaces: gdpr-application
includeResources:
- secrets
- daemonsets
existingResourcePolicy: update
```
### Approach 2: Add a new spec field `existingResourcePolicyConfig` to the Restore API
In this approach we give the user the ability to specify which resources are to be included for a particular kind of force-update behaviour, essentially a more granular approach wherein the user is able to specify a resource:behaviour mapping. It would look like:
`existingResourcePolicyConfig`:
- `patch:`
- `includedResources:` [ ]string
- `recreate:`
- `includedResources:` [ ]string
*Note:*
- There is no `none` behaviour in this approach as that would conform to the current/default Velero restore behaviour.
- The `recreate` option is a non-goal for this enhancement proposal, but it is considered as a future scope.
Example:
A. The following Restore will execute the restore type `patch` and apply the `existingResourcePolicyConfig` for `secrets` and `daemonsets` present in the `inventory-app` namespace.
```
Kind: Restore
includeNamespaces: inventory-app
existingResourcePolicyConfig:
patch:
includedResources:
- secrets
- daemonsets
```
### Approach 3: Combination of Approach 1 and Approach 2
Now, this approach is a combination of the aforementioned approaches. Here we propose the addition of two spec fields to the Restore API: `existingResourceDefaultPolicy` and `existingResourcePolicyOverrides`. As the names suggest, the idea is that `existingResourceDefaultPolicy` describes the default Velero behaviour for this restore, while `existingResourcePolicyOverrides` overrides the default policy explicitly for some resources.
Example:
A. The following Restore will execute the restore type `patch` as the `existingResourceDefaultPolicy` but will override the default policy for `secrets` using the `existingResourcePolicyOverrides` spec as `none`.
```
Kind: Restore
includeNamespaces: inventory-app
existingResourceDefaultPolicy: patch
existingResourcePolicyOverrides:
none:
includedResources:
- secrets
```
## Detailed Design
### Approach 1: Add a new spec field `existingResourcePolicy` to the Restore API
The `existingResourcePolicy` spec field will be of type `PolicyType`.
Restore API:
```
type RestoreSpec struct {
.
.
.
// ExistingResourcePolicy specifies the restore behaviour for the Kubernetes resource to be restored
// +optional
ExistingResourcePolicy PolicyType
}
```
PolicyType:
```
type PolicyType string
const PolicyTypeNone PolicyType = "none"
const PolicyTypePatch PolicyType = "update"
```
### Approach 2: Add a new spec field `existingResourcePolicyConfig` to the Restore API
The `existingResourcePolicyConfig` will be a spec of type `PolicyConfiguration` which gets added to the Restore API.
Restore API:
```
type RestoreSpec struct {
.
.
.
// ExistingResourcePolicyConfig specifies the restore behaviour for a particular/list of Kubernetes resource(s) to be restored
// +optional
ExistingResourcePolicyConfig []PolicyConfiguration
}
```
PolicyConfiguration:
```
type PolicyConfiguration struct {
PolicyTypeMapping map[PolicyType]ResourceList
}
```
PolicyType:
```
type PolicyType string
const PolicyTypePatch PolicyType = "patch"
const PolicyTypeRecreate PolicyType = "recreate"
```
ResourceList:
```
type ResourceList struct {
IncludedResources []string
}
```
### Approach 3: Combination of Approach 1 and Approach 2
Restore API:
```
type RestoreSpec struct {
.
.
.
// ExistingResourceDefaultPolicy specifies the default restore behaviour for the Kubernetes resource to be restored
// +optional
existingResourceDefaultPolicy PolicyType
// ExistingResourcePolicyOverrides specifies the restore behaviour for a particular/list of Kubernetes resource(s) to be restored
// +optional
existingResourcePolicyOverrides []PolicyConfiguration
}
```
PolicyType:
```
type PolicyType string
const PolicyTypeNone PolicyType = "none"
const PolicyTypePatch PolicyType = "patch"
const PolicyTypeRecreate PolicyType = "recreate"
```
PolicyConfiguration:
```
type PolicyConfiguration struct {
PolicyTypeMapping map[PolicyType]ResourceList
}
```
ResourceList:
```
type ResourceList struct {
IncludedResources []string
}
```
The restore workflow changes will be done [here](https://github.com/vmware-tanzu/velero/blob/b40bbda2d62af2f35d1406b9af4d387d4b396839/pkg/restore/restore.go#L1245)
### CLI changes for Approach 1
We would introduce a new CLI flag called `existing-resource-policy` of string type. This flag would be used to accept the
policy from the user. The velero restore command would look somewhat like this:
```
velero create restore <restore_name> --existing-resource-policy=update
```
Help message `Restore Policy to be used during the restore workflow, can be - none, update`
The CLI changes will go at `pkg/cmd/cli/restore/create.go`
We would also add a validation which checks for invalid policy values provided to this flag.
Restore describer will also be updated to reflect the policy `pkg/cmd/util/output/restore_describer.go`
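A sketch of the value validation mentioned above (the helper name and wiring are hypothetical; the real check would live alongside the flag in `pkg/cmd/cli/restore/create.go`):
``` go
package main

import (
	"fmt"
	"os"
)

// validPolicies mirrors the values accepted by --existing-resource-policy.
var validPolicies = map[string]bool{
	"":       true, // flag omitted: keep default Velero behaviour
	"none":   true,
	"update": true,
}

func validateExistingResourcePolicy(v string) error {
	if !validPolicies[v] {
		return fmt.Errorf("invalid --existing-resource-policy %q: must be one of none, update", v)
	}
	return nil
}

func main() {
	if err := validateExistingResourcePolicy("recreate"); err != nil {
		fmt.Fprintln(os.Stderr, err) // "recreate" is future scope, so it is rejected
	}
}
```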
### Implementation Decision
We have decided to go ahead with the implementation of Approach 1 as:
- It is easier to implement
- It is also easier to scale and leaves room for improvement and the door open to expanding to approach 3
- It also provides an option to preserve the existing velero restore workflow
View File
@@ -1,579 +0,0 @@
# Plugin Progress Monitoring
This is intended as a replacement for the previously-approved Upload Progress Monitoring design
([Upload Progress Monitoring](upload-progress.md)) in order to expand the supported use cases beyond
snapshot uploads to include what was previously called Async Backup/Restore Item Actions. This
updated design should handle the combined set of use cases for those previously separate designs.
Volume snapshotter plugins are used by Velero to take snapshots of persistent volume contents.
Depending on the underlying storage system, those snapshots may be available to use immediately,
they may be uploaded to stable storage internally by the plugin or they may need to be uploaded after
the snapshot has been taken. We would like for Velero to continue on to the next part of the backup as quickly
as possible but we would also like the backup to not be marked as complete until it is a usable backup. We'd also
eventually like to bring the control of upload under the control of Velero and allow the user to make decisions
about the ultimate destination of backup data independent of the storage system they're using.
We would also like any internal or third party Backup or Restore Item Action to have the option of
making use of this same ability to run some external process without blocking the current backup or
restore operation. Beyond Volume Snapshotters, this is also needed for data mover operations on both
backup and restore, and potentially useful for other third party operations -- for example
in-cluster registry image backup or restore could make use of this feature in a third party plugin.
### Glossary
- <b>BIA</b>: BackupItemAction
- <b>RIA</b>: RestoreItemAction
## Examples
- AWS - AWS snapshots return quickly, but are then uploaded in the background and cannot be used until EBS moves
the data into S3 internally.
- vSphere - The vSphere plugin takes a local snapshot and then the vSphere plugin uploads the data to S3. The local
snapshot is usable before the upload completes.
- Restic - Does not go through the volume snapshot path. Restic backups will block Velero progress
until completed. However, with the more generalized approach in the revised design, restic/kopia
backup and restore *could* make use of this framework if their actions are refactored as
Backup/RestoreItemActions.
- Data Movers
- Data movers are asynchronous processes, executed inside backup/restore item actions, that apply to specific Kubernetes resources. A common use case for a data mover is backing up/restoring PVCs whose data we want to move to some form of backup storage without using Velero's kopia/restic implementations.
- Workflow
- User takes velero backup of PVC A
- BIA plugin applies to PVCs with compatible storage driver
- BIA plugin triggers data mover
- Most common use case would be for the plugin action to create a new CR which would
trigger an external controller action (see the sketch after this list)
- Another possible use case would be for the plugin to run its own async action in a
goroutine, although this would be less resilient to plugin container restarts.
- BIA plugin returns
- Velero backup process continues
- Main velero backup process monitors running BIA threads via gRPC to determine if process is done and healthy
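As a rough illustration of the CR-based pattern above, the sketch below creates a hypothetical `VolumeSnapshotBackup` CR with the dynamic client and returns its name as the operationID. The group/version/kind and spec fields are invented for the example, and a real plugin would implement the BackupItemAction interface rather than a bare function:
``` go
package datamover

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// Hypothetical CR reconciled by an external data mover controller.
var vsbGVR = schema.GroupVersionResource{
	Group:    "datamover.example.io",
	Version:  "v1alpha1",
	Resource: "volumesnapshotbackups",
}

// startDataMove creates the CR that triggers the external data mover and
// returns its namespace/name as the operationID that Velero will poll on.
func startDataMove(ctx context.Context, c dynamic.Interface, ns, pvcName string) (string, error) {
	vsb := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "datamover.example.io/v1alpha1",
		"kind":       "VolumeSnapshotBackup",
		"metadata":   map[string]interface{}{"generateName": "vsb-"},
		"spec":       map[string]interface{}{"pvcName": pvcName},
	}}
	created, err := c.Resource(vsbGVR).Namespace(ns).Create(ctx, vsb, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s/%s", ns, created.GetName()), nil
}
```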
## Primary changes from the original Upload Progress Monitoring design
The most fundamental change here is that rather than proposing a new special-purpose
SnapshotItemAction, the existing BackupItemAction plugin will be modified to accommodate an optional
snapshot (or other item operation) ID return. The primary reasons for this change are as follows:
1. The intended scope has moved beyond snapshot processing, so it makes sense to support
asynchronous operations in other backup or restore item actions.
2. We expect to have plugin API versioning implemented in Velero 1.10, making it feasible to
implement changes in the existing plugin APIs now.
3. We will need this feature on both backup and restore, meaning that if we took the "new plugin
type" approach, we'd need two new plugin types.
4. Other than the snapshot/operation ID return, the rest of the plugin processing is identical to
Backup/RestoreItemActions. With separate plugin types, we'd have to repeat all of that logic
(including dealing with additional items, etc.) twice.
The other major change is that we will be applying this to both backups and restores, although the
Volume Snapshotter use case only needs this on backup. This means that everything we're doing around
backup phase and workflow will also need to be done for restore.
Then there are various minor changes around terminology to make things more generic. Instead of
"snapshotID", we'll have "operationID" (which for volume snapshotters will be a snapshot
ID).
## Goals
- Enable monitoring of backup/restore item action operations that continue after snapshotting and other operations have completed
- Keep non-usable backups and restores (upload/persistence has not finished, etc.) from appearing as completed
- Make use of plugin API versioning functionality to manage changes to Backup/RestoreItemAction interfaces
- Enable vendors to plug their own data movers into velero using BIA/RIA plugins
## Non-goals
- Today, Velero is unable to recover an in-progress backup when the Velero server crashes (pod is deleted). This has an impact on running asynchronous processes, but it's not something we intend to solve in this design.
## Models
### Internal configuration and management
In this model, movement of the snapshot to stable storage is under the control of the snapshot
plugin. Decisions about where and when the snapshot gets moved to stable storage are not
directly controlled by Velero. This is the model for the current VolumeSnapshot plugins.
### Velero controlled management
In this model, the snapshot is moved to external storage under the control of Velero. This
enables Velero to move data between storage systems. This also allows backup partners to use
Velero to snapshot data and then move the data into their backup repository.
## Backup and Restore phases
Velero currently has backup/restore phases "InProgress" and "Completed". A backup moves to the
Completed phase when all of the volume snapshots have completed and the Kubernetes metadata has been
written into the object store. However, the actual data movement may be happening in the background
after the backup has been marked "Completed". The backup is not actually a stable backup until the
data has been persisted properly. In some cases (e.g. AWS) the backup cannot be restored from until
the snapshots have been persisted.
Once the snapshots have been taken, however, it is possible for additional backups or restores (as
long as they don't use not-yet-completed backups) to be made without interference. Waiting until
all data has been moved before starting the next backup will slow the progress of the system without
adding any actual benefit to the user.
New backup/restore phases, "WaitingForPluginOperations" and
"WaitingForPluginOperationsPartiallyFailed" will be introduced. When a backup or restore has
entered one of these phases, Velero is free to start another backup/restore. The backup/restore
will remain in the "WaitingForPluginOperations" phase until all BIA/RIA operations have completed
(for example, for a volume snapshotter, until all data has been successfully moved to persistent
storage). The backup/restore will not fail once it reaches this phase, although an error return
from a plugin could cause a backup or restore to move to "PartiallyFailed". If the backup is
deleted (cancelled), the plugins will attempt to delete the snapshots and stop the data movement -
this may not be possible with all storage systems.
In addition, for backups (but not restores), there will also be two additional phases, "Finalizing"
and "FinalizingPartiallyFailed", which will handle any steps required after plugin operations have
all completed. Initially, this will just include adding any required resources to the backup that
might have changed during asynchronous operation execution, although eventually other cleanup
actions could be added to this phase.
### State progression
![image](AsyncActionFSM.png)
### New
When a backup/restore request is initially created, it is in the "New" phase.
The next state is either "InProgress" or "FailedValidation".
### FailedValidation
If the backup/restore request is incorrectly formed, it goes to the "FailedValidation" phase and terminates.
### InProgress
When work on the backup/restore begins, it moves to the "InProgress" phase. It remains in the
"InProgress" phase until all pre/post execution hooks have been executed, all snapshots have been
taken and the Kubernetes metadata and backup/restore info is safely written to the object store
plugin.
In the current implementation, Restic backups will move data during the "InProgress" phase. In the
future, it may be possible to combine a snapshot with a Restic (or equivalent) backup, which would
allow for data movement to be handled in the "WaitingForPluginOperations" phase.
The next phase would be "WaitingForPluginOperations" for backups or restores which have unfinished
asynchronous plugin operations and no errors so far, "WaitingForPluginOperationsPartiallyFailed" for
backups or restores which have unfinished asynchronous plugin operations and at least one error,
"Completed" for restores with no unfinished asynchronous plugin operations and no errors,
"PartiallyFailed" for restores with no unfinished asynchronous plugin operations and at least one
error, "Finalizing" for backups with no unfinished asynchronous plugin operations and no errors,
"FinalizingPartiallyFailed" for backups with no unfinished asynchronous plugin operations and at
least one error. Backups/restores which would have a final phase of
"Completed" or "PartiallyFailed" may move to the "WaitingForPluginOperations" or
"WaitingForPluginOperationsPartiallyFailed" state. A backup/restore which will be marked "Failed"
will go directly to the "Failed" phase. Uploads may continue in the background for snapshots that
were taken by a "Failed" backup/restore, but progress will not be monitored or updated. If there
are any operations in progress when a backup is moved to the "Failed" phase (although with the
current workflow, that shouldn't happen), Cancel() should be called on these operations. When a
"Failed" backup is deleted, all snapshots will be deleted and at that point any uploads still in
progress should be aborted.
### WaitingForPluginOperations (new)
The "WaitingForPluginOperations" phase signifies that the main part of the backup/restore, including
snapshotting has completed successfully and uploading and any other asynchronous BIA/RIA plugin
operations are continuing. In the event of an error during this phase, the phase will change to
WaitingForPluginOperationsPartiallyFailed. On success, the phase changes to
"Finalizing" for backups and "Completed" for restores. Backups cannot be
restored from when they are in the WaitingForPluginOperations state.
### WaitingForPluginOperationsPartiallyFailed (new)
The "WaitingForPluginOperationsPartiallyFailed" phase signifies that the main part of the
backup/restore, including snapshotting has completed, but there were partial failures either during
the main part or during any async operations, including snapshot uploads. Backups cannot be
restored from when they are in the WaitingForPluginOperationsPartiallyFailed state.
### Finalizing (new)
The "Finalizing" phase signifies that asynchronous backup operations have all completed successfully
and Velero is currently backing up any resources indicated by asynchronous plugins as items to back
up after operations complete. Once this is done, the phase changes to Completed. Backups cannot be
restored from when they are in the Finalizing state.
### FinalizingPartiallyFailed (new)
The "FinalizingPartiallyFailed" phase signifies that, for a backup which had errors during initial
processing or asynchronous plugin operation, asynchronous backup operations have all completed and
Velero is currently backing up any resources indicated by asynchronous plugins as items to back up
after operations complete. Once this is done, the phase changes to PartiallyFailed. Backups cannot
be restored from when they are in the FinalizingPartiallyFailed state.
### Failed
When a backup/restore has had fatal errors it is marked as "Failed". Backups in this state cannot be
restored from.
### Completed
The "Completed" phase signifies that the backup/restore has completed, all data has been transferred
to stable storage (or restored to the cluster) and any backup in this state is ready to be used in a
restore. When the Completed phase has been reached for a backup it is safe to remove any of the
items that were backed up.
### PartiallyFailed
The "PartiallyFailed" phase signifies that the backup/restore has completed and at least part of the
backup/restore is usable. Restoration from a PartiallyFailed backup will not result in a complete
restoration but pieces may be available.
## Workflow
When a Backup or Restore Action is executed, any BackupItemAction, RestoreItemAction, or
VolumeSnapshot plugins will return operation IDs (snapshot IDs or other plugin-specific
identifiers). The plugin should be able to provide status on the progress for the snapshot and
handle cancellation of the operation/upload if the snapshot is deleted. If the plugin is restarted,
the operation ID should remain valid.
When all snapshots have been taken and Kubernetes resources have been persisted to the ObjectStorePlugin
the backup will either have fatal errors or will be at least partially usable.
If the backup/restore has fatal errors it will move to the "Failed" state and finish. If a
backup/restore fails, the upload or other operation will not be cancelled but it will not be
monitored either. For backups in any phase, all snapshots will be deleted when the backup is
deleted. Plugins will cancel any data movement or other operations and remove snapshots and other
associated resources when the VolumeSnapshotter DeleteSnapshot method or DeleteItemAction Execute
method is called.
Velero will poll the plugins for status on the operations when the backup/restore exits the
"InProgress" phase and has no fatal errors.
If any operations are not complete, the backup/restore will move to either WaitingForPluginOperations
or WaitingForPluginOperationsPartiallyFailed or Failed.
Post-snapshot and other operations may take a long time and Velero and its plugins may be restarted
during this time. Once a backup/restore has moved into the WaitingForPluginOperations or
WaitingForPluginOperationsPartiallyFailed phase, another backup/restore may be started.
While in the WaitingForPluginOperations or WaitingForPluginOperationsPartiallyFailed phase, the
snapshots and item actions will be periodically polled. When all of the snapshots and item actions
have reported success, restores will move directly to the Completed or PartiallyFailed phase, and
backups will move to the Finalizing or FinalizingPartiallyFailed phase, depending on whether the
backup/restore was in the WaitingForPluginOperations or WaitingForPluginOperationsPartiallyFailed
phase.
While in the Finalizing or FinalizingPartiallyFailed phase, Velero will update the backup with any
resources indicated by plugins that they must be added to the backup after operations are completed,
and then the backup will move to the Completed or PartiallyFailed phase, depending on whether there
are any backup errors.
The Backup resources will be written to object storage at the time the backup leaves the InProgress
phase, but it will not be synced to other clusters (or usable for restores in the current cluster)
until the backup has entered a final phase: Completed, Failed or PartiallyFailed. During the
Finalizing phases, the backup resources will be updated with any required resources related to
asynchronous plugins.
## Reconciliation of InProgress backups
InProgress backups will not have a `velero-backup.json` present in the object store. During
reconciliation, backups which do not have a `velero-backup.json` object in the object store will be
ignored.
## Plugin API changes
### OperationProgress struct
type OperationProgress struct {
Completed bool // True when the operation has completed, either successfully or with a failure
Err string // Set when the operation has failed
NCompleted, NTotal int64 // Quantity completed so far and the total quantity associated with the operation in operationUnits
// For data mover and volume snapshotter use cases, this would be in bytes
// On successful completion, completed and total should be the same.
OperationUnits string // Units represented by completed and total -- for data mover and item
// snapshotters, this will usually be bytes.
Description string // Optional description of operation progress
Started, Updated time.Time // When the upload was started and when the last update was seen. Not all
// systems retain when the upload was begun, return Time 0 (time.Unix(0, 0))
// if unknown.
}
### VolumeSnapshotter changes
Two new methods will be added to the VolumeSnapshotter interface:
Progress(snapshotID string) (OperationProgress, error)
Cancel(snapshotID string) (error)
Progress will report the current status of a snapshot upload. This should be callable at
any time after the snapshot has been taken. In the event a plugin is restarted, if the operationID
(snapshot ID) continues to be valid it should be possible to retrieve the progress.
`error` is set if there is an issue retrieving progress. If the snapshot has encountered an
error during the upload, the error should be returned in OperationProgress and error should be nil.
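To illustrate how Velero might consume these methods, here is a minimal polling sketch. The `volumeSnapshotter` interface below is a local stand-in restricted to the two new methods, and the loop, timings and cancel-on-timeout policy are illustrative assumptions rather than the actual controller logic:
``` go
package main

import (
	"fmt"
	"time"
)

// Local stand-ins mirroring the design above; not the real plugin types.
type OperationProgress struct {
	Completed          bool
	Err                string
	NCompleted, NTotal int64
}

type volumeSnapshotter interface {
	Progress(snapshotID string) (OperationProgress, error)
	Cancel(snapshotID string) error
}

// waitForUpload polls Progress until the operation completes, fails, or
// the deadline passes (in which case Cancel is attempted).
func waitForUpload(vs volumeSnapshotter, snapshotID string, deadline time.Time, every time.Duration) error {
	for {
		p, err := vs.Progress(snapshotID)
		if err != nil {
			return fmt.Errorf("retrieving progress: %w", err) // infrastructure error
		}
		if p.Err != "" {
			return fmt.Errorf("upload failed: %s", p.Err) // error reported by the operation itself
		}
		if p.Completed {
			return nil
		}
		if time.Now().After(deadline) {
			_ = vs.Cancel(snapshotID)
			return fmt.Errorf("timed out at %d/%d units", p.NCompleted, p.NTotal)
		}
		time.Sleep(every)
	}
}

// fakeSnapshotter completes after three polls, for demonstration.
type fakeSnapshotter struct{ polls int64 }

func (f *fakeSnapshotter) Progress(string) (OperationProgress, error) {
	f.polls++
	return OperationProgress{Completed: f.polls >= 3, NCompleted: f.polls, NTotal: 3}, nil
}

func (f *fakeSnapshotter) Cancel(string) error { return nil }

func main() {
	err := waitForUpload(&fakeSnapshotter{}, "snap-1", time.Now().Add(time.Second), 10*time.Millisecond)
	fmt.Println(err) // <nil>
}
```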
### BackupItemAction and RestoreItemAction plugin changes
Currently CSI snapshots and the Velero Plugin for vSphere are implemented as BackupItemAction
plugins. While the majority of BackupItemAction plugins do not take snapshots or upload data, this
functionality is useful for any longstanding plugin operation managed by an external
process/controller so we will modify BackupItemAction and RestoreItemAction to optionally return an
operationID in addition to the modified item.
Velero can attempt to cancel an operation by calling the Cancel API call on the BIA/RIA. The plugin
can then take any appropriate action as needed. Cancel will be called for unfinished operations on
backup deletion, and possibly on reaching a timeout. Cancel is not intended to be used to delete/remove
the results of completed actions and will have no effect on a completed action. Cancel has no return
value apart from the standard Error return, but this should only be used for unexpected
failures. Under normal operations, Cancel will simply return a nil error (and nothing else), whether
or not the plugin is able to cancel the operation.
_AsyncOperationsNotSupportedError_ should only be returned (by Progress) if the
Backup/RestoreItemAction plugin should not be handling the item. If the Backup/RestoreItemAction
plugin should handle the item but, for example, the item/snapshot ID cannot be found to report
progress, Progress will return an InvalidOperationIDError error rather than a populated
OperationProgress struct. If the item action does not start an asynchronous operation, then
operationID will be empty.
Three new methods will be added to the BackupItemAction interface, and the Execute() return signature
will be modified:
// Name returns the name of this BIA. Plugins which implement this interface must define Name,
// but its content is unimportant, as it won't actually be called via RPC. Velero's plugin infrastructure
// will implement this directly rather than delegating to the RPC plugin in order to return the name
// that the plugin was registered under. The plugins must implement the method to complete the interface.
Name() string
// Execute allows the BackupItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying
// additional related items that should be backed up now, an optional operationID for actions which
// initiate asynchronous actions, and a second slice of ResourceIdentifiers specifying related items
// which should be backed up after all asynchronous operations have completed. This last field will be
// ignored if operationID is empty, and should not be filled in unless the resource must be updated in the
// backup after async operations complete (i.e. some of the item's Kubernetes metadata will be updated
// during the async operation, which will be required during restore)
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
// Progress
Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error)
// Cancel
Cancel(operationID string, backup *api.Backup) error
Three new methods will be added to the RestoreItemAction interface, and the
RestoreItemActionExecuteOutput struct will be modified:
// Name returns the name of this RIA. Plugins which implement this interface must define Name,
// but its content is unimportant, as it won't actually be called via RPC. Velero's plugin infrastructure
// will implement this directly rather than delegating to the RPC plugin in order to return the name
// that the plugin was registered under. The plugins must implement the method to complete the interface.
Name() string
// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
// including mutating the item itself prior to restore. The item (unmodified or modified)
// should be returned, an optional OperationID, along with an optional slice of ResourceIdentifiers
// specifying additional related items that should be restored, a warning (which will be
// logged but will not prevent the item from being restored) or error (which will be logged
// and will prevent the item from being restored) if applicable. If OperationID is specified
// then velero will wait for this operation to complete before the restore is marked Completed.
Execute(input *RestoreItemActionExecuteInput) (*RestoreItemActionExecuteOutput, error)
// Progress
Progress(operationID string, restore *api.Restore) (velero.OperationProgress, error)
// Cancel
Cancel(operationID string, restore *api.Restore) error
// RestoreItemActionExecuteOutput contains the output variables for the ItemAction's Execution function.
type RestoreItemActionExecuteOutput struct {
// UpdatedItem is the item being restored mutated by ItemAction.
UpdatedItem runtime.Unstructured
// AdditionalItems is a list of additional related items that should
// be restored.
AdditionalItems []ResourceIdentifier
// SkipRestore tells velero to stop executing further actions
// on this item, and skip the restore step. When this field's
// value is true, AdditionalItems will be ignored.
SkipRestore bool
// OperationID is an identifier which indicates an ongoing asynchronous action which Velero will
// continue to monitor after restoring this item. If left blank, then there is no ongoing operation
OperationID string
}
## Changes in Velero backup format
No changes to the existing format are introduced by this change. As part of the backup workflow changes, a
`<backup-name>-itemoperations.json.gz` file will be added that contains the items and operation IDs
(snapshotIDs) returned by VolumeSnapshotter and BackupItemAction plugins. Also, the creation of the
`velero-backup.json` object will not occur until the backup moves to one of the terminal phases
(_Completed_, _PartiallyFailed_, or _Failed_). Reconciliation should ignore backups that do not
have a `velero-backup.json` object.
The Backup/RestoreItemAction plugin identifier as well as the ItemID and OperationID will be stored
in the `<backup-name>-itemoperations.json.gz`. When checking for progress, this info will be used
to select the appropriate Backup/RestoreItemAction plugin to query for progress. Here's an example
of what a record for a datamover plugin might look like:
```
{
"spec": {
"backupName": "backup-1",
"backupUID": "f8c72709-0f73-46e1-a071-116bc4a76b07",
"backupItemAction": "velero.io/volumesnapshotcontent-backup",
"resourceIdentifier": {
"Group": "snapshot.storage.k8s.io",
"Resource": "VolumeSnapshotContent",
"Namespace": "my-app",
"Name": "my-volume-vsc"
},
"operationID": "<DataMoverBackup objectReference>",
"itemsToUpdate": [
{
"Group": "velero.io",
"Resource": "VolumeSnapshotBackup",
"Namespace": "my-app",
"Name": "vsb-1"
}
]
},
"status": {
"operationPhase": "Completed",
"error": "",
"nCompleted": 12345,
"nTotal": 12345,
"operationUnits": "byte",
"description": "",
"Created": "2022-12-14T12:00:00Z",
"Started": "2022-12-14T12:01:00Z",
"Updated": "2022-12-14T12:11:02Z"
}
}
```
The cluster that is creating the backup will have the Backup resource present and will be able to
manage the backup before the backup completes.
If the Backup resource is removed (e.g. Velero is uninstalled) before a backup completes and writes
its `velero-backup.json` object, the other objects in the object store for the backup will be
effectively orphaned. This can currently happen but the current window is much smaller.
### `<backup-name>-itemoperations.json.gz`
The itemoperations file is similar to the existing `<backup-name>-itemsnapshots.json.gz`. Each snapshot taken via
BackupItemAction will have a JSON record in the file. Exact format TBD.
This file will be uploaded to object storage at the end of processing all of the items in the
backup, before the phase moves away from `InProgress`.
## Changes to Velero restores
A `<restore-name>-itemoperations.json.gz` file will be added that contains the items and operation
IDs returned by RestoreItemActions. The format will be the same as the
`<backup-name>-itemoperations.json.gz` generated for backups.
This file will be uploaded to object storage at the end of processing all of the items in the
restore, before the phase moves away from `InProgress`.
## CSI snapshots
For systems such as EBS, a snapshot is not available until the storage system has transferred the
snapshot to stable storage. CSI snapshots expose the _readyToUse_ state that, in the case of EBS,
indicates that the snapshot has been transferred to durable storage and is ready to be used. The
CSI BackupItemAction.Progress method will poll that field and when completed, return completion.
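For illustration, a Progress implementation could poll `readyToUse` along these lines; this is a sketch using the dynamic client, not the actual CSI plugin code:
``` go
package csiprogress

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

var vscGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshotcontents",
}

// snapshotContentReady reads status.readyToUse from a (cluster-scoped)
// VolumeSnapshotContent; a Progress implementation could map the result
// to OperationProgress.Completed.
func snapshotContentReady(ctx context.Context, c dynamic.Interface, name string) (bool, error) {
	vsc, err := c.Resource(vscGVR).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	ready, found, err := unstructured.NestedBool(vsc.Object, "status", "readyToUse")
	if err != nil || !found {
		return false, err // a missing status simply means "not ready yet"
	}
	return ready, nil
}
```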
## vSphere plugin
The vSphere Plugin for Velero uploads snapshots to S3 in the background. As this is also a
BackupItemAction plugin, it will check the status of the Upload records for the snapshot and return
progress.
## Backup workflow changes
The backup workflow remains the same until we get to the point where the `velero-backup.json` object
is written. At this point, Velero will
run across all of the VolumeSnapshotter/BackupItemAction operations and call the _Progress_ method
on each of them.
If all backup item operations have finished (either successfully or failed), the backup will move to
one of the finalize phases.
If any of the snapshots or backup items are still being processed, the phase of the backup will be
set to the appropriate phase (_WaitingForPluginOperations_ or
_WaitingForPluginOperationsPartiallyFailed_), and the async backup operations controller will
reconcile periodically and call Progress on any unfinished operations. In the event of any of the
progress checks return an error, the phase will move to _WaitingForPluginOperationsPartiallyFailed_.
Once all operations have completed, the backup will be moved to one of the finalize phases, and the
backup finalizer controller will update the the `velero-backup.json`in the object store with any
resources necessary after asynchronous operations are complete and the backup will move to the
appropriate terminal phase.
## Restore workflow changes
The restore workflow remains the same until velero would currently move the backup into one of the
terminal states. At this point, Velero will run across all of the RestoreItemAction operations and
call the _Progress_ method on each of them.
If all restore item operations have finished (either successfully or failed), the restore will move
to the appropriate terminal phase and be complete.
If any of the restore items are still being processed, the phase of the restore will be set to the
appropriate phase (_WaitingForPluginOperations_ or _WaitingForPluginOperationsPartiallyFailed_), and
the async restore operations controller will reconcile periodically and call Progress on any
unfinished operations. In the event that any of the progress checks returns an error, the phase will
move to _WaitingForPluginOperationsPartiallyFailed_. Once all of the operations have completed, the
restore will be moved to the appropriate terminal phase.
## Restart workflow
On restart, the Velero server will scan all Backup/Restore resources. Any Backup/Restore resources
which are in the _InProgress_ phase will be moved to the _Failed_ phase. Any Backup/Restore
resources in the _WaitingForPluginOperations_ or _WaitingForPluginOperationsPartiallyFailed_ phase
will be treated as if they have been requeued and progress checked and the backup/restore will be
requeued or moved to a terminal phase as appropriate.
## Notes on already-merged code which may need updating
Since this design is modifying a previously-approved design, there is some preparation work based on
the earlier upload progress monitoring design that may need modification as a result of these
updates. Here is a list of some of these items:
1. Consts for the "Uploading" and "UploadingPartiallyFailed" phases have already been defined. These
will need to be removed when the "WaitingForPluginOperations" and
"WaitingForPluginOperationsPartiallyFailed" phases are defined.
- https://github.com/vmware-tanzu/velero/pull/3805
1. Remove the ItemSnapshotter plugin APIs (and related code) since the revised design will reuse
VolumeSnapshotter and BackupItemAction plugins.
- https://github.com/vmware-tanzu/velero/pull/4077
- https://github.com/vmware-tanzu/velero/pull/4417
1. UploadProgressFeatureFlag shouldn't be needed anymore. The current design does not require a
feature flag -- the new features will be added to V2 of the VolumeSnapshotter,
BackupItemAction, and RestoreItemAction plugins, and they will only be used if there are plugins which
return operation IDs.
- https://github.com/vmware-tanzu/velero/pull/4416
1. Adds <backup-name>-itemsnapshots.gz file to backup (when provided) -- this is still part of the
revised design, so it should stay.
- https://github.com/vmware-tanzu/velero/pull/4429
1. Upload Progress Monitoring and Item Snapshotter basic support: This PR is not yet merged, so
nothing will need to be reverted. While the implementation here will be useful in informing the
implementation of the new design, several things have changed in the design proposal since the PR
was written.
- https://github.com/vmware-tanzu/velero/pull/4467
# Implementation tasks
- VolumeSnapshotter new plugin APIs
- BackupItemAction new plugin APIs
- RestoreItemAction new plugin APIs
- New backup phases
- New restore phases
- Defer uploading `velero-backup.json`
- AWS EBS plugin Progress implementation
- Operation monitoring
- Implementation of `<backup-name>-itemoperations.json.gz` file
- Implementation of `<restore-name>-itemoperations.json.gz` file
- Restart logic
- Change in reconciliation logic to ignore backups/restores that have not completed
- CSI plugin BackupItemAction Progress implementation
- vSphere plugin BackupItemAction Progress implementation (vSphere plugin team)
# Open Questions
1. Do we need a Cancel operation for VolumeSnapshotter?
   - From feedback, I'm thinking we probably don't need it. The only real purpose of Cancel
     here is to tell the plugin that Velero won't be waiting anymore, so if there are any
     required custom cancellation actions, now would be a good time to perform them. For snapshot
     uploads that are already in progress, there's not really anything else to cancel.
2. Should we actually write the backup *before* moving to the WaitingForPluginOperations or
   WaitingForPluginOperationsPartiallyFailed phase, rather than waiting until all operations
   have completed? The operations in question won't affect what gets written to object storage
   for the backup, and since we've already written the list of operations we're waiting for to
   object storage, writing the backup now would make the process resilient to a Velero restart
   that happens during WaitingForPluginOperations or WaitingForPluginOperationsPartiallyFailed.

View File

@@ -28,7 +28,7 @@ This document proposes adding _controller-tools_ to the project to automatically
_controller-tools_ works by reading the Go files that contain the API type definitions.
It uses a combination of the struct fields, types, tags and comments to build the OpenAPIv3 schema for the CRDs. The tooling makes some assumptions based on conventions followed in upstream Kubernetes and the ecosystem, which involves some changes to the Velero API type definitions, especially around optional fields.
In order for _controller-tools_ to read the Go files containing Velero API type definitions, the CRDs need to be generated at build time, as these files are not available at runtime (i.e. the Go files are not accessible by the compiled binary).
In order for _controller-tools_ to read the Go files containing Velero API type defintiions, the CRDs need to be generated at build time, as these files are not available at runtime (i.e. the Go files are not accessible by the compiled binary).
These generated CRD manifests (YAML) will then need to be available to the `pkg/install` package for it to include when installing Velero resources.
## Detailed Design

View File

@@ -1,324 +0,0 @@
# Handle backup of volumes by resources filters
## Abstract
Currently, Velero doesn't have a flexible way to handle volumes.
If users want to skip the backup of volumes, or back up only some volumes across different namespaces in batch, they currently need to use the opt-in and opt-out approaches volume by volume, or use label selectors -- which is cumbersome when the related pods carry many different labels and there are lots of volumes to handle. It would be convenient if Velero could provide policies to handle the backup of volumes based on specific volume conditions.
## Background
As of today, Velero has many filters for deciding which resources to back up (or skip), including resource filters like `IncludedNamespaces, ExcludedNamespaces`, label selectors like `LabelSelector, OrLabelSelectors`, and annotations like `backup.velero.io/must-include-additional-items`. But these are not flexible enough for volumes; we need a generic way to handle them.
## Goals
- Introduce flexible policies to handle volumes without patching any labels or annotations onto the pods or volumes.
## Non-Goals
- The policies only handle volumes for backup; restore is not supported.
- Currently, only volumes are handled; other resources are not supported.
- Only environment-unrelated, platform-independent, general volume attributes are supported; volume attributes tied to a specific environment are not.
## Use-cases/Scenarios
### Skip backup volumes by some attributes
Users want to skip PVs based on the following requirements:
- option to skip all PV data
- option to skip specified PV types (RBD, NFS)
- option to skip PVs of a specified size
- option to skip PVs with a specified storage class
## High-Level Design
First, Velero will provide the user with a YAML template file listing all supported volume policies.
Second, the user writes their own configuration file by imitating the YAML template; it may contain just a subset of the policies from the template.
Third, the user creates a configmap from their configuration file; the configmap must be in the Velero install namespace.
Fourth, the user creates a backup with the command `velero backup create --resource-policies-configmap $policiesConfigmap`, which links the backup to the volume policies. At the same time, Velero will validate all imported volume policies; the backup will fail if any policy is unsupported or cannot be parsed.
Fifth, the backup CR will record a reference to the volume policies configmap.
Sixth, Velero first filters volumes with the other currently supported filters, and finally applies the volume policies to the filtered volumes to determine which volumes to handle.
## Detailed Design
The volume resource policies contain a list of policies, each a combination of conditions and a related `action`; when target volumes meet the conditions, the related `action` takes effect.
Below is the API design for the user configuration:
### API Design
```go
type VolumeActionType string

const Skip VolumeActionType = "skip"

// Action defined as one action for a specific way of backup
type Action struct {
	// Type defined specific type of action, it could be 'file-system-backup', 'volume-snapshot', or 'skip' currently
	Type VolumeActionType `yaml:"type"`

	// Parameters defined map of parameters when executing a specific action
	// +optional
	// +nullable
	Parameters map[string]interface{} `yaml:"parameters,omitempty"`
}

// VolumePolicy defined policy to conditions to match Volumes and related action to handle matched Volumes
type VolumePolicy struct {
	// Conditions defined list of conditions to match Volumes
	Conditions map[string]interface{} `yaml:"conditions"`
	Action     Action                 `yaml:"action"`
}

// ResourcePolicies currently defined slice of volume policies to handle backup
type ResourcePolicies struct {
	Version        string         `yaml:"version"`
	VolumePolicies []VolumePolicy `yaml:"volumePolicies"`
	// we may support other resource policies in the future, and they could be added separately
	// OtherResourcePolicies: []OtherResourcePolicy
}
```
The policies YAML config file would look like this:
```yaml
version: v1
volumePolicies:
# it's a list, and if the input item matches the first policy, the later ones will be ignored
# each policy consists of a list of conditions and an action
# each key in the conditions object is one condition, and one policy will apply to resources that meet ALL conditions
- conditions:
    # the capacity condition matches the volumes whose capacity falls into the range
    capacity: "0,100Gi"
    csi:
      driver: aws.ebs.csi.driver
      fsType: ext4
    storageClass:
    - gp2
    - ebs-sc
  action:
    type: volume-snapshot
    parameters:
      # optional parameters, which are custom-defined parameters for an action
      volume-snapshot-timeout: "6h"
- conditions:
    capacity: "0,100Gi"
    storageClass:
    - gp2
    - ebs-sc
  action:
    type: file-system-backup
- conditions:
    nfs:
      server: 192.168.200.90
  action:
    # the file-system-backup type may be used a second time
    type: file-system-backup
- conditions:
    nfs: {}
  action:
    type: skip
- conditions:
    csi:
      driver: aws.efs.csi.driver
  action:
    type: skip
```
### Filter rules
#### VolumePolicies
The resource policies consist of groups of volume policies.
Each volume policy is a combination of one action and several conditions; one action plus its conditions is the smallest unit of volume policy.
Volume policies form a list, and if a target volume matches the first policy, the later policies will be ignored, which reduces the complexity of matching volumes, especially when there are multiple complex volume policies. A first-match evaluation is sketched below.
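For illustration, a first-match evaluation over the `ResourcePolicies` type above might look like the following sketch; the `matches` helper (checking one conditions map against a volume) is assumed here and sketched later in the Implementation section.

```go
import corev1 "k8s.io/api/core/v1"

// getMatchAction returns the action of the first policy whose conditions all
// match the given volume, or nil if no policy matches. First match wins;
// later policies are ignored, as described above.
func getMatchAction(policies *ResourcePolicies, pv *corev1.PersistentVolume) *Action {
	for _, p := range policies.VolumePolicies {
		if matches(p.Conditions, pv) {
			action := p.Action
			return &action
		}
	}
	return nil
}
```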
#### Action
`Action` defines one action for a specific way of backup:
- if choosing `Kopia` or `Restic`, the action value would be `file-system-backup`.
- if choosing volume snapshot, the action value would be `volume-snapshot`.
- if choosing to skip backup of the volume, the action value would be `skip`, and the volume will be skipped regardless of whether it would otherwise be handled by `file-system-backup` or `volume-snapshot`.
The policies could be extended to other ways of backup later, which means other `Action` values may be assigned in the future.
`file-system-backup`, `volume-snapshot`, and `skip` could each be partially or fully configured in the YAML file, and the configuration takes effect only for the related action.
#### Conditions
The conditions are a series of volume attributes; matched volumes must meet all the volume attributes in one conditions configuration.
##### Supported conditions
In Velero 1.11, we want to support the volume attributes listed below:
- capacity: matching volumes whose capacity falls within the `capacity` range.
- storageClass: matching volumes with the specified `storageClass`, such as `gp2` or `ebs-sc` in EKS.
- volume sources: matching volumes that use the specified volume sources.
##### Parameters
Parameters are optional for one specific action. For example, it could be `csi-snapshot-timeout: 6h` for CSI snapshot.
#### Special rule definitions:
- A single condition in `Conditions` with a specific key and an empty value matches any value for that key. For example, if `conditions.nfs` is `{}`, any Persistent Volume that uses `NFS` as its `persistentVolumeSource` will be skipped, no matter what the NFS server or NFS path is.
- The size of each single filter value is limited to 256 bytes, to guard against unreasonably long values.
- For the capacity of a PV or the size of a volume, the value should contain a lower value and an upper value concatenated by a comma, with the following combinations (a parsing sketch follows this list):
  - "0,5Gi" or "0Gi,5Gi", which means the capacity or size matches from 0 to 5Gi, including the values 0 and 5Gi
  - ",5Gi", which is equal to "0,5Gi"
  - "5Gi,", which means the capacity or size matches anything larger than 5Gi, including the value 5Gi
  - "5Gi", which is not supported and will fail configuration validation
### Configmap Reference
Resource filters are currently defined directly in the `BackupSpec` struct, which grows more bloated with every new filter and makes the `Backup` CR bigger and bigger. Instead, we want to store the resource policies in a configmap and have the `Backup` CR reference that configmap.
The `configmap` the user creates would look like this:
```yaml
apiVersion: v1
data:
  policies.yaml: |-
    version: v1
    volumePolicies:
    - conditions:
        capacity: "0,100Gi"
        csi:
          driver: aws.ebs.csi.driver
          fsType: ext4
        storageClass:
        - gp2
        - ebs-sc
      action:
        type: volume-snapshot
        parameters:
          volume-snapshot-timeout: "6h"
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-16T14:08:12Z"
  name: backup01
  namespace: velero
  resourceVersion: "17891025"
  uid: b73e7f76-fc9e-4e72-8e2e-79db717fe9f1
```
A new field `resourcePolices` would be added to `BackupSpec`; its value references the resource policies configmap:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-1
spec:
  resourcePolices:
    refType: Configmap
    ref: backup01
  ...
```
The configmap only stores the values the user assigned, not the whole set of resource policies.
The name of the configmap is `$BackupName`, and it is in the Velero install namespace.
#### Resource policies configmap related
The lifecycle of the resource policies configmap is managed by the user instead of Velero, which makes it more flexible and easier to maintain.
- The resource policies configmap will remain in the cluster until the user deletes it.
- Unlike backups, resource policies configmaps are not synced to a new cluster. So if a backup references a configmap that does not exist in the new cluster, the backup will fail with a resource-policies-not-found error.
- One resource policies configmap could be used by multiple backups.
- Deleting a referenced resource policies configmap won't affect already existing backups, but creating a new backup that references the deleted configmap will fail with a resource-policies-not-found error.
#### Versioning
We want to introduce a version field in the YAML data to accommodate breaking changes. We won't follow a semver paradigm; for example, in v1.11 the data looks like this:
```yaml
version: v1
volumePolicies:
....
```
Hypothetically, in v1.12 we add new fields like clusterResourcePolicies; the version will remain v1 because this change is backward compatible:
```yaml
version: v1
volumePolicies:
....
clusterResourcePolicies:
....
```
Suppose in v1.13 we have to introduce a breaking change; at that point we will bump the version:
```yaml
version: v2
# This is just an example; we should try to avoid breaking changes
volume-policies:
....
```
Velero only supports one version at a time, so a backup using a former version of the YAML data won't be recognized.
#### Multiple versions supporting
To manage the maintenance effort, we will only support one version of the data in Velero. Suppose there is a breaking change to the YAML data in Velero v1.13: we would bump the config version to v2, and only v2 would be supported in v1.13. Existing data with version: v1 should be migrated at Velero startup; this won't hurt existing backup schedule CRs, as they only reference the configmap. To make the migration easier, configmaps holding such resource filter policies should be labeled manually before Velero startup, as shown below; Velero will migrate the labeled configmaps.
We only support migrating from the previous version to the current version, to limit the complexity of data format conversion; users can regenerate configmaps in the new YAML data version, which also makes version control easier.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    # This label can be optional, but if it is not set, the backup will fail
    # after the breaking change and the user will need to update the data manually
    velero.io/resource-filter-policies: "true"
  name: example
  namespace: velero
data:
  .....
```
### Display of resources policies
As the resource policies configmap is only referenced by the backup CR, the policies it contains are not immediately visible, so we should integrate the configmap's policies into the output of the `velero backup describe` command to make them more readable.
## Compatibility
Currently, we have these resources filters:
- IncludedNamespaces
- ExcludedNamespaces
- IncludedResources
- ExcludedResources
- LabelSelector
- OrLabelSelectors
- IncludeClusterResources
- UseVolumeSnapshots
- velero.io/exclude-from-backup=true
- backup.velero.io/backup-volumes-excludes
- backup.velero.io/backup-volumes
- backup.velero.io/must-include-additional-items
Care must be taken when combining volume resource policies with the resource filters above.
- When volume resource policies conflict with the resource filters above, the resource filters should be respected. For example, if the user applied the opt-out `backup.velero.io/backup-volumes-excludes` annotation on a pod and also defined a matching include rule in the volume resource policies configuration, we should respect the opt-out approach and skip backup of the volume.
- If volume resource policies conflict with each other, the first matched policy will be respected.
## Implementation
This implementation should be included in Velero v1.11.0.
In Velero v1.11.0 we only support the `skip` value for `Action`; `file-system-backup` and `volume-snapshot` will be supported in a later version. `Parameters` in `Action` is also not supported in v1.11.0 and will be supported in a later version.
In Velero 1.11, the supported conditions and formats are listed below:
- capacity
```yaml
capacity: "10Gi,100Gi" # match volumes whose size falls between 10Gi and 100Gi
```
- storageClass
```yaml
storageClass: # match volumes that have the storage class gp2 or ebs-sc
- gp2
- ebs-sc
```
- volume sources (currently only the formats and attributes below are supported)
1. Specify the volume source name; the name could be `nfs`, `rbd`, `iscsi`, `csi`, etc.
```yaml
nfs: {} # match any volume that has an nfs volume source
csi: {} # match any volume that has a csi volume source
```
2. Specify details for the related volume source (currently we only support the csi driver filter and the nfs server or path filter)
```yaml
csi: # match volumes that have a csi volume source using `aws.efs.csi.driver`
  driver: aws.efs.csi.driver
nfs: # match volumes that have an nfs volume source with the below server and path
  server: 192.168.200.90
  path: /mnt/nfs
```
The conditions could also be extended in later versions; for example, we could support filtering on the details of other volume sources beyond NFS and CSI.
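Tying the pieces together, here is a rough sketch of the `matches` helper referenced in the filter-rules section, covering the three condition kinds above; `capacityInRange` is the helper sketched under the special rule definitions, and the driver/server detail checks are elided.

```go
import corev1 "k8s.io/api/core/v1"

// matches reports whether a PV satisfies every condition in one policy.
func matches(conditions map[string]interface{}, pv *corev1.PersistentVolume) bool {
	if c, ok := conditions["capacity"].(string); ok {
		capacity := pv.Spec.Capacity[corev1.ResourceStorage]
		if in, err := capacityInRange(c, capacity); err != nil || !in {
			return false
		}
	}
	if classes, ok := conditions["storageClass"].([]interface{}); ok {
		found := false
		for _, sc := range classes {
			if name, ok := sc.(string); ok && name == pv.Spec.StorageClassName {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	// An empty value such as `nfs: {}` only requires the source to be present.
	if _, ok := conditions["nfs"]; ok && pv.Spec.NFS == nil {
		return false
	}
	if _, ok := conditions["csi"]; ok && pv.Spec.CSI == nil {
		return false
	}
	return true
}
```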
## Alternatives Considered
### Configmap VS CRD
Here we let the user define a YAML config file and store the resource policies in a configmap; alternatively, we could define a resource policies CRD and store the policies imported from the user-defined config file in the related CR.
But a CRD is more like a resource with status: the Kubernetes API server manages the lifecycle of a CR and handles it through its different statuses. Compared to a CRD, a configmap is more focused on storing data.
## Open Issues
Should we support more than one version of filter policies configmap?

View File

@@ -1,161 +0,0 @@
# Proposal to add support for Resource Modifiers (AKA JSON Substitutions) in Restore Workflow
- [Proposal to add support for Resource Modifiers (AKA JSON Substitutions) in Restore Workflow](#proposal-to-add-support-for-resource-modifiers-aka-json-substitutions-in-restore-workflow)
- [Abstract](#abstract)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [User Stories](#user-stories)
- [Scenario 1](#scenario-1)
- [Scenario 2](#scenario-2)
- [Detailed Design](#detailed-design)
- [Reference in velero API](#reference-in-velero-api)
- [ConfigMap Structure](#configmap-structure)
- [Operations supported by the JSON Patch library:](#operations-supported-by-the-json-patch-library)
- [Advance scenarios](#advance-scenarios)
- [Conditional patches using test operation](#conditional-patches-using-test-operation)
- [Alternatives Considered](#alternatives-considered)
- [Security Considerations](#security-considerations)
- [Compatibility](#compatibility)
- [Implementation](#implementation)
- [Future Enhancements](#future-enhancements)
- [Open Issues](#open-issues)
## Abstract
Currently Velero supports substituting certain values in K8s resources during restoration, like changing the namespace, changing the storage class, etc. This proposal adds generic support for JSON substitutions in the restore workflow. It will allow the user to specify filters for particular resources and then specify a JSON patch (operation, path, value) to apply to a resource. This will allow the user to substitute any value in a K8s resource without having to write a new RestoreItemAction plugin for each kind of substitution.
<!-- ## Background -->
## Goals
- Allow the user to specify a GroupResource, an optional Name, and a JSON patch for modification.
- Allow the user to specify multiple JSON patches.
## Non Goals
- Deprecating the existing RestoreItemAction plugins for standard substitutions (like changing the namespace, changing the storage class, etc.)
## User Stories
### Scenario 1
- Alice has a PVC which is encrypted using a DES (Disk Encryption Set, an Azure example) referenced in the PVC YAML through the StorageClass YAML.
- Alice wishes to restore this snapshot to a different cluster. The new cluster does not have access to the same DES to provision disks out of the snapshots.
- She wishes to use a different DES for all the PVCs which use a certain DES.
- She can use this feature to substitute the DES in all StorageClass YAMLs with the new DES, without having to create a fresh storage class or know the name of the storage class.
### Scenario 2
- Bob has a multi-zone cluster where nodes are spread across zones.
- Bob has pinned certain pods to a particular zone using nodeSelector/nodeAffinity on the pod spec.
- In case of a zone outage at the cloud provider, Bob wishes to restore the workload to a different namespace in the same cluster, but change the zone pinning of the workload.
- Bob can use this feature to substitute the nodeSelector/nodeAffinity in the pod spec with the new zone pinning, to quickly fail the workload over to nodes in a different zone.
## Detailed Design
- The design and approach is inspired by the [kubectl patch command](https://github.com/kubernetes/kubectl/blob/0a61782351a027411b8b45b1443ec3dceddef421/pkg/cmd/patch/patch.go#L102C2-L104C1)
```bash
# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
```
- The user is expected to create a configmap with the desired Resource Modifications. Then the reference of the configmap will be provided in the RestoreSpec.
- In the core restore workflow, before a particular resource is created/updated in the cluster, it will be checked against the provided filters and the respective substitutions will be applied to it.
### Reference in velero API
> Example of Reference to configmap in RestoreSpec
```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-1
spec:
  resourceModifier:
    refType: Configmap
    ref: resourcemodifierconfigmap
```
> Example CLI Command
```bash
velero restore create --from-backup backup-1 --resource-modifier-configmap resourcemodifierconfigmap
```
### Resource Modifier ConfigMap Structure
- The user first needs to provide details on which resources the JSON substitutions should be applied.
- For this, the user provides the following inputs: Namespaces (for namespace-scoped resources), GroupResource (resource.group format, similar to the includedResources field in velero), and an optional Name regex.
- If the user does not provide the Name, the JSON substitutions will be applied to all the resources of the given Group and Kind under the given namespaces.
- Further, the user specifies the JSON patches using the structure of kubectl's "JSON Patch" based inputs.
- Sample data in ConfigMap
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims
    resourceNameRegex: "mysql.*"
    namespaces:
    - bar
    - foo
  patches:
  - operation: replace
    path: "/spec/storageClassName"
    value: "premium"
  - operation: remove
    path: "/metadata/labels/test"
```
- The above configmap will apply the JSON Patch to all the PVCs in the namespaces bar and foo with name starting with mysql. The JSON Patch will replace the storageClassName with "premium" and remove the label "test" from the PVCs.
- Note that the Namespace here is the original namespace of the backed up resource, not the new namespace where the resource is going to be restored.
- The user can specify multiple JSON patches for a particular resource. The patches will be applied in the order specified in the configmap. If multiple patches are specified for the same path, the last patch overrides the earlier ones.
- The user can specify multiple resourceModifierRules in the configmap. The rules will be applied in the order specified in the configmap.
> Users need to create one configmap in the Velero install namespace from a YAML file that defines the resource modifiers. The creation command would look like the below:
```bash
kubectl create cm <configmap-name> --from-file <yaml-file> -n velero
```
### Operations supported by the JSON Patch library:
- add
- remove
- replace
- move
- copy
- test (covered below)
### Advance scenarios
#### **Conditional patches using test operation**
The `test` operation can be used to check if a particular value is present in the resource. If the value is present, the patch will be applied. If the value is not present, the patch will not be applied. This can be used to apply a patch only if a particular value is present in the resource. For example, if the user wishes to change the storage class of a PVC only if the PVC is using a particular storage class, the user can use the following configmap.
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims.storage.k8s.io
    resourceNameRegex: ".*"
    namespaces:
    - bar
    - foo
  patches:
  - operation: test
    path: "/spec/storageClassName"
    value: "premium"
  - operation: replace
    path: "/spec/storageClassName"
    value: "standard"
```
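A minimal sketch of how such a rule could be applied with the evanphx/json-patch library named in the implementation notes below; the rule-to-JSON conversion is hand-inlined here, and a failed `test` operation simply means the patch is skipped.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// A PVC as it would appear in the backup, serialized to JSON.
	pvc := []byte(`{"spec": {"storageClassName": "premium"}}`)

	// The two operations from the rule above, as an RFC 6902 patch.
	patchJSON := []byte(`[
		{"op": "test", "path": "/spec/storageClassName", "value": "premium"},
		{"op": "replace", "path": "/spec/storageClassName", "value": "standard"}
	]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		panic(err)
	}

	modified, err := patch.Apply(pvc)
	if err != nil {
		// A failed "test" op surfaces as an error here; per this design,
		// the resource would then be restored unmodified.
		fmt.Println("patch not applied:", err)
		return
	}
	fmt.Println(string(modified)) // {"spec":{"storageClassName":"standard"}}
}
```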
## Alternatives Considered
1. JSON Path based addressing of JSON fields in the resource
- This was the initially planned approach, but there is no open source library which gives satisfactory edit functionality with support for all operators in the JSONPath RFC.
- We attempted modifying [https://kubernetes.io/docs/reference/kubectl/jsonpath/](https://kubernetes.io/docs/reference/kubectl/jsonpath/), but given the complexity of the code it did not make sense to change it, since it would become a long-term maintainability problem.
1. RestoreItemAction for each kind of standard substitution
- Not an extensible design. If a new kind of substitution is required, a new RestoreItemAction needs to be written.
1. RIA for JSON Substitution: The approach of doing JSON Substitution through a RestoreItemAction plugin was considered. But it is likely to have performance implications as the plugin will be invoked for all the resources.
## Security Considerations
No security impact.
## Compatibility
Compatibility with existing StorageClass mapping RestoreItemAction and similar plugins needs to be evaluated.
## Implementation
- Changes in Restore CRD. Add a new field to the RestoreSpec to reference the configmap.
- One example of where code will be modified: https://github.com/vmware-tanzu/velero/blob/eeee4e06d209df7f08bfabda326b27aaf0054759/pkg/restore/restore.go#L1266 Before the object is created, we can apply the conditions to check whether the resource is selected by the given parameters, and then update the resource using the provided JSON patch.
- For Jsonpatch - https://github.com/evanphx/json-patch library is used.
- JSON Patch RFC https://datatracker.ietf.org/doc/html/rfc6902
## Future enhancements
- Additional features such as wildcard support in path, regex match support in value, etc. can be added in the future. This would involve forking the https://github.com/evanphx/json-patch library and adding the required features, since those features are not supported by the library currently and are not part of the JSON Patch RFC.
## Open Issues
NA

View File

@@ -1,177 +0,0 @@
# Proposal to add support for Multiple VolumeSnapshotClasses in CSI Plugin
- [Proposal to add support for Multiple VolumeSnapshotClasses in CSI Plugin](#proposal-to-add-support-for-multiple-volumesnapshotclasses-in-csi-plugin)
- [Abstract](#abstract)
- [Background](#background)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [User Stories](#user-stories)
- [Scenario 1](#scenario-1)
- [Scenario 2](#scenario-2)
- [Detailed Design](#detailed-design)
- [Plugin Inputs Contract Changes](#plugin-inputs-contract-changes)
- [Using Plugin Inputs for CSI Plugin](#using-plugin-inputs-for-csi-plugin)
- [Annotations overrides on PVC for CSI Plugin](#annotations-overrides-on-pvc-for-csi-plugin)
- [Using Plugin Inputs for Other Plugins](#using-plugin-inputs-for-other-plugins)
- [Alternatives Considered](#alternatives-considered)
- [Security Considerations](#security-considerations)
- [Compatibility](#compatibility)
- [Implementation](#implementation)
- [Open Issues](#open-issues)
## Abstract
Currently the Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has the same driver name and also has the velero.io/csi-volumesnapshot-class label set on it. This global selection is not sufficient for many use cases. This proposal is to add support for multiple VolumeSnapshotClasses in CSI Plugin where the user can specify the VolumeSnapshotClass to use for a particular driver and backup.
## Background
The Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has the same driver name and also has the velero.io/csi-volumesnapshot-class label set on it. This global selection is not sufficient for many use cases. For example, if a cluster has multiple VolumeSnapshotClasses for the same driver, the user may want to use a VolumeSnapshotClass that is different from the default one. The user might also have different schedules set up for backing up different parts of the cluster and might wish to use different VolumeSnapshotClasses for each of these backups.
## Goals
- Allow the user to specify the VolumeSnapshotClass to use for a particular driver and backup.
## Non Goals
- Deprecating existing VSC selection behaviour. (The current behaviour will remain the default behaviour if the user does not specify the VolumeSnapshotClass to use for a particular driver and backup.)
## User Stories
### Scenario 1
- Consider Alice, a cluster admin with a cluster that has multiple VolumeSnapshotClasses for the same driver. Each VSC stores the snapshots taken in a different ResourceGroup (Azure equivalent).
- Alice has configured multiple scheduled backups, each covering a different set of namespaces, representing different apps owned by different teams.
- Alice wants to use a different VolumeSnapshotClass for each backup such that each snapshot goes in its respective ResourceGroup to simplify management of snapshots (COGS, RBAC, etc.).
- In current Velero, Alice can't achieve this, as the CSI plugin will use the default VolumeSnapshotClass for the driver and all snapshots will go in the same ResourceGroup.
- The proposed design will allow Alice to achieve this by specifying the VolumeSnapshotClass to use for a particular driver and backup/schedule.
### Scenario 2
- Bob is a cluster admin who has PVCs storing different types of data.
- Most of the PVCs store non-sensitive application data, but certain PVCs store critical financial data.
- For such PVCs, Bob wants to use a VolumeSnapshotClass with certain encryption-related parameters set.
- In current Velero, Bob can't achieve this, as the CSI plugin will use the default VolumeSnapshotClass for the driver and all snapshots will be taken using the same VolumeSnapshotClass.
- The proposed design will allow Bob to achieve this by overriding the VolumeSnapshotClass for specific PVCs using annotations on those PVCs.
## Detailed Design
### Staged Approach:
### Stage 1 Approach
#### Through Annotations
1. **Support VolumeSnapshotClass selection at backup/schedule level**
The user can annotate the backup/schedule with the driver and VolumeSnapshotClass name. The CSI plugin will use the VolumeSnapshotClass specified in the annotation. If the annotation is not present, the CSI plugin will use the default VolumeSnapshotClass for the driver.
*example annotation on backup/schedule:*
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-1
  annotations:
    velero.io/csi-volumesnapshot-class_csi.cloud.disk.driver: csi-diskdriver-snapclass
    velero.io/csi-volumesnapshot-class_csi.cloud.file.driver: csi-filedriver-snapclass
    velero.io/csi-volumesnapshot-class_<driver name>: csi-snapclass
```
To query the annotations on a backup: "velero.io/csi-volumesnapshot-class_<driver name>", where the driver name comes from the PVC's driver.
2. **Support VolumeSnapshotClass selection at PVC level**
The user can annotate the PVCs with the driver and VolumeSnapshotClass name. The CSI plugin will use the VolumeSnapshotClass specified in the annotation. If the annotation is not present, the CSI plugin will use the default VolumeSnapshotClass for the driver. If the VolumeSnapshotClass provided belongs to a different driver, the CSI plugin will use the default VolumeSnapshotClass for the driver.
*example annotation on PVC:*
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
  annotations:
    velero.io/csi-volumesnapshot-class: csi-diskdriver-snapclass
```
Consider this an override option used in conjunction with part 1.
**Note**: The user has to annotate the PVCs or backups with the VolumeSnapshotClass to use for each driver. This is not ideal for the user experience.
- **Mitigation**: We can extend the Velero CLI to also annotate backups/schedules with the VolumeSnapshotClass to use for each driver. This will make it easier for the user to annotate the backups/schedules. This mitigation does not cover the PVCs, though, since PVCs are in any case a more specific use case. Similar to `kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod`,
we can add support for `velero backup create my-backup --annotations "velero.io/csi:csi.cloud.disk.driver=csi-diskdriver-snapclass"`. A resolution-order sketch follows.
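For illustration, the stage-1 lookup order could be sketched as follows; the annotation keys come from the examples above, while the function shape and the default lookup are assumptions for this example (the PVC-level annotation acts as the override, and driver validation is elided).

```go
const (
	pvcVSCAnnotation   = "velero.io/csi-volumesnapshot-class"
	backupVSCAnnPrefix = "velero.io/csi-volumesnapshot-class_"
)

// resolveSnapshotClass picks the VolumeSnapshotClass name for one PVC:
// PVC annotation first, then the backup/schedule annotation for the PVC's
// driver, then the existing label-based default for that driver.
func resolveSnapshotClass(pvcAnnotations, backupAnnotations map[string]string,
	driver string, defaultForDriver func(string) (string, error)) (string, error) {
	if vsc, ok := pvcAnnotations[pvcVSCAnnotation]; ok {
		return vsc, nil // PVC-level override
	}
	if vsc, ok := backupAnnotations[backupVSCAnnPrefix+driver]; ok {
		return vsc, nil // backup/schedule-level selection
	}
	return defaultForDriver(driver) // current default behaviour
}
```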
### Stage 2 Approach
The annotations route above is a way to get started and reach initial design closure/implementation. The north star is either to introduce CSI-specific fields in the backup/restore CR (considering that CSI may become a very core part of Velero going forward) or to leverage the pluginInputs field being tracked in: https://github.com/vmware-tanzu/velero/pull/5981
Refer to the section "Alternatives 2. **Through generic property bag in the velero contracts**" in this design doc for more details on the pluginInputs field.
## Alternatives Considered
1. **Through CSI Specific Fields in Velero contracts**
**Considerations**
- Since CSI snapshotting is done through the plugin, we don't intend to bloat up the Backup Spec with CSI specific fields.
- But considering that CSI Snapshotting is the way forward, we can debate if we should add a CSI section to the Backup Spec.
**Approach**: Similar to VolumeSnapshotLocation param in the Backup Spec, we can add a VolumeSnapshotClass param in the Backup Spec. This will allow the user to specify the VolumeSnapshotClass to use for the backup. The CSI plugin will use the VolumeSnapshotClass specified in the Backup Spec. If the VolumeSnapshotClass is not specified, the CSI plugin will use the default VolumeSnapshotClass for the driver.
*example of VolumeSnapshotClass param in the Backup Spec:*
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-1
spec:
  csiParameters:
    volumeSnapshotClasses:
      driver: csi.cloud.disk.driver
      snapClass: csi-diskdriver-snapclass
    timeout: 10m
```
1. **Through changes in velero contracts**
1. **Through configmap references.**
Currently even the storage class mapping plugin expects the user to create a configmap which is used globally and fetched through labels. This behaviour has the same issue as the VolumeSnapshotClass selection. We can introduce a field in the velero contracts which allows passing configmap references for each plugin, and then the plugin can honour the configmap passed in as a reference. The configmap can be used to pass the VolumeSnapshotClass to use for the backup, as well as other parameters to tweak. This can help make plugins more flexible while not depending on global behaviour.
*example of configmap reference in the velero contracts:*
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-1
spec:
  configmapRefs:
  - name: csi-volumesnapshotclass-configmap
    namespace: velero
    plugin: velero.io/csi
```
2. **Through generic property bag in the velero contracts**: We can introduce a field in the velero contracts which allow passing a generic property bag for each plugin. And then the plugin can honour the property bag passed in.
*example of property bag in the velero contracts:*
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-1
spec:
  pluginInputs:
  - name: velero.io/csi
    properties:
    - key: csi.cloud.disk.driver
      value: csi-diskdriver-snapclass
    - key: csi.cloud.file.driver
      value: csi-filedriver-snapclass
```
**Note**: Both these approaches can also be used to tweak other parameters such as CSI Snapshotting Timeout/intervals. And further can be used by other plugins.
## Security Considerations
No security impact.
## Compatibility
Existing behaviour of csi plugin will be retained where it fetches the VolumeSnapshotClass through the label. This will be the default behaviour if the user does not specify the VolumeSnapshotClass.
## Implementation
TBD based on closure of high level design proposals.
## Open Issues
NA

View File

@@ -1,138 +0,0 @@
# Ensure support for backing up resources based on multiple labels
## Abstract
As of today, Velero supports filtering resources based on a single label selector per backup. It is desired that Velero
support backing up resources based on multiple labels (OR logic).
**Note:** This solution is required because Kubernetes label selectors only allow AND logic of labels.
## Background
Currently, Velero's Backup/Restore API has a spec field `LabelSelector` which filters resources based on
a **single** label selector per backup/restore request. For instance, if the user specifies `Backup.Spec.LabelSelector` as
`data-protection-app: true`, Velero will grab all the resources that possess this label and perform the backup
operation on them. The `LabelSelector` field does not accept more than one selector, and thus if the user wants to
back up resources carrying any one label from a set of labels (label1 OR label2 OR label3), the user needs to
create one backup per label rule. It would be really useful if the Velero Backup API could respect a set of
labels (OR rule) in a single backup request.
Related Issue: https://github.com/vmware-tanzu/velero/issues/1508
## Goals
- Enable support for backing up resources based on multiple labels (OR Logic) in a single backup config.
- Enable support for restoring resources based on multiple labels (OR Logic) in a single restore config.
## Use Case/Scenario
Let's say that as a Velero user you want to back up secrets, but these secrets do not share one single consistent
label. You want to back up secrets carrying any one of the labels `app=gdpr`, `app=wpa`, and `app=ccpa`. Today
you would have to create 3 backups, one per label rule, which becomes cumbersome at scale.
## High-Level Design
### Addition of `OrLabelSelectors` spec to Velero Backup/Restore API
For Velero to back up resources if they carry any one label from a set of labels, we would like to add a new spec
field `OrLabelSelectors` which enables the user to specify them. The Velero backup would look somewhat like:
```
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-101
  namespace: openshift-adp
spec:
  includedNamespaces:
  - test
  storageLocation: velero-sample-1
  ttl: 720h0m0s
  orLabelSelectors:
  - matchLabels:
      app=gdpr
  - matchLabels:
      app=wpa
  - matchLabels:
      app=ccpa
```
**Note:** This approach will **not** change any current behavior related to the Backup API spec `LabelSelector`. Rather, we
propose that the label in the `LabelSelector` spec and the labels in `OrLabelSelectors` be treated as different Velero functionalities.
Both fields will be treated as separate Velero Backup API specs. If `LabelSelector` (singular) is present, then just match that label.
And if `OrLabelSelectors` is present, then match any label in the set specified by the user. For the backup case, if both `LabelSelector` and `OrLabelSelectors`
are specified (we do not anticipate this as a real-world use case), then `OrLabelSelectors` will take precedence; `LabelSelector` will
only be used for filtering when `OrLabelSelectors` is not specified by the user. This keeps the behaviour of the two specs independent and avoids confusing users.
This way we preserve the existing Velero behaviour and implement the new functionality in a much cleaner way.
For instance, let's take a look at the following cases:
1. Only `LabelSelector` specified: Velero will create a backup with resources matching the label `app=gdpr`
```
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-101
  namespace: openshift-adp
spec:
  includedNamespaces:
  - test
  storageLocation: velero-sample-1
  ttl: 720h0m0s
  labelSelector:
    matchLabels:
      app=gdpr
```
2. Only `OrLabelSelectors` specified: Velero will create a backup with resources matching any label from set `{app=gdpr, app=wpa, app=ccpa}`
```
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-101
  namespace: openshift-adp
spec:
  includedNamespaces:
  - test
  storageLocation: velero-sample-1
  ttl: 720h0m0s
  orLabelSelectors:
  - matchLabels:
      app=gdpr
  - matchLabels:
      app=wpa
  - matchLabels:
      app=ccpa
```
Similar implementation will be done for the Restore API as well.
## Detailed Design
With the introduction of `OrLabelSelectors`, the BackupSpec and RestoreSpec will look like:
BackupSpec:
```
type BackupSpec struct {
	[...]

	// OrLabelSelectors is a set of []metav1.LabelSelector to filter with
	// when adding individual objects to the backup. Resources matching any one
	// label from the set of labels will be added to the backup. If empty
	// or nil, all objects are included. Optional.
	// +optional
	OrLabelSelectors []*metav1.LabelSelector

	[...]
}
```
RestoreSpec:
```
type RestoreSpec struct {
	[...]

	// OrLabelSelectors is a set of []metav1.LabelSelector to filter with
	// when restoring objects from the backup. Resources matching any one
	// label from the set of labels will be restored from the backup. If empty
	// or nil, all objects are included from the backup. Optional.
	// +optional
	OrLabelSelectors []*metav1.LabelSelector

	[...]
}
```
The logic to collect resources to be backed up for a particular backup will be updated in `backup/item_collector.go`
around [here](https://github.com/vmware-tanzu/velero/blob/574baeb3c920f97b47985ec3957debdc70bcd5f8/pkg/backup/item_collector.go#L294).
And the changes for filtering the resources to be restored will go [here](https://github.com/vmware-tanzu/velero/blob/d1063bda7e513150fd9ae09c3c3c8b1115cb1965/pkg/restore/restore.go#L1769). A sketch of the OR matching itself follows below.
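For illustration, the OR matching could be built on the standard apimachinery helpers as in this sketch; the function name is illustrative and this is not the final implementation.

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// matchesAnySelector reports whether the object's labels satisfy at least
// one selector from orLabelSelectors (OR logic).
func matchesAnySelector(objLabels map[string]string, orLabelSelectors []*metav1.LabelSelector) (bool, error) {
	for _, ls := range orLabelSelectors {
		selector, err := metav1.LabelSelectorAsSelector(ls)
		if err != nil {
			return false, err
		}
		if selector.Matches(labels.Set(objLabels)) {
			return true, nil
		}
	}
	return false, nil
}
```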
**Note:**
- This feature will not be exposed via Velero CLI.

View File

@@ -393,7 +393,7 @@ Deletion of `VolumePluginBackup` CR can be delegated to plugin. Plugin can perfo
### 'core' Velero client/server required changes
- Creation of the VolumePluginBackup/VolumePluginRestore CRDs at installation time
- Persistence of VolumePluginBackup CRs towards the end of the backup operation
- Persistence of VolumePluginBackup CRs towards the end of the back up operation
- As part of backup synchronization, VolumePluginBackup CRs related to the backup will be synced.
- Deletion of VolumePluginBackup when volumeshapshotter's DeleteSnapshot is called
- Deletion of VolumePluginRestore as part of handling deletion of Restore CR
@@ -429,7 +429,7 @@ Instead, a new method for 'Progress' will be added to interface. Velero server r
But, this involves good amount of changes and needs a way for backward compatibility.
As volume plugins are mostly K8s native, its fine to go ahead with current limitation.
As volume plugins are mostly K8s native, its fine to go ahead with current limiation.
### Update Backup CR
Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having separate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.

View File

@@ -1,292 +0,0 @@
# Plugin Versioning
## Abstract
This proposal outlines an approach to support versioning of Velero's plugin APIs to enable changes to those APIs.
It will allow for backwards compatible changes to be made, such as the addition of new plugin methods, but also backwards incompatible changes such as method removal or method signature changes.
## Background
When changes are made to Velero's plugin APIs, there is no mechanism for the Velero server to communicate the version of the API that is supported, or for plugins to communicate what version they implement.
This means that any modification to a plugin API is a backwards incompatible change as it requires all plugins which implement the API to update and implement the new method.
There are several components involved to use plugins within Velero.
From the perspective of the core Velero codebase, all plugin kinds (e.g. `ObjectStore`, `BackupItemAction`) are defined by a single API interface and all interactions with plugins are managed by a plugin manager which provides an implementation of the plugin API interface for Velero to use.
Velero communicates with plugins via gRPC.
The core Velero project provides a framework (using the [go-plugin project](https://github.com/hashicorp/go-plugin)) for plugin authors to use to implement their plugins which manages the creation of gRPC servers and clients.
Velero plugins import the Velero plugin library in order to use this framework.
When a change is made to a plugin API, it needs to be made to the Go interface used by the Velero codebase, and also to the rpc service definition which is compiled to form part of the framework.
As each plugin kind is defined by a single interface, when a plugin imports the latest version of the Velero framework, it will need to implement the new APIs in order to build and run successfully.
If a plugin does not use the latest version of the framework, and is used with a newer version of Velero that expects the plugin to implement those methods, this will result in a runtime error as the plugin is incompatible.
With this proposal, we aim to break this coupling and introduce plugin API versions.
## Scenarios to Support
The following describes interactions between Velero and its plugins that will be supported with the implementation of this proposal.
For the purposes of this list, we will refer to existing Velero and plugin versions as `v1` and all following versions as version `n`.
Velero client communicating with plugins or plugin client calling other plugins:
- Version `n` client will be able to communicate with Version `n` plugin
- Version `n` client will be able to communicate with all previous versions of the plugin (Version `n-1` back to `v1`)
Velero plugins importing Velero framework:
- `v1` plugin built against Version `n` Velero framework
- A plugin may choose to only implement a `v1` API, but it must be able to be built using Version `n` of the Velero framework
## Goals
- Allow plugin APIs to change without requiring all plugins to implement the latest changes (even if they upgrade the version of Velero that is imported)
- Allow plugins to choose which plugin versions they support and enable them to support multiple versions
- Support breaking changes in the plugin APIs such as method removal or method signature changes
- Establish a design process for modifying plugin APIs such as method addition and removal and signature changes
- Establish a process for newer Velero clients to use older versions of a plugin API through adaptation
## Non Goals
- Change how plugins are managed or added
- Allow older plugin clients to communicate with new versions of plugins
## High-Level Design
With each change to a plugin API, a new version of the plugin interface and the proto service definition will be created which describes the new plugin API.
The plugin framework will be adapted to allow these new plugin versions to be registered.
Plugins can opt to implement any or all versions of an API, however Velero will always attempt to use the latest version, and the plugin management will be modified to adapt earlier versions of a plugin to be compatible with the latest API where possible.
Under the existing plugin framework, any new plugin version will be treated as a new plugin with a new kind.
The plugin manager (which provides implementations of a plugin to Velero) will include an adapter layer which will manage the different versions and provide the adaptation for versions which do not implement the latest version of the plugin API.
Providing an adaptation layer enables Velero and other plugin clients to use an older version of a plugin if it can be safely adapted.
As the plugins will be able to introduce backwards incompatible changes, it will _not_ be possible for older version of Velero to use plugins which only support the latest versions of the plugin APIs.
Although adding new rpc methods to a service is considered a backwards compatible change within gRPC, due to the way the proto definitions are compiled and included in the framework used by plugins, this will require every plugin to implement the new methods.
Instead, we are opting to treat the addition of a method to an API as one requiring versioning.
The addition of optional fields to existing structs which are used as parameters to or return values of API methods will not be considered as a change requiring versioning.
These kinds of changes do not modify method signatures and have been safely made in the past with no impact on existing plugins.
## Detailed Design
The following areas will need to be adapted to support plugin versioning.
### Plugin Interface Definitions
To provide versioned plugins, any change to a plugin interface (method addition, removal, or signature change) will require a new versioned interface to be created.
Currently, all plugin interface definitions reside in `pkg/plugin/velero` in a file corresponding to their plugin kind.
These files will be rearranged to be grouped by kind and then versioned: `pkg/plugin/velero/<plugin_kind>/<version>/`.
The following are examples of how each change may be treated:
#### Complete Interface Change
If the entire `ObjectStore` interface is being changed such that no previous methods are being included, a file would be added to `pkg/plugin/velero/objectstore/v2/` and would contain the new interface definition:
```
type ObjectStore interface {
	// Only include new methods that the new API version will support
	NewMethod()
	// ...
}
```
#### Method Addition
If a method is being added to the `ObjectStore` API, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows:
```
import "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1"
type ObjectStore interface {
// Import all the methods from the previous version of the API if they are to be included as is
v1.ObjectStore
// Provide definitions of any new methods
NewMethod()
```
#### Method Removal
If a method is being removed from the `ObjectStore` API, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows:
```
type ObjectStore interface {
	// Methods which are required from the previous API version must be included, for example
	Init(config)
	PutObject(bucket, key, body)
	// ...

	// Methods which are to be removed are not included
}
```
#### Method Signature modification
If a method signature in the `ObjectStore` API is being modified, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows:
```
type ObjectStore interface {
	// Methods which are required from the previous API version must be included, for example
	Init(config)
	PutObject(bucket, key, body)
	// ...

	// Provide new definitions for methods which are being modified
	List(bucket, prefix, newParameter)
}
```
### Proto Service Definitions
The proto service definitions of the plugins will also be versioned and arranged by their plugin kind.
Currently, all the proto definitions reside under `pkg/plugin/proto` in a file corresponding to their plugin kind.
These files will be rearranged to be grouped by kind and then versioned: `pkg/plugin/proto/<plugin_kind>/<version>`,
except for the current v1 plugins. Those will remain in their current package/location for backwards compatibility.
This will allow plugin images built with earlier versions of velero to work with the latest velero (for v1 plugins
only). The go_package option will be added to all proto service definitions to allow the proto compilation script
to place the generated go code for each plugin api version in the proper go package directory.
It is not possible to import an existing proto service into a new one, so any methods will need to be duplicated across versions if they are required by the new version.
The message definitions can be shared however, so these could be extracted from the service definition files and placed in a file that can be shared across all versions of the service.
### Plugin Framework
To allow plugins to register which versions of the API they implement, the plugin framework will need to be adapted to accept new versions.
Currently, the plugin manager stores a [`map[string]RestartableProcess`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/manager.go#L69), where the string key is the binary name for the plugin process (e.g. "velero-plugin-for-aws").
Each `RestartableProcess` contains a [`map[kindAndName]interface{}`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/restartable_process.go#L60) which represents each of the unique plugin implementations provided by that binary.
[`kindAndName`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/registry.go#L42) is a struct which combines the plugin kind (`ObjectStore`, `VolumeSnapshotter`) and the plugin name ("velero.io/aws", "velero.io/azure").
Each plugin version registration must be unique (to allow for multiple versions to be implemented within the same plugin binary).
This will be achieved by adding a specific registration method for each version to the Server interface in the plugin framework.
For example, if adding a V2 `RestoreItemAction` plugin, the Server interface would be modified to add the `RegisterRestoreItemActionV2` method.
This would require [adding a new plugin Kind const](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/framework/plugin_kinds.go#L28-L46) to represent the new plugin version, e.g. `PluginKindRestoreItemActionV2`.
It also requires the creation of a new implementation of the go-plugin interface ([example](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/framework/object_store.go)) to support that version and use the generated gRPC code from the proto definition (including a client and server implementation).
The Server will also need to be adapted to recognize this new plugin Kind and to serve the new implementation.
Existing plugin Kind consts and registration methods will be left unchanged and will correspond to the current version of the plugin APIs (assumed to be v1).
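To illustrate how versioned registration keeps implementations distinct, here is a toy model of the registry keyed by kind and name; the kind consts and types mirror the ones described above but are simplified stand-ins, not the framework's actual code.

```go
import "fmt"

// PluginKind mirrors the framework's plugin kind consts; the V2 const is the
// kind a new plugin API version would register under.
type PluginKind string

const (
	PluginKindObjectStore   PluginKind = "ObjectStore"
	PluginKindObjectStoreV2 PluginKind = "ObjectStoreV2"
)

// kindAndName is the registry key: one binary can serve several versions of
// the same named plugin because each version registers under its own kind.
type kindAndName struct {
	kind PluginKind
	name string
}

type registry struct {
	impls map[kindAndName]interface{}
}

func newRegistry() *registry {
	return &registry{impls: map[kindAndName]interface{}{}}
}

func (r *registry) register(kind PluginKind, name string, impl interface{}) error {
	key := kindAndName{kind: kind, name: name}
	if _, ok := r.impls[key]; ok {
		return fmt.Errorf("%s plugin %q is already registered", kind, name)
	}
	r.impls[key] = impl
	return nil
}
```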
### Plugin Manager
The plugin manager is responsible for managing the lifecycle of plugins.
It provides an interface which is used by Velero to retrieve an instance of a plugin kind with a specific name (e.g. `ObjectStore` with the name "velero.io/aws").
The manager contains a registry of all available plugins which is populated during the main Velero server startup.
When the plugin manager is requested to provide a particular plugin, it checks the registry for that plugin kind and name.
If it is available in the registry, the manager retrieves a `RestartableProcess` for the plugin binary, creating it if it does not already exist.
That `RestartableProcess` is then used by individual restartable implementations of a plugin kind (e.g. `restartableObjectStore`, `restartableVolumeSnapshotter`).
As new plugin versions are added, the plugin manager will be modified to always retrieve the latest version of a plugin kind.
This is to allow the remainder of the Velero codebase to assume that it will always interact with the latest version of a plugin.
If the latest version of a plugin is not available, it will attempt to fall back to previous versions and use an implementation adapted to the latest version if available.
It will be up to the author of new plugin versions to determine whether a previous version of a plugin can be adapted to work with the interface of the new version.
For each plugin kind, a new `Restartable<PluginKind>` struct will be introduced which will contain the plugin Kind and a function, `Get`, which will instantiate a restartable instance of that plugin kind and perform any adaptation required to make it compatible with the latest version.
For example, `RestartableObjectStore` or `RestartableVolumeSnapshotter`.
For each restartable plugin kind, a new function will be introduced which will return a slice of `Restartable<PluginKind>` objects, sorted by version in descending order.
The manager will iterate through the list of `Restartable<PluginKind>`s and will check the registry for the given plugin kind and name.
If the requested version is not found, it will skip and continue to iterate, attempting to fetch previous versions of the plugin kind.
Once the requested version is found, the `Get` function will be called, returning the restartable implementation of the latest version of that plugin Kind.
```
type RestartableObjectStore struct {
kind framework.PluginKind
// Get returns a restartable ObjectStore for the given name and process, wrapping if necessary
Get func(name string, restartableProcess RestartableProcess) v2.ObjectStore
}
func (m *manager) restartableObjectStores() []RestartableObjectStore {
return []RestartableObjectStore{
{
kind: framework.PluginKindObjectStoreV2,
Get: newRestartableObjectStoreV2,
},
{
kind: framework.PluginKindObjectStore,
Get: func(name string, restartableProcess RestartableProcess) v2.ObjectStore {
// Adapt the existing restartable v1 plugin to be compatible with the v2 interface
return newAdaptedV1ObjectStore(newRestartableObjectStore(name, restartableProcess))
},
},
}
}
// GetObjectStore returns a restartableObjectStore for name.
func (m *manager) GetObjectStore(name string) (v2.ObjectStore, error) {
name = sanitizeName(name)
for _, restartableObjStore := range m.restartableObjectStores() {
restartableProcess, err := m.getRestartableProcess(restartableObjStore.kind, name)
if err != nil {
// Check if plugin was not found
if errors.Is(err, &pluginNotFoundError{}) {
continue
}
return nil, err
}
return restartableObjStore.Get(name, restartableProcess), nil
}
return nil, fmt.Errorf("unable to get valid ObjectStore for %q", name)
}
```
If the previous version is not available, or can not be adapted to the latest version, it should not be included in the `restartableObjectStores` slice.
This will result in an error being returned as is currently the case when a plugin implementation for a particular kind and provider can not be found.
There are situations where it may be beneficial to check at the point where a plugin API call is made whether it implements a specific version of the API.
This is something that can be addressed with future amendments to this design, however it does not seem to be necessary at this time.
#### Plugin Adaptation
When a new plugin API version is being proposed, it will be up to the author and the maintainer team to determine whether older versions of an API can be safely adapted to the latest version.
An adaptation will implement the latest version of the plugin API interface but will use the methods from the version that is being adapted.
In cases where the method signatures remain the same, the adaptation layer will call through to the same method in the version being adapted.
Examples where an adaptation may be safe:
- A method signature is being changed to add a new parameter but the parameter could be optional (for example, adding a context parameter). The adaptation could call through to the method provided in the previous version but omit the parameter (see the sketch following these lists).
- A method signature is being changed to remove a parameter, but it is safe to pass a default value to the previous version. The adaptation could call through to the method provided in the previous version but use a default value for the parameter.
- A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to [wait for additional items to be ready](https://github.com/vmware-tanzu/velero/blob/main/design/wait-for-additional-items.md)). The adaptation would return a value which allows the existing behaviour to be performed.
- A method is being deleted as it is no longer used. The adaptation would call through to any methods which are still included but would omit the deleted method in the adaptation.
Examples where an adaptation may not be safe:
- A new method is added which is used to provide new critical functionality in Velero. If this functionality can not be replicated using existing plugin methods in previous API versions, this should not be adapted and instead the plugin manager should return an error indicating that the plugin implementation can not be found.
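To illustrate the first safe case above, a minimal sketch of an adapter for a hypothetical v2 `ObjectStore` whose methods gain a context parameter (the v2 interface shown here is assumed, not an existing API):
```
// adaptedV1ObjectStore wraps a v1 ObjectStore to satisfy a hypothetical
// v2 interface whose methods take a context.Context.
type adaptedV1ObjectStore struct {
    v1Store v1.ObjectStore
}

func (a *adaptedV1ObjectStore) PutObject(ctx context.Context, bucket, key string, body io.Reader) error {
    // The v1 method has no context parameter, so the context is dropped.
    return a.v1Store.PutObject(bucket, key, body)
}
```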
### Restartable Plugin Process
As new versions of plugins are added, new restartable implementations of plugins will also need to be created.
These are currently located within "pkg/plugin/clientmgmt" but will be rearranged to be grouped by kind and version like other plugin files.
## Versioning Considerations
It should be noted that if changes are being made to a plugin's API, it will only be necessary to bump the API version once within a release cycle, regardless of how many changes are made within that cycle.
This is because the changes will only be available to consumers when they upgrade to the next minor version of the Velero library.
New plugin API versions will not be introduced or backported to patch releases.
Once a new minor or major version of Velero has been released however, any further changes will need to follow the process above and use a new API version.
## Alternatives Considered
### Relying on gRPC's backwards compatibility when adding new methods
One approach for adapting the plugin APIs would have been to rely on the fact that adding methods to gRPC services is a backwards compatible change.
This approach would allow older clients to communicate with newer plugins as the existing interface would still be provided.
This was considered but ruled out because our current framework would require any plugin that recompiles against the latest version of the framework to adapt to the new version.
Also, without specific versioned interfaces, it would require checking plugin implementations at runtime for the specific methods that are supported.
## Compatibility
This design doc aims to allow plugin API changes to be made in a manner that may provide some backwards compatibility.
Older versions of Velero will not be able to make use of new plugin versions; however, they may continue to use previous versions of a plugin API if supported by the plugin.
All compatibility concerns are addressed earlier in the document.
## Implementation
This design document primarily outlines an approach to allow future plugin API changes to be made.
However, there are changes to the existing code base that will be made to allow plugin authors to more easily propose and introduce changes to these APIs.
* Plugin interface definitions (currently in `pkg/plugin/velero`) will be rearranged to be grouped by kind and then versioned: `pkg/plugin/velero/<plugin_kind>/<version>/`.
* Proto definitions (currently in `pkg/plugin/proto`) will be rearranged to be grouped by kind and then versioned: `pkg/plugin/proto/<plugin_kind>/<version>`.
* This will also require changes to the `make update` build task to correctly find the new proto location and output to the versioned directories.
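For example, after this rearrangement a hypothetical v2 `ObjectStore` would sit alongside v1 at paths like the following (file names are illustrative):
```
pkg/plugin/velero/objectstore/v1/object_store.go
pkg/plugin/velero/objectstore/v2/object_store.go
pkg/plugin/proto/objectstore/v1/ObjectStore.proto
pkg/plugin/proto/objectstore/v2/ObjectStore.proto
```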
It is anticipated that changes to the plugin APIs will be made as part of the 1.9 release cycle.
To assist with this work, an additional follow-up task to the ones listed above would be to prepare a V2 version of each of the existing plugins.
These new versions will not yet provide any new API methods but will provide a layout for new additions to be made.
## Open Issues

# Design for RestoreItemAction v2 API
## Abstract
This design includes the changes to the RestoreItemAction (RIA) API design as required by the [Item Action Progress Monitoring](general-progress-monitoring.md) feature.
It also includes changes as required by the [Wait For Additional Items](wait-for-additional-items.md) feature.
The RIA v2 interface will have three new methods, and the `RestoreItemActionExecuteOutput` struct returned from `Execute()` will have three optional fields added.
If there are any additional RIA API changes that are needed in the same Velero release cycle as this change, those can be added here as well.
## Background
This API change is needed to facilitate long-running plugin actions that may not be complete when the Execute() method returns.
It is an optional feature, so plugins which don't need this feature can simply return an empty operation ID and the new methods can be no-ops.
This will allow long-running plugin actions to continue in the background while Velero moves on to the next plugin, the next item, etc.
The other change allows Velero to wait until newly-restored AdditionalItems returned by a RIA plugin are ready before moving on to restoring the current item.
## Goals
- Allow for RIA Execute() to optionally initiate a long-running operation and report on operation status.
- Allow an RIA plugin to have Velero call back into it, waiting until AdditionalItems are ready before continuing with the restore.
## Non Goals
- Allowing Velero control over when the long-running operation begins.
## High-Level Design
As per the [Plugin Versioning](plugin-versioning.md) design, a new RIA v2 plugin `.proto` file will be created to define the gRPC interface.
v2 Go files will also be created in `plugin/clientmgmt/restoreitemaction` and `plugin/framework/restoreitemaction`, and a new PluginKind will be created.
Changes to RestoreItemActionExecuteOutput will be made to the existing struct.
Since the new fields are optional elements of the struct, the new enlarged struct will work with both v1 and v2 plugins.
The Velero restore process will be modified to reference v2 plugins instead of v1 plugins.
An adapter will be created so that any existing RIA v1 plugin can be executed as a v2 plugin when executing a restore.
## Detailed Design
### proto changes (compiled into Go by protoc)
The v2 RestoreItemAction.proto will be like the current v1 version with the following changes:
RestoreItemActionExecuteResponse gets three new fields (defined in the current (v1) RestoreItemAction.proto file):
```
message RestoreItemActionExecuteResponse {
bytes item = 1;
repeated ResourceIdentifier additionalItems = 2;
bool skipRestore = 3;
string operationID = 4;
bool waitForAdditionalItems = 5;
google.protobuf.Duration additionalItemsReadyTimeout = 6;
}
```
The RestoreItemAction service gets three new rpc methods:
```
service RestoreItemAction {
rpc AppliesTo(RestoreItemActionAppliesToRequest) returns (RestoreItemActionAppliesToResponse);
rpc Execute(RestoreItemActionExecuteRequest) returns (RestoreItemActionExecuteResponse);
rpc Progress(RestoreItemActionProgressRequest) returns (RestoreItemActionProgressResponse);
rpc Cancel(RestoreItemActionCancelRequest) returns (google.protobuf.Empty);
rpc AreAdditionalItemsReady(RestoreItemActionItemsReadyRequest) returns (RestoreItemActionItemsReadyResponse);
}
```
To support these new rpc methods, we define new request/response message types:
```
message RestoreItemActionProgressRequest {
string plugin = 1;
string operationID = 2;
bytes restore = 3;
}
message RestoreItemActionProgressResponse {
generated.OperationProgress progress = 1;
}
message RestoreItemActionCancelRequest {
string plugin = 1;
string operationID = 2;
bytes restore = 3;
}
message RestoreItemActionItemsReadyRequest {
string plugin = 1;
bytes restore = 2;
repeated ResourceIdentifier additionalItems = 3;
}
message RestoreItemActionItemsReadyResponse {
bool ready = 1;
}
```
One new shared message type will be needed, as defined in the v2 BackupItemAction design:
```
message OperationProgress {
    bool completed = 1;
    string err = 2;
    int64 nCompleted = 3;
    int64 nTotal = 4;
    string operationUnits = 5;
    string description = 6;
    google.protobuf.Timestamp started = 7;
    google.protobuf.Timestamp updated = 8;
}
```
In addition to the three new rpc methods added to the `RestoreItemAction` interface, there is also a new `Name()` method. It is only used internally by Velero to get the name that the plugin was registered with, but it must still be defined in a plugin which implements RestoreItemActionV2 in order to satisfy the interface. What it returns does not matter, as this method is not delegated to the plugin via RPC calls. The new (and modified) interface methods for `RestoreItemAction` are as follows:
```
type RestoreItemAction interface {
    ...
    Name() string
    ...
    Progress(operationID string, restore *api.Restore) (velero.OperationProgress, error)
    Cancel(operationID string, restore *api.Restore) error
    AreAdditionalItemsReady(additionalItems []velero.ResourceIdentifier, restore *api.Restore) (bool, error)
    ...
}

type RestoreItemActionExecuteOutput struct {
    UpdatedItem runtime.Unstructured
    AdditionalItems []ResourceIdentifier
    SkipRestore bool
    OperationID string
    WaitForAdditionalItems bool
    AdditionalItemsReadyTimeout time.Duration
}
```
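For illustration, a v2 plugin that starts a long-running operation and asks Velero to wait for its additional items might implement `Execute()` as follows (the helper is hypothetical):
```
func (p *myRestoreItemAction) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
    // startAsyncRestoreWork is a hypothetical helper that kicks off the
    // long-running operation and returns an ID for later Progress() calls.
    operationID := startAsyncRestoreWork(input.Item)
    return &velero.RestoreItemActionExecuteOutput{
        UpdatedItem:            input.Item,
        OperationID:            operationID,
        WaitForAdditionalItems: true,
    }, nil
}
```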
A new PluginKind, `RestoreItemActionV2`, will be created, and the restore process will be modified to use this plugin kind.
See [Plugin Versioning](plugin-versioning.md) for more details on implementation plans, including v1 adapters, etc.
## Compatibility
The included v1 adapter will allow any existing RestoreItemAction plugin to work as expected, with no-op AreAdditionalItemsReady(), Progress(), and Cancel() methods.
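A sketch of those no-op methods in the adapter (the type name is illustrative):
```
func (a *adaptedV1RestoreItemAction) Progress(operationID string, restore *api.Restore) (velero.OperationProgress, error) {
    // v1 plugins never start long-running operations, so any operation is
    // reported as already complete.
    return velero.OperationProgress{Completed: true}, nil
}

func (a *adaptedV1RestoreItemAction) Cancel(operationID string, restore *api.Restore) error {
    return nil // nothing to cancel for a v1 plugin
}

func (a *adaptedV1RestoreItemAction) AreAdditionalItemsReady(additionalItems []velero.ResourceIdentifier, restore *api.Restore) (bool, error) {
    return true, nil // v1 plugins never ask Velero to wait
}
```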
## Implementation
This will be implemented during the Velero 1.11 development cycle.
