Compare commits

..

61 Commits

Author SHA1 Message Date
Anshul Ahuja
89be8e00d3 Reset VolumeSnapshotRef in Backup Sync Flow (#8005)
Signed-off-by: Anshul Ahuja <anshulahuja@microsoft.com>
Co-authored-by: Anshul Ahuja <anshulahuja@microsoft.com>
2024-07-12 13:20:13 +05:30
Xun Jiang/Bruce Jiang
5158490834 Merge pull request #7914 from kaovilai/ignore.git-velero1.13
release-1.13: ignore .git/ when formatting
2024-06-24 21:51:19 +08:00
Tiger Kaovilai
a2a97c4da1 ignore .git dir when formatting
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-06-21 13:52:05 -04:00
Xun Jiang/Bruce Jiang
ac7e36abf9 Merge pull request #7789 from blackpiglet/cherry_pick_7515
[cherry-pick][1.13]Check whether the VolumeSnapshot's source PVC is nil before using it
2024-05-20 14:21:45 +08:00
Xun Jiang
40eed65bc3 Skip populate VolumeInfo for data-moved PV when CSI is not enabled.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-13 15:18:38 +08:00
Xun Jiang
5fc1de8858 Check whether the VolumeSnapshot's source PVC is nil before using it.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-05-13 15:15:42 +08:00
Wenkai Yin(尹文开)
6499444106 Merge pull request #7780 from Lyndon-Li/release-1.13
[1.13] Issue 7535: don't skip must have resources for label selector
2024-05-08 17:14:07 +08:00
Lyndon-Li
b02d5fbb7f issue 7535: don't skip must have resources for label selector
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-05-08 16:59:00 +08:00
Wenkai Yin(尹文开)
4d961fb6fe Merge pull request #7652 from ywk253100/240410_changelog
Add changelog for v1.13.2
2024-04-11 10:39:20 +08:00
Wenkai Yin(尹文开)
17da80ff6a Add changelog for v1.13.2
Add changelog for v1.13.2

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-11 09:51:49 +08:00
Wenkai Yin(尹文开)
8f7121d471 Merge pull request #7606 from blackpiglet/bump_golang_version
Bump Golang version, and bump protobuf version.
2024-04-11 09:21:40 +08:00
Xun Jiang
2400651557 Bump Golang version, and bump protobuf version.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-04-10 18:30:22 +08:00
qiuming
35177cdf46 Merge pull request #7644 from ywk253100/240409_list
[cherry-pick]Empty the list before next round of listing
2024-04-10 10:56:37 +08:00
Wenkai Yin(尹文开)
27a4bfc7ba Empty the list before next round of listing
Empty the list before next round of listing

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-09 17:35:32 +08:00
Xun Jiang/Bruce Jiang
2c57ed8cbf Merge pull request #7645 from ywk253100/240409_action
[cherry-pick]Upgrade codecov action to v4
2024-04-09 17:34:39 +08:00
Wenkai Yin(尹文开)
c35fd60d2b Upgrade codecov action to v4
Upgrade codecov action to v4

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-04-09 17:21:30 +08:00
qiuming
9f9464c5fd Merge pull request #7586 from Lyndon-Li/release-1.13
[1.13] Issue 7535: add the MustHave resource check during item collection and item filter for restore
2024-03-29 12:20:45 +08:00
lyndon-li
6bcd5bee7c Merge branch 'release-1.13' into release-1.13
2024-03-29 10:58:26 +08:00
Lyndon-Li
c9d7708cd9 issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 10:55:24 +08:00
Lyndon-Li
420a123105 issue 7535: don't exclude resources in MustHave list during restore
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-03-29 10:52:49 +08:00
Daniel Jiang
4142722b29 Merge pull request #7577 from ywk253100/240328_bump
[cherry-pick]Bump up the versions of severel Kubernetes-related libs
2024-03-28 16:26:32 +08:00
Wenkai Yin(尹文开)
7b95d58d1a Bump up the versions of severel Kubernetes-related libs
Bump up the versions of severel Kubernetes-related libs

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-28 15:00:23 +08:00
Wenkai Yin(尹文开)
ea5a89f83b Merge pull request #7500 from ywk253100/240307_1.13.1
Generate the changelog for release 1.13.1
2024-03-08 13:03:11 +08:00
Wenkai Yin(尹文开)
642924d2bd Generate the changelog for release 1.13.1
Generate the changelog for release 1.13.1

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-07 11:23:07 +08:00
lyndon-li
8dca539314 Merge pull request #7468 from blackpiglet/7464_fix_release_1.13
[release-1.13]Modify the label used by the restore CLI to filter the PVR.
2024-03-01 09:47:55 +08:00
Xun Jiang
a6a6da5a72 Modify the label used by the restore CLI to filter the PVR.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-29 10:21:57 +08:00
danfeng
99376a3de6 Merge pull request #7461 from danfengliu/bumpup-upgrade-path
bump up upgrade path to 1.13
2024-02-27 14:51:41 +08:00
danfeng
eed1c383c8 Merge branch 'release-1.13' into bumpup-upgrade-path
2024-02-27 14:39:48 +08:00
Xun Jiang/Bruce Jiang
941ad1a993 Merge pull request #7450 from allenxu404/release-1.13
[cherry-pick]adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time
2024-02-26 10:04:06 +08:00
allenxu404
02d229cd06 Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-26 09:26:04 +08:00
danfengl
c859f7bf11 bump up upgrade path to 1.13
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-23 06:42:29 +00:00
lyndon-li
e1222ffd74 Merge pull request #7459 from Lyndon-Li/release-1.13
[1.13] Issue 7308: change the data path requeue time to 5 second
2024-02-22 16:17:52 +08:00
Lyndon-Li
9cdaeadef3 issue 7308: change the data path requeue time to 5 second
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-22 16:02:35 +08:00
Wenkai Yin(尹文开)
cb7211d997 Merge pull request #7453 from ywk253100/240221_credential
[cherry-pick]Don't return error when no credential file found
2024-02-21 16:58:22 +08:00
Wenkai Yin(尹文开)
df08980618 Don't return error when no credential file found
Don't return error when no credential file found

Fixes #7395

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-21 16:05:15 +08:00
lyndon-li
51a90e7d2f Merge pull request #7399 from kaovilai/restic-recreate-repo-vel1.13
release-1.13: BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
2024-02-20 11:13:46 +08:00
lyndon-li
62a531785f Merge branch 'release-1.13' into restic-recreate-repo-vel1.13
2024-02-20 10:50:18 +08:00
qiuming
5dd1d3bfe5 Merge pull request #7407 from blackpiglet/fix_velero_repo_get_bug_1.13
[cherry-pick][release-1.13]Fix the `velero repo get` nil pointer issue.
2024-02-19 10:53:44 +08:00
Xun Jiang
701e786150 Fix the velero repo get nil pointer issue.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-08 14:31:59 +08:00
Tiger Kaovilai
170fcc53ba BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
* Add BackupRepositories invalidation on BSL Create
Simplify comments

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* Simplify

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-06 16:35:40 -05:00
Xun Jiang/Bruce Jiang
44aa6a7c6b Merge pull request #7372 from blackpiglet/add_uploader_config_for_schedule_v1.13
Add `ParallelFilesUpload` for schedule creation.
2024-01-31 15:42:04 +08:00
Xun Jiang
2a9f4fa576 Add ParallelFilesUpload for schedule creation.
Modify restore-helper print information.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-31 13:35:10 +08:00
Wenkai Yin(尹文开)
4d27ca99c1 Merge pull request #7369 from qiuming-best/release-1.13
[Cherry-Pick] Fix server start failure when no default BSL
2024-01-30 17:10:45 +08:00
Ming Qiu
8914c7209b Fix server start failure when no default BSL
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-30 08:33:54 +00:00
Wenkai Yin(尹文开)
76670e940c Merge pull request #7351 from ywk253100/240124_log
Log the error details
2024-01-24 13:54:27 +08:00
Wenkai Yin(尹文开)
25d977e5bc Log the error details
Log the error details

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-24 12:43:59 +08:00
qiuming
94c7d4b6d4 Merge pull request #7346 from ywk253100/240122_changelog
Check whether the API resource exists before creating the informer cache
2024-01-24 10:47:16 +08:00
Wenkai Yin(尹文开)
09401c8454 Check whether the API resource exists before creating the informer cache
Check whether the API resource exists before creating the informer cache

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 17:19:09 +08:00
qiuming
981d64a1b8 Merge pull request #7338 from ywk253100/240122_changelog
Move unreleased changelogs to 1.13 changelog
2024-01-23 10:19:56 +08:00
Wenkai Yin(尹文开)
16b8b8da72 Move unreleased changelogs to 1.13 changelog
Move unreleased changelogs to 1.13 changelog

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 10:06:15 +08:00
lyndon-li
9fd73b2d13 Merge pull request #7339 from ywk253100/240122_log_erro
Log the error got from the discovery helper
2024-01-22 14:11:38 +08:00
Wenkai Yin(尹文开)
c377e472e8 Log the error got from the discovery helper
Log the error got from the discovery helper

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-22 11:12:00 +08:00
Wenkai Yin(尹文开)
f5714cb636 [cherry-pick]Do not attempt restore resource with no available GVK in cluster (#7336)
* Specify the Kind explicitly in the API resource

Specify the Kind explicitly in the API resource to avoid wrong Kind conversion


* Do not attempt restore resource with no available GVK in cluster (#7322)

Check for GVK before attempting restore.


---------

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-01-22 10:51:36 +08:00
Wenkai Yin(尹文开)
5ffa12189b Merge pull request #7328 from ywk253100/240118_release_node
Add release note for the informer cache memory consumption
2024-01-18 15:27:43 +08:00
Wenkai Yin(尹文开)
1882be763e Add release note for the informer cache memory consumption
Add release note for the informer cache memory consumption

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-18 13:47:34 +08:00
Wenkai Yin(尹文开)
42bbf87197 Merge pull request #7325 from ywk253100/240116_informer
Create informer per resources to avoid huge memory consumption
2024-01-18 10:44:15 +08:00
Wenkai Yin(尹文开)
8aa6a8e59d Create informer per resources to avoid huge memory consumption
Create informer per resources to avoid huge memory consumption

Fixes #7323

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-17 22:37:49 +08:00
Xun Jiang/Bruce Jiang
fdb29819b4 Merge pull request #7304 from blackpiglet/fix_7268_release_1.13
Add detail for parameter s3ForcePathStyle in MinIO page.
2024-01-15 13:31:30 +08:00
Xun Jiang
74f225037c Add detail for parameter s3ForcePathStyle in MinIO page.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-12 16:55:38 +08:00
Wenkai Yin(尹文开)
6e90e628aa Merge pull request #7303 from ywk253100/240110_pin
Pin the version of Golang and base image
2024-01-10 17:52:51 +08:00
Wenkai Yin(尹文开)
46f64f2f98 Pin the version of Golang and base image
Pin the version of Golang and base image

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 17:35:28 +08:00
1373 changed files with 28605 additions and 127685 deletions


@@ -13,10 +13,10 @@ reviewers:
- reasonerjt
- ywk253100
- blackpiglet
- qiuming-best
- shubham-pampattiwar
- Lyndon-Li
- anshulahuja98
- kaovilai
tech-writer:
- sseago


@@ -1,14 +1,5 @@
version: 2
updates:
# Dependencies listed in .github/workflows
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
labels:
- "Dependencies"
- "github_actions"
- "kind/changelog-not-required"
# Dependencies listed in go.mod
- package-ecosystem: "gomod"
directory: "/" # Location of package manifests

33 .github/labeler.yml vendored

@@ -1,33 +0,0 @@
# This file is used by Auto Label PRs action.
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- changed-files:
- any-glob-to-any-file: design/*
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- changed-files:
- any-glob-to-any-file: pkg/plugins/**/*
Dependencies:
- changed-files:
- any-glob-to-any-file: go.mod
Documentation:
- changed-files:
- any-glob-to-any-file: site/content/docs/**/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- all:
- changed-files:
- any-glob-to-any-file: site/**/*
- all-globs-to-all-files: '!site/content/docs/**/*'
has-changelog:
- changed-files:
- any-glob-to-any-file: changelogs/**
has-e2e-2tests:
- changed-files:
- any-glob-to-any-file: test/e2e/**/*
has-unit-tests:
- changed-files:
- any-glob-to-any-file: pkg/**/*_test.go

43 .github/labels.yaml vendored

@@ -1,43 +0,0 @@
# This file is used by [prow github action](https://github.com/jpmcb/prow-github-actions/) in .github/workflows/prow-action.yml.
# This file only has values for kind and area commands.
area:
- CLI
- CSI
- Cloud/AWS
- Cloud/Azure
- Cloud/DigitalOcean
- Cloud/GCP
- Cloud/vSphere
- Design
- Documentation
- Filters
- Plugins
- Process
- Storage/Minio
- Storage/Cinder
- WindowsSupport
- datamover
- fs-backup
- fs-backup/deletion
- fs-backup/file-selectable
- fs-uploader
- kopia-integration
- migration
- multi-tenancy
- progress-monitoring
- resilience
- schedule
- storage/IBM-ObjectStorage
- upgrade
- volume-snapshot-dm
kind:
- changelog-not-required
- question
- refactor
- requirement
- release-note
- release-blocker
- spike
- tech-debt
- usage-error
- voting

41 .github/labels.yml vendored Normal file

@@ -0,0 +1,41 @@
area:
- "Cloud/AWS"
- "Cloud/GCP"
- "Cloud/Azure"
- "Design"
- "Plugins"
# Labels that can be applied to PRs with the /kind command
kind:
- "changelog-not-required"
- "tech-debt"
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- design/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- any: ["site/**/*", "!site/content/docs/**/*"]
Documentation:
- site/content/docs/**/*
Dependencies:
- go.mod
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- "pkg/plugins/**/*"
has-unit-tests:
- "pkg/**/*_test.go"
has-e2e-2tests:
- "test/e2e/**/*"
has-changelog:
- "changelogs/**"


@@ -9,5 +9,5 @@ Fixes #(issue)
# Please indicate you've done the following:
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file (`make new-changelog`)](https://velero.io/docs/main/code-standards/#adding-a-changelog) or comment `/kind changelog-not-required` on this PR.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required` as a comment on this pull request.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.


@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set the author of a PR as the assignee
uses: kentaro-m/auto-assign-action@v2.0.0
uses: kentaro-m/auto-assign-action@v1.1.1
with:
configuration-path: ".github/auto-assignees.yml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"


@@ -13,7 +13,7 @@ jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@v3
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/labeler.yml
configuration-path: .github/labels.yml


@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Request a PR review based on files types/paths, and/or groups the author belongs to
uses: necojackarc/auto-request-review@v0.13.0
uses: necojackarc/auto-request-review@v0.7.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
config: .github/auto-assignees.yml

93 .github/workflows/crds-verify-kind.yaml vendored Normal file

@@ -0,0 +1,93 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21.9'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all Kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
- 1.23.6
- 1.24.2
- 1.25.3
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
version: "v0.17.0"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -
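
The crd-check job above amounts to generating Velero's CRDs with the CLI and applying them to each kind cluster version. A minimal local sketch of the same check, assuming kind and kubectl are installed and the CLI was built with `make local` (the cluster name and Kubernetes version below are illustrative):

# Rough local equivalent of the crd-check job (values are illustrative assumptions).
K8S_VERSION=1.25.3
kind create cluster --name crd-check --image "kindest/node:v${K8S_VERSION}"
kubectl cluster-info
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -
kind delete cluster --name crd-check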


@@ -6,43 +6,42 @@ on:
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
needs: get-go-version
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go version
uses: actions/setup-go@v6
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version: '1.21.9'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number and the commit SHA
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built image
id: image-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./velero.tar
# The cache key a combination of the current PR number and the commit SHA
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cli-cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cli-cache.outputs.cache-hit != 'true' || steps.image-cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cli-cache.outputs.cache-hit != 'true'
@@ -52,107 +51,61 @@ jobs:
- name: Build Velero Image
if: steps.image-cache.outputs.cache-hit != 'true'
run: |
IMAGE=velero VERSION=pr-test BUILD_OUTPUT_TYPE=docker make container
docker save velero:pr-test-linux-amd64 -o ./velero.tar
# Check and build MinIO image once for all e2e tests
- name: Check Bitnami MinIO Dockerfile version
id: minio-version
run: |
DOCKERFILE_SHA=$(curl -s https://api.github.com/repos/bitnami/containers/commits?path=bitnami/minio/2025/debian-12/Dockerfile\&per_page=1 | jq -r '.[0].sha')
echo "dockerfile_sha=${DOCKERFILE_SHA}" >> $GITHUB_OUTPUT
- name: Cache MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ steps.minio-version.outputs.dockerfile_sha }}
- name: Build MinIO Image from Bitnami Dockerfile
if: steps.minio-cache.outputs.cache-hit != 'true'
run: |
echo "Building MinIO image from Bitnami Dockerfile..."
git clone --depth 1 https://github.com/bitnami/containers.git /tmp/bitnami-containers
cd /tmp/bitnami-containers/bitnami/minio/2025/debian-12
docker build -t bitnami/minio:local .
docker save bitnami/minio:local > ${{ github.workspace }}/minio-image.tar
# Create json of k8s versions to test
# from guide: https://stackoverflow.com/a/65094398/4590470
setup-test-matrix:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set k8s versions
id: set-matrix
# everything excluding older tags. limits needs to be high enough to cover all latest versions
# and test labels
# grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" filters for v1.25 to v9.99
# and removes older patches of the same minor version
# awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}'
run: |
echo "matrix={\
\"k8s\":$(wget -q -O - "https://hub.docker.com/v2/namespaces/kindest/repositories/node/tags?page_size=50" | grep -o '"name": *"[^"]*' | grep -o '[^"]*$' | grep -v -E "alpha|beta" | grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" | awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}' | sort -r | sed s/v//g | jq -R -c -s 'split("\n")[:-1]'),\
\"labels\":[\
\"Basic && (ClusterResource || NodePort || StorageClass)\", \
\"ResourceFiltering && !Restic\", \
\"ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources\", \
\"(NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)\"\
]}" >> $GITHUB_OUTPUT
IMAGE=velero VERSION=pr-test make container
docker save velero:pr-test -o ./velero.tar
# Run E2E test against all Kubernetes versions on kind
run-e2e-test:
needs:
- build
- setup-test-matrix
- get-go-version
needs: build
runs-on: ubuntu-latest
strategy:
matrix: ${{fromJson(needs.setup-test-matrix.outputs.matrix)}}
matrix:
k8s:
- 1.19.16
- 1.20.15
- 1.21.12
- 1.22.9
- 1.23.6
- 1.24.0
- 1.25.3
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21.9'
id: go
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ needs.build.outputs.minio-dockerfile-sha }}
- name: Load MinIO Image
run: |
echo "Loading MinIO image..."
docker load < ./minio-image.tar
uses: actions/checkout@v2
- name: Install MinIO
run: |
docker run -d --rm -p 9000:9000 -e "MINIO_ROOT_USER=minio" -e "MINIO_ROOT_PASSWORD=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:local
- uses: engineerd/setup-kind@v0.6.2
run:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
- uses: engineerd/setup-kind@v0.5.0
with:
skipClusterLogsExport: true
version: "v0.27.0"
version: "v0.17.0"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./_output/bin/linux/amd64/velero
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built Image
id: image-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./velero.tar
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Load Velero Image
run:
kind load image-archive velero.tar
# always try to fetch the cached go modules as the e2e test needs it either
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Run E2E test
run: |
cat << EOF > /tmp/credential
@@ -165,27 +118,17 @@ jobs:
curl -LO https://dl.k8s.io/release/v${{ matrix.k8s }}/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
git clone https://github.com/vmware-tanzu-experiments/distributed-data-generator.git -b main /tmp/kibishii
GOPATH=~/go \
CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws \
BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential \
BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws \
ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential \
ADDITIONAL_BSL_BUCKET=additional-bucket \
VELERO_IMAGE=velero:pr-test-linux-amd64 \
PLUGINS=velero/velero-plugin-for-aws:latest \
GINKGO_LABELS="${{ matrix.labels }}" \
KIBISHII_DIRECTORY=/tmp/kibishii/kubernetes/yaml/ \
make -C test/ run-e2e
GOPATH=~/go CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential ADDITIONAL_BSL_BUCKET=additional-bucket \
GINKGO_FOCUS='Basic\]\[ClusterResource' VELERO_IMAGE=velero:pr-test \
make -C test/e2e run
timeout-minutes: 30
- name: Upload debug bundle
if: ${{ failure() }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: DebugBundle
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*
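
The setup-test-matrix job earlier in this file's diff builds the list of kind node versions by filtering Docker Hub tags. As a worked illustration of that pipeline (the tag list here is hypothetical; the workflow fetches the live list):

printf '%s\n' v1.27.3 v1.27.0-alpha.1 v1.26.6 v1.25.11 v1.25.3 v1.24.7 \
  | grep -v -E "alpha|beta" \
  | grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" \
  | awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}' \
  | sort -r | sed s/v//g \
  | jq -R -c -s 'split("\n")[:-1]'
# With tags listed newest-first, this drops pre-releases, keeps v1.25 and later,
# takes one tag per minor version, and prints ["1.27.3","1.26.6","1.25.11"] here.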


@@ -1,33 +0,0 @@
on:
workflow_call:
inputs:
ref:
description: "The target branch's ref"
required: true
type: string
outputs:
version:
description: "The expected Go version"
value: ${{ jobs.extract.outputs.version }}
jobs:
extract:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.pick-version.outputs.version }}
steps:
- name: Check out the code
uses: actions/checkout@v5
- id: pick-version
run: |
if [ "${{ inputs.ref }}" == "main" ]; then
version=$(grep '^go ' go.mod | awk '{print $2}' | cut -d. -f1-2)
else
goDirectiveVersion=$(grep '^go ' go.mod | awk '{print $2}')
toolChainVersion=$(grep '^toolchain ' go.mod | awk '{print $2}')
version=$(printf "%s\n%s\n" "$goDirectiveVersion" "$toolChainVersion" | sort -V | tail -n1)
fi
echo "version=$version"
echo "version=$version" >> $GITHUB_OUTPUT


@@ -13,13 +13,13 @@ jobs:
# maintain the versions of Velero those need security scan
versions: [main]
# list of images that need scan
images: [velero, velero-plugin-for-aws, velero-plugin-for-gcp, velero-plugin-for-microsoft-azure]
images: [velero, velero-restore-helper]
permissions:
security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v3
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
@@ -31,6 +31,6 @@ jobs:
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: 'trivy-results.sarif'


@@ -12,7 +12,7 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v5
uses: actions/checkout@v2
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'kind/changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}


@@ -1,30 +1,30 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run CI
needs: get-go-version
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go version
uses: actions/setup-go@v6
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version: '1.21.9'
id: go
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Make ci
run: make ci
- name: Upload test coverage
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out


@@ -8,14 +8,14 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v5
uses: actions/checkout@v2
- name: Codespell
uses: codespell-project/actions-codespell@master
with:
# ignore the config/.../crd.go file as it's generated binary data that is edited elsewhere.
# ignore the config/.../crd.go file as it's generated binary data that is edited elswhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go,./config/crd/v2alpha1/crds/crds.go,./go.sum,./LICENSE
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast,notin,sme,optin,sie
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast
check_filenames: true
check_hidden: true


@@ -13,18 +13,18 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v3
name: Checkout
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v1
with:
version: latest


@@ -14,7 +14,7 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v3
name: Checkout
- name: Verify .goreleaser.yml and try a dryrun release.


@@ -1,32 +1,14 @@
name: Pull Request Linter Check
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run Linter Check
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Check out the code
uses: actions/checkout@v2
- name: Linter check
uses: golangci/golangci-lint-action@v8
with:
version: v2.1.1
args: --verbose
- name: Linter check
run: make lint


@@ -9,21 +9,12 @@ jobs:
execute:
runs-on: ubuntu-latest
steps:
- uses: jpmcb/prow-github-actions@v1.1.3
- uses: jpmcb/prow-github-actions@v1.1.2
with:
# Only support /kind command for now.
# TODO: before allowing the /lgtm command, see if we can block merging if changelog labels are missing.
prow-commands: |
/approve
/area
/assign
/cc
/close
/hold
prow-commands: "/area
/kind
/milestone
/retitle
/remove
/reopen
/uncc
/unassign
/cc
/uncc"
github-token: "${{ secrets.GITHUB_TOKEN }}"


@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v2
with:
# The default value is "1" which fetches only a single commit. If we merge PR without squash or rebase,
# there are at least two commits: the first one is the merge commit and the second one is the real commit


@@ -9,55 +9,95 @@ on:
- '*'
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.ref }}
build:
name: Build
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v5
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21.9'
id: go
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
- name: Build
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker system prune -a --force
- uses: actions/checkout@v3
# Fix issue of setup-gcloud
- run: |
sudo apt-get install python2.7
export CLOUDSDK_PYTHON="/usr/bin/python2"
- uses: google-github-actions/setup-gcloud@v0
with:
version: '285.0.0'
service_account_key: ${{ secrets.GCS_SA_KEY }}
export_default_credentials: true
- run: gcloud info
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Build
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Use the JSON key in secret to login gcr.io
- uses: 'docker/login-action@v2'
with:
registry: 'gcr.io' # or REGION.docker.pkg.dev
username: '_json_key'
password: '${{ secrets.GCR_SA_KEY }}'
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker system prune -a --force
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
./hack/docker-push.sh
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
VERSION=$(./hack/docker-push.sh | grep 'VERSION:' | awk -F: '{print $2}' | xargs)
# Upload Velero image package to GCS
source hack/ci/build_util.sh
BIN=velero
RESTORE_HELPER_BIN=velero-restore-helper
GCS_BUCKET=velero-builds
VELERO_IMAGE=${BIN}-${VERSION}
VELERO_RESTORE_HELPER_IMAGE=${RESTORE_HELPER_BIN}-${VERSION}
VELERO_IMAGE_FILE=${VELERO_IMAGE}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_FILE=${VELERO_RESTORE_HELPER_IMAGE}.tar.gz
VELERO_IMAGE_BACKUP_FILE=${VELERO_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE=${VELERO_RESTORE_HELPER_IMAGE}-'build.'${GITHUB_RUN_NUMBER}.tar.gz
cp ${VELERO_IMAGE_FILE} ${VELERO_IMAGE_BACKUP_FILE}
cp ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE}
uploader ${VELERO_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_FILE} ${GCS_BUCKET}
uploader ${VELERO_IMAGE_BACKUP_FILE} ${GCS_BUCKET}
uploader ${VELERO_RESTORE_HELPER_IMAGE_BACKUP_FILE} ${GCS_BUCKET}


@@ -9,10 +9,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v5
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.8
uses: cirrus-actions/rebase@1.3.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v10.0.0
- uses: actions/stale@v6.0.1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
@@ -20,4 +20,4 @@ jobs:
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security,backlog"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security"

11 .gitignore vendored

@@ -53,13 +53,4 @@ tilt-resources/cloud
# test generated files
test/e2e/report.xml
coverage.out
__debug_bin*
debug.test*
# make lint cache
.cache/
# Go telemetry directory created when container sets HOME to working directory
# This happens because Makefile uses 'docker run -w /github.com/vmware-tanzu/velero'
# and Go's os.UserConfigDir() falls back to $HOME/.config when XDG_CONFIG_HOME is unset
.config/
__debug_bin*


@@ -1,438 +0,0 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 0
timeout: 20m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# by default isn't set. If set we pass it to "go list -mod={option}". From "go help modules":
# If invoked with -mod=readonly, the go command is disallowed from the implicit
# automatic updating of go.mod described above. Instead, it fails when any changes
# to go.mod are needed. This setting is most useful to check that go.mod does
# not need updates, such as in a continuous integration and testing system.
# If invoked with -mod=vendor, the go command assumes that the vendor
# directory holds the correct copies of dependencies and ignores
# the dependency descriptions in go.mod.
# modules-download-mode: readonly|release|vendor
modules-download-mode: readonly
# Allow multiple parallel golangci-lint instances running.
# If false (default) - golangci-lint acquires file lock on start.
allow-parallel-runners: false
# output configuration options
output:
formats:
text:
path: stdout
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# Show statistics per linter.
show-stats: false
linters:
# all available settings of specific linters
settings:
depguard:
rules:
main:
deny:
# specify an error message to output when a denylisted package is used
- pkg: github.com/sirupsen/logrus
desc: "logging is allowed only by logutils.Log"
dogsled:
# checks assignments with too many blank identifiers; default is 2
max-blank-identifiers: 2
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: false
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: false
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: false
funlen:
lines: 60
statements: 40
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
nestif:
# minimal complexity of if statements to report, 5 by default
min-complexity: 4
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 5
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
# To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
# By default list of stable checks is used.
settings: # settings passed to gocritic
captLocal: # must be valid enabled check name
paramsOnly: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
godot:
# check all top-level comments, not only declarations
check-all: false
godox:
# report any comments starting with keywords, this is useful for TODO or FIXME comments that
# might be left in the code accidentally and should be resolved before merging
keywords: # default keywords are TODO, BUG, and FIXME, these can be overwritten by this setting
- NOTE
- OPTIMIZE # marks code that should be optimized before merging
- HACK # marks hack-arounds that should be removed before merging
gosec:
excludes:
- G115
govet:
# enable or disable analyzers by name
enable:
- atomicalign
enable-all: false
disable:
- shadow
disable-all: false
importas:
alias:
- alias: appsv1api
pkg: k8s.io/api/apps/v1
- alias: corev1api
pkg: k8s.io/api/core/v1
- alias: rbacv1
pkg: k8s.io/api/rbac/v1
- alias: apierrors
pkg: k8s.io/apimachinery/pkg/api/errors
- alias: apiextv1
pkg: k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1
- alias: metav1
pkg: k8s.io/apimachinery/pkg/apis/meta/v1
- alias: storagev1api
pkg: k8s.io/api/storage/v1
- alias: batchv1api
pkg: k8s.io/api/batch/v1
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
ignore-rules:
- someword
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
nolintlint:
# Enable to ensure that nolint directives are all used. Default is true.
allow-unused: false
# Exclude following linters from requiring an explanation. Default is [].
allow-no-explanation: []
# Enable to require an explanation of nonzero length after each nolint directive. Default is false.
require-explanation: true
# Enable to require nolint directives to mention the specific linter being suppressed. Default is false.
require-specific: true
perfsprint:
strconcat: false
sprintf1: false
errorf: false
int-conversion: true
revive:
rules:
- name: blank-imports
disabled: true
- name: context-as-argument
disabled: true
- name: context-keys-type
- name: dot-imports
disabled: true
- name: early-return
disabled: true
arguments:
- "preserveScope"
- name: empty-block
disabled: true
- name: error-naming
disabled: true
- name: error-return
disabled: true
- name: error-strings
disabled: true
- name: errorf
disabled: true
- name: increment-decrement
- name: indent-error-flow
disabled: true
- name: range
- name: receiver-naming
disabled: true
- name: redefines-builtin-id
disabled: true
- name: superfluous-else
disabled: true
arguments:
- "preserveScope"
- name: time-naming
- name: unexported-return
disabled: true
- name: unnecessary-stmt
- name: unreachable-code
- name: unused-parameter
disabled: true
- name: use-any
- name: var-declaration
- name: var-naming
disabled: true
rowserrcheck:
packages:
- github.com/jmoiron/sqlx
staticcheck:
checks:
- all
- -QF1001 # FIXME
- -QF1003 # FIXME
- -QF1004 # FIXME
- -QF1007 # FIXME
- -QF1008 # FIXME
- -QF1009 # FIXME
- -QF1012 # FIXME
testifylint:
# TODO: enable them all
disable:
- float-compare
- go-require
enable-all: true
testpackage:
# regexp pattern to skip files
skip-regexp: (export|internal)_test\.go
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
usetesting:
os-setenv: false
whitespace:
multi-if: false # Enforces newlines (or comments) after every multi-line if statement
multi-func: false # Enforces newlines (or comments) after every multi-line function signature
wsl:
# If true append is only allowed to be cuddled if appending value is
# matching variables, fields or types on line above. Default is true.
strict-append: true
# Allow calls and assignments to be cuddled as long as the lines have any
# matching variables, fields or types. Default is true.
allow-assign-and-call: true
# Allow multiline assignments to be cuddled. Default is true.
allow-multiline-assign: true
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
# Allow trailing comments in ending of blocks
allow-trailing-comment: false
# Force newlines in end of case at this limit (0 = never).
force-case-trailing-whitespace: 0
# Force cuddling of err checks with err var assignment
force-err-cuddling: false
# Allow leading comments to be separated with empty lines
allow-separated-leading-comment: false
default: none
enable:
- asasalint
- asciicheck
- bidichk
- bodyclose
- copyloopvar
- dogsled
- dupword
- durationcheck
- errcheck
- errchkjson
- exptostd
- ginkgolinter
- goconst
- goheader
- goprintffuncname
- gosec
- govet
- importas
- ineffassign
- misspell
- nakedret
- nilerr
- noctx
- nolintlint
- nosprintfhostport
- perfsprint
- revive
- staticcheck
- testifylint
- thelper
- unconvert
- unparam
- unused
- usestdlibvars
- usetesting
- whitespace
exclusions:
# which dirs to skip: issues from them won't be reported;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but default dirs are skipped independently
# from this option's value (see skip-dirs-use-default).
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
paths:
- pkg/plugin/generated/*
- third_party
rules:
- linters:
- staticcheck
text: "DefaultVolumesToRestic" # No need to report deprecate for DefaultVolumesToRestic.
- path: ".*_test.go$"
linters:
- errcheck
- goconst
- gosec
- govet
- staticcheck
- unparam
- unused
- path: test/
linters:
- errcheck
- goconst
- gosec
- nilerr
- staticcheck
- unparam
- unused
- path: ".*data_upload_controller_test.go$"
linters:
- dupword
text: "type"
- path: ".*config_test.go$"
linters:
- dupword
text: "bucket"
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
issues:
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
# make issues output unique by line, default is true
uniq-by-line: true
# This file contains all available configuration options
# with their default values.
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- pkg/plugin/generated/*
- third_party
settings:
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
goimports:
local-prefixes:
- github.com/vmware-tanzu/velero
severity:
default: error
# Default value is empty list.
# When a list of severity rules are provided, severity information will be added to lint
# issues. Severity rules have the same filtering capability as exclude rules except you
# are allowed to specify one matcher per severity rule.
# Only affects out formats that support setting severity information.
rules:
- linters:
- dupl
severity: info
version: "2"


@@ -26,23 +26,18 @@ builds:
- arm
- arm64
- ppc64le
- s390x
ignore:
# don't build arm for darwin and arm/arm64 for windows
- goos: darwin
goarch: arm
- goos: darwin
goarch: ppc64le
- goos: darwin
goarch: s390x
- goos: windows
goarch: arm
- goos: windows
goarch: arm64
- goos: windows
goarch: ppc64le
- goos: windows
goarch: s390x
ldflags:
- -X "github.com/vmware-tanzu/velero/pkg/buildinfo.Version={{ .Tag }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.GitSHA={{ .FullCommit }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.GitTreeState={{ .Env.GIT_TREE_STATE }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.ImageRegistry={{ .Env.REGISTRY }}"
archives:
@@ -51,6 +46,9 @@ archives:
files:
- LICENSE
- examples/**/*
# Add the setting to resolve the DEPRECATED warning. Actually, Velero's case is not affected by the rlcp behavior change.
# https://github.com/orgs/goreleaser/discussions/3659#discussioncomment-4587257
rlcp: true
checksum:
name_template: 'CHECKSUM'
release:
@@ -65,4 +63,4 @@ git:
# tags if there are more than one tag in the same commit.
#
# Default: `-version:refname`
tag_sort: -version:creatordate
tag_sort: -version:creatordate


@@ -63,7 +63,7 @@ Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to pe
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br>
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a full-featured, scalable, cloud-native solution providing Kubernetes data protection, disaster recovery, and migration as a service. An option to manage existing Velero instances and an enterprise self-hosted option are also available.<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a simple, scalable, cloud-native solution providing data protection and disaster recovery as a service. This solution is built using Kubernetes for protecting Kubernetes clusters.<br>
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>


@@ -1,9 +1,7 @@
## Current release:
* [CHANGELOG-1.15.md][25]
* [CHANGELOG-1.13.md][23]
## Older releases:
* [CHANGELOG-1.14.md][24]
* [CHANGELOG-1.13.md][23]
* [CHANGELOG-1.12.md][22]
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.10.md][20]
@@ -28,8 +26,6 @@
* [CHANGELOG-0.3.md][1]
[25]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.15.md
[24]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.14.md
[23]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.13.md
[22]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.12.md
[21]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.11.md


@@ -5,7 +5,7 @@
We as members, contributors, and leaders pledge to make participation in the Velero project and our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socioeconomic status,
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.


@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.21.9-bookworm as velero-builder
ARG GOPROXY
ARG BIN
@@ -42,16 +42,13 @@ RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-restore-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-restore-helper && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.21.9-bookworm as restic-builder
ARG GOPROXY
ARG BIN
ARG TARGETOS
ARG TARGETARCH
@@ -73,7 +70,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.19
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
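
This Dockerfile produces the image that the CI jobs in this compare build and test. A local build-and-load sketch using the commands already shown in the e2e workflow hunk (the image tag differs between the two sides of that hunk, velero:pr-test versus velero:pr-test-linux-amd64):

# Local image build and load into kind, mirroring the e2e workflow above.
IMAGE=velero VERSION=pr-test make container
docker save velero:pr-test -o ./velero.tar
kind load image-archive velero.tar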


@@ -1,57 +0,0 @@
# Copyright the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
ARG PKG
ARG VERSION
ARG REGISTRY
ARG GIT_SHA
ARG GIT_TREE_STATE
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE} -X ${PKG}/pkg/buildinfo.ImageRegistry=${REGISTRY}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN}.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-restore-helper.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-restore-helper && \
go build -o /output/velero-helper.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Velero image packing section
FROM mcr.microsoft.com/windows/nanoserver:${OS_VERSION}
COPY --from=velero-builder /output /
USER ContainerUser


@@ -107,29 +107,6 @@ Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Deprecation Policy
### Deprecation Process
Any contributor may introduce a request to deprecate a feature or an option of a feature by opening a feature request issue in the vmware-tanzu/velero GitHub project. The issue should describe why the feature is no longer needed or has become detrimental to Velero, as well as whether and how it has been superseded. The submitter should give as much detail as possible.
Once the issue is filed, a one-month discussion period begins. Discussions take place within the issue itself as well as in the community meetings. The person who opens the issue, or a maintainer, should add the date and time marking the end of the discussion period in a comment on the issue as soon as possible after it is opened. A decision on the issue needs to be made within this one-month period.
The feature will be deprecated by a supermajority vote of 50% plus one of the project maintainers at the time of the vote tallying, which takes place 72 hours after the community meeting that ends the comment period. (Maintainers are permitted to vote in advance of the deadline, but should hold their votes as close to it as possible in order to hear all possible discussion.) Votes will be tallied in comments on the issue.
Non-maintainers may add non-binding votes in comments to the issue as well; these are opinions to be taken into consideration by maintainers, but they do not count as votes.
If the vote passes, the deprecation window takes effect in the subsequent release, and the removal follows the schedule.
### Schedule
If a deprecation proposal passes by supermajority vote, the feature is deprecated in the next minor release and can be removed completely after two minor versions (or the equivalent if the major version changes), e.g., if a feature is deprecated in the Nth minor version, then it can be removed after the N+2 minor version or its equivalent if the major version number changes.
### Deprecation Window
The deprecation window is the period from the release in which the deprecation takes effect through the release in which the feature is removed. During this period, only critical security vulnerabilities and catastrophic bugs should be fixed.
**Note:** If a backup relies on a deprecated feature, then backups made with the last Velero release before the feature is removed must still be restorable in version `n+2`. For something like restic feature support, for instance, that might mean that restic is removed from the list of supported uploader types in version `n`, but the underlying implementation required to restore from a restic backup won't be removed until release `n+2`.
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.

View File

@@ -10,10 +10,10 @@
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -26,8 +26,7 @@
* Bridget McErlean ([zubron](https://github.com/zubron))
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))
## Velero Contributors & Stakeholders
| Feature Area | Lead |

Makefile
View File

@@ -22,26 +22,15 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
# In order to push images to an insecure registry, follow the two steps:
# 1. Set "INSECURE_REGISTRY=true"
# 2. Provide your own buildx builder instance by setting "BUILDX_INSTANCE=your-own-builder-instance"
# The builder can be created with the following command:
# cat << EOF > buildkitd.toml
# [registry."insecure-registry-ip:port"]
# http = true
# insecure = true
# EOF
# docker buildx create --name=velero-builder --driver=docker-container --bootstrap --use --config ./buildkitd.toml
# Refer to https://github.com/docker/buildx/issues/1370#issuecomment-1288516840 for more details
INSECURE_REGISTRY ?= false
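A minimal end-to-end sketch of the two steps described in the comments above (the registry address and builder name are illustrative, not taken from this Makefile):
```bash
# Step 1: describe the insecure registry to buildkit.
cat << EOF > buildkitd.toml
[registry."192.168.0.10:5000"]
  http = true
  insecure = true
EOF

# Create and use a dedicated buildx builder that trusts that registry.
docker buildx create --name=velero-builder --driver=docker-container \
  --bootstrap --use --config ./buildkitd.toml

# Step 2: point the build at the registry, enable INSECURE_REGISTRY and
# reuse the builder instance created above.
make container REGISTRY=192.168.0.10:5000/velero \
  INSECURE_REGISTRY=true BUILDX_INSTANCE=velero-builder
```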
GCR_REGISTRY ?= gcr.io/velero-gcp
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
GCR_IMAGE ?= $(GCR_REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
VELERO_DOCKERFILE_WINDOWS ?= Dockerfile-Windows
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
@@ -65,7 +54,7 @@ endif
BUILDER_IMAGE := $(REGISTRY)/build-image:$(BUILDER_IMAGE_TAG)
BUILDER_IMAGE_CACHED := $(shell docker images -q ${BUILDER_IMAGE} 2>/dev/null )
HUGO_IMAGE := ghcr.io/gohugoio/hugo
HUGO_IMAGE := hugo-builder
# Which architecture to build - see $(ALL_ARCH) for options.
# if the 'local' rule is being run, detect the ARCH from 'go env'
@@ -79,21 +68,13 @@ TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION) $(GCR_IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
GCR_IMAGE_TAGS ?= $(GCR_IMAGE):$(VERSION)
endif
# check buildx is enabled only if docker is in path
# macOS/Windows docker cli without Docker Desktop license: https://github.com/abiosoft/colima
# To add buildx to docker cli: https://github.com/abiosoft/colima/discussions/273#discussioncomment-2684502
ifeq ($(shell which docker 2>/dev/null 1>&2 && docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
# if emulated docker cli from podman, assume enabled
# emulated docker cli from podman: https://podman-desktop.io/docs/migrating-from-docker/emulating-docker-cli-with-podman
# podman known issues:
# - on remote podman, such as on macOS,
# --output issue: https://github.com/containers/podman/issues/15922
else ifeq ($(shell which docker 2>/dev/null 1>&2 && cat $(shell which docker) | grep -c "exec podman"), 1)
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
else
BUILDX_ENABLED ?= false
@@ -103,32 +84,13 @@ define BUILDX_ERROR
buildx not enabled, refusing to run this recipe
see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
endef
# comma cannot be escaped and can only be used in Make function arguments by putting into variable
comma=,
# The version of restic binary to be downloaded
RESTIC_VERSION ?= 0.15.0
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le linux-s390x
BUILD_OUTPUT_TYPE ?= docker
BUILD_OS ?= linux
BUILD_ARCH ?= amd64
BUILD_WINDOWS_VERSION ?= ltsc2022
ifeq ($(BUILD_OUTPUT_TYPE), docker)
ALL_OS = linux
ALL_ARCH.linux = $(word 2, $(subst -, ,$(shell go env GOOS)-$(shell go env GOARCH)))
else
ALL_OS = $(subst $(comma), ,$(BUILD_OS))
ALL_ARCH.linux = $(subst $(comma), ,$(BUILD_ARCH))
endif
ALL_ARCH.windows = $(if $(filter windows,$(ALL_OS)),amd64,)
ALL_OSVERSIONS.windows = $(if $(filter windows,$(ALL_OS)),$(BUILD_WINDOWS_VERSION),)
ALL_OS_ARCH.linux = $(foreach os, $(filter linux,$(ALL_OS)), $(foreach arch, ${ALL_ARCH.linux}, ${os}-$(arch)))
ALL_OS_ARCH.windows = $(foreach os, $(filter windows,$(ALL_OS)), $(foreach arch, $(ALL_ARCH.windows), $(foreach osversion, ${ALL_OSVERSIONS.windows}, ${os}-${osversion}-${arch})))
ALL_OS_ARCH = $(ALL_OS_ARCH.linux)$(ALL_OS_ARCH.windows)
ALL_IMAGE_TAGS = $(IMAGE_TAGS)
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)
@@ -146,26 +108,27 @@ platform_temp = $(subst -, ,$(ARCH))
GOOS = $(word 1, $(platform_temp))
GOARCH = $(word 2, $(platform_temp))
GOPROXY ?= https://proxy.golang.org
GOBIN=$$(pwd)/.go/bin
# If you want to build all binaries, see the 'all-build' rule.
# If you want to build all containers, see the 'all-containers' rule.
all:
@$(MAKE) build
@$(MAKE) build BIN=velero-restore-helper
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers:
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restore-helper
local: build-dirs
# Add DEBUG=1 to enable debug locally
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -182,7 +145,6 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
$(MAKE) shell CMD="-c '\
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -221,38 +183,11 @@ container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
ifeq ($(BUILDX_INSTANCE),)
@echo creating a buildx instance
-docker buildx rm velero-builder || true
@docker buildx create --use --name=velero-builder
else
@echo using a specified buildx instance $(BUILDX_INSTANCE)
@docker buildx use $(BUILDX_INSTANCE)
endif
@mkdir -p _output
@for osarch in $(ALL_OS_ARCH); do \
$(MAKE) container-$${osarch}; \
done
ifeq ($(BUILD_OUTPUT_TYPE), registry)
@for tag in $(ALL_IMAGE_TAGS); do \
IMAGE_TAG=$${tag} $(MAKE) push-manifest; \
done
endif
container-linux-%:
@BUILDX_ARCH=$* $(MAKE) container-linux
container-linux:
@echo "building container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-linux-$(BUILDX_ARCH).tar,)" \
--platform="linux/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-linux-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
$(addprefix -t , $(GCR_IMAGE_TAGS)) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
@@ -261,54 +196,14 @@ container-linux:
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE) .
@echo "built container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
container-windows-%:
@BUILDX_OSVERSION=$(firstword $(subst -, ,$*)) BUILDX_ARCH=$(lastword $(subst -, ,$*)) $(MAKE) container-windows
container-windows:
@echo "building container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH).tar,)" \
--platform="windows/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
--build-arg=OS_VERSION=$(BUILDX_OSVERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE_WINDOWS) .
@echo "built container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
push-manifest:
@echo "building manifest: $(IMAGE_TAG) for $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})"
@docker manifest create --amend --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG) $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})
@set -x; \
for arch in $(ALL_ARCH.windows); do \
for osversion in $(ALL_OSVERSIONS.windows); do \
BASEIMAGE=mcr.microsoft.com/windows/nanoserver:$${osversion}; \
full_version=`docker manifest inspect --insecure=$(INSECURE_REGISTRY) $${BASEIMAGE} | jq -r '.manifests[0].platform["os.version"]'`; \
docker manifest annotate --os windows --arch $${arch} --os-version $${full_version} $(IMAGE_TAG) $(IMAGE_TAG)-windows-$${osversion}-$${arch}; \
done; \
done
@echo "pushing manifest $(IMAGE_TAG)"
@docker manifest push --purge --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
@echo "pushed manifest $(IMAGE_TAG):"
@docker manifest inspect --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
@echo "container: $(IMAGE):$(VERSION)"
ifeq ($(BUILDX_OUTPUT_TYPE)_$(REGISTRY), registry_velero)
docker pull $(IMAGE):$(VERSION)
rm -f $(BIN)-$(VERSION).tar
docker save $(IMAGE):$(VERSION) -o $(BIN)-$(VERSION).tar
gzip -f $(BIN)-$(VERSION).tar
endif
SKIP_TESTS ?=
test: build-dirs
@@ -451,7 +346,7 @@ release:
serve-docs: build-image-hugo
docker run \
--rm \
-v "$$(pwd)/site:/project" \
-v "$$(pwd)/site:/srv/hugo" \
-it -p 1313:1313 \
$(HUGO_IMAGE) \
server --bind=0.0.0.0 --enableGitInfo=false
@@ -462,29 +357,11 @@ gen-docs:
.PHONY: test-e2e
test-e2e: local
$(MAKE) -e VERSION=$(VERSION) -C test/ run-e2e
$(MAKE) -e VERSION=$(VERSION) -C test/e2e run
.PHONY: test-perf
test-perf: local
$(MAKE) -e VERSION=$(VERSION) -C test/ run-perf
$(MAKE) -e VERSION=$(VERSION) -C test/perf run
go-generate:
go generate ./pkg/...
# requires an authenticated gh cli
# gh: https://cli.github.com/
# First create a PR
# gh pr create --title 'Title name' --body 'PR body'
# by default uses PR title as changelog body but can be overwritten like so
# make new-changelog CHANGELOG_BODY="Changes you have made"
new-changelog: GH_LOGIN ?= $(shell gh pr view --json author --jq .author.login 2> /dev/null)
new-changelog: GH_PR_NUMBER ?= $(shell gh pr view --json number --jq .number 2> /dev/null)
new-changelog: CHANGELOG_BODY ?= '$(shell gh pr view --json title --jq .title)'
new-changelog:
@if [ "$(GH_LOGIN)" = "" ]; then \
echo "branch does not have PR or cli not logged in, try 'gh auth login' or 'gh pr create'"; \
exit 1; \
fi
@mkdir -p ./changelogs/unreleased/ && \
echo $(CHANGELOG_BODY) > ./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN) && \
echo \"$(CHANGELOG_BODY)\" added to "./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN)"
go generate ./pkg/...

OWNERS
View File

@@ -1,24 +0,0 @@
# This file is used by the [PROW action](https://github.com/jpmcb/prow-github-actions) to approve and merge PRs.
# The file's format follows the [OWNERS SPEC](https://www.kubernetes.dev/docs/guide/owners/#owners-spec).
# List of usernames who may use /lgtm
reviewers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100
# List of usernames who may use /approve
approvers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100

View File

@@ -40,19 +40,18 @@ See [the list of releases][6] to find out about feature changes.
The following is a list of the supported Kubernetes versions for each Velero version.
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|----------------------------------------|
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
| 1.10 | 1.18-latest | 1.22.5, 1.23.8, 1.24.6 and 1.25.1 |
| 1.9 | 1.18-latest | 1.20.5, 1.21.2, 1.22.5, 1.23, and 1.24 |
| 1.8 | 1.18-latest | |
Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version. If you have a question about test coverage before v1.9, please reach out in the [#velero-users](https://kubernetes.slack.com/archives/C6VCGP4MT) Slack channel.
If you are interested in using a different version of Kubernetes with a given Velero version, we'd recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.

View File

@@ -12,13 +12,13 @@ The Velero project maintains the following [governance document](https://github.
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the Security Team (velero-security.pdl@broadcom.com).
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the email address with the details of the vulnerability. The email will be fielded by the Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
@@ -29,7 +29,7 @@ Provide a descriptive subject line and in the body of the email include the foll
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the Security Team can reproduce it.
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
@@ -49,7 +49,7 @@ Provide a descriptive subject line and in the body of the email include the foll
## Patch, Release, and Disclosure
The Security Team will respond to vulnerability reports as follows:
The VMware Security Team will respond to vulnerability reports as follows:
@@ -62,7 +62,7 @@ The Security Team will respond to vulnerability reports as follows:
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than making the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report date to public disclosure date to be on the order of 14 business days. The Security Team holds the final say when setting a public disclosure date.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report date to public disclosure date to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
@@ -79,7 +79,7 @@ The Security Team will also publish any mitigating steps users can take until th
* Use velero-security.pdl@broadcom.com to report security concerns to the Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
@@ -107,11 +107,11 @@ To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the Security Team (velero-security.pdl@broadcom.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
@@ -123,6 +123,6 @@ Send new membership requests to projectvelero-distributors@googlegroups.com. In
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.24 as tilt-helper
FROM golang:1.21.9 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -1,3 +1,45 @@
## v1.13.2
### 2024-04-17
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.2
### Container Image
`velero/velero:v1.13.2`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### All changes
* Bump up the versions of several Kubernetes-related libs (#7577, @ywk253100)
* Fix issue #7535, add the MustHave resource check during item collection and item filter for restore (#7586, @Lyndon-Li)
* Bump Golang version, and bump protobuf version (#7606, @blackpiglet)
## v1.13.1
### 2024-03-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.1
### Container Image
`velero/velero:v1.13.1`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### All changes
* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7459, @Lyndon-Li)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7399, @kaovilai)
* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
## v1.13
### 2024-01-10
@@ -72,6 +114,7 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
* After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support the backup generated by the older version of Velero, the old logic is also kept. The support for the backup without the VolumeInfo metadata file will be kept for two releases. The support logic will be deleted in the v1.15 release.
### All Changes
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck (#7336, @kaovilai)
* Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
* Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)

View File

@@ -1,105 +0,0 @@
## v1.14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.14.0
### Container Image
`velero/velero:v1.14.0`
### Documentation
https://velero.io/docs/v1.14/
### Upgrading
https://velero.io/docs/v1.14/upgrade-to-1.14/
### Highlights
#### The maintenance work for kopia/restic backup repositories is run in jobs
Since velero started using kopia as the approach for filesystem-level backup/restore, we've noticed an issue: when velero connects to the kopia backup repositories and performs maintenance, it sometimes consumes excessive memory, which can cause the velero pod to get OOM killed. To mitigate this issue, the maintenance work will be moved out of the velero pod to a separate kubernetes job, and the user will be able to specify the resource request in "velero install".
#### Volume Policies are extended to support more actions to handle volumes
In an earlier release, a flexible volume policy was introduced to skip certain volumes from a backup. In v1.14 we've made enhancements to this policy to allow the user to set how the volumes should be backed up. The user will be able to set "fs-backup" or "snapshot" as the value of "action" in the policy, and velero will back up the volumes accordingly. This enhancement allows the user to achieve fine-grained control like "opt-in/out" without having to update the target workload. For more details please refer to https://velero.io/docs/v1.14/resource-filtering/#supported-volumepolicy-actions
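As a hedged illustration of the extended policy (the configmap layout, field names, and the `--resource-policies-configmap` flag follow the linked documentation as understood here and are not guaranteed by this changelog), a policy forcing `fs-backup` for `gp2` volumes might look like:
```bash
# Illustrative only: back up gp2 volumes via fs-backup.
cat << EOF > resource-policies.yaml
version: v1
volumePolicies:
  - conditions:
      storageClass:
        - gp2
    action:
      type: fs-backup
EOF

kubectl -n velero create configmap fs-backup-policy --from-file=resource-policies.yaml

# Reference the policy configmap when creating a backup.
velero backup create demo-backup --resource-policies-configmap fs-backup-policy
```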
#### Node Selection for Data Movement Backup
In velero the data movement flow relies on datamover pods, and these pods may take substantial resources and keep running for a long time. In v1.14, the user will be able to create a configmap to define the eligible nodes on which the datamover pods are launched. For more details refer to https://velero.io/docs/v1.14/data-movement-backup-node-selection/
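A rough sketch of the node selection configmap mentioned above (the configmap name `node-agent-config`, the `loadAffinity` key, and the node label are assumptions based on the linked document):
```bash
# Illustrative only: restrict datamover pods to nodes carrying a given label.
cat << EOF > node-agent-config.json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "backup-role": "datamover"
                }
            }
        }
    ]
}
EOF

kubectl -n velero create configmap node-agent-config --from-file=node-agent-config.json
```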
#### VolumeInfo metadata for restored volumes
In v1.13, we introduced volumeinfo metadata for backup to help the velero CLI and downstream adopters understand how velero handles each volume during backup. In v1.14, similar metadata will be persisted for each restore. The velero CLI is also updated to include more info in the output of "velero restore describe".
#### "Finalizing" phase is introduced to restores
The "Finalizing" phase is added to the state transition flow to restore, which helps us fix several issues: The labels added to PVs will be restored after the data in the PV is restored via volumesnapshotter. The post restore hook will be executed after datamovement is finished.
#### Certificate-based authentication support for Azure
Besides the service principal with secret (password)-based authentication, Velero introduces new support for service principal with certificate-based authentication in v1.14.0. This approach enables you to adopt phishing-resistant authentication by using conditional access policies, which better protects Azure resources and is the approach recommended by Azure.
### Runtime and dependencies
* Golang runtime: v1.22.2
* kopia: v0.17.0
### Limitations/Known issues
* For the external BackupItemAction plugins that take snapshots for PVs, such as the vSphere plugin: if the plugin checks the value of the field "snapshotVolumes" in the backup spec as a criterion for snapshotting, the settings in the volume policy will not take effect. For example, if "snapshotVolumes" is set to False in the backup spec but a volume meets the condition in the volume policy for the "snapshot" action, the plugin will not take a snapshot of the volume, because the plugin does not check the settings in the volume policy. For more details please refer to #7818
### Breaking changes
* CSI plugin has been merged into the velero repo in the v1.14 release. It will be installed by default as an internal plugin, and should not be installed via the "plugins" parameter in the "velero install" command.
* The default resource requests and limitations for node agent are removed in v1.14, to make the node agent pods have the QoS class of "BestEffort", more details please refer to #7391
* There's a change in namespace filtering behavior during backup: in v1.14, when the includedNamespaces/excludedNamespaces fields are not set and the labelSelector/OrLabelSelectors are set in the backup spec, the backup will only include the namespaces which contain the resources that match the label selectors, while in previous releases all namespaces would be included in the backup with such settings. For more details refer to #7105
* Patching the PV in the "Finalizing" state may cause the restore to be in "PartiallyFailed" state when the PV is blocked in "Pending" state, while in the previous release the restore may end up being in "Complete" state. For more details refer to #7866
### All Changes
* Fix backup log to show error string, not index (#7805, @piny940)
* Modify the volume helper logic. (#7794, @blackpiglet)
* Add documentation for extension of volume policy feature (#7779, @shubham-pampattiwar)
* Surface errors when waiting for backupRepository and timeout occurs (#7762, @kaovilai)
* Add existingResourcePolicy restore CR validation to controller (#7757, @kaovilai)
* Fix condition matching in resource modifier when there are multiple rules (#7715, @27149chen)
* Bump up the version of KinD and k8s in github actions (#7702, @reasonerjt)
* Implementation for Extending VolumePolicies to support more actions (#7664, @shubham-pampattiwar)
* Migrate from `github.com/Azure/azure-storage-blob-go` to `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` (#7598, @mmorel-35)
* When Included/ExcludedNamespaces are omitted, and LabelSelector or OrLabelSelector is used, namespaces without selected items are excluded from backup. (#7697, @blackpiglet)
* Display CSI snapshot restores in restore describe (#7687, @reasonerjt)
* Use specific credential rather than the credential chain for Azure (#7680, @ywk253100)
* Modify hook docs for clarity on displaying hook execution results (#7679, @allenxu404)
* Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase (#7619, @allenxu404)
* migrating to `sdk/resourcemanager/**/arm**` from `services/**/mgmt/**` (#7596, @mmorel-35)
* Bump up to go1.22 (#7666, @reasonerjt)
* Fix issue #7648. Adjust the exposing logic to avoid exposing failure and snapshot leak when expose fails (#7662, @Lyndon-Li)
* Track and persist restore volume info (#7630, @reasonerjt)
* Check the existence of the namespaces provided in the "--include-namespaces" option (#7569, @ywk253100)
* Add the finalization phase to the restore workflow (#7377, @allenxu404)
* Upgrade the version of go plugin related libs/tools (#7373, @ywk253100)
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck. (#7322, @kaovilai)
* Merge CSI plugin code into Velero. (#7609, @blackpiglet)
* Fix issue #7391, remove the default constraint for node-agent pods (#7488, @Lyndon-Li)
* Fix DataDownload fails during restore for empty PVC workload (#7521, @qiuming-best)
* Add repository maintenance job (#7451, @qiuming-best)
* Check whether the VolumeSnapshot's source PVC is nil before using it.
Skip populate VolumeInfo for data-moved PV when CSI is not enabled. (#7515, @blackpiglet)
* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7458, @Lyndon-Li)
* Patch newly dynamically provisioned PV with volume info to restore custom setting of PV (#7504, @allenxu404)
* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
* dependabot: support github-actions updates (#7594, @mmorel-35)
* Include the design for adding the finalization phase to the restore workflow (#7317, @allenxu404)
* Fix issue #7211. Enable advanced feature capability and add support to concatenate objects for unified repo. (#7452, @Lyndon-Li)
* Add design to introduce restore volume info (#7610, @reasonerjt)
* Increase the k8s client QPS/burst to avoid throttling request errors (#7311, @ywk253100)
* Support update the backup VolumeInfos by the Async ops result. (#7554, @blackpiglet)
* FS backup created a PodVolumeBackup even when the backup excluded the PVC,
so logic was added to skip the PVC volume type when the PVC is not included in the backup resources to be backed up. (#7472, @sbahar619)
* Respect and use `credentialsFile` specified in BSL.spec.config when IRSA is configured over Velero Pod Environment credentials (#7374, @reasonerjt)
* Move the native snapshot definition code into internal directory (#7544, @blackpiglet)
* Fix issue #7036. Add the implementation of node selection for data mover backups (#7437, @Lyndon-Li)
* Fix issue #7535, add the MustHave resource check during item collection and item filter for restore (#7585, @Lyndon-Li)
* build(deps): bump json-patch to v5.8.0 (#7584, @mmorel-35)
* Add confirm flag to velero plugin add (#7566, @kaovilai)
* do not skip unknown gvr at the beginning and get new gr when kind is changed (#7523, @27149chen)
* Fix snapshot leak for backup (#7558, @qiuming-best)
* For issue #7036, add the document for data mover node selection (#7640, @Lyndon-Li)
* Add design for Extending VolumePolicies to support more actions (#6956, @shubham-pampattiwar)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380, @kaovilai)
* Improve the concurrency for PVBs in different pods (#7571, @ywk253100)
* Bump up Kopia to v0.16.0 and open kopia repo with no index change (#7559, @Lyndon-Li)
* Bump up the versions of several Kubernetes-related libs (#7489, @ywk253100)
* Make parallel restore configurable (#7512, @qiuming-best)
* Support certificate-based authentication for Azure (#7549, @ywk253100)
* Fix issue #7281, batch delete snapshots in the same repo (#7438, @Lyndon-Li)
* Add CRD name to error message when it is not ready to use (#7295, @josemarevalo)
* Add the design for node selection for data mover backup (#7383, @Lyndon-Li)
* Bump up aws-sdk to latest version to leverage Pod Identity credentials. (#7307, @guikcd)
* Fix issue #7246. Document the behavior for repo snapshot deletion (#7622, @Lyndon-Li)
* Fix issue #7583, set backupName optional for Restore CRD (#7617, @Lyndon-Li)

View File

@@ -1,145 +0,0 @@
## v1.15
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0
### Container Image
`velero/velero:v1.15.0`
### Documentation
https://velero.io/docs/v1.15/
### Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
### Highlights
#### Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits such as:
- This avoids accessing volume data through the host path, since host path access is privileged and may involve security escalations, which concern users.
- This enables users to control resource (i.e., cpu, memory) allocations in a granular manner, e.g., control them per backup/restore of a volume.
- This enhances resilience: a crash of one data movement activity won't affect others.
- This prevents unnecessary full backups caused by host path changes after workload pods restart.
- For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
#### Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized in the same item block and item blocks could be processed concurrently in multiple threads.
ItemBlockAction plugin is introduced to help Velero to categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs and Velero also supports customized IBAs for any resources.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though item block concepts and IBA plugins are fully supported. The multi-thread support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
#### Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes that run repository maintenance jobs, so that you can run them on idle nodes or keep them away from nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
#### Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to read-only mount the backupPVCs. In this way, the data mover expose process could be significantly accelerated for some storages (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
#### Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storageclass used by the data mover backupPods. In this way, the provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs, e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
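A hedged sketch of the backup PVC configuration described in the two sections above (the `backupPVC` key, the per-source-storage-class layout, and the `storageClass`/`readOnly` fields are assumptions based on the linked document):
```bash
# Illustrative only: for volumes whose source PVC uses "source-sc", provision the
# backupPVC from "backup-sc" and mount it read-only to speed up the expose step.
cat << EOF > node-agent-config.json
{
    "backupPVC": {
        "source-sc": {
            "storageClass": "backup-sc",
            "readOnly": true
        }
    }
}
EOF

kubectl -n velero create configmap node-agent-config --from-file=node-agent-config.json
```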
#### Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes the root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. In this way, if your pod doesn't have enough space in its root file system, the pod won't be evicted due to running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
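A hedged sketch of the backup repository configuration configMap (the configmap name, the per-repository-type data key, and the `cacheLimitMB` field are assumptions based on the linked document, not statements from this changelog):
```bash
# Illustrative only: cap the client-side cache of kopia-backed repositories at 2 GiB.
cat << EOF > kopia.json
{
    "cacheLimitMB": 2048
}
EOF

kubectl -n velero create configmap backup-repository-config --from-file=kopia=kopia.json
```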
#### Performance improvements
In 1.15, several performance related issues/enhancements are included, which makes significant performance improvements in specific scenarios:
- There was a memory leak of Velero server after plugin calls, now it is fixed, see issue https://github.com/vmware-tanzu/velero/issues/7925
- The `client-burst/client-qps` parameters are automatically inherited to plugins, so that you can use the same velero server parameters to accelerate the plugin executions when large number of API server calls happen, see issue https://github.com/vmware-tanzu/velero/issues/7806
- Maintenance of the Kopia repository takes huge memory in scenarios where a huge number of files have been backed up; Velero 1.15 has included the Kopia upstream enhancement to fix the problem, see issue https://github.com/vmware-tanzu/velero/issues/7510
### Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
### Limitations/Known issues
#### Read-only backup PVC may not work on SELinux environments
Due to an upstream Kubernetes issue, if a volume is mounted as read-only in SELinux environments, the read privilege is not granted to any user; as a result, the data mover backup will fail. On the other hand, the backupPVC must be mounted as read-only in order to accelerate the data mover expose process.
Therefore, a user option is added in the same backup PVC configuration configMap; once the option is enabled, the backupPod container will run as a super privileged container and disable SELinux access control. If you have concerns about this super privileged container, or you have configured [pod security admissions](https://kubernetes.io/docs/concepts/security/pod-security-admission/) and don't allow super privileged containers, you will not be able to use this read-only backupPVC feature and will lose the benefit of accelerating the data mover expose process.
### Breaking changes
#### Deprecation of Restic
The Restic path for fs-backup enters its deprecation process starting from 1.15. According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy), for 1.15, if the Restic path is used, backups/restores of fs-backup are still created and succeed, but you will see warnings in the scenarios below:
- When `--uploader-type=restic` is used in Velero installation
- When Restic path is used to create backup/restore of fs-backup
#### node-agent configuration name is configurable
Previously, a fixed name was used to look up the node-agent configuration configMap. In 1.15, Velero allows you to customize the name of this configMap; the customized name must be passed to the node-agent server via the `node-agent-configmap` parameter.
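For example, the parameter could be added to the node-agent daemonset args (the daemonset name and args index are assumptions about a typical install; only the `node-agent-configmap` parameter name comes from the text above):
```bash
# Illustrative only: tell node-agent to read its configuration from "my-node-agent-config".
kubectl -n velero patch daemonset node-agent --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--node-agent-configmap=my-node-agent-config"}]'
```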
#### Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places:
```
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
```
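A hedged sketch of what the repository maintenance job configuration configMap might contain (the configmap name and all field names here are assumptions based on the repository-maintenance documentation linked earlier, not guaranteed by this changelog):
```bash
# Illustrative only: keep the 3 most recent maintenance jobs and bound their resources.
cat << EOF > repo-maintenance-job-config.json
{
    "global": {
        "keepLatestMaintenanceJobs": 3,
        "podResources": {
            "cpuRequest": "100m",
            "cpuLimit": "200m",
            "memRequest": "100Mi",
            "memLimit": "200Mi"
        }
    }
}
EOF

kubectl -n velero create configmap repo-maintenance-job-config \
  --from-file=repo-maintenance-job-config.json
```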
#### Changing PVC selected-node feature is deprecated
In 1.15, the [Changing PVC selected-node feature](https://velero.io/docs/v1.15/restore-reference/#changing-pvc-selected-node) enters deprecation process and will be removed in future releases according to [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy). Usage of this feature for any purpose is not recommended.
### All Changes
* add no-relabeling option to backupPVC configmap (#8288, @sseago)
* only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
* Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
* Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
* Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
* Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
* Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
* Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
* Pass Velero server command args to the plugins (#8166, @ywk253100)
* Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
* Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
* Add document for data mover micro service (#8144, @Lyndon-Li)
* Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
* Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
* Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
* Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
* Modify E2E and perf test report generated directory (#8129, @blackpiglet)
* Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
* Delete generated k8s client and informer. (#8114, @blackpiglet)
* Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
* ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
* Fix issue #8032, make node-agent configMap name configurable (#8097, @Lyndon-Li)
* Fix issue #8072, add the warning messages for restic deprecation (#8096, @Lyndon-Li)
* Fix issue #7620, add backup repository configuration implementation and support cacheLimit configuration for Kopia repo (#8093, @Lyndon-Li)
* Patch dbr's status when error happens (#8086, @reasonerjt)
* According to design #7576, after node-agent restarts, if a DU/DD is in InProgress status, re-capture the data mover ms pod and continue the execution (#8085, @Lyndon-Li)
* Updates to IBM COS documentation to match current version (#8082, @gjanders)
* Data mover micro service DUCR/DDCR controller refactor according to design #7576 (#8074, @Lyndon-Li)
* add retries with timeout to existing patch calls that moves a backup/restore from InProgress/Finalizing to a final status phase. (#8068, @kaovilai)
* Data mover micro service restore according to design #7576 (#8061, @Lyndon-Li)
* Internal ItemBlockAction plugins (#8054, @sseago)
* Data mover micro service backup according to design #7576 (#8046, @Lyndon-Li)
* Avoid wrapping failed PVB status with empty message. (#8028, @mrnold)
* Created new ItemBlockAction (IBA) plugin type (#8026, @sseago)
* Make PVPatchMaximumDuration timeout configurable (#8021, @shubham-pampattiwar)
* Reuse existing plugin manager for get/put volume info (#8012, @sseago)
* Data mover ms watcher according to design #7576 (#7999, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7988, @Lyndon-Li)
* For issue #7700 and #7747, add the design for backup PVC configurations (#7982, @Lyndon-Li)
* Only get VolumeSnapshotClass when DataUpload exists. (#7974, @blackpiglet)
* Fix issue #7972, sync the backupPVC deletion in expose clean up (#7973, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7969, @blackpiglet)
* Check whether the volume's source is PVC before fetching its PV. (#7967, @blackpiglet)
* Check whether the namespaces specified in namespace filter exist. (#7965, @blackpiglet)
* Add design for backup repository configurations for issue #7620, #7301 (#7963, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7955, @Lyndon-Li)
* Skip PV patch step in Restoe workflow for WaitForFirstConsumer VolumeBindingMode Pending state PVCs (#7953, @shubham-pampattiwar)
* Fix issue #7904, add the deprecation and limitation clarification for change PVC selected-node feature (#7948, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7944, @blackpiglet)
* Don't consider unschedulable pods unrecoverable (#7899, @sseago)
* Upgrade to robfig/cron/v3 to support time zone specification. (#7793, @kaovilai)
* Add the result in the backup's VolumeInfo. (#7775, @blackpiglet)
* Migrate from github.com/golang/protobuf to google.golang.org/protobuf (#7593, @mmorel-35)
* Add the design for data mover micro service (#7576, @Lyndon-Li)
* Descriptive restore error when restoring into a terminating namespace. (#7424, @kaovilai)
* Ignore missing path error in conditional match (#7410, @seanblong)
* Propose a deprecation process for velero (#5532, @shubham-pampattiwar)

View File

@@ -1,156 +0,0 @@
## v1.16
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0
### Container Image
`velero/velero:v1.16.0`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### Highlights
#### Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, either stateful or stateless:
* Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image for hybrid CPU architecture and hybrid platform. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
* Deployment in Windows clusters: Velero node-agent, data mover pods and maintenance jobs now support running on both Linux and Windows nodes
* Data mover backup/restore Windows workloads: Velero built-in data mover supports Windows workloads throughout its full cycle, i.e., discovery, backup, restore, pre/post hook, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue https://github.com/vmware-tanzu/velero/issues/8289 for more information.
#### Parallel Item Block backup
v1.16 now supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped in item blocks and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves the backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they will also run in parallel along with the item blocks.
Users are allowed to configure the parallelism through the `--item-block-worker-count` Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8334.
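As a sketch (the flag name comes from the paragraph above; the deployment name and the approach of patching the server args directly are assumptions, since this changelog doesn't say whether `velero install` exposes a matching option):
```bash
# Illustrative only: back up to 4 item blocks concurrently.
kubectl -n velero patch deployment velero --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--item-block-worker-count=4"}]'
```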
#### Data mover restore enhancement in scalability
In previous releases, for each volume of WaitForFirstConsumer mode, data mover restore is only allowed to happen on the node to which the volume is attached. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (https://github.com/vmware-tanzu/velero/issues/8044).
In v1.16, users are allowed to configure data mover restores running and spreading evenly across all nodes in the cluster. The configuration is done through a new flag `ignoreDelayBinding` in node-agent configuration (https://github.com/vmware-tanzu/velero/issues/8242).
#### Data mover enhancements in observability
In 1.16, some observability enhancements are added:
* Output various statuses of intermediate objects for failures of data mover backup/restore (https://github.com/vmware-tanzu/velero/issues/8267)
* Output the errors when Velero fails to delete intermediate objects during clean up (https://github.com/vmware-tanzu/velero/issues/8125)
The outputs go to the node-agent log and are enabled automatically.
#### CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location. During restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8725.
#### Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, several backup repository maintenance enhancements are added to improve observability and resiliency:
* A new backup repository maintenance history section, called `RecentMaintenance`, is added to the BackupRepository CR. It records the recent maintenance runs of each BackupRepository, including start/completion time, completion status and error message. (https://github.com/vmware-tanzu/velero/issues/7810)
* Running maintenance jobs are now recaptured after Velero server restarts. (https://github.com/vmware-tanzu/velero/issues/7753)
* The maintenance job will not be launched for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8238)
* The backup repository will not try to initialize a new repository for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8091)
* Users are now allowed to configure the interval of an effective (full) maintenance as `normalGC`, `fastGC` or `eagerGC`, through the `fullMaintenanceInterval` parameter in the backupRepository configuration (see the sketch below). (https://github.com/vmware-tanzu/velero/issues/8364)
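For example, a backup repository ConfigMap along these lines could select the faster GC interval. Only `fullMaintenanceInterval` and its `normalGC`/`fastGC`/`eagerGC` values come from the notes above; the ConfigMap name, the data key, and keying the settings by the `kopia` repository type are assumptions.
```yaml
# Hypothetical backup repository ConfigMap; structure is illustrative,
# only fullMaintenanceInterval and its values are from the release notes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config   # assumed name
  namespace: velero
data:
  repository-config.json: |
    {
      "kopia": {
        "fullMaintenanceInterval": "fastGC"
      }
    }
```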
#### Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (https://github.com/vmware-tanzu/velero/issues/8256).
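A resource policies document using the new criteria might look like the sketch below (stored in the ConfigMap referenced by the backup's resource policy). The `pvcLabels` condition key and the example label are assumptions used for illustration.
```yaml
# Hypothetical volume policy content; the pvcLabels key and label values
# are assumptions based on the feature described above.
version: v1
volumePolicies:
  - conditions:
      pvcLabels:
        environment: production   # assumed label on the PVC
    action:
      type: skip                   # skip volumes whose PVC carries this label
```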
#### Resource Status restore per object
In v1.16, users are allowed to define whether to restore resource status per object through the `velero.io/restore-status` annotation set on the object. (https://github.com/vmware-tanzu/velero/issues/8204).
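For instance, the annotation can be set directly in the object's metadata before the backup is taken. Only the annotation key comes from the release notes; the `"true"` value and the placeholder Pod below are assumptions.
```yaml
# Hypothetical object carrying the per-object restore-status annotation;
# the value "true" is assumed to mean "restore this object's status".
apiVersion: v1
kind: Pod
metadata:
  name: my-app                           # placeholder object
  annotations:
    velero.io/restore-status: "true"     # assumed value
spec:
  containers:
    - name: app
      image: nginx                       # placeholder image
```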
#### Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper and velero-restore-helper, are all included in a single Velero image. (https://github.com/vmware-tanzu/velero/issues/8484).
### Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
### Limitations/Known issues
#### Limitations of Windows support
* fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
* Backup/restore of NTFS extended attributes/advanced features is not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
### All Changes
* Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
* Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
* Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
* ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
* host_pods should not be mandatory to node-agent (#8774, @mpryc)
* Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
* Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
* Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
* Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
* Add doc for maintenance history (#8747, @Lyndon-Li)
* Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
* Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
* Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
* Return directly if no pod volume backups are tracked (#8728, @ywk253100)
* Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
* Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
* Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
* Support pushing images to an insecure registry (#8703, @ywk253100)
* Modify golangci configuration to make it work. (#8695, @blackpiglet)
* Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
* Add docs for object level status restore (#8693, @shubham-pampattiwar)
* Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
* Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
* Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
* Fixes issue #8214, validate `--from-schedule` flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
* Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
* Clean up leaked CSI snapshot for incomplete backup (#8637, @raesonerjt)
* Handle update conflict when restoring the status (#8630, @ywk253100)
* Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
* Always create DataUpload configmap in restore namespace (#8621, @sseago)
* Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
* Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
* Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
* Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
* Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
* Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
* Data mover restore for Windows (#8594, @Lyndon-Li)
* Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
* Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
* Configurable Kopia Maintenance Interval. backup-repository-configmap adds an option for a configurable `fullMaintenanceInterval`, with fastGC (12 hours) and eagerGC (6 hours) allowing for faster removal of deleted velero backups from the kopia repo. (#8581, @kaovilai)
* Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
* Clear validation errors when schedule is valid (#8575, @ywk253100)
* Merge restore helper image into Velero server image (#8574, @ywk253100)
* Don't include excluded items in ItemBlocks (#8572, @sseago)
* fs uploader and block uploader support Windows nodes (#8569, @Lyndon-Li)
* Fix issue #8418, support data mover backup for Windows nodes (#8555, @Lyndon-Li)
* Fix issue #8044, allow users to ignore delay binding the restorePVC of data mover when it is in WaitForFirstConsumer mode (#8550, @Lyndon-Li)
* Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8538, @Lyndon-Li)
* Fix issue #7810, add maintenance history for backupRepository CRs (#8532, @Lyndon-Li)
* Make fs-backup work on linux nodes with the new Velero deployment and disable fs-backup if the source/target pod is running in non-linux node (#8424) (#8518, @Lyndon-Li)
* Fix issue: backup schedule pause/unpause doesn't work (#8512, @ywk253100)
* Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8509, @ywk253100)
* Fix issue #8267, enhance the error message when expose fails (#8508, @Lyndon-Li)
* Fix issue #8416, #8417, deploy Velero server and node-agent in linux/Windows hybrid env (#8504, @Lyndon-Li)
* Design to add label selector as a criteria for volume policy (#8503, @shubham-pampattiwar)
* Related to issue #8485, move the acceptedByNode and acceptedTimestamp to Status of DU/DD CRD (#8498, @Lyndon-Li)
* Add SecurityContext to restore-helper (#8491, @reasonerjt)
* Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8487, @Lyndon-Li)
* Fix issue #8485, add an accepted time so as to count the prepare timeout (#8486, @Lyndon-Li)
* Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8482, @Lyndon-Li)
* Fix issue #8415, implement multi-arch build and Windows build (#8476, @Lyndon-Li)
* Pin kopia to 0.18.2 (#8472, @Lyndon-Li)
* Add nil check for updating DataUpload VolumeInfo in finalizing phase (#8471, @blackpiglet)
* Allowing Object-Level Resource Status Restore (#8464, @shubham-pampattiwar)
* For issue #8429. Add the design for multi-arch build and windows build (#8459, @Lyndon-Li)
* Upgrade go.mod k8s.io/ go.mod to v0.31.3 and implemented proper logger configuration for both client-go and controller-runtime libraries. This change ensures that logging format and level settings are properly applied throughout the codebase. The update improves logging consistency and control across the Velero system. (#8450, @kaovilai)
* Add Design for Allowing Object-Level Resource Status Restore (#8403, @shubham-pampattiwar)
* Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8396, @Lyndon-Li)
* Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8395, @Lyndon-Li)
* Adding support in velero Resource Policies for filtering PVs based on additional VolumeAttributes properties under CSI PVs (#8383, @mayankagg9722)
* Add --item-block-worker-count flag to velero install and server (#8380, @sseago)
* Make BackedUpItems thread safe (#8366, @sseago)
* Include --annotations flag in backup and restore create commands (#8354, @alromeros)
* Use aggregated discovery API to discovery API groups and resources (#8353, @ywk253100)
* Copy "envFrom" from Velero server when creating maintenance jobs (#8343, @evhan)
* Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8297, @kaovilai)
* Bump up version of client-go and controller-runtime (#8275, @ywk253100)
* fix(pkg/repository/maintenance): don't panic when there's no container statuses (#8271, @mcluseau)
* Add Backup warning for inclusion of NS managed by ArgoCD (#8257, @shubham-pampattiwar)
* Added tracking for deleted namespace status check in restore flow. (#8233, @sangitaray2021)

View File

@@ -1,143 +0,0 @@
## v1.17
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.17.0
### Container Image
`velero/velero:v1.17.0`
### Documentation
https://velero.io/docs/v1.17/
### Upgrading
https://velero.io/docs/v1.17/upgrade-to-1.17/
### Highlights
#### Modernized fs-backup
In v1.17, Velero fs-backup is modernized to the micro-service architecture, which brings the following benefits:
- Many features that were absent from fs-backup are now available, i.e., load concurrency control, cancel, resume on restart, etc.
- fs-backup is more robust: a running backup/restore can survive a node-agent restart, and resource allocation is more granular, so the failure of one backup/restore won't impact others.
- The resource usage of node-agent is steady; in particular, the node-agent pods won't request a large amount of memory and hold it for a long time.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### fs-backup support Windows cluster
In v1.17, Velero fs-backup supports backing up/restoring Windows workloads. By leveraging the new micro-service architecture for fs-backup, data mover pods can run on Windows nodes and back up/restore Windows volumes. Together with CSI snapshot data movement for Windows, which was delivered in 1.16, Velero now supports Windows workload backup/restore in all scenarios.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### Volume group snapshot support
In v1.17, Velero supports [volume group snapshots](https://kubernetes.io/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/), a beta feature in upstream Kubernetes, for both CSI snapshot backup and CSI snapshot data movement. This allows a snapshot to be taken of multiple volumes at the same point in time to achieve write-order consistency, which helps achieve better data consistency when the volumes being backed up are correlated.
Check the document https://velero.io/docs/main/volume-group-snapshots/ for more details.
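The Backup spec exposes a `volumeGroupSnapshotLabelKey` field (visible in the Backup CRD diff further down this page) that names the label used to group PVCs into one volume group snapshot. The sketch below is a minimal example under that assumption; the backup name, namespace and label key value are illustrative.
```yaml
# Hypothetical Backup grouping PVCs by a shared label; the field name comes
# from the Backup CRD shown later in this compare, the values are assumptions.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: app-backup
  namespace: velero
spec:
  includedNamespaces:
    - my-app                                        # placeholder namespace
  volumeGroupSnapshotLabelKey: app.kubernetes.io/instance   # PVCs sharing this label value are snapshotted together
```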
#### Priority class support
In v1.17, [Kubernetes priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) is supported for all modules across Velero. Specifically, users are allowed to configure the priority class for the Velero server, node-agent, data mover pods, and backup repository maintenance jobs separately.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/priority-class-name-support_design.md for more details.
#### Scalability and Resiliency improvements of data movers
##### Reduce excessive number of data mover pods in Pending state
In v1.17, Velero allows users to set a `PrepareQueueLength` in the node-agent configuration; data mover pods and volumes beyond this number won't be created until data path quota becomes available, so excessive cluster resources aren't consumed unnecessarily, which is particularly helpful for large-scale environments. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/node-agent-load-soothing.md for more details.
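A node-agent configuration carrying this limit might look like the sketch below. Only the `PrepareQueueLength` concept comes from the notes above; the exact key casing, its placement in the JSON document, the ConfigMap name and the value `10` are assumptions.
```yaml
# Hypothetical node-agent ConfigMap limiting how many data mover pods and
# volumes are created ahead of available data path quota; key name/casing
# is an assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config       # assumed name
  namespace: velero
data:
  node-agent-config.json: |
    {
      "prepareQueueLength": 10
    }
```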
##### Enhancement on node-agent restart handling for data movements
In v1.17, data movements in all phases can survive a node-agent restart and resume themselves; when a data movement gets orphaned in special cases, e.g., a cluster node is absent, it can also be canceled appropriately after the restart. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check issue https://github.com/vmware-tanzu/velero/issues/8534 for more details.
##### CSI snapshot data movement restore node-selection and node-selection by storage class
In v1.17, CSI snapshot data movement restore acquires the same node-selection capability as backup; that is, users can now specify which nodes can/cannot run data mover pods for both backup and restore. Users are also allowed to configure node selection per storage class, which is particularly helpful in environments where a storage class is not usable by all cluster nodes.
Check issue https://github.com/vmware-tanzu/velero/issues/8186 and https://github.com/vmware-tanzu/velero/issues/8223 for more details.
#### Include/exclude policy support for resource policy
In v1.17, Velero resource policy supports `includeExcludePolicy` in addition to the existing `volumePolicy`. This allows users to set include/exclude filters for resources in a resource policy configmap, so that these filters are reusable across multiple backups.
Check the document https://velero.io/docs/main/resource-filtering/#creating-resource-policies:~:text=resources%3D%22*%22-,Resource%20policies,-Velero%20provides%20resource for more details.
### Runtime and dependencies
Golang runtime: 1.24.6
kopia: 0.21.1
### Limitations/Known issues
### Breaking changes
#### Deprecation of Restic
According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), backup by fs-backup under the Restic path is removed in v1.17, so `--uploader-type=restic` is no longer a valid installation configuration. This means you cannot create a backup under the Restic path, but you can still restore from previous backups taken under the Restic path until v1.19.
#### Repository maintenance job configurations are removed from Velero server parameters
Since the repository maintenance job configurations have moved to the repository maintenance job configMap, the Velero server parameters below are removed in v1.17:
- --keep-latest-maintenance-jobs
- --maintenance-job-cpu-request
- --maintenance-job-mem-request
- --maintenance-job-cpu-limit
- --maintenance-job-mem-limit
### All Changes
* Add ConfigMap parameters validation for install CLI and server start. (#9200, @blackpiglet)
* Add priorityclasses to high priority restore list (#9175, @kaovilai)
* Introduced context-based logger for backend implementations (Azure, GCS, S3, and Filesystem) (#9168, @priyansh17)
* Fix issue #9140, add os=windows:NoSchedule toleration for Windows pods (#9165, @Lyndon-Li)
* Remove the repository maintenance job parameters from velero server. (#9147, @blackpiglet)
* Add include/exclude policy to resources policy (#9145, @reasonerjt)
* Add ConfigMap support for keepLatestMaintenanceJobs with CLI parameter fallback (#9135, @shubham-pampattiwar)
* Fix the dd and du's node affinity issue. (#9130, @blackpiglet)
* Remove the WaitUntilVSCHandleIsReady from vs BIA. (#9124, @blackpiglet)
* Add comprehensive Volume Group Snapshots documentation with workflow diagrams and examples (#9123, @shubham-pampattiwar)
* Fix issue #9065, add doc for node-agent prepare queue length (#9118, @Lyndon-Li)
* Fix issue #9095, update restore doc for PVC selected-node (#9117, @Lyndon-Li)
* Update CSI Snapshot Data Movement doc for issue #8534, #8185 (#9113, @Lyndon-Li)
* Fix issue #8986, refactor fs-backup doc after VGDP Micro Service for fs-backup (#9112, @Lyndon-Li)
* Return error if timeout when checking server version (#9111, @ywk253100)
* Update "Default Volumes to Fs Backup" to "File System Backup (Default)" (#9105, @shubham-pampattiwar)
* Fix issue #9077, don't block backup deletion on list VS error (#9100, @Lyndon-Li)
* Bump up Kopia to v0.21.1 (#9098, @Lyndon-Li)
* Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9096, @blackpiglet)
* Avoid checking the VS and VSC status in the backup finalizing phase. (#9092, @blackpiglet)
* Fix issue #9053, Always remove selected-node annotation during PVC restore when no node mapping exists. Breaking change: Previously, the annotation was preserved if the node existed. (#9076, @Lyndon-Li)
* Enable parameterized kubelet mount path during node-agent installation (#9074, @longxiucai)
* Fix issue #8857, support third party tolerations for data mover pods (#9072, @Lyndon-Li)
* Fix issue #8813, remove restic from the valid uploader type (#9069, @Lyndon-Li)
* Fix issue #8185, allow users to disable pod volume host path mount for node-agent (#9068, @Lyndon-Li)
* Fix #8344, add the design for a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9067, @Lyndon-Li)
* Fix #8344, add a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9064, @Lyndon-Li)
* Add Gauge metric for BSL availability (#9059, @reasonerjt)
* Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056, @shubham-pampattiwar)
* Allow for proper tracking of multiple hooks per container (#9048, @sseago)
* Make the backup repository controller doesn't invalidate the BSL on restart (#9046, @blackpiglet)
* Removed username/password credential handling from newConfigCredential as azidentity.UsernamePasswordCredentialOptions is reported as deprecated. (#9041, @priyansh17)
* Remove dependency with VolumeSnapshotClass in DataUpload. (#9040, @blackpiglet)
* Fix issue #8961, cancel PVB/PVR on Velero server restart (#9031, @Lyndon-Li)
* Fix issue #8962, resume PVB/PVR during node-agent restarts (#9030, @Lyndon-Li)
* Bump kopia v0.20.1 (#9027, @Lyndon-Li)
* Fix issue #8965, support PVB/PVR's cancel state in the backup/restore (#9026, @Lyndon-Li)
* Fix Issue 8816 When specifying LabelSelector on restore, related items such as PVC and VolumeSnapshot are not included (#9024, @amastbau)
* Fix issue #8963, add legacy PVR controller for Restic path (#9022, @Lyndon-Li)
* Fix issue #8964, add Windows support for VGDP MS for fs-backup (#9021, @Lyndon-Li)
* Accommodate VGS workflows in PVC CSI plugin (#9019, @shubham-pampattiwar)
* Fix issue #8958, add VGDP MS PVB controller (#9015, @Lyndon-Li)
* Fix issue #8959, add VGDP MS PVR controller (#9014, @Lyndon-Li)
* Fix issue #8988, add data path for VGDP ms PVR (#9005, @Lyndon-Li)
* Fix issue #8988, add data path for VGDP ms pvb (#8998, @Lyndon-Li)
* Skip VS and VSC not created by backup. (#8990, @blackpiglet)
* Make ResticIdentifier optional for kopia BackupRepositories (#8987, @kaovilai)
* Fix issue #8960, implement PodVolume exposer for PVB/PVR (#8985, @Lyndon-Li)
* fix: update mc command in minio-deployment example (#8982, @vishal-chdhry)
* Fix issue #8957, add design for VGDP MS for fs-backup (#8979, @Lyndon-Li)
* Add BSL status check for backup/restore operations. (#8976, @blackpiglet)
* Mark BackupRepository not ready when BSL changed (#8975, @ywk253100)
* Add support for [distributed snapshotting](https://github.com/kubernetes-csi/external-snapshotter/tree/4cedb3f45790ac593ebfa3324c490abedf739477?tab=readme-ov-file#distributed-snapshotting) (#8969, @flx5)
* Fix issue #8534, refactor dm controllers to tolerate cancel request in more cases, e.g., node restart, node drain (#8952, @Lyndon-Li)
* The backup and restore VGDP affinity enhancement implementation. (#8949, @blackpiglet)
* Remove CSI VS and VSC metadata from backup. (#8946, @blackpiglet)
* Extend PVCAction itemblock plugin to support grouping PVCs under VGS label key (#8944, @shubham-pampattiwar)
* Copy security context from origin pod (#8943, @farodin91)
* Add support for configuring VGS label key (#8938, @shubham-pampattiwar)
* Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8924, @blackpiglet)
* Mounted cloud credentials should not be world-readable (#8919, @sseago)
* Warn for not found error in patching managed fields (#8902, @sseago)
* Fix issue 8878, relieve node OS deduction error checks (#8891, @Lyndon-Li)
* Skip namespace in terminating state in backup resource collection. (#8890, @blackpiglet)
* Implement PriorityClass Support (#8883, @kaovilai)
* Fix Velero adding restore-wait init container when not needed. (#8880, @kaovilai)
* Pass the logger in kopia related operations. (#8875, @hu-keyu)
* Inherit the dnsPolicy and dnsConfig from the node agent pod. This is done so that the kopia task uses the same configuration. (#8845, @flx5)
* Add design for VolumeGroupSnapshot support (#8778, @shubham-pampattiwar)
* Inherit k8s default volumeSnapshotClass. (#8719, @hu-keyu)
* CLI automatically discovers and uses cacert from BSL for download requests (#8557, @kaovilai)
* This PR aims to add s390x support to Velero binary. (#7505, @pandurangkhandeparker)

View File

@@ -0,0 +1 @@
Fix issue #7535, don't skip must have resources for label selector

View File

@@ -0,0 +1,2 @@
Check whether the VolumeSnapshot's source PVC is nil before using it.
Skip populate VolumeInfo for data-moved PV when CSI is not enabled.

View File

@@ -1 +0,0 @@
feat: Permit specifying annotations for the BackupPVC

View File

@@ -1 +0,0 @@
Get pod list once per namespace in pvc IBA

View File

@@ -1 +0,0 @@
Fix issue #9229, don't attach backupPVC to the source node

View File

@@ -1 +0,0 @@
Update AzureAD Microsoft Authentication Library to v1.5.0

View File

@@ -1 +0,0 @@
Protect VolumeSnapshot field from race condition during multi-thread backup

View File

@@ -1 +0,0 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment

View File

@@ -1 +0,0 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases

View File

@@ -1 +0,0 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.

View File

@@ -1 +0,0 @@
Add option for privileged fs-backup pod

View File

@@ -1 +0,0 @@
Fix issue #9267, add events to data mover prepare diagnostic

View File

@@ -1 +0,0 @@
VerifyJSONConfigs verifies every element in Data.

View File

@@ -1 +0,0 @@
Fix typos in documentation

View File

@@ -73,7 +73,7 @@ func done() bool {
return false
}
fmt.Printf("Found the done file %s\n", doneFile)
fmt.Printf("Found %s", doneFile)
}
return true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: backuprepositories.velero.io
spec:
group: velero.io
@@ -26,19 +26,14 @@ spec:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -46,21 +41,13 @@ spec:
description: BackupRepositorySpec is the specification for a BackupRepository.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the BackupStorageLocation
description: BackupStorageLocation is the name of the BackupStorageLocation
that should contain this repository.
type: string
maintenanceFrequency:
description: MaintenanceFrequency is how often maintenance should
be run.
type: string
repositoryConfig:
additionalProperties:
type: string
description: RepositoryConfig is for repository-specific configuration
fields.
nullable: true
type: object
repositoryType:
description: RepositoryType indicates the type of the backend repository
enum:
@@ -69,26 +56,25 @@ spec:
- ""
type: string
resticIdentifier:
description: |-
ResticIdentifier is the full restic-compatible string for identifying
this repository. This field is only used when RepositoryType is "restic".
description: ResticIdentifier is the full restic-compatible string
for identifying this repository.
type: string
volumeNamespace:
description: |-
VolumeNamespace is the namespace this backup repository contains
pod volume backups for.
description: VolumeNamespace is the namespace this backup repository
contains pod volume backups for.
type: string
required:
- backupStorageLocation
- maintenanceFrequency
- resticIdentifier
- volumeNamespace
type: object
status:
description: BackupRepositoryStatus is the current status of a BackupRepository.
properties:
lastMaintenanceTime:
description: LastMaintenanceTime is the last time repo maintenance
succeeded.
description: LastMaintenanceTime is the last time maintenance was
run.
format: date-time
nullable: true
type: string
@@ -103,33 +89,6 @@ spec:
- Ready
- NotReady
type: string
recentMaintenance:
description: RecentMaintenance is status of the recent repo maintenance.
items:
properties:
completeTimestamp:
description: CompleteTimestamp is the completion time of the
repo maintenance.
format: date-time
nullable: true
type: string
message:
description: Message is a message about the current status of
the repo maintenance.
type: string
result:
description: Result is the result of the repo maintenance.
enum:
- Succeeded
- Failed
type: string
startTimestamp:
description: StartTimestamp is the start time of the repo maintenance.
format: date-time
nullable: true
type: string
type: object
type: array
type: object
type: object
served: true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: backups.velero.io
spec:
group: velero.io
@@ -17,24 +17,18 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: |-
Backup is a Velero resource that represents the capture of Kubernetes
description: Backup is a Velero resource that represents the capture of Kubernetes
cluster state at a point in time (API objects and associated volume state).
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -42,62 +36,55 @@ spec:
description: BackupSpec defines the specification for a Velero backup.
properties:
csiSnapshotTimeout:
description: |-
CSISnapshotTimeout specifies the time used to wait for CSI VolumeSnapshot status turns to
ReadyToUse during creation, before returning error as timeout.
The default value is 10 minute.
description: CSISnapshotTimeout specifies the time used to wait for
CSI VolumeSnapshot status turns to ReadyToUse during creation, before
returning error as timeout. The default value is 10 minute.
type: string
datamover:
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
defaultVolumesToFsBackup:
description: |-
DefaultVolumesToFsBackup specifies whether pod volume file system backup should be used
for all volumes by default.
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: |-
DefaultVolumesToRestic specifies whether restic should be used to take a
backup of all pod volumes by default.
Deprecated: this field is no longer used and will be removed entirely in future. Use DefaultVolumesToFsBackup instead.
description: "DefaultVolumesToRestic specifies whether restic should
be used to take a backup of all pod volumes by default. \n Deprecated:
this field is no longer used and will be removed entirely in future.
Use DefaultVolumesToFsBackup instead."
nullable: true
type: boolean
excludedClusterScopedResources:
description: |-
ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup.
If set to "*", all cluster-scoped resource types are excluded.
The default value is empty.
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*", all
cluster-scoped resource types are excluded. The default value is
empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: |-
ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup.
If set to "*", all namespace-scoped resource types are excluded.
The default value is empty.
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*", all
namespace-scoped resource types are excluded. The default value
is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the backup.
description: ExcludedNamespaces contains a list of namespaces that
are not included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: |-
ExcludedResources is a slice of resource names that are not
included in the backup.
description: ExcludedResources is a slice of resource names that are
not included in the backup.
items:
type: string
nullable: true
@@ -110,9 +97,9 @@ spec:
description: Resources are hooks that should be executed when
backing up individual instances of a resource.
items:
description: |-
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
description: BackupResourceHookSpec defines one or more BackupResourceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -129,17 +116,17 @@ spec:
nullable: true
type: array
includedNamespaces:
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
description: IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to
all resources.
items:
type: string
nullable: true
@@ -153,8 +140,8 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
description: A label selector requirement is a selector
that contains values, a key, and an operator that
relates the key and values.
properties:
key:
@@ -162,33 +149,33 @@ spec:
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists
or DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -196,9 +183,10 @@ spec:
description: Name is the name of this hook.
type: string
post:
description: |-
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup.
These are executed after all "additional items" from item actions are processed.
description: PostHooks is a list of BackupResourceHooks
to execute after storing the item in the backup. These
are executed after all "additional items" from item actions
are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
@@ -213,9 +201,10 @@ spec:
minItems: 1
type: array
container:
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
type: string
onError:
description: OnError specifies how Velero should
@@ -226,9 +215,9 @@ spec:
- Fail
type: string
timeout:
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
type: string
required:
- command
@@ -238,9 +227,10 @@ spec:
type: object
type: array
pre:
description: |-
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup.
These are executed before any "additional items" from item actions are processed.
description: PreHooks is a list of BackupResourceHooks to
execute prior to storing the item in the backup. These
are executed before any "additional items" from item actions
are processed.
items:
description: BackupResourceHook defines a hook for a resource.
properties:
@@ -255,9 +245,10 @@ spec:
minItems: 1
type: array
container:
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
type: string
onError:
description: OnError specifies how Velero should
@@ -268,9 +259,9 @@ spec:
- Fail
type: string
timeout:
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
description: Timeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
type: string
required:
- command
@@ -286,99 +277,91 @@ spec:
type: array
type: object
includeClusterResources:
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the backup.
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: |-
IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup.
If set to "*", all cluster-scoped resource types are included.
The default value is empty, which means only related
cluster-scoped resources are included.
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*", all
cluster-scoped resource types are included. The default value is
empty, which means only related cluster-scoped resources are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: |-
IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup.
The default value is "*".
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources is a slice of resource names to include
description: IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: |-
ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations
The default value is 4 hour.
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations The default value is
1 hour.
type: string
labelSelector:
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty
or nil, all objects are included. Optional.
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil, all
objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -390,58 +373,56 @@ spec:
type: object
type: object
orLabelSelectors:
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when adding individual objects to the backup. If multiple provided
description: OrLabelSelectors is list of metav1.LabelSelector to filter
with when adding individual objects to the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in backup request, only one of them
can be used.
OrLabelSelectors cannot co-exist in backup request, only one of
them can be used.
items:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
description: A label selector is a label query over a set of resources.
The result of matchLabels and matchExpressions are ANDed. An empty
label selector matches all objects. A null label selector matches
no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -450,10 +431,11 @@ spec:
orderedResources:
additionalProperties:
type: string
description: |-
OrderedResources specifies the backup order of resources of specific Kind.
The map key is the resource name and value is a list of object names separated by commas.
Each resource name has format "namespace/objectname". For cluster resources, simply use "objectname".
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value is
a list of object names separated by commas. Each resource name has
format "namespace/objectname". For cluster resources, simply use
"objectname".
nullable: true
type: object
resourcePolicy:
@@ -461,10 +443,10 @@ spec:
that backup should follow
properties:
apiGroup:
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
description: APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in
the core API group. For any other third-party types, APIGroup
is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -483,10 +465,8 @@ spec:
nullable: true
type: boolean
snapshotVolumes:
description: |-
SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included
in the Backup.
description: SnapshotVolumes specifies whether to take snapshots of
any PV's referenced in the set of objects included in the Backup.
nullable: true
type: boolean
storageLocation:
@@ -494,9 +474,8 @@ spec:
BackupStorageLocation where the backup should be stored.
type: string
ttl:
description: |-
TTL is a time.Duration-parseable string describing how long
the Backup should be retained for.
description: TTL is a time.Duration-parseable string describing how
long the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the uploader.
@@ -507,10 +486,6 @@ spec:
uploads to perform when using the uploader.
type: integer
type: object
volumeGroupSnapshotLabelKey:
description: VolumeGroupSnapshotLabelKey specifies the label key to
group PVCs under a VGS.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names of
VolumeSnapshotLocations associated with this backup.
@@ -522,44 +497,39 @@ spec:
description: BackupStatus captures the current status of a Velero backup.
properties:
backupItemOperationsAttempted:
description: |-
BackupItemOperationsAttempted is the total number of attempted
async BackupItemAction operations for this backup.
description: BackupItemOperationsAttempted is the total number of
attempted async BackupItemAction operations for this backup.
type: integer
backupItemOperationsCompleted:
description: |-
BackupItemOperationsCompleted is the total number of successfully completed
async BackupItemAction operations for this backup.
description: BackupItemOperationsCompleted is the total number of
successfully completed async BackupItemAction operations for this
backup.
type: integer
backupItemOperationsFailed:
description: |-
BackupItemOperationsFailed is the total number of async
BackupItemAction operations for this backup which ended with an error.
description: BackupItemOperationsFailed is the total number of async
BackupItemAction operations for this backup which ended with an
error.
type: integer
completionTimestamp:
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
csiVolumeSnapshotsAttempted:
description: |-
CSIVolumeSnapshotsAttempted is the total number of attempted
description: CSIVolumeSnapshotsAttempted is the total number of attempted
CSI VolumeSnapshots for this backup.
type: integer
csiVolumeSnapshotsCompleted:
description: |-
CSIVolumeSnapshotsCompleted is the total number of successfully
description: CSIVolumeSnapshotsCompleted is the total number of successfully
completed CSI VolumeSnapshots for this backup.
type: integer
errors:
description: |-
Errors is a count of all error messages that were generated during
execution of the backup. The actual errors are in the backup's log
file in object storage.
description: Errors is a count of all error messages that were generated
during execution of the backup. The actual errors are in the backup's
log file in object storage.
type: integer
expiration:
description: Expiration is when this Backup is eligible for garbage-collection.
@@ -580,10 +550,10 @@ spec:
nullable: true
properties:
hooksAttempted:
description: |-
HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks that failed to execute
and the number of hooks that executed successfully.
description: HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks
that failed to execute and the number of hooks that executed
successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
@@ -606,62 +576,53 @@ spec:
- Deleting
type: string
progress:
description: |-
Progress contains information about the backup's execution progress. Note
that this information is best-effort only -- if Velero fails to update it
during a backup for any reason, it may be inaccurate/stale.
description: Progress contains information about the backup's execution
progress. Note that this information is best-effort only -- if Velero
fails to update it during a backup for any reason, it may be inaccurate/stale.
nullable: true
properties:
itemsBackedUp:
description: |-
ItemsBackedUp is the number of items that have actually been written to the
backup tarball so far.
description: ItemsBackedUp is the number of items that have actually
been written to the backup tarball so far.
type: integer
totalItems:
description: |-
TotalItems is the total number of items to be backed up. This number may change
throughout the execution of the backup due to plugins that return additional related
items to back up, the velero.io/exclude-from-backup label, and various other
description: TotalItems is the total number of items to be backed
up. This number may change throughout the execution of the backup
due to plugins that return additional related items to back
up, the velero.io/exclude-from-backup label, and various other
filters that happen as items are processed.
type: integer
type: object
startTimestamp:
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
validationErrors:
description: |-
ValidationErrors is a slice of all validation errors (if
applicable).
description: ValidationErrors is a slice of all validation errors
(if applicable).
items:
type: string
nullable: true
type: array
version:
description: |-
Version is the backup format major version.
Deprecated: Please see FormatVersion
description: 'Version is the backup format major version. Deprecated:
Please see FormatVersion'
type: integer
volumeSnapshotsAttempted:
description: |-
VolumeSnapshotsAttempted is the total number of attempted
description: VolumeSnapshotsAttempted is the total number of attempted
volume snapshots for this backup.
type: integer
volumeSnapshotsCompleted:
description: |-
VolumeSnapshotsCompleted is the total number of successfully
description: VolumeSnapshotsCompleted is the total number of successfully
completed volume snapshots for this backup.
type: integer
warnings:
description: |-
Warnings is a count of all warning messages that were generated during
execution of the backup. The actual warnings are in the backup's log
file in object storage.
description: Warnings is a count of all warning messages that were
generated during execution of the backup. The actual warnings are
in the backup's log file in object storage.
type: integer
type: object
type: object

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: backupstoragelocations.velero.io
spec:
group: velero.io
@@ -40,19 +40,14 @@ spec:
objects
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -86,13 +81,8 @@ spec:
valid secret key.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
@@ -141,34 +131,29 @@ spec:
BackupStorageLocation
properties:
accessMode:
description: |-
AccessMode is an unused field.
Deprecated: there is now an AccessMode field on the Spec and this field
will be removed entirely as of v2.0.
description: "AccessMode is an unused field. \n Deprecated: there
is now an AccessMode field on the Spec and this field will be removed
entirely as of v2.0."
enum:
- ReadOnly
- ReadWrite
type: string
lastSyncedRevision:
description: |-
LastSyncedRevision is the value of the `metadata/revision` file in the backup
storage location the last time the BSL's contents were synced into the cluster.
Deprecated: this field is no longer updated or used for detecting changes to
the location's contents and will be removed entirely in v2.0.
description: "LastSyncedRevision is the value of the `metadata/revision`
file in the backup storage location the last time the BSL's contents
were synced into the cluster. \n Deprecated: this field is no longer
updated or used for detecting changes to the location's contents
and will be removed entirely in v2.0."
type: string
lastSyncedTime:
description: |-
LastSyncedTime is the last time the contents of the location were synced into
the cluster.
description: LastSyncedTime is the last time the contents of the location
were synced into the cluster.
format: date-time
nullable: true
type: string
lastValidationTime:
description: |-
LastValidationTime is the last time the backup store location was validated
the cluster.
description: LastValidationTime is the last time the backup store
location was validated the cluster.
format: date-time
nullable: true
type: string
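
A minimal BackupStorageLocation that exercises the credential secret reference defined above might look like the following sketch. The spec-level provider and objectStorage fields are not part of this hunk and are assumed from the wider BSL spec; all values are placeholders.

# Hypothetical BackupStorageLocation; spec.provider and spec.objectStorage are assumed
# fields not shown in this hunk, and the bucket/secret names are placeholders.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: my-velero-bucket
  accessMode: ReadWrite        # spec-level AccessMode; the status-level field is deprecated per the hunk above
  credential:                  # SecretKeySelector from the schema above
    name: cloud-credentials
    key: cloud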


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: deletebackuprequests.velero.io
spec:
group: velero.io
@@ -29,19 +29,14 @@ spec:
description: DeleteBackupRequest is a request to delete one or more backups.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
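
Only the boilerplate apiVersion/kind descriptions change in this hunk. For context, a DeleteBackupRequest is a small object whose spec (outside the lines shown here) simply names the backup to delete; a sketch:

# Sketch of a DeleteBackupRequest; spec.backupName comes from the Velero API
# but sits outside the lines shown in this hunk. Names are illustrative.
apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: delete-nightly-20240512
  namespace: velero
spec:
  backupName: nightly-20240512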


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: downloadrequests.velero.io
spec:
group: velero.io
@@ -17,24 +17,18 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: |-
DownloadRequest is a request to download an artifact from backup object storage, such as a backup
log file.
description: DownloadRequest is a request to download an artifact from backup
object storage, such as a backup log file.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -60,7 +54,6 @@ spec:
- CSIBackupVolumeSnapshots
- CSIBackupVolumeSnapshotContents
- BackupVolumeInfos
- RestoreVolumeInfo
type: string
name:
description: Name is the name of the Kubernetes resource with
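
The DownloadRequest hunk above touches the list of downloadable target kinds and the target name. A request for one of the kinds listed in that enum looks roughly like this sketch; the backup name is illustrative.

# Hypothetical DownloadRequest asking Velero for a signed URL to a backup artifact.
apiVersion: velero.io/v1
kind: DownloadRequest
metadata:
  name: download-nightly-volumeinfo
  namespace: velero
spec:
  target:
    kind: BackupVolumeInfos    # one of the enum values listed above
    name: nightly-20240512     # the resource (here, a Backup) the artifact belongs to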


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: podvolumebackups.velero.io
spec:
group: velero.io
@@ -15,59 +15,51 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: PodVolumeBackup status such as New/InProgress
- description: Pod Volume Backup status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Time duration since this PodVolumeBackup was started
- description: Time when this backup was started
jsonPath: .status.startTimestamp
name: Started
name: Created
type: date
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Namespace of the pod containing the volume to be backed up
jsonPath: .spec.pod.namespace
name: Namespace
type: string
- description: Name of the pod containing the volume to be backed up
jsonPath: .spec.pod.name
name: Pod
type: string
- description: Name of the volume to be backed up
jsonPath: .spec.volume
name: Volume
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Name of the Backup Storage Location where this backup should be
stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- description: Time duration since this PodVolumeBackup was created
jsonPath: .metadata.creationTimestamp
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the PodVolumeBackup is processed
jsonPath: .status.node
name: Node
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader
type: string
name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -75,15 +67,9 @@ spec:
description: PodVolumeBackupSpec is the specification for a PodVolumeBackup.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing PodVolumeBackup. It can be set
when the PodVolumeBackup is in InProgress phase
type: boolean
node:
description: Node is the name of the node that the Pod is running
on.
@@ -96,39 +82,33 @@ spec:
description: API version of the referent.
type: string
fieldPath:
description: |-
If referring to a piece of an object instead of an entire object, this string
should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like:
"spec.containers{name}" (where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]" (container with
index 2 in this pod). This syntax is chosen only to have some well-defined way of
referencing a part of an object.
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
type: string
kind:
description: |-
Kind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: |-
Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: |-
Namespace of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: |-
Specific resourceVersion to which this reference is made, if any.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: |-
UID of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
x-kubernetes-map-type: atomic
@@ -138,16 +118,14 @@ spec:
tags:
additionalProperties:
type: string
description: |-
Tags are a map of key-value pairs that should be applied to the
volume backup as tags.
description: Tags are a map of key-value pairs that should be applied
to the volume backup as tags.
type: object
uploaderSettings:
additionalProperties:
type: string
description: |-
UploaderSettings are a map of key-value pairs that should be applied to the
uploader configuration.
description: UploaderSettings are a map of key-value pairs that should
be applied to the uploader configuration.
nullable: true
type: object
uploaderType:
@@ -159,9 +137,8 @@ spec:
- ""
type: string
volume:
description: |-
Volume is the name of the volume within the Pod to be backed
up.
description: Volume is the name of the volume within the Pod to be
backed up.
type: string
required:
- backupStorageLocation
@@ -173,19 +150,11 @@ spec:
status:
description: PodVolumeBackupStatus is the current status of a PodVolumeBackup.
properties:
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the pod volume backup is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -200,19 +169,14 @@ spec:
description: Phase is the current state of the PodVolumeBackup.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string
progress:
description: |-
Progress holds the total number of bytes of the volume and the current
number of backed up bytes. This can be used to display progress information
about the backup operation.
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
properties:
bytesDone:
format: int64
@@ -226,10 +190,8 @@ spec:
pod volume.
type: string
startTimestamp:
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true
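
PodVolumeBackups (and the PodVolumeRestores in the next file) are normally created by Velero's node agent rather than written by hand; still, a sketch of the spec shape the columns and fields above describe can help when reading kubectl output. All values are illustrative.

# Illustrative PodVolumeBackup spec; these objects are created by Velero, not by users.
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  name: nightly-20240512-abc12
  namespace: velero
spec:
  backupStorageLocation: default   # required; where the backup repository is stored
  node: worker-1                   # node the pod is running on
  pod:                             # core/v1 ObjectReference from the schema above
    kind: Pod
    name: my-app-6f7c9
    namespace: demo
  volume: data                     # the pod volume to back up
  uploaderType: kopia              # data-transfer uploader; assumed value, the enum is truncated above
  tags:
    backup: nightly-20240512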


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: podvolumerestores.velero.io
spec:
group: velero.io
@@ -15,58 +15,52 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: PodVolumeRestore status such as New/InProgress
jsonPath: .status.phase
name: Status
- description: Namespace of the pod containing the volume to be restored
jsonPath: .spec.pod.namespace
name: Namespace
type: string
- description: Time duration since this PodVolumeRestore was started
jsonPath: .status.startTimestamp
name: Started
type: date
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Name of the Backup Storage Location where the backup data is stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- description: Time duration since this PodVolumeRestore was created
jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the PodVolumeRestore is processed
jsonPath: .status.node
name: Node
- description: Name of the pod containing the volume to be restored
jsonPath: .spec.pod.name
name: Pod
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Name of the volume to be restored
jsonPath: .spec.volume
name: Volume
type: string
- description: Pod Volume Restore status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.totalBytes
name: TotalBytes
type: integer
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.bytesDone
name: BytesDone
type: integer
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -74,15 +68,9 @@ spec:
description: PodVolumeRestoreSpec is the specification for a PodVolumeRestore.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing PodVolumeRestore. It can be set
when the PodVolumeRestore is in InProgress phase
type: boolean
pod:
description: Pod is a reference to the pod containing the volume to
be restored.
@@ -91,39 +79,33 @@ spec:
description: API version of the referent.
type: string
fieldPath:
description: |-
If referring to a piece of an object instead of an entire object, this string
should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like:
"spec.containers{name}" (where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]" (container with
index 2 in this pod). This syntax is chosen only to have some well-defined way of
referencing a part of an object.
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
type: string
kind:
description: |-
Kind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: |-
Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: |-
Namespace of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: |-
Specific resourceVersion to which this reference is made, if any.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: |-
UID of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
x-kubernetes-map-type: atomic
@@ -140,9 +122,8 @@ spec:
uploaderSettings:
additionalProperties:
type: string
description: |-
UploaderSettings are a map of key-value pairs that should be applied to the
uploader configuration.
description: UploaderSettings are a map of key-value pairs that should
be applied to the uploader configuration.
nullable: true
type: object
uploaderType:
@@ -168,45 +149,28 @@ spec:
status:
description: PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
properties:
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the pod volume restore is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores.
The server's time is used for CompletionTimestamps
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
message:
description: Message is a message about the pod volume restore's status.
type: string
node:
description: Node is name of the node where the pod volume restore
is processed.
type: string
phase:
description: Phase is the current state of the PodVolumeRestore.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string
progress:
description: |-
Progress holds the total number of bytes of the snapshot and the current
number of restored bytes. This can be used to display progress information
about the restore operation.
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
properties:
bytesDone:
format: int64
@@ -216,8 +180,7 @@ spec:
type: integer
type: object
startTimestamp:
description: |-
StartTimestamp records the time a restore was started.
description: StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: restores.velero.io
spec:
group: velero.io
@@ -17,24 +17,18 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: |-
Restore is a Velero resource that represents the application of
resources from a Velero backup to a target Kubernetes cluster.
description: Restore is a Velero resource that represents the application
of resources from a Velero backup to a target Kubernetes cluster.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -42,22 +36,19 @@ spec:
description: RestoreSpec defines the specification for a Velero restore.
properties:
backupName:
description: |-
BackupName is the unique name of the Velero backup to restore
from.
description: BackupName is the unique name of the Velero backup to
restore from.
type: string
excludedNamespaces:
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the restore.
description: ExcludedNamespaces contains a list of namespaces that
are not included in the restore.
items:
type: string
nullable: true
type: array
excludedResources:
description: |-
ExcludedResources is a slice of resource names that are not
included in the restore.
description: ExcludedResources is a slice of resource names that are
not included in the restore.
items:
type: string
nullable: true
@@ -73,9 +64,9 @@ spec:
properties:
resources:
items:
description: |-
RestoreResourceHookSpec defines one or more RestoreResrouceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
description: RestoreResourceHookSpec defines one or more RestoreResrouceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -92,17 +83,17 @@ spec:
nullable: true
type: array
includedNamespaces:
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
description: IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to
all resources.
items:
type: string
nullable: true
@@ -116,8 +107,8 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
description: A label selector requirement is a selector
that contains values, a key, and an operator that
relates the key and values.
properties:
key:
@@ -125,33 +116,33 @@ spec:
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists
or DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -177,14 +168,15 @@ spec:
minItems: 1
type: array
container:
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
description: Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
type: string
execTimeout:
description: |-
ExecTimeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
description: ExecTimeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
type: string
onError:
description: OnError specifies how Velero should
@@ -201,9 +193,9 @@ spec:
nullable: true
type: boolean
waitTimeout:
description: |-
WaitTimeout defines the maximum amount of time Velero should wait for the container to be Ready
before attempting to run the command.
description: WaitTimeout defines the maximum amount
of time Velero should wait for the container
to be Ready before attempting to run the command.
type: string
required:
- command
@@ -233,145 +225,136 @@ spec:
type: array
type: object
includeClusterResources:
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the restore. If null, defaults
to true.
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the restore. If
null, defaults to true.
nullable: true
type: boolean
includedNamespaces:
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
description: IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources is a slice of resource names to include
description: IncludedResources is a slice of resource names to include
in the restore. If empty, all resources in the backup are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: |-
ItemOperationTimeout specifies the time used to wait for RestoreItemAction operations
The default value is 4 hour.
description: ItemOperationTimeout specifies the time used to wait
for RestoreItemAction operations The default value is 1 hour.
type: string
labelSelector:
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty
or nil, all objects are included. Optional.
description: LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty or nil,
all objects are included. Optional.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceMapping:
additionalProperties:
type: string
description: |-
NamespaceMapping is a map of source namespace names
to target namespace names to restore into. Any source
namespaces not included in the map will be restored into
namespaces of the same name.
description: NamespaceMapping is a map of source namespace names to
target namespace names to restore into. Any source namespaces not
included in the map will be restored into namespaces of the same
name.
type: object
orLabelSelectors:
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when restoring individual objects from the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in restore request, only one of them
can be used
description: OrLabelSelectors is list of metav1.LabelSelector to filter
with when restoring individual objects from the backup. If multiple
provided they will be joined by the OR operator. LabelSelector as
well as OrLabelSelectors cannot co-exist in restore request, only
one of them can be used
items:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
description: A label selector is a label query over a set of resources.
The result of matchLabels and matchExpressions are ANDed. An empty
label selector matches all objects. A null label selector matches
no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the
key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a
strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -388,10 +371,10 @@ spec:
nullable: true
properties:
apiGroup:
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
description: APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in
the core API group. For any other third-party types, APIGroup
is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -405,15 +388,13 @@ spec:
type: object
x-kubernetes-map-type: atomic
restorePVs:
description: |-
RestorePVs specifies whether to restore all included
description: RestorePVs specifies whether to restore all included
PVs from snapshot
nullable: true
type: boolean
restoreStatus:
description: |-
RestoreStatus specifies which resources we should restore the status
field. If nil, no objects are included. Optional.
description: RestoreStatus specifies which resources we should restore
the status field. If nil, no objects are included. Optional.
nullable: true
properties:
excludedResources:
@@ -424,50 +405,46 @@ spec:
nullable: true
type: array
includedResources:
description: |-
IncludedResources specifies the resources to which will restore the status.
If empty, it applies to all resources.
description: IncludedResources specifies the resources to which
will restore the status. If empty, it applies to all resources.
items:
type: string
nullable: true
type: array
type: object
scheduleName:
description: |-
ScheduleName is the unique name of the Velero schedule to restore
from. If specified, and BackupName is empty, Velero will restore
from the most recent successful backup created from this schedule.
description: ScheduleName is the unique name of the Velero schedule
to restore from. If specified, and BackupName is empty, Velero will
restore from the most recent successful backup created from this
schedule.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the restore.
nullable: true
properties:
parallelFilesDownload:
description: ParallelFilesDownload is the concurrency number setting
for restore.
type: integer
writeSparseFiles:
description: WriteSparseFiles is a flag to indicate whether write
files sparsely or not.
nullable: true
type: boolean
type: object
required:
- backupName
type: object
status:
description: RestoreStatus captures the current status of a Velero restore
properties:
completionTimestamp:
description: |-
CompletionTimestamp records the time the restore operation was completed.
Completion time is recorded even on failed restore.
description: CompletionTimestamp records the time the restore operation
was completed. Completion time is recorded even on failed restore.
The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
errors:
description: |-
Errors is a count of all error messages that were generated during
execution of the restore. The actual errors are stored in object storage.
description: Errors is a count of all error messages that were generated
during execution of the restore. The actual errors are stored in
object storage.
type: integer
failureReason:
description: FailureReason is an error that caused the entire restore
@@ -479,10 +456,10 @@ spec:
nullable: true
properties:
hooksAttempted:
description: |-
HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks that failed to execute
and the number of hooks that executed successfully.
description: HooksAttempted is the total number of attempted hooks
Specifically, HooksAttempted represents the number of hooks
that failed to execute and the number of hooks that executed
successfully.
type: integer
hooksFailed:
description: HooksFailed is the total number of hooks which ended
@@ -500,14 +477,11 @@ spec:
- Completed
- PartiallyFailed
- Failed
- Finalizing
- FinalizingPartiallyFailed
type: string
progress:
description: |-
Progress contains information about the restore's execution progress. Note
that this information is best-effort only -- if Velero fails to update it
during a restore for any reason, it may be inaccurate/stale.
description: Progress contains information about the restore's execution
progress. Note that this information is best-effort only -- if Velero
fails to update it during a restore for any reason, it may be inaccurate/stale.
nullable: true
properties:
itemsRestored:
@@ -515,46 +489,42 @@ spec:
been restored so far
type: integer
totalItems:
description: |-
TotalItems is the total number of items to be restored. This number may change
throughout the execution of the restore due to plugins that return additional related
items to restore
description: TotalItems is the total number of items to be restored.
This number may change throughout the execution of the restore
due to plugins that return additional related items to restore
type: integer
type: object
restoreItemOperationsAttempted:
description: |-
RestoreItemOperationsAttempted is the total number of attempted
async RestoreItemAction operations for this restore.
description: RestoreItemOperationsAttempted is the total number of
attempted async RestoreItemAction operations for this restore.
type: integer
restoreItemOperationsCompleted:
description: |-
RestoreItemOperationsCompleted is the total number of successfully completed
async RestoreItemAction operations for this restore.
description: RestoreItemOperationsCompleted is the total number of
successfully completed async RestoreItemAction operations for this
restore.
type: integer
restoreItemOperationsFailed:
description: |-
RestoreItemOperationsFailed is the total number of async
RestoreItemAction operations for this restore which ended with an error.
description: RestoreItemOperationsFailed is the total number of async
RestoreItemAction operations for this restore which ended with an
error.
type: integer
startTimestamp:
description: |-
StartTimestamp records the time the restore operation was started.
The server's time is used for StartTimestamps
description: StartTimestamp records the time the restore operation
was started. The server's time is used for StartTimestamps
format: date-time
nullable: true
type: string
validationErrors:
description: |-
ValidationErrors is a slice of all validation errors (if
applicable)
description: ValidationErrors is a slice of all validation errors
(if applicable)
items:
type: string
nullable: true
type: array
warnings:
description: |-
Warnings is a count of all warning messages that were generated during
execution of the restore. The actual warnings are stored in object storage.
description: Warnings is a count of all warning messages that were
generated during execution of the restore. The actual warnings are
stored in object storage.
type: integer
type: object
type: object
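
Pulling the RestoreSpec fields from the hunks above together, a hand-written Restore that exercises most of them could look like the sketch below. Every value is illustrative, and only fields that appear in this diff are used.

# Illustrative Restore manifest built from fields shown in the hunks above.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-nightly-20240512
  namespace: velero
spec:
  backupName: nightly-20240512       # required: the backup to restore from
  includedNamespaces:
    - demo
  namespaceMapping:
    demo: demo-restored              # restore objects from "demo" into "demo-restored"
  labelSelector:                     # LabelSelector and orLabelSelectors cannot both be set
    matchLabels:
      app: my-app
  includeClusterResources: true
  restorePVs: true                   # restore included PVs from snapshot
  restoreStatus:
    includedResources:
      - deployments                  # restore the status field only for these kinds
  itemOperationTimeout: 4h           # wait budget for async RestoreItemAction operations
  uploaderConfig:
    writeSparseFiles: true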


@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: schedules.velero.io
spec:
group: velero.io
@@ -36,24 +36,18 @@ spec:
name: v1
schema:
openAPIV3Schema:
description: |-
Schedule is a Velero resource that represents a pre-scheduled or
periodic Backup that should be run.
description: Schedule is a Velero resource that represents a pre-scheduled
or periodic Backup that should be run.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -64,79 +58,73 @@ spec:
description: Paused specifies whether the schedule is paused or not
type: boolean
schedule:
description: |-
Schedule is a Cron expression defining when to run
the Backup.
description: Schedule is a Cron expression defining when to run the
Backup.
type: string
skipImmediately:
description: |-
SkipImmediately specifies whether to skip backup if schedule is due immediately from `schedule.status.lastBackup` timestamp when schedule is unpaused or if schedule is new.
If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time.
If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time.
If empty, will follow server configuration (default: false).
description: 'SkipImmediately specifies whether to skip backup if
schedule is due immediately from `schedule.status.lastBackup` timestamp
when schedule is unpaused or if schedule is new. If true, backup
will be skipped immediately when schedule is unpaused if it is due
based on .Status.LastBackupTimestamp or schedule is new, and will
run at next schedule time. If false, backup will not be skipped
immediately when schedule is unpaused, but will run at next schedule
time. If empty, will follow server configuration (default: false).'
type: boolean
template:
description: |-
Template is the definition of the Backup to be run
on the provided schedule
description: Template is the definition of the Backup to be run on
the provided schedule
properties:
csiSnapshotTimeout:
description: |-
CSISnapshotTimeout specifies the time used to wait for CSI VolumeSnapshot status turns to
ReadyToUse during creation, before returning error as timeout.
The default value is 10 minute.
description: CSISnapshotTimeout specifies the time used to wait
for CSI VolumeSnapshot status turns to ReadyToUse during creation,
before returning error as timeout. The default value is 10 minute.
type: string
datamover:
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
description: DataMover specifies the data mover to be used by
the backup. If DataMover is "" or "velero", the built-in data
mover will be used.
type: string
defaultVolumesToFsBackup:
description: |-
DefaultVolumesToFsBackup specifies whether pod volume file system backup should be used
for all volumes by default.
description: DefaultVolumesToFsBackup specifies whether pod volume
file system backup should be used for all volumes by default.
nullable: true
type: boolean
defaultVolumesToRestic:
description: |-
DefaultVolumesToRestic specifies whether restic should be used to take a
backup of all pod volumes by default.
Deprecated: this field is no longer used and will be removed entirely in future. Use DefaultVolumesToFsBackup instead.
description: "DefaultVolumesToRestic specifies whether restic
should be used to take a backup of all pod volumes by default.
\n Deprecated: this field is no longer used and will be removed
entirely in future. Use DefaultVolumesToFsBackup instead."
nullable: true
type: boolean
excludedClusterScopedResources:
description: |-
ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup.
If set to "*", all cluster-scoped resource types are excluded.
The default value is empty.
description: ExcludedClusterScopedResources is a slice of cluster-scoped
resource type names to exclude from the backup. If set to "*",
all cluster-scoped resource types are excluded. The default
value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaceScopedResources:
description: |-
ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup.
If set to "*", all namespace-scoped resource types are excluded.
The default value is empty.
description: ExcludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to exclude from the backup. If set to "*",
all namespace-scoped resource types are excluded. The default
value is empty.
items:
type: string
nullable: true
type: array
excludedNamespaces:
description: |-
ExcludedNamespaces contains a list of namespaces that are not
included in the backup.
description: ExcludedNamespaces contains a list of namespaces
that are not included in the backup.
items:
type: string
nullable: true
type: array
excludedResources:
description: |-
ExcludedResources is a slice of resource names that are not
included in the backup.
description: ExcludedResources is a slice of resource names that
are not included in the backup.
items:
type: string
nullable: true
@@ -149,9 +137,9 @@ spec:
description: Resources are hooks that should be executed when
backing up individual instances of a resource.
items:
description: |-
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
description: BackupResourceHookSpec defines one or more
BackupResourceHooks that should be executed based on the
rules defined for namespaces, resources, and label selector.
properties:
excludedNamespaces:
description: ExcludedNamespaces specifies the namespaces
@@ -168,16 +156,16 @@ spec:
nullable: true
type: array
includedNamespaces:
description: |-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
description: IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
description: IncludedResources specifies the resources
to which this hook spec applies. If empty, it applies
to all resources.
items:
type: string
@@ -192,42 +180,43 @@ spec:
description: matchExpressions is a list of label
selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a
selector that contains values, a key, and an
operator that relates the key and values.
properties:
key:
description: key is the label key that the
selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are
In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string
values. If the operator is In or NotIn,
the values array must be non-empty. If the
operator is Exists or DoesNotExist, the
values array must be empty. This array is
replaced during a strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value}
pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions,
whose key field is "key", the operator is "In",
and the values array contains only "value". The
requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -235,9 +224,10 @@ spec:
description: Name is the name of this hook.
type: string
post:
description: |-
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup.
These are executed after all "additional items" from item actions are processed.
description: PostHooks is a list of BackupResourceHooks
to execute after storing the item in the backup. These
are executed after all "additional items" from item
actions are processed.
items:
description: BackupResourceHook defines a hook for
a resource.
@@ -253,9 +243,10 @@ spec:
minItems: 1
type: array
container:
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
description: Container is the container in
the pod where the command should be executed.
If not specified, the pod's first container
is used.
type: string
onError:
description: OnError specifies how Velero
@@ -266,9 +257,10 @@ spec:
- Fail
type: string
timeout:
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
description: Timeout defines the maximum amount
of time Velero should wait for the hook
to complete before considering the execution
a failure.
type: string
required:
- command
@@ -278,9 +270,10 @@ spec:
type: object
type: array
pre:
description: |-
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup.
These are executed before any "additional items" from item actions are processed.
description: PreHooks is a list of BackupResourceHooks
to execute prior to storing the item in the backup.
These are executed before any "additional items" from
item actions are processed.
items:
description: BackupResourceHook defines a hook for
a resource.
@@ -296,9 +289,10 @@ spec:
minItems: 1
type: array
container:
description: |-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
description: Container is the container in
the pod where the command should be executed.
If not specified, the pod's first container
is used.
type: string
onError:
description: OnError specifies how Velero
@@ -309,9 +303,10 @@ spec:
- Fail
type: string
timeout:
description: |-
Timeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
description: Timeout defines the maximum amount
of time Velero should wait for the hook
to complete before considering the execution
a failure.
type: string
required:
- command
@@ -327,56 +322,50 @@ spec:
type: array
type: object
includeClusterResources:
description: |-
IncludeClusterResources specifies whether cluster-scoped resources
should be included for consideration in the backup.
description: IncludeClusterResources specifies whether cluster-scoped
resources should be included for consideration in the backup.
nullable: true
type: boolean
includedClusterScopedResources:
description: |-
IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup.
If set to "*", all cluster-scoped resource types are included.
The default value is empty, which means only related
cluster-scoped resources are included.
description: IncludedClusterScopedResources is a slice of cluster-scoped
resource type names to include in the backup. If set to "*",
all cluster-scoped resource types are included. The default
value is empty, which means only related cluster-scoped resources
are included.
items:
type: string
nullable: true
type: array
includedNamespaceScopedResources:
description: |-
IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup.
The default value is "*".
description: IncludedNamespaceScopedResources is a slice of namespace-scoped
resource type names to include in the backup. The default value
is "*".
items:
type: string
nullable: true
type: array
includedNamespaces:
description: |-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
description: IncludedNamespaces is a slice of namespace names
to include objects from. If empty, all namespaces are included.
items:
type: string
nullable: true
type: array
includedResources:
description: |-
IncludedResources is a slice of resource names to include
in the backup. If empty, all resources are included.
description: IncludedResources is a slice of resource names to
include in the backup. If empty, all resources are included.
items:
type: string
nullable: true
type: array
itemOperationTimeout:
description: |-
ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations
The default value is 4 hour.
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations The default value
is 1 hour.
type: string
labelSelector:
description: |-
LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty
description: LabelSelector is a metav1.LabelSelector to filter
with when adding individual objects to the backup. If empty
or nil, all objects are included. Optional.
nullable: true
properties:
@@ -384,42 +373,41 @@ spec:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector
that contains values, a key, and an operator that relates
the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values. If
the operator is In or NotIn, the values array must
be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced
during a strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs. A
single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is "key",
the operator is "In", and the values array contains only
"value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -431,58 +419,56 @@ spec:
type: object
type: object
orLabelSelectors:
description: |-
OrLabelSelectors is list of metav1.LabelSelector to filter with
when adding individual objects to the backup. If multiple provided
they will be joined by the OR operator. LabelSelector as well as
OrLabelSelectors cannot co-exist in backup request, only one of them
can be used.
description: OrLabelSelectors is list of metav1.LabelSelector
to filter with when adding individual objects to the backup.
If multiple provided they will be joined by the OR operator.
LabelSelector as well as OrLabelSelectors cannot co-exist in
backup request, only one of them can be used.
items:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
description: A label selector is a label query over a set of
resources. The result of matchLabels and matchExpressions
are ANDed. An empty label selector matches all objects. A
null label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
description: A label selector requirement is a selector
that contains values, a key, and an operator that relates
the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
description: operator represents a key's relationship
to a set of values. Valid operators are In, NotIn,
Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
description: values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists or
DoesNotExist, the values array must be empty. This
array is replaced during a strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is
"key", the operator is "In", and the values array contains
only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
@@ -491,10 +477,11 @@ spec:
orderedResources:
additionalProperties:
type: string
description: |-
OrderedResources specifies the backup order of resources of specific Kind.
The map key is the resource name and value is a list of object names separated by commas.
Each resource name has format "namespace/objectname". For cluster resources, simply use "objectname".
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the resource name and value
is a list of object names separated by commas. Each resource
name has format "namespace/objectname". For cluster resources,
simply use "objectname".
nullable: true
type: object
resourcePolicy:
@@ -502,10 +489,10 @@ spec:
policies that backup should follow
properties:
apiGroup:
description: |-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
description: APIGroup is the group for the resource being
referenced. If APIGroup is not specified, the specified
Kind must be in the core API group. For any other third-party
types, APIGroup is required.
type: string
kind:
description: Kind is the type of resource being referenced
@@ -524,10 +511,9 @@ spec:
nullable: true
type: boolean
snapshotVolumes:
description: |-
SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included
in the Backup.
description: SnapshotVolumes specifies whether to take snapshots
of any PV's referenced in the set of objects included in the
Backup.
nullable: true
type: boolean
storageLocation:
@@ -535,9 +521,8 @@ spec:
a BackupStorageLocation where the backup should be stored.
type: string
ttl:
description: |-
TTL is a time.Duration-parseable string describing how long
the Backup should be retained for.
description: TTL is a time.Duration-parseable string describing
how long the Backup should be retained for.
type: string
uploaderConfig:
description: UploaderConfig specifies the configuration for the
@@ -549,10 +534,6 @@ spec:
uploads to perform when using the uploader.
type: integer
type: object
volumeGroupSnapshotLabelKey:
description: VolumeGroupSnapshotLabelKey specifies the label key
to group PVCs under a VGS.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.
@@ -561,9 +542,8 @@ spec:
type: array
type: object
useOwnerReferencesInBackup:
description: |-
UseOwnerReferencesBackup specifies whether to use
OwnerReferences on backups created by this Schedule.
description: UseOwnerReferencesBackup specifies whether to use OwnerReferences
on backups created by this Schedule.
nullable: true
type: boolean
required:
@@ -574,8 +554,7 @@ spec:
description: ScheduleStatus captures the current state of a Velero schedule
properties:
lastBackup:
description: |-
LastBackup is the last time a Backup was run for this
description: LastBackup is the last time a Backup was run for this
Schedule schedule
format: date-time
nullable: true
@@ -593,9 +572,8 @@ spec:
- FailedValidation
type: string
validationErrors:
description: |-
ValidationErrors is a slice of all validation errors (if
applicable)
description: ValidationErrors is a slice of all validation errors
(if applicable)
items:
type: string
type: array

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: serverstatusrequests.velero.io
spec:
group: velero.io
@@ -19,24 +19,18 @@ spec:
- name: v1
schema:
openAPIV3Schema:
description: |-
ServerStatusRequest is a request to access current status information about
the Velero server.
description: ServerStatusRequest is a request to access current status information
about the Velero server.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -69,9 +63,8 @@ spec:
nullable: true
type: array
processedTimestamp:
description: |-
ProcessedTimestamp is when the ServerStatusRequest was processed
by the ServerStatusRequestController.
description: ProcessedTimestamp is when the ServerStatusRequest was
processed by the ServerStatusRequestController.
format: date-time
nullable: true
type: string

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: volumesnapshotlocations.velero.io
spec:
group: velero.io
@@ -23,19 +23,14 @@ spec:
snapshots.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -57,13 +52,8 @@ spec:
valid secret key.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined

File diff suppressed because one or more lines are too long

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: datadownloads.velero.io
spec:
group: velero.io
@@ -52,19 +52,14 @@ spec:
and data mover controller for the datamover restore operation
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -72,14 +67,12 @@ spec:
description: DataDownloadSpec is the specification for a DataDownload.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing DataDownload. It can be set
when the DataDownload is in InProgress phase
description: Cancel indicates request to cancel the ongoing DataDownload.
It can be set when the DataDownload is in InProgress phase
type: boolean
dataMoverConfig:
additionalProperties:
@@ -88,30 +81,22 @@ spec:
fields.
type: object
datamover:
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
type: string
nodeOS:
description: NodeOS is OS of the node where the DataDownload is processed.
enum:
- auto
- linux
- windows
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
operationTimeout:
description: |-
OperationTimeout specifies the time used to wait internal operations,
before returning error as timeout.
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
type: string
snapshotID:
description: SnapshotID is the ID of the Velero backup snapshot to
be restored from.
type: string
sourceNamespace:
description: |-
SourceNamespace is the original namespace where the volume is backed up from.
It may be different from SourcePVC's namespace if namespace is remapped during restore.
description: SourceNamespace is the original namespace where the volume
is backed up from. It may be different from SourcePVC's namespace
if namespace is remapped during restore.
type: string
targetVolume:
description: TargetVolume is the information of the target PVC and
@@ -143,21 +128,10 @@ spec:
status:
description: DataDownloadStatus is the current status of a DataDownload.
properties:
acceptedByNode:
description: Node is name of the node where the DataUpload is prepared.
type: string
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the DataUpload is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores.
The server's time is used for CompletionTimestamps
description: CompletionTimestamp records the time a restore was completed.
Completion time is recorded even on failed restores. The server's
time is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -180,10 +154,9 @@ spec:
- Failed
type: string
progress:
description: |-
Progress holds the total number of bytes of the snapshot and the current
number of restored bytes. This can be used to display progress information
about the restore operation.
description: Progress holds the total number of bytes of the snapshot
and the current number of restored bytes. This can be used to display
progress information about the restore operation.
properties:
bytesDone:
format: int64
@@ -193,8 +166,7 @@ spec:
type: integer
type: object
startTimestamp:
description: |-
StartTimestamp records the time a restore was started.
description: StartTimestamp records the time a restore was started.
The server's time is used for StartTimestamps
format: date-time
nullable: true

View File

@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
controller-gen.kubebuilder.io/version: v0.12.0
name: datauploads.velero.io
spec:
group: velero.io
@@ -53,19 +53,14 @@ spec:
data mover controller for the datamover backup operation
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
@@ -73,23 +68,18 @@ spec:
description: DataUploadSpec is the specification for a DataUpload.
properties:
backupStorageLocation:
description: |-
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
description: BackupStorageLocation is the name of the backup storage
location where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing DataUpload. It can be set
when the DataUpload is in InProgress phase
description: Cancel indicates request to cancel the ongoing DataUpload.
It can be set when the DataUpload is in InProgress phase
type: boolean
csiSnapshot:
description: If SnapshotType is CSI, CSISnapshot provides the information
of the CSI snapshot.
nullable: true
properties:
driver:
description: Driver is the driver used by the VolumeSnapshotContent
type: string
snapshotClass:
description: SnapshotClass is the name of the snapshot class that
the volume snapshot is created with
@@ -114,23 +104,22 @@ spec:
nullable: true
type: object
datamover:
description: |-
DataMover specifies the data mover to be used by the backup.
If DataMover is "" or "velero", the built-in data mover will be used.
description: DataMover specifies the data mover to be used by the
backup. If DataMover is "" or "velero", the built-in data mover
will be used.
type: string
operationTimeout:
description: |-
OperationTimeout specifies the time used to wait internal operations,
before returning error as timeout.
description: OperationTimeout specifies the time used to wait internal
operations, before returning error as timeout.
type: string
snapshotType:
description: SnapshotType is the type of the snapshot to be backed
up.
type: string
sourceNamespace:
description: |-
SourceNamespace is the original namespace where the volume is backed up from.
It is the same namespace for SourcePVC and CSI namespaced objects.
description: SourceNamespace is the original namespace where the volume
is backed up from. It is the same namespace for SourcePVC and CSI
namespaced objects.
type: string
sourcePVC:
description: SourcePVC is the name of the PVC which the snapshot is
@@ -146,23 +135,11 @@ spec:
status:
description: DataUploadStatus is the current status of a DataUpload.
properties:
acceptedByNode:
description: AcceptedByNode is name of the node where the DataUpload
is prepared.
type: string
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the DataUpload is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups.
Completion time is recorded before uploading the backup object.
The server's time is used for CompletionTimestamps
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
is recorded before uploading the backup object. The server's time
is used for CompletionTimestamps
format: date-time
nullable: true
type: string
@@ -179,13 +156,6 @@ spec:
node:
description: Node is name of the node where the DataUpload is processed.
type: string
nodeOS:
description: NodeOS is OS of the node where the DataUpload is processed.
enum:
- auto
- linux
- windows
type: string
path:
description: Path is the full path of the snapshot volume being backed
up.
@@ -203,10 +173,9 @@ spec:
- Failed
type: string
progress:
description: |-
Progress holds the total number of bytes of the volume and the current
number of backed up bytes. This can be used to display progress information
about the backup operation.
description: Progress holds the total number of bytes of the volume
and the current number of backed up bytes. This can be used to display
progress information about the backup operation.
properties:
bytesDone:
format: int64
@@ -220,10 +189,8 @@ spec:
backup repository.
type: string
startTimestamp:
description: |-
StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes
on restores.
description: StartTimestamp records the time a backup was started.
Separate from CreationTimestamp, since that value changes on restores.
The server's time is used for StartTimestamps
format: date-time
nullable: true

File diff suppressed because one or more lines are too long

View File

@@ -8,7 +8,17 @@ rules:
- ""
resources:
- persistentvolumerclaims
verbs:
- get
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- get
@@ -16,18 +26,6 @@ rules:
- velero.io
resources:
- backuprepositories
- backups
- backupstoragelocations
- datadownloads
- datauploads
- deletebackuprequests
- downloadrequests
- podvolumebackups
- podvolumerestores
- restores
- schedules
- serverstatusrequests
- volumesnapshotlocations
verbs:
- create
- delete
@@ -40,18 +38,239 @@ rules:
- velero.io
resources:
- backuprepositories/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- backups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- backupstoragelocations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- backupstoragelocations/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datadownloads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datadownloads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- datauploads
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- datauploads/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- deletebackuprequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- deletebackuprequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- downloadrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- downloadrequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumebackups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumebackups/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- podvolumerestores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- podvolumerestores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- restores
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- restores/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- schedules
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- schedules/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- serverstatusrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- serverstatusrequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- volumesnapshotlocations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

View File

@@ -1,344 +0,0 @@
# Extend VolumePolicies to support more actions
## Abstract
Currently, the [VolumePolicies feature](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md), which can be used to filter and handle volumes during backup, only supports the `skip` action on matching conditions. Users need more actions to be supported.
## Background
The `VolumePolicies` feature was introduced in Velero 1.11 as a flexible way to handle volumes. The main goal of
introducing the VolumePolicies feature was to improve the overall user experience when performing backup operations
for volume resources: the feature enables users to group volumes according to the `conditions` (criteria) specified and
also lets them specify the `action` that velero needs to take for these grouped volumes during the backup operation.
The current limitation is that `VolumePolicies` only supports `skip` as an action. We want to extend the `action`
functionality to support more useful options like `fs-backup` (file system backup) and `snapshot` (VolumeSnapshots).
## Goals
- Extending the VolumePolicies to support more actions like `fs-backup` (File system backup) and `snapshot` (VolumeSnapshots).
- Improve user experience when backing up Volumes via Velero
## Non-Goals
- No changes to existing approaches to opt-in/opt-out annotations for volumes
- No changes to existing `VolumePolicies` functionalities
- No additions or implementations to support more granular actions like `snapshot-csi` and `snapshot-datamover`. These actions can be implemented as a future enhancement
## Use-cases/Scenarios
**Use-case 1:**
- A user wants to use the `snapshot` (VolumeSnapshots) backup option for all the CSI-supported volumes and `fs-backup` for the rest of the volumes.
- Currently, velero supports this use-case but the user experience is not great.
- The user has to individually annotate each volume-mounting pod with the annotation "backup.velero.io/backup-volumes" for `fs-backup`.
- This becomes cumbersome at scale.
- Using `VolumePolicies`, the user can just specify two simple policies: a `snapshot` action for CSI-supported volumes, and an `fs-backup` action for the rest:
```yaml
version: v1
volumePolicies:
- conditions:
storageClass:
- gp2
action:
type: snapshot
- conditions: {}
action:
type: fs-backup
```
**Use-case 2:**
- A user wants to use `fs-backup` for NFS volumes pertaining to a particular server.
- In such a scenario, the user can just specify a `VolumePolicy` like:
```yaml
version: v1
volumePolicies:
- conditions:
nfs:
server: 192.168.200.90
action:
type: fs-backup
```
## High-Level Design
- When the VolumePolicy action is set to `fs-backup`, the backup workflow modifications would be:
- We call [backupItem() -> backupItemInternal()](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L95) on all the items that are to be backed up.
- When we encounter a [Pod as an item](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L195),
- we will have to modify the backup workflow to account for the `fs-backup` VolumePolicy action.
- When the VolumePolicy action is set to `snapshot`, the backup workflow modifications would be:
- Once again, we call [backupItem() -> backupItemInternal()](https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L95) on all the items that are to be backed up.
- When we encounter a [Persistent Volume as an item](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L253),
- we call the [takePVSnapshot func](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L508).
- We need to modify the takePVSnapshot function to account for the `snapshot` VolumePolicy action.
- In the case of CSI snapshots for PVC objects, the snapshot actions are taken by the velero-plugin-for-csi, so we need to modify the [executeActions()](https://github.com/vmware-tanzu/velero/blob/512fe0dabdcb3bbf1ca68a9089056ae549663bcf/pkg/backup/item_backupper.go#L232) function to account for the `snapshot` VolumePolicy action.
**Note:** The `snapshot` action can be either a native snapshot or a CSI snapshot, as in the current flow where velero itself makes the decision based on the backup CR.
## Detailed Design
- Update VolumePolicy action type validation to account for `fs-backup` and `snapshot` as valid VolumePolicy actions (a small validation sketch follows after this list).
- Modifications needed for `fs-backup` action:
- Based on whether a volume policy is specified on the backup request, we will decide whether to follow the legacy pod-annotation approach or the newer volume-policy-based fs-backup approach.
- If a volume policy (fs-backup/snapshot) on the backup request matches an action for a volume, we use the newer volume policy approach to get the list of volumes for the `fs-backup` action.
- Otherwise, continue with the legacy annotation-based workflow.
- Modifications needed for `snapshot` action:
- In the [takePVSnapshot function](https://github.com/vmware-tanzu/velero/blob/d4128542590470b204a642ee43311921c11db880/pkg/backup/item_backupper.go#L508) we will check whether the PV fits the volume policy criteria and whether the associated action is `snapshot`.
- If it is not `snapshot`, we skip the rest of the workflow and avoid taking a snapshot of the PV.
- Similarly, for CSI snapshots of PVC objects, we need to make similar changes in the [executeAction() function](https://github.com/vmware-tanzu/velero/blob/512fe0dabdcb3bbf1ca68a9089056ae549663bcf/pkg/backup/item_backupper.go#L348): we will check whether the PVC fits the volume policy criteria and whether the associated action is `snapshot` via the CSI plugin.
- If it is not `snapshot`, we skip the CSI BIA execute action and avoid taking a snapshot of the PVC by not invoking the CSI plugin action for the PVC.
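A minimal sketch of what the extended action-type validation mentioned in the first item of this list could look like; the `Skip`/`FSBackup`/`Snapshot` constant names follow the `resourcepolicies` identifiers used later in this document, while the `validateActionType` helper name and error wording are illustrative assumptions, not the actual implementation:
```go
package resourcepolicies

import "fmt"

// Supported action types once fs-backup and snapshot are added; "skip" is the
// only action supported today. The constant names mirror the
// resourcepolicies.FSBackup / resourcepolicies.Snapshot identifiers referenced
// elsewhere in this design; their exact values are assumptions.
const (
	Skip     = "skip"
	FSBackup = "fs-backup"
	Snapshot = "snapshot"
)

// validateActionType is a hypothetical helper showing the extended check.
func validateActionType(t string) error {
	switch t {
	case Skip, FSBackup, Snapshot:
		return nil
	default:
		return fmt.Errorf("invalid action type %q, allowed values are %q, %q and %q", t, Skip, FSBackup, Snapshot)
	}
}
```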
**Note:**
- When we are using the `VolumePolicy` approach for backing up volumes, the volume policy criteria and action need to be specific and explicit; there is no default behavior. If a volume matches the `fs-backup` action, the `fs-backup` method will be used for that volume, and similarly, if a volume matches the criteria for the `snapshot` action, the snapshot workflow will be used for that volume's backup.
- Another thing to note is that the workflow proposed in this design uses the legacy `opt-in/opt-out` approach as a fallback option. For instance, if the user specifies a VolumePolicy but no action (fs-backup/snapshot) in the policy matches a particular volume included in the backup, the legacy approach will be used for backing up that volume.
- The relation between the `VolumePolicy` and the backup's legacy parameter `SnapshotVolumes`:
- The `VolumePolicy`'s `snapshot` action has higher priority. When a `snapshot` action matches the selected volume, it will be backed up via snapshot regardless of the `backup.Spec.SnapshotVolumes` setting.
- If no `snapshot` action in the `VolumePolicy` matches the selected volume, the volume will still be backed up via snapshot, provided `backup.Spec.SnapshotVolumes` is not set to false.
- The relation between the `VolumePolicy` and the backup's legacy filesystem `opt-in/opt-out` approach:
- The `VolumePolicy`'s `fs-backup` action has higher priority. When an `fs-backup` action matches the selected volume, it will be backed up via fs-backup regardless of the `backup.Spec.DefaultVolumesToFsBackup` setting and the pod's `opt-in/opt-out` annotation.
- If no `fs-backup` action in the `VolumePolicy` matches the selected volume, the volume will be backed up via the legacy `opt-in/opt-out` approach.
## Implementation
- The implementation should be included in velero 1.14
- We will introduce a `VolumeHelper` interface. It will consist of two methods:
```go
type VolumeHelper interface {
ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error)
ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error)
}
```
- The `VolumeHelperImpl` struct will implement the `VolumeHelper` interface and will consist of the functions that we will use throughout the backup workflow to accommodate volume policies for PVs and PVCs.
```go
type volumeHelperImpl struct {
volumePolicy *resourcepolicies.Policies
snapshotVolumes *bool
logger logrus.FieldLogger
client crclient.Client
defaultVolumesToFSBackup bool
backupExcludePVC bool
}
```
- We will create an instance of the structure `volumeHelperImpl` in `item_backupper.go`
```go
itemBackupper := &itemBackupper{
...
volumeHelperImpl: volumehelper.NewVolumeHelperImpl(
resourcePolicy,
backupRequest.Spec.SnapshotVolumes,
log,
kb.kbClient,
boolptr.IsSetToTrue(backupRequest.Spec.DefaultVolumesToFsBackup),
!backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()),
),
}
```
#### FS-Backup
- For the `fs-backup` action, to decide whether to use the legacy annotation-based approach or the volume-policy-based approach:
- We will use the `vh.ShouldPerformFSBackup()` function from the `volumehelper` package.
- The functions involved in processing the `fs-backup` volume policy action will look roughly like:
```go
func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error) {
if !v.shouldIncludeVolumeInBackup(volume) {
v.logger.Debugf("skip fs-backup action for pod %s's volume %s, due to not pass volume check.", pod.Namespace+"/"+pod.Name, volume.Name)
return false, nil
}
if v.volumePolicy != nil {
pvc, err := kubeutil.GetPVCForPodVolume(&volume, &pod, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
return false, err
}
pv, err := kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
}
action, err := v.volumePolicy.GetMatchAction(pv)
if err != nil {
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
return false, err
}
if action != nil {
if action.Type == resourcepolicies.FSBackup {
v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to volume policy match",
volume.Name, pod.Namespace+"/"+pod.Name)
return true, nil
} else {
v.logger.Infof("Skip fs-backup action for volume %s for pod %s because the action type is %s",
volume.Name, pod.Namespace+"/"+pod.Name, action.Type)
return false, nil
}
}
}
if v.shouldPerformFSBackupLegacy(volume, pod) {
v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to opt-in/out way",
volume.Name, pod.Namespace+"/"+pod.Name)
return true, nil
} else {
v.logger.Infof("Skip fs-backup action for volume %s of pod %s due to opt-in/out way",
volume.Name, pod.Namespace+"/"+pod.Name)
return false, nil
}
}
```
- The main function from the above will be called when we encounter Pods during the backup workflow:
```go
for _, volume := range pod.Spec.Volumes {
shouldDoFSBackup, err := ib.volumeHelperImpl.ShouldPerformFSBackup(volume, *pod)
if err != nil {
backupErrs = append(backupErrs, errors.WithStack(err))
}
...
}
```
#### Snapshot (PV)
- To make sure the `snapshot` action is skipped for PVs that do not fit the volume policy criteria, we will use `vh.ShouldPerformSnapshot` from the `VolumeHelperImpl(vh)` receiver.
```go
func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error) {
// check if volume policy exists and also check if the object(pv/pvc) fits a volume policy criteria and see if the associated action is snapshot
// if it is not snapshot then skip the code path for snapshotting the PV/PVC
pvc := new(corev1api.PersistentVolumeClaim)
pv := new(corev1api.PersistentVolume)
var err error
if groupResource == kuberesource.PersistentVolumeClaims {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
return false, err
}
pv, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
return false, err
}
}
if groupResource == kuberesource.PersistentVolumes {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pv); err != nil {
return false, err
}
}
if v.volumePolicy != nil {
action, err := v.volumePolicy.GetMatchAction(pv)
if err != nil {
return false, err
}
// If there is a match action, and the action type is snapshot, return true,
// or the action type is not snapshot, then return false.
// If there is no match action, go on to the next check.
if action != nil {
if action.Type == resourcepolicies.Snapshot {
v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
return true, nil
} else {
v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
return false, nil
}
}
}
// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
pods, err := podvolumeutil.GetPodsUsingPVC(
pv.Spec.ClaimRef.Namespace,
pv.Spec.ClaimRef.Name,
v.client,
)
if err != nil {
v.logger.WithError(err).Errorf("fail to get pod for PV %s", pv.Name)
return false, err
}
for _, pod := range pods {
for _, vol := range pod.Spec.Volumes {
if vol.PersistentVolumeClaim != nil &&
vol.PersistentVolumeClaim.ClaimName == pv.Spec.ClaimRef.Name &&
v.shouldPerformFSBackupLegacy(vol, pod) {
v.logger.Infof("Skipping snapshot of pv %s because it is backed up with PodVolumeBackup.", pv.Name)
return false, nil
}
}
}
}
if !boolptr.IsSetToFalse(v.snapshotVolumes) {
// If the backup.Spec.SnapshotVolumes is not set, or set to true, then should take the snapshot.
v.logger.Infof("performing snapshot action for pv %s as the snapshotVolumes is not set to false", pv.Name)
return true, nil
}
v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
return false, nil
}
```
- The function `ShouldPerformSnapshot` will be used as follows in `takePVSnapshot` function of the backup workflow:
```go
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumes)
if err != nil {
return err
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping volume snapshot for PV %s as it does not fit the volume policy criteria specified by the user for snapshot action", pv.Name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
return nil
}
```
#### Snapshot (PVC)
- To make sure the `snapshot` action is skipped for PVCs that do not fit the volume policy criteria, we will again use `vh.ShouldPerformSnapshot` from the `VolumeHelperImpl(vh)` receiver.
- We will pass the `VolumeHelperImpl(vh)` instance into the `executeActions` method so that it is available to use there.
- The above function will be used as follows in the `executeActions` function of the backup workflow.
- Because the vSphere plugin doesn't support the VolumePolicy yet, the VolumePolicy is not used for the vSphere plugin for now.
```go
if groupResource == kuberesource.PersistentVolumeClaims {
if actionName == csiBIAPluginName {
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumeClaims)
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria specified by the user for snapshot action", namespace+"/"+name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
continue
}
}
}
```
## Future Implementation
It makes sense to add more specific actions in the future, once we deprecate the legacy opt-in/opt-out approach to keep things simple. Another point of note is that CSI-related actions will be
easier to implement once we decide to merge the CSI plugin into the main velero code flow.
In the future, we envision the following actions that can be implemented:
- `snapshot-native`: only use volume snapshotter (native cloud provider snapshots), do nothing if not present/not compatible
- `snapshot-csi`: only use csi-plugin, don't use volume snapshotter(native cloud provider snapshots), don't use datamover even if snapshotMoveData is true
- `snapshot-datamover`: only use csi with datamover, don't use volume snapshotter (native cloud provider snapshots), use datamover even if snapshotMoveData is false
**Note:** The above actions are just suggestions for future scope; we may not use or implement them as is. We could merge these suggested actions into the `snapshot` action and use volume policy parameters and criteria to distinguish them, instead of making the user explicitly supply action names at such a granular level.
## Related to Design
[Handle backup of volumes by resources filters](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md)
## Alternatives Considered
Same as the earlier design, since this is an extension of the original VolumePolicies design.

View File

@@ -1,370 +0,0 @@
# Velero Backup performance Improvements and VolumeGroupSnapshot enablement
There are two different goals here, linked by a single primary missing feature in the Velero backup workflow.
The first goal is to enhance backup performance by allowing the primary backup controller to run in multiple threads, enabling Velero to back up multiple items at the same time for a given backup.
The second goal is to enable Velero to eventually support VolumeGroupSnapshots.
For both of these goals, Velero needs a way to determine which items should be backed up together.
This design proposal will include two development phases:
- Phase 1 will refactor the backup workflow to identify blocks of related items that should be backed up together, and then coordinate backup hooks among items in the block.
- Phase 2 will add multiple worker threads for backing up item blocks; instead of backing up each block as it is identified, the velero backup workflow will add the block to a channel and one of the workers will pick it up.
- Actual support for VolumeGroupSnapshots is out-of-scope here and will be handled in a future design proposal, but the item block refactor introduced in Phase 1 is a primary building block for this future proposal.
## Background
Currently, during backup processing, the main Velero backup controller runs in a single thread, completely finishing the primary backup processing for one resource before moving on to the next one.
We can improve the overall backup performance by backing up multiple items for a backup at the same time, but before we can do this we must first identify resources that need to be backed up together.
Generally speaking, resources that need to be backed up together are resources with interdependencies -- pods with their PVCs, PVCs with their PVs, groups of pods that form a single application, CRs, pods, and other resources that belong to the same operator, etc.
As part of this initial refactoring, once these "Item Blocks" are identified, an additional change will be to move pod hook processing up to the ItemBlock level.
If there are multiple pods in the ItemBlock, pre-hooks for all pods will be run before backing up the items, followed by post-hooks for all pods.
This change to hook processing is another prerequisite for future VolumeGroupSnapshot support, since supporting this will require backing up the pods and volumes together for any volumes which belong to the same group.
Once we are backing up items by block, the next step will be to create multiple worker threads to process and back up ItemBlocks, so that we can back up multiple ItemBlocks at the same time.
In looking at the different kinds of large backups that Velero must deal with, two obvious scenarios come to mind:
1. Backups with a relatively small number of large volumes
2. Backups with a large number of relatively small volumes.
In case 1, the majority of the time spent on the backup is in the asynchronous phases -- CSI snapshot creation actions after the snapshot handle exists, and DataUpload processing. In that case, parallel item processing will likely have a minimal impact on overall backup completion time.
In case 2, the majority of time spent on the backup will likely be during the synchronous actions. Especially with regard to CSI snapshot creation, waiting for the VSC snapshot handle to exist will take a significant amount of time with thousands of volumes. This is the sort of use case which will benefit the most from parallel item processing.
## Goals
- Identify groups of related items to back up together (ItemBlocks).
- Manage backup hooks at the ItemBlock level rather than per-item.
- Using worker threads, back up ItemBlocks at the same time.
## Non Goals
- Support VolumeGroupSnapshots: this is a future feature, although certain prerequisites for this enhancement are included in this proposal.
- Process multiple backups in parallel: this is a future feature, although certain prerequisites for this enhancement are included in this proposal.
- Refactoring plugin infrastructure to avoid RPC calls for internal plugins.
- Restore performance improvements: this is potentially a future feature
## High-Level Design
### ItemBlock concept
The updated design is based on a new struct/type called `ItemBlock`.
Essentially, an `ItemBlock` is a group of items that must be backed up together in order to guarantee backup integrity.
When we eventually split item backup across multiple worker threads, `ItemBlocks` will be kept together as the basic unit of backup.
To facilitate this, a new plugin type, `ItemBlockAction`, will allow velero to identify relationships between items -- any resources that must be backed up with other resources will need IBA plugins defined for them.
Examples of `ItemBlocks` include:
1. A pod, its mounted PVCs, and the bound PVs for those PVCs.
2. A VolumeGroup (related PVCs and PVs) along with any pods mounting these volumes.
3. For a ReadWriteMany PVC, the PVC, its bound PV, and all pods mounting this PVC.
### Phase 1: ItemBlock processing
- A new plugin type, `ItemBlockAction`, will be created
- `ItemBlockAction` will contain the API method `GetRelatedItems`, which will be needed for determining which items to group together into `ItemBlocks`.
- When processing the list of items returned from the item collector, instead of simply calling `BackupItem` on each in turn, we will use the `GetRelatedItems` API call to determine other items to include with the current item in an ItemBlock, repeating recursively on each item returned (a minimal sketch of this grouping loop follows after this list).
- Don't include an item in more than one ItemBlock -- if the next item from the item collector is already in a block, skip it.
- Once ItemBlock is determined, call new func `BackupItemBlock` instead of `BackupItem`.
- New func `BackupItemBlock` will call pre hooks for any pods in the block, then back up the items in the block (`BackupItem` will no longer run hooks directly), then call post hooks for any pods in the block.
- The finalize phase will not be affected by the ItemBlock design, since this is just updating resources after async operations are completed on the items and there is no need to run these updates in parallel.
### Phase 2: Process ItemBlocks for a single backup in multiple threads
- Concurrent `BackupItemBlock` operations will be executed by worker threads invoked by the backup controller, which will communicate with the backup controller operation via a shared channel.
- The ItemBlock processing loop implemented in Phase 1 will be modified to send each newly-created ItemBlock to the shared channel rather than calling `BackupItemBlock` inline.
- Users will be able to configure the number of workers available for concurrent `BackupItemBlock` operations.
- Access to the BackedUpItems map must be synchronized
## Detailed Design
### Phase 1: ItemBlock processing
#### New ItemBlockAction plugin type
In order for Velero to identify groups of items to back up together in an ItemBlock, we need a way to identify items which need to be backed up along with the current item. While the current `Execute` BackupItemAction method does return a list of additional items which are required by the current item, we need to know this *before* we start the item backup. To support this, we need a new plugin type, `ItemBlockAction` (IBA), with an API method, `GetRelatedItems`, which Velero will call on each item as it is processed. The expectation is that the registered IBA plugins will return the same items as are returned as additional items by the BIA `Execute` method, with the exception that items which are not created until `Execute` is called should not be returned here, as they don't exist yet.
#### Proto definition (compiled into golang by protoc)
The ItemBlockAction plugin type is defined as follows:
```
service ItemBlockAction {
    rpc AppliesTo(ItemBlockActionAppliesToRequest) returns (ItemBlockActionAppliesToResponse);
    rpc GetRelatedItems(ItemBlockActionGetRelatedItemsRequest) returns (ItemBlockActionGetRelatedItemsResponse);
}

message ItemBlockActionAppliesToRequest {
    string plugin = 1;
}

message ItemBlockActionAppliesToResponse {
    ResourceSelector ResourceSelector = 1;
}

message ItemBlockActionGetRelatedItemsRequest {
    string plugin = 1;
    bytes item = 2;
    bytes backup = 3;
}

message ItemBlockActionGetRelatedItemsResponse {
    repeated generated.ResourceIdentifier relatedItems = 1;
}
```
A new PluginKind, `ItemBlockAction`, will be created, and the backup process will be modified to use this plugin kind.
Any BIA plugin which returns additional items from `Execute()` that need to be backed up at the same time as (or sequentially in the same worker thread as) the current item should add a corresponding IBA plugin to return these same items (minus any which won't exist before BIA `Execute()` is called).
This mainly applies to plugins that operate on pods which reference resources that must be backed up along with the pod and are potentially affected by pod hooks, or to plugins which connect multiple pods whose volumes should be backed up at the same time.
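As an illustration only, the Go-side interface corresponding to the proto service above might look roughly like the sketch below. The interface shape mirrors the proto definition; the exact package location and names are assumptions, not the final API.
```go
// Sketch of the Go interface an ItemBlockAction plugin implements, mirroring the
// proto service above. Package location and exact type names are illustrative.
type ItemBlockAction interface {
	// AppliesTo returns a selector restricting which resources this action applies to.
	AppliesTo() (velero.ResourceSelector, error)

	// GetRelatedItems returns the items that must be grouped into the same ItemBlock
	// as the given item.
	GetRelatedItems(item runtime.Unstructured, backup *velerov1api.Backup) ([]velero.ResourceIdentifier, error)
}
```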
### Changes to processing item list from the Item Collector
#### New structs BackupItemBlock, ItemBlock, and ItemBlockItem
```go
package backup

type BackupItemBlock struct {
	itemblock.ItemBlock
	// This is a reference to the shared itemBackupper for the backup
	itemBackupper *itemBackupper
}

package itemblock

type ItemBlock struct {
	Log   logrus.FieldLogger
	Items []ItemBlockItem
}

type ItemBlockItem struct {
	Gr           schema.GroupResource
	Item         *unstructured.Unstructured
	PreferredGVR schema.GroupVersionResource
}
```
#### Current workflow
In the `BackupWithResolvers` func, the current Velero implementation iterates over the list of items for backup returned by the Item Collector. For each item, Velero loads the item from the file created by the Item Collector, calls `backupItem`, updates the GR map if successful, removes the (temporary) file containing item metadata, and updates progress for the backup.
#### Modifications to the loop over ItemCollector results
The `kubernetesResource` struct used by the item collector will be modified to add an `orderedResource` bool which will be set true for all of the resources moved to the beginning for each GroupResource as a result of being ordered resources.
In addition, an `inItemBlock` bool is added to the struct which will be set to true later when processing the list when each item is added to an ItemBlock.
While the item collector already puts ordered resources first for each GR, there is no indication in the list which of these initial items are from the ordered resources list and which are the remaining (unordered) items.
Velero needs to know which resources are ordered because when we process them later, the ordered resources for each GroupResource must be processed sequentially in a single ItemBlock.
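A hedged sketch of the modified item collector struct is shown below; the existing fields are approximations of the current code, and only `orderedResource` and `inItemBlock` are new.
```go
// Sketch only: existing field names are approximations of the current item collector struct.
type kubernetesResource struct {
	groupResource schema.GroupResource
	preferredGVR  schema.GroupVersionResource
	namespace     string
	name          string
	path          string // path to the file holding the item's metadata

	// orderedResource is true for items moved to the front of the list because they
	// appear in the backup's ordered-resources list for this GroupResource.
	orderedResource bool

	// inItemBlock is set to true once the item has been added to an ItemBlock,
	// so it is never added to a second block.
	inItemBlock bool
}
```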
The current workflow within each iteration of the ItemCollector.items loop will be replaced with the following (a simplified sketch of the loop follows this list):
- (note that some of the steps below should be pulled out into a helper func to facilitate calling it recursively for the items returned from `GetRelatedItems`.)
- Before loop iteration, create a pointer to a `BackupItemBlock` which will represent the current ItemBlock being processed.
- If `item` has `inItemBlock==true`, continue. This one has already been processed.
- If current `itemBlock` is nil, create it.
- Add `item` to `itemBlock`.
- Load the item from the ItemCollector file. Close/remove the file after loading (whether or not an error is returned, possibly with an anonymous func similar to the current implementation).
- If other versions of the same item exist (via EnableAPIGroupVersions), add these to the `itemBlock` as well (and load from ItemCollector file)
- Get matching IBA plugins for item, call `GetRelatedItems` for each. For each item returned, get full item content from ItemCollector (if present in item list, pulling from file, removing file when done) or from cluster (if not present in item list), add item to the current block, add item to `itemsInBlock` map, and then recursively apply current step to each (i.e. call IBA method, add to block, etc.)
- If current item and next item are both ordered items for the same GR, then continue to next item, adding to current `itemBlock`.
- Once the full ItemBlock item list is generated, call `backupItemBlock(block ItemBlock)`.
- Add the `backupItemBlock` return values to the `backedUpGroupResources` map.
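A simplified sketch of the assembly loop described above, under the assumption that the per-item work is pulled into a helper; `NewBackupItemBlock` and `addItemAndRelatedItems` are illustrative names, not the final implementation:
```go
// Sketch only: addItemAndRelatedItems is a hypothetical helper that loads the item
// from the ItemCollector file, marks it inItemBlock, calls the matching IBA plugins,
// and recurses on the related items they return.
for i := range items {
	if items[i].inItemBlock {
		continue // already grouped into an earlier ItemBlock
	}
	block := NewBackupItemBlock(log, itemBackupper)
	addItemAndRelatedItems(block, items, i)
	for _, gr := range kb.backupItemBlock(*block) {
		backedUpGroupResources[gr] = true
	}
}
```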
#### New func `backupItemBlock`
Method signature for new func `backupItemBlock` is as follows:
```go
func (kb *kubernetesBackupper) backupItemBlock(block BackupItemBlock) []schema.GroupResource
```
The return value is a slice of GRs for resources which were backed up. Velero tracks these to determine which CRDs need to be included in the backup. Note that we need to make sure this includes not only those resources that were backed up directly, but also those backed up indirectly via the additional items returned from BIA `Execute()` calls.
In order to handle backup hooks, this func will first take the input item list (`block.items`) and get a list of included pods, filtered to include only those not yet backed up (using `block.itemBackupper.backupRequest.BackedUpItems`). It will then iterate over this list and execute pre hooks (pulled out of `itemBackupper.backupItemInternal`) for each item.
Next, iterate over the full list (`block.items`) and call `backupItem` for each. After the first item, the later items should already have been backed up, but calling `backupItem` a second time is harmless, since the first thing Velero does is check the `BackedUpItems` map and exit if the item is already backed up. We still need this call in case there's a plugin which returns something in `GetRelatedItems` but forgets to return it in the `Execute` additional items return value. If we don't do this, we could end up missing items.
After backing up the items in the block, we execute post hooks using the same filtered item list we used for pre hooks, again taking the logic from `itemBackupper.backupItemInternal`.
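A hedged sketch of `backupItemBlock` based on the description above; `podsNotYetBackedUp`, `executePreHooks`, and `executePostHooks` are illustrative helper names, not the final code.
```go
// Sketch only: hook and filter helpers are illustrative names.
func (kb *kubernetesBackupper) backupItemBlock(block BackupItemBlock) []schema.GroupResource {
	var backedUpGRs []schema.GroupResource

	// Run pre hooks for pods in the block that have not yet been backed up.
	pods := podsNotYetBackedUp(block)
	for _, pod := range pods {
		executePreHooks(block, pod)
	}

	// Back up every item in the block; backupItem itself skips items already
	// recorded in the BackedUpItems map.
	for _, item := range block.Items {
		if kb.backupItem(block.Log, item.Gr, block.itemBackupper, item.Item, item.PreferredGVR, &block) {
			backedUpGRs = append(backedUpGRs, item.Gr)
		}
	}

	// Run post hooks for the same filtered pod list once all items are backed up.
	for _, pod := range pods {
		executePostHooks(block, pod)
	}
	return backedUpGRs
}
```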
#### `itemBackupper.backupItemInternal` cleanup
After implementing backup hooks in `backupItemBlock`, hook processing should be removed from `itemBackupper.backupItemInternal`.
### Phase 2: Process ItemBlocks for a single backup in multiple threads
#### New input field for number of ItemBlock workers
The velero installer and server CLIs will get a new input field `itemBlockWorkerCount`, which will be passed along to the `backupReconciler`.
The `backupReconciler` struct will also have this new field added.
#### Worker pool for item block processing
A new type, `ItemBlockWorkerPool`, will be added. It will manage a pool of worker goroutines which process ItemBlocks, a shared input channel for passing blocks to the workers, and a WaitGroup used to shut down cleanly when the reconciler exits.
```go
type ItemBlockWorkerPool struct {
	itemBlockChannel chan ItemBlockInput
	wg               *sync.WaitGroup
	logger           logrus.FieldLogger
}

type ItemBlockInput struct {
	itemBlock  *BackupItemBlock
	returnChan chan ItemBlockReturn
}

type ItemBlockReturn struct {
	itemBlock *BackupItemBlock
	resources []schema.GroupResource
	err       error
}

func (p *ItemBlockWorkerPool) getInputChannel() chan ItemBlockInput
func StartItemBlockWorkerPool(context context.Context, workers int, logger logrus.FieldLogger) ItemBlockWorkerPool
func processItemBlockWorker(context context.Context, itemBlockChannel chan ItemBlockInput, logger logrus.FieldLogger, wg *sync.WaitGroup)
```
The worker pool will be started by calling `StartItemBlockWorkerPool` in `NewBackupReconciler()`, passing in the worker count and reconciler context.
`backupreconciler.prepareBackupRequest` will also add the input channel to the `backupRequest` so that it will be available during backup processing.
The func `StartItemBlockWorkerPool` will create the `ItemBlockWorkerPool` with a shared buffered input channel (fixed buffer size) and start `workers` goroutines which will each call `processItemBlockWorker`.
The `processItemBlockWorker` func (run by the worker goroutines) will read from `itemBlockChannel`, call `BackupItemBlock` on the retrieved `ItemBlock`, send the return value to the retrieved `returnChan`, and then process the next block.
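A hedged sketch of the worker loop described above; `backupItemBlock` here stands in for the Phase 1 func, and how the worker gains access to it is an implementation detail not settled here.
```go
// Sketch only: error handling and the wiring to the Phase 1 backupItemBlock func are omitted.
func processItemBlockWorker(ctx context.Context, itemBlockChannel chan ItemBlockInput, logger logrus.FieldLogger, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case in := <-itemBlockChannel:
			logger.Debug("processing ItemBlock")
			grs := backupItemBlock(*in.itemBlock)
			// Send the result back on the per-backup return channel, then take the next block.
			in.returnChan <- ItemBlockReturn{itemBlock: in.itemBlock, resources: grs, err: nil}
		case <-ctx.Done():
			logger.Info("stopping ItemBlock worker")
			return
		}
	}
}
```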
#### Modify ItemBlock processing loop to send ItemBlocks to the worker pool rather than backing them up directly
The ItemBlock processing loop implemented in Phase 1 will be modified to send each newly-created ItemBlock to the shared channel rather than calling `BackupItemBlock` inline, using a WaitGroup to manage in-process items. A separate goroutine will be created to process returns for this backup. After completion of the ItemBlock processing loop, velero will use the WaitGroup to wait for all ItemBlock processing to complete before moving forward.
A simplified example of what this response goroutine might look like:
```go
// omitting cancel handling, context, etc.
ret := make(chan ItemBlockReturn)
wg := &sync.WaitGroup{}

// Handle returns
go func() {
	for {
		select {
		case response := <-ret: // process each BackupItemBlock response
			func() {
				defer wg.Done()
				responses = append(responses, response)
			}()
		case <-ctx.Done():
			return
		}
	}
}()

// Simplified illustration, looping over an assumed already-determined ItemBlock list
for _, itemBlock := range itemBlocks {
	wg.Add(1)
	inputChan <- ItemBlockInput{itemBlock: itemBlock, returnChan: ret}
}

done := make(chan struct{})
go func() {
	defer close(done)
	wg.Wait()
}()

// Wait for all the ItemBlocks to be processed
select {
case <-done:
	logger.Info("done processing ItemBlocks")
}

// responses from BackupItemBlock calls are in responses
```
When processing the responses, the main thing is to set `backedUpGroupResources[item.groupResource]=true` for each GR returned, which will give the same result as the current implementation calling items one-by-one and setting that field as needed.
The ItemBlock processing loop described above will be split into two separate iterations. For the first iteration, velero will only process those items at the beginning of the loop identified as `orderedResources` -- when the groups generated from these resources are passed to the worker channel, velero will wait for the response before moving on to the next ItemBlock.
This is to ensure that the ordered resources are processed in the required order. Once the last ordered resource is processed, the remaining ItemBlocks will be processed and sent to the worker channel without waiting for a response, in order to allow these ItemBlocks to be processed in parallel.
The reason we must execute `ItemBlocks` with ordered resources first (and one at a time) is that this is a list of resources identified by the user as resources which must be backed up first, and in a particular order.
#### Synchronize access to the BackedUpItems map
Velero uses a map of BackedUpItems to track which items have already been backed up. This prevents velero from attempting to back up an item more than once, as well as guarding against creating infinite loops due to circular dependencies in the additional items returns. Since velero will now be accessing this map from the parallel goroutines, access to the map must be synchronized with mutexes.
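As a minimal, self-contained illustration (not the actual Velero type), a mutex-guarded check-and-set around a backed-up-items map could look like this:
```go
package main

import (
	"fmt"
	"sync"
)

// backedUpItemsMap is an illustrative stand-in for Velero's BackedUpItems tracking.
type backedUpItemsMap struct {
	mu    sync.Mutex
	items map[string]struct{}
}

// CheckAndAdd returns true if the item was already backed up; otherwise it records
// the item and returns false, all under a single lock so concurrent workers cannot
// both claim the same item.
func (m *backedUpItemsMap) CheckAndAdd(key string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, ok := m.items[key]; ok {
		return true
	}
	m.items[key] = struct{}{}
	return false
}

func main() {
	m := &backedUpItemsMap{items: map[string]struct{}{}}
	fmt.Println(m.CheckAndAdd("v1/pods/ns1/pod1")) // false: first time seen
	fmt.Println(m.CheckAndAdd("v1/pods/ns1/pod1")) // true: already backed up
}
```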
### Backup Finalize phase
The finalize phase will not be affected by the ItemBlock design, since this is just updating resources after async operations are completed on the items and there is no need to run these updates in parallel.
## Alternatives considered
### BackupItemAction v3 API
Instead of adding a new `ItemBlockAction` plugin type, we could add a `GetAdditionalItems` method to BackupItemAction.
This was rejected because the new plugin type provides a cleaner interface, and keeps the function of grouping related items separate from the function of modifying item content for the backup.
### Per-backup worker pool
The current design makes use of a permanent worker pool, started at backup controller startup time. With this design, when we follow on with running multiple backups in parallel, the same set of workers will take ItemBlock inputs from more than one backup. Another approach that was initially considered was a temporary worker pool, created while processing a backup, and deleted upon backup completion.
#### User-visible API differences between the two approaches
The main user-visible difference here is in the configuration API. For the permanent worker approach, the worker count represents the total worker count for all backups. The concurrent backup count represents the number of backups running at the same time. At any given time, though, the maximum number of worker threads backing up items concurrently is equal to the worker count. If worker count is 15 and the concurrent backup count is 3, then there will be, at most, 15 items being processed at the same time, split among up to three running backups.
For the per-backup worker approach, the worker count represents the worker count for each backup. The concurrent backup count, as before, represents the number of backups running at the same time. If worker count is 15 and the concurrent backup count is 3, then there will be, at most, 45 items being processed at the same time, up to 15 for each of up to three running backups.
#### Comparison of the two approaches
- Permanent worker pool advantages:
- This is the more commonly-followed Kubernetes pattern. It's generally better to follow standard practices, unless there are genuine reasons for the use case to go in a different way.
- It's easier for users to understand the maximum number of concurrent items processed, which will have performance impact and impact on the resource requirements for the Velero pod. Users will not have to multiply the config numbers in their heads when working out how many total workers are present.
- It will give us more flexibility for future enhancements around concurrent backups. One possible use case is backup priority: a user may want scheduled backups to have a lower priority than user-generated backups, since a user is sitting there waiting for completion -- a shared worker pool could react to the priority by taking ItemBlocks for the higher-priority backup first, which would allow a large lower-priority backup's items to be preempted by a higher-priority backup's items without needing to explicitly stop the main controller flow for that backup.
- Per-backup worker pool advantages:
- Lower memory consumption than the permanent worker pool, although the total memory used by a worker blocked on input will be quite low, so for only 10-20 workers the impact will be minimal.
## Compatibility
### Example IBA implementation for BIA plugins which return additional items
Included below is an example of what might be required for a BIA plugin which returns additional items.
The code is taken from the internal velero `pod_action.go` which identifies the items required for a given pod.
In this particular case, the only function of pod_action is to return additional items, so we can really just convert this plugin to an IBA plugin. If there were other actions, such as modifying the pod content on backup, then we would still need the pod action, and the related items vs. content manipulation functions would need to be separated.
```go
// PodAction implements ItemBlockAction.
type PodAction struct {
	log logrus.FieldLogger
}

// NewPodAction creates a new ItemAction for pods.
func NewPodAction(logger logrus.FieldLogger) *PodAction {
	return &PodAction{log: logger}
}

// AppliesTo returns a ResourceSelector that applies only to pods.
func (a *PodAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{
		IncludedResources: []string{"pods"},
	}, nil
}

// GetRelatedItems scans the pod's spec.volumes for persistentVolumeClaim volumes and returns a
// ResourceIdentifier list containing references to all of the persistentVolumeClaim volumes used by
// the pod. This ensures that when a pod is backed up, all referenced PVCs are backed up too.
func (a *PodAction) GetRelatedItems(item runtime.Unstructured, backup *v1.Backup) ([]velero.ResourceIdentifier, error) {
	pod := new(corev1api.Pod)
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(item.UnstructuredContent(), pod); err != nil {
		return nil, errors.WithStack(err)
	}

	var relatedItems []velero.ResourceIdentifier
	if pod.Spec.PriorityClassName != "" {
		a.log.Infof("Adding priorityclass %s to relatedItems", pod.Spec.PriorityClassName)
		relatedItems = append(relatedItems, velero.ResourceIdentifier{
			GroupResource: kuberesource.PriorityClasses,
			Name:          pod.Spec.PriorityClassName,
		})
	}

	if len(pod.Spec.Volumes) == 0 {
		a.log.Info("pod has no volumes")
		return relatedItems, nil
	}

	for _, volume := range pod.Spec.Volumes {
		if volume.PersistentVolumeClaim != nil && volume.PersistentVolumeClaim.ClaimName != "" {
			a.log.Infof("Adding pvc %s to relatedItems", volume.PersistentVolumeClaim.ClaimName)
			relatedItems = append(relatedItems, velero.ResourceIdentifier{
				GroupResource: kuberesource.PersistentVolumeClaims,
				Namespace:     pod.Namespace,
				Name:          volume.PersistentVolumeClaim.ClaimName,
			})
		}
	}

	return relatedItems, nil
}

// Name returns the name of this ItemBlockAction plugin.
func (a *PodAction) Name() string {
	return "PodAction"
}
```
## Implementation
Phase 1 and Phase 2 could be implemented within the same Velero release cycle, but they need not be.
Phase 1 is expected to be implemented in Velero 1.15.
Phase 2 is expected to be implemented in Velero 1.16.


@@ -1,94 +0,0 @@
# Backup PVC Configuration Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective set of modules introduced in the [Unified Repository design][1]. Velero uses these modules to perform data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include the uploaders and the backup repository.
**Exposer**: Exposer is a module that is introduced in [Volume Snapshot Data Movement Design][2]. Velero uses this module to expose the volume snapshots to Velero node-agent pods or node-agent associated pods so as to complete the data movement from the snapshots.
**backupPVC**: The intermediate PVC created by the exposer for VGDP to access data from, see [Volume Snapshot Data Movement Design][2] for more details.
**backupPod**: The pod that consumes the backupPVC so that VGDP can access data from the backupPVC, see [Volume Snapshot Data Movement Design][2] for more details.
**sourcePVC**: The PVC to be backed up, see [Volume Snapshot Data Movement Design][2] for more details.
## Background
As elaborated in the [Volume Snapshot Data Movement Design][2], a backupPVC may be created by the Exposer, and the VGDP reads data from the backupPVC.
In some scenarios, users may need to configure some advanced settings of the backupPVC so that the data movement can achieve the best performance in their environments. Specifically:
- For some storage providers, creating a read-only volume from a snapshot is very fast, whereas creating a writable volume from the snapshot requires cloning the entire disk data, which is time consuming. If the backupPVC's `accessModes` is set to `ReadOnlyMany`, the volume driver is able to tell the storage to create a read-only volume, which may dramatically shorten the snapshot expose time. On the other hand, `ReadOnlyMany` is not supported by all volumes. Therefore, users should be allowed to configure the `accessModes` for the backupPVC.
- Some storage providers create one or more replicas when creating a volume; the number of replicas is defined in the storage class. However, it doesn't make any sense to keep replicas for an intermediate volume used only by the backup. Therefore, users should be allowed to configure another storage class to be used specifically by the backupPVC.
## Goals
- Create a mechanism for users to specify various configurations for backupPVC
## Non-Goals
## Solution
We will use the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` to host the backupPVC configurations.
This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
The node-agent server checks these configurations at startup time and uses them to initialize the related Exposer modules. Therefore, users could edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
Inside the ConfigMap we will add one new kind of configuration as data in the configMap; its name is ```backupPVC```.
Users may want to set different backupPVC configurations for different volumes; therefore, we define the configurations as a map and allow users to specify configurations per storage class. Specifically, the key of each map element is the storage class name used by the sourcePVC and the value is the set of configurations for the backupPVC created for that sourcePVC.
The data structure is as below:
```go
type Configs struct {
	// LoadConcurrency is the config for data path load concurrency per node.
	LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`

	// LoadAffinity is the config for data path load affinity.
	LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`

	// BackupPVC is the config for backupPVC of snapshot data movement.
	BackupPVC map[string]BackupPVC `json:"backupPVC,omitempty"`
}

type BackupPVC struct {
	// StorageClass is the name of storage class to be used by the backupPVC.
	StorageClass string `json:"storageClass,omitempty"`

	// ReadOnly sets the backupPVC's access mode as read only.
	ReadOnly bool `json:"readOnly,omitempty"`
}
```
### Sample
A sample of the ConfigMap is as below:
```json
{
    "backupPVC": {
        "storage-class-1": {
            "storageClass": "snapshot-storage-class",
            "readOnly": true
        },
        "storage-class-2": {
            "storageClass": "snapshot-storage-class"
        },
        "storage-class-3": {
            "readOnly": true
        }
    }
}
```
To create the configMap, users need to save something like the above sample to a json file and then run the command below:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
### Implementation
The `backupPVC` configuration is passed to the exposer, and the exposer sets the related specification and creates the backupPVC.
If `backupPVC.storageClass` doesn't exist or set as empty, the sourcePVC's storage class will be used.
If `backupPVC.readOnly` is set to true, `ReadOnlyMany` will be the only value set to the backupPVC's `accessModes`, otherwise, `ReadWriteOnce` is used.
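A hedged sketch of how the exposer could apply these settings when building the backupPVC spec; the function name and wiring are assumptions, and it reuses the `BackupPVC` type defined above:
```go
package exposer

import corev1 "k8s.io/api/core/v1"

// applyBackupPVCConfig is an illustrative helper (not the actual exposer code) showing
// how the backupPVC settings map onto the intermediate PVC's spec.
func applyBackupPVCConfig(sourceStorageClass string, cfg map[string]BackupPVC, pvc *corev1.PersistentVolumeClaim) {
	// Defaults: keep the source PVC's storage class and use ReadWriteOnce.
	pvc.Spec.AccessModes = []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}

	c, found := cfg[sourceStorageClass]
	if !found {
		return
	}
	if c.StorageClass != "" {
		sc := c.StorageClass
		pvc.Spec.StorageClassName = &sc
	}
	if c.ReadOnly {
		// ReadOnlyMany becomes the only access mode, letting the driver create a read-only volume.
		pvc.Spec.AccessModes = []corev1.PersistentVolumeAccessMode{corev1.ReadOnlyMany}
	}
}
```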
Once `backupPVC.storageClass` is set, users must make sure that the specified storage class exists in the cluster and can be used by the backupPVC; otherwise, the corresponding DataUpload CR will stay in the `Accepted` phase until the prepare timeout (by default 30 min).
Once `backupPVC.readOnly` is set to true, users must make sure that the storage supports creating a `ReadOnlyMany` PVC from a snapshot; otherwise, the corresponding DataUpload CR will stay in the `Accepted` phase until the prepare timeout (by default 30 min).
When either of the above problems happens, the DataUpload CR is cancelled after the prepare timeout and the backupPVC and backupPod are deleted, so there is no way to tell whether the cause is one of the above problems or something else.
To help the troubleshooting, we can add some diagnostic mechanism to discover the status of the backupPod before deleting it as a result of the prepare timeout.
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: volume-snapshot-data-movement/volume-snapshot-data-movement.md


@@ -1,123 +0,0 @@
# Backup Repository Configuration Design
## Glossary & Abbreviation
**Backup Storage**: The storage to store the backup data. Check [Unified Repository design][1] for details.
**Backup Repository**: Backup repository is layered between BR data movers and Backup Storage to provide BR related features, as introduced in the [Unified Repository design][1].
## Background
According to the [Unified Repository design][1], Velero uses selectable backup repositories for various backup/restore methods, i.e., fs-backup, volume snapshot data movement, etc. To achieve the best performance, backup repositories may need to be configured according to the running environments.
For example, if there are sufficient CPU and memory resources in the environment, users may enable the compression feature provided by the backup repository, so as to achieve the best backup throughput.
As another example, if the local disk space is not sufficient, users may want to constrain the backup repository's cache size, so as to prevent the repository from running out of disk space.
Therefore, it is worthwhile to allow users to configure some essential parameters of the backup repositories, and the configuration may vary between backup repositories.
## Goals
- Create a mechanism for users to specify configurations for backup repositories
## Non-Goals
## Solution
### BackupRepository CRD
After a backup repository is initialized, a BackupRepository CR is created to represent the instance of the backup repository. The BackupRepository's spec is a core parameter used by the Unified Repo modules when interacting with the backup repository. Therefore, we can add the configurations to the BackupRepository CR in a new field called ```repositoryConfig```.
The configurations may vary between backup repositories; therefore, we will not define each of the configurations explicitly. Instead, we add a map in the BackupRepository's spec to take any configuration to be set on the backup repository.
During various operations on the backup repository, the Unified Repo modules will retrieve the specific configuration required at that time from the map. So even though it is specified, a configuration may not be visited/honored if the operations don't require it for the specific backup repository; this won't cause any issue. When and how a configuration is honored is decided by the configuration itself and should be clarified in the configuration's specification.
Below is the new BackupRepository's spec after adding the configuration map:
```yaml
spec:
  description: BackupRepositorySpec is the specification for a BackupRepository.
  properties:
    backupStorageLocation:
      description: |-
        BackupStorageLocation is the name of the BackupStorageLocation
        that should contain this repository.
      type: string
    maintenanceFrequency:
      description: MaintenanceFrequency is how often maintenance should
        be run.
      type: string
    repositoryConfig:
      additionalProperties:
        type: string
      description: RepositoryConfig contains configurations for the specific
        repository.
      type: object
    repositoryType:
      description: RepositoryType indicates the type of the backend repository
      enum:
      - kopia
      - restic
      - ""
      type: string
    resticIdentifier:
      description: |-
        ResticIdentifier is the full restic-compatible string for identifying
        this repository.
      type: string
    volumeNamespace:
      description: |-
        VolumeNamespace is the namespace this backup repository contains
        pod volume backups for.
      type: string
  required:
  - backupStorageLocation
  - maintenanceFrequency
  - resticIdentifier
  - volumeNamespace
  type: object
```
### BackupRepository configMap
The BackupRepository CR is not created explicitly by a Velero CLI, but created as part of the backup/restore/maintenance operation if the CR doesn't exist. As a result, users don't have any way to specify the configurations before the BackupRepository CR is created.
Therefore, a BackupRepository configMap is introduced as a template of the configurations to be applied to the backup repository CR.
When the backup repository CR is created by the BackupRepository controller, the configurations in the configMap are copied to the ```repositoryConfig``` field.
For an existing BackupRepository CR, the configMap is never visited; if users want to modify a configuration value, they should directly edit the BackupRepository CR.
The BackupRepository configMap is created by users in the Velero installation namespace. The configMap name must be specified in the velero server parameter ```--backup-repository-configmap```; otherwise, it won't take effect.
If the configMap name is specified but the configMap doesn't exist by the time a backup repository is created, the configMap name is ignored.
If for any reason the configMap doesn't take effect, nothing is specified on the backup repository CR, so the Unified Repo modules use the hard-coded values to configure the backup repository.
The BackupRepository configMap supports backup repository type specific configurations, even though users can only specify one configMap.
So in the configMap struct, multiple entries are supported, indexed by the backup repository type. During the backup repository creation, the configMap is searched by the repository type.
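A hedged sketch of how the BackupRepository controller could copy the matching configMap entry into the new CR's `repositoryConfig` field when the CR is created; the function and package names are illustrative, not the actual implementation.
```go
package controller

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// repoConfigFromConfigMap is an illustrative helper, not the actual controller code.
// It looks up the entry for the repository type and renders it as the string map
// stored in the BackupRepository CR's repositoryConfig field.
func repoConfigFromConfigMap(cm *corev1.ConfigMap, repoType string) (map[string]string, error) {
	data, found := cm.Data[repoType]
	if !found {
		// No template for this repository type: leave repositoryConfig empty so the
		// Unified Repo modules fall back to their hard-coded defaults.
		return nil, nil
	}

	raw := map[string]any{}
	if err := json.Unmarshal([]byte(data), &raw); err != nil {
		return nil, err
	}

	// repositoryConfig is a map[string]string in the CR spec, so render every value as a string.
	config := make(map[string]string, len(raw))
	for k, v := range raw {
		config[k] = fmt.Sprintf("%v", v)
	}
	return config, nil
}
```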
### Configurations
With the above mechanisms, any kind of configuration could be added. Here list the configurations defined at present:
```cacheLimitMB```: specifies the size limit (in MB) for the local data cache. The more data is cached locally, the less data may need to be downloaded from the backup storage, so the better performance may be achieved. Practically, users can specify any size that is smaller than the free space so that the disk space won't run out. This parameter is per repository connection, that is, users can change it before connecting to the repository. If a backup repository doesn't use a local cache, this parameter will be ignored. For the Kopia repository, this parameter is supported.
```enableCompression```: specifies whether to enable/disable compression for a backup repository. Most backup repositories support the data compression feature; if it is not supported by a backup repository, this parameter is ignored. Most backup repositories support dynamically enabling/disabling compression, so this parameter is defined to be used whenever creating a write connection to the backup repository; if dynamic changes are not supported, this parameter will be honored only when initializing the backup repository. For the Kopia repository, this parameter is supported and can be dynamically modified.
### Sample
Below is an example of the BackupRepository configMap with the configurations:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <config-name>
  namespace: velero
data:
  <repository-type-1>: |
    {
      "cacheLimitMB": 2048,
      "enableCompression": true
    }
  <repository-type-2>: |
    {
      "cacheLimitMB": 1,
      "enableCompression": false
    }
```
To create the configMap, users need to save something like the above sample to a file and then run the command below:
```
kubectl apply -f <yaml file name>
```
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md


@@ -1,374 +0,0 @@
# Design to clean the artifacts generated in the CSI backup and restore workflows
## Terminology
* VSC: VolumeSnapshotContent
* VS: VolumeSnapshot
## Abstract
* The design aims to delete the unnecessary VSs and VSCs generated during the CSI backup and restore process.
* The design stops creating the related VSCs during backup syncing.
## Background
In the current CSI backup and restore workflows (note that the CSI B/R workflows here mean using only the CSI snapshots in the B/R, not the CSI snapshot data movement workflows), some generated artifacts are kept after the backup or restore process completes.
Some of them are kept due to design, for example, the VolumeSnapshotContents generated during the backup are kept to make sure the backup deletion can clean the snapshots in the storage providers.
Some of them are kept by accident, for example, after restore, two VolumeSnapshotContents are generated for the same VolumeSnapshot. One is from the backup content, and one is dynamically generated from the restore's VolumeSnapshot.
The design aims to clean the unnecessary artifacts, and make the CSI B/R workflow more concise and reliable.
## Goals
- Clean the redundant VSC generated during CSI backup and restore.
- Remove the VSCs in the backup sync process.
## Non Goals
- There was some discussion about whether a Velero backup should include VSs and VSCs not generated during the backup. So far, the conclusion is that not including them is the better option. Although that would be a useful enhancement, it is not included in this design.
- Deleting all the CSI-related metadata files in the BSL is not the aim of this design.
## Detailed Design
### Backup
During backup, the main change is that the backup-generated VSCs will no longer be kept.
The reason is that we don't need them to ensure the snapshots are cleaned up during backup deletion. Please refer to the [Backup Deletion](#backup-deletion) section for details.
As a result, we can simplify the VS deletion logic in the backup. Previously, we needed to not only delete the VS, but also recreate a static VSC pointing to a non-existing VS.
The deletion code in the VS BackupItemAction can be simplified to the following:
``` go
if backup.Status.Phase == velerov1api.BackupPhaseFinalizing ||
	backup.Status.Phase == velerov1api.BackupPhaseFinalizingPartiallyFailed {
	p.log.
		WithField("Backup", fmt.Sprintf("%s/%s", backup.Namespace, backup.Name)).
		WithField("BackupPhase", backup.Status.Phase).Debugf("Cleaning VolumeSnapshots.")

	if vsc == nil {
		vsc = &snapshotv1api.VolumeSnapshotContent{}
	}

	csi.DeleteReadyVolumeSnapshot(*vs, *vsc, p.crClient, p.log)
	return item, nil, "", nil, nil
}

func DeleteReadyVolumeSnapshot(
	vs snapshotv1api.VolumeSnapshot,
	vsc snapshotv1api.VolumeSnapshotContent,
	client crclient.Client,
	logger logrus.FieldLogger,
) {
	logger.Infof("Deleting Volumesnapshot %s/%s", vs.Namespace, vs.Name)
	if vs.Status == nil ||
		vs.Status.BoundVolumeSnapshotContentName == nil ||
		len(*vs.Status.BoundVolumeSnapshotContentName) <= 0 {
		logger.Errorf("VolumeSnapshot %s/%s is not ready. This is not expected.",
			vs.Namespace, vs.Name)
		return
	}

	if vs.Status != nil && vs.Status.BoundVolumeSnapshotContentName != nil {
		// Patch the DeletionPolicy of the VolumeSnapshotContent to set it to Retain.
		// This ensures that the volume snapshot in the storage provider is kept.
		if err := SetVolumeSnapshotContentDeletionPolicy(
			vsc.Name,
			client,
			snapshotv1api.VolumeSnapshotContentRetain,
		); err != nil {
			logger.Warnf("Failed to patch DeletionPolicy of volume snapshot %s/%s",
				vs.Namespace, vs.Name)
			return
		}

		if err := client.Delete(context.TODO(), &vsc); err != nil {
			logger.Warnf("Failed to delete the VSC %s: %s", vsc.Name, err.Error())
		}
	}
	if err := client.Delete(context.TODO(), &vs); err != nil {
		logger.Warnf("Failed to delete volumesnapshot %s/%s: %v", vs.Namespace, vs.Name, err)
	} else {
		logger.Infof("Deleted volumesnapshot with volumesnapshotContent %s/%s",
			vs.Namespace, vs.Name)
	}
}
```
### Restore
#### Restore the VolumeSnapshotContent
The current behavior of VSC restoration is that the VSC from the backup is restored, and the restored VS also triggers dynamically creating a new VSC.
Creating two VSCs for the same VS in one restore is not right.
Skipping restore of the VSC from the backup is not a viable alternative, because the VSC may reference a [snapshot create secret](https://kubernetes-csi.github.io/docs/secrets-and-credentials-volume-snapshot-class.html?highlight=snapshotter-secret-name#createdelete-volumesnapshot-secret).
If `SkipRestore` is set to true in the restore action's result, the secret returned in the additional items is ignored too.
As a result, restoring the VSC from the backup and setting up the relation between the VSC and the VS is a better choice.
Another consideration is that the VSC name should not be the same as the backed-up VSC's, because older versions of Velero's backup and restore keep the VSC after completion.
There's a high possibility that the restore would fail because the VSC already exists in the cluster.
Multiple restores of the same backup would also hit the same problem.
The proposed solution is to use the restore's UID and the VS's UID to generate a sha256 hash value as the new VSC name. Both the VS and VSC RestoreItemActions can access those UIDs, and this avoids the conflict issues.
The restored VS name also shares the same generated name.
The VSC name referenced by the VS and the snapshot handle referenced by the VSC are both stored in their status fields.
Velero's restore process purges the restored resources' metadata and status before running the RestoreItemActions.
As a result, we cannot read this information in the VS and VSC RestoreItemActions.
Fortunately, the RestoreItemAction input parameters include `ItemFromBackup`, whose status is intact.
``` go
func (p *volumeSnapshotRestoreItemAction) Execute(
	input *velero.RestoreItemActionExecuteInput,
) (*velero.RestoreItemActionExecuteOutput, error) {
	p.log.Info("Starting VolumeSnapshotRestoreItemAction")

	if boolptr.IsSetToFalse(input.Restore.Spec.RestorePVs) {
		p.log.Infof("Restore %s/%s did not request for PVs to be restored.",
			input.Restore.Namespace, input.Restore.Name)
		return &velero.RestoreItemActionExecuteOutput{SkipRestore: true}, nil
	}

	var vs snapshotv1api.VolumeSnapshot
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(
		input.Item.UnstructuredContent(), &vs); err != nil {
		return &velero.RestoreItemActionExecuteOutput{},
			errors.Wrapf(err, "failed to convert input.Item from unstructured")
	}

	var vsFromBackup snapshotv1api.VolumeSnapshot
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(
		input.ItemFromBackup.UnstructuredContent(), &vsFromBackup); err != nil {
		return &velero.RestoreItemActionExecuteOutput{},
			errors.Wrapf(err, "failed to convert input.Item from unstructured")
	}

	// If cross-namespace restore is configured, change the namespace
	// for VolumeSnapshot object to be restored
	newNamespace, ok := input.Restore.Spec.NamespaceMapping[vs.GetNamespace()]
	if !ok {
		// Use original namespace
		newNamespace = vs.Namespace
	}

	if csiutil.IsVolumeSnapshotExists(newNamespace, vs.Name, p.crClient) {
		p.log.Debugf("VolumeSnapshot %s already exists in the cluster. Return without change.", vs.Namespace+"/"+vs.Name)
		return &velero.RestoreItemActionExecuteOutput{UpdatedItem: input.Item}, nil
	}

	newVSCName := generateSha256FromRestoreAndVsUID(string(input.Restore.UID), string(vsFromBackup.UID))
	// Reset Spec to convert the VolumeSnapshot from using
	// the dynamic VolumeSnapshotContent to the static one.
	resetVolumeSnapshotSpecForRestore(&vs, &newVSCName)

	// Reset VolumeSnapshot annotation. By now, only change
	// DeletionPolicy to Retain.
	resetVolumeSnapshotAnnotation(&vs)

	vsMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&vs)
	if err != nil {
		p.log.Errorf("Fail to convert VS %s to unstructured", vs.Namespace+"/"+vs.Name)
		return nil, errors.WithStack(err)
	}

	p.log.Infof(`Returning from VolumeSnapshotRestoreItemAction with
		no additionalItems`)

	return &velero.RestoreItemActionExecuteOutput{
		UpdatedItem:     &unstructured.Unstructured{Object: vsMap},
		AdditionalItems: []velero.ResourceIdentifier{},
	}, nil
}

// generateSha256FromRestoreAndVsUID Use the restore UID and the VS UID to generate the new VSC name.
// By this way, VS and VSC RIA action can get the same VSC name.
func generateSha256FromRestoreAndVsUID(restoreUID string, vsUID string) string {
	sha256Bytes := sha256.Sum256([]byte(restoreUID + "/" + vsUID))
	return "vsc-" + hex.EncodeToString(sha256Bytes[:])
}
```
#### Restore the VolumeSnapshot
``` go
// Execute restores a VolumeSnapshotContent object without modification
// returning the snapshot lister secret, if any, as additional items to restore.
func (p *volumeSnapshotContentRestoreItemAction) Execute(
	input *velero.RestoreItemActionExecuteInput,
) (*velero.RestoreItemActionExecuteOutput, error) {
	if boolptr.IsSetToFalse(input.Restore.Spec.RestorePVs) {
		p.log.Infof("Restore did not request for PVs to be restored %s/%s",
			input.Restore.Namespace, input.Restore.Name)
		return &velero.RestoreItemActionExecuteOutput{SkipRestore: true}, nil
	}

	p.log.Info("Starting VolumeSnapshotContentRestoreItemAction")

	var vsc snapshotv1api.VolumeSnapshotContent
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(
		input.Item.UnstructuredContent(), &vsc); err != nil {
		return &velero.RestoreItemActionExecuteOutput{},
			errors.Wrapf(err, "failed to convert input.Item from unstructured")
	}

	var vscFromBackup snapshotv1api.VolumeSnapshotContent
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(
		input.ItemFromBackup.UnstructuredContent(), &vscFromBackup); err != nil {
		return &velero.RestoreItemActionExecuteOutput{},
			errors.Errorf(err.Error(), "failed to convert input.ItemFromBackup from unstructured")
	}

	// If cross-namespace restore is configured, change the namespace
	// for VolumeSnapshot object to be restored
	newNamespace, ok := input.Restore.Spec.NamespaceMapping[vsc.Spec.VolumeSnapshotRef.Namespace]
	if ok {
		// Update the referenced VS namespace to the mapping one.
		vsc.Spec.VolumeSnapshotRef.Namespace = newNamespace
	}

	// Reset VSC name to align with VS.
	vsc.Name = generateSha256FromRestoreAndVsUID(string(input.Restore.UID), string(vscFromBackup.Spec.VolumeSnapshotRef.UID))

	// Reset the ResourceVersion and UID of referenced VolumeSnapshot.
	vsc.Spec.VolumeSnapshotRef.ResourceVersion = ""
	vsc.Spec.VolumeSnapshotRef.UID = ""

	// Set the DeletionPolicy to Retain so VS deletion will not trigger snapshot deletion.
	vsc.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentRetain

	if vscFromBackup.Status != nil && vscFromBackup.Status.SnapshotHandle != nil {
		vsc.Spec.Source.VolumeHandle = nil
		vsc.Spec.Source.SnapshotHandle = vscFromBackup.Status.SnapshotHandle
	} else {
		p.log.Errorf("fail to get snapshot handle from VSC %s status", vsc.Name)
		return nil, errors.Errorf("fail to get snapshot handle from VSC %s status", vsc.Name)
	}

	additionalItems := []velero.ResourceIdentifier{}
	if csi.IsVolumeSnapshotContentHasDeleteSecret(&vsc) {
		additionalItems = append(additionalItems,
			velero.ResourceIdentifier{
				GroupResource: schema.GroupResource{Group: "", Resource: "secrets"},
				Name:          vsc.Annotations[velerov1api.PrefixedSecretNameAnnotation],
				Namespace:     vsc.Annotations[velerov1api.PrefixedSecretNamespaceAnnotation],
			},
		)
	}

	vscMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&vsc)
	if err != nil {
		return nil, errors.WithStack(err)
	}

	p.log.Infof("Returning from VolumeSnapshotContentRestoreItemAction with %d additionalItems",
		len(additionalItems))
	return &velero.RestoreItemActionExecuteOutput{
		UpdatedItem:     &unstructured.Unstructured{Object: vscMap},
		AdditionalItems: additionalItems,
	}, nil
}
```
### Backup Sync
csi-volumesnapshotclasses.json, csi-volumesnapshotcontents.json, and csi-volumesnapshots.json are CSI-related metadata files in the BSL for each backup.
csi-volumesnapshotcontents.json and csi-volumesnapshots.json are not needed anymore, but csi-volumesnapshotclasses.json is still needed.
One concrete scenario is that a backup is created in cluster-A, then the backup is synced to cluster-B, and then the backup is deleted in cluster-B. In this case, we don't have a chance to create the VolumeSnapshotClass needed by the VS and VSC.
The VSC deletion workflow proposed by this design needs to create the VSC first. If the VSC's referenced VolumeSnapshotClass doesn't exist in the cluster, the creation of the VSC will fail.
As a result, the VolumeSnapshotClass should still be synced in the backup sync process.
### Backup Deletion
Two factors are worth considering for the backup deletion change:
* Because the VSCs generated by the backup are no longer synced, and the VSCs generated during the backup are no longer kept, the backup deletion needs to generate a VSC and then delete it to make sure the snapshots in the storage provider are cleaned up too.
* The VSs generated by the backup are already deleted in the backup process, so we don't need a DeleteItemAction for the VS anymore. As a result, the `velero.io/csi-volumesnapshot-delete` plugin is unneeded.
For the VSC DeleteItemAction, we need to generate a VSC. Because we only care about the snapshot deletion, we don't need to create a VS associated with the VSC.
Creating a static VSC, pointing it to a pseudo VS, and referencing the snapshot handle should be enough.
To avoid the created VSC conflict with older version Velero B/R generated ones, the VSC name is set to `vsc-uuid`.
The following is an example of the implementation.
``` go
uuid, err := uuid.NewRandom()
if err != nil {
	p.log.WithError(err).Errorf("Fail to generate the UUID to create VSC %s", snapCont.Name)
	return errors.Wrapf(err, "Fail to generate the UUID to create VSC %s", snapCont.Name)
}
snapCont.Name = "vsc-" + uuid.String()

snapCont.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentDelete

snapCont.Spec.Source = snapshotv1api.VolumeSnapshotContentSource{
	SnapshotHandle: snapCont.Status.SnapshotHandle,
}

snapCont.Spec.VolumeSnapshotRef = corev1api.ObjectReference{
	APIVersion: snapshotv1api.SchemeGroupVersion.String(),
	Kind:       "VolumeSnapshot",
	Namespace:  "ns-" + string(snapCont.UID),
	Name:       "name-" + string(snapCont.UID),
}

snapCont.ResourceVersion = ""

if err := p.crClient.Create(context.TODO(), &snapCont); err != nil {
	return errors.Wrapf(err, "fail to create VolumeSnapshotContent %s", snapCont.Name)
}

// Read resource timeout from backup annotation, if not set, use default value.
timeout, err := time.ParseDuration(
	input.Backup.Annotations[velerov1api.ResourceTimeoutAnnotation])
if err != nil {
	p.log.Warnf("fail to parse resource timeout annotation %s: %s",
		input.Backup.Annotations[velerov1api.ResourceTimeoutAnnotation], err.Error())
	timeout = 10 * time.Minute
}
p.log.Debugf("resource timeout is set to %s", timeout.String())

interval := 5 * time.Second

// Wait until VSC created and ReadyToUse is true.
if err := wait.PollUntilContextTimeout(
	context.Background(),
	interval,
	timeout,
	true,
	func(ctx context.Context) (bool, error) {
		tmpVSC := new(snapshotv1api.VolumeSnapshotContent)
		if err := p.crClient.Get(ctx, crclient.ObjectKeyFromObject(&snapCont), tmpVSC); err != nil {
			return false, errors.Wrapf(
				err, "failed to get VolumeSnapshotContent %s", snapCont.Name,
			)
		}

		if tmpVSC.Status != nil && boolptr.IsSetToTrue(tmpVSC.Status.ReadyToUse) {
			return true, nil
		}

		return false, nil
	},
); err != nil {
	return errors.Wrapf(err, "fail to wait VolumeSnapshotContent %s becomes ready.", snapCont.Name)
}
```
## Security Considerations
Security is not relevant to this design.
## Compatibility
In this design, no new information is added in backup and restore. As a result, this design doesn't have any compatibility issue.
## Open Issues
Please note that the CSI snapshot backup and restore mechanism does not support all file-store-based volumes, e.g. Azure Files, EFS, or vSphere CNS File Volume. Only block-based volumes are supported.
Refer to [this comment](https://github.com/vmware-tanzu/velero/issues/3151#issuecomment-2623507686) for more details.


@@ -86,7 +86,7 @@ volumePolicies:
# capacity condition matches the volumes whose capacity falls into the range
capacity: "0,100Gi"
csi:
driver: ebs.csi.aws.com
driver: aws.ebs.csi.driver
fsType: ext4
storageClass:
- gp2
@@ -174,7 +174,7 @@ data:
- conditions:
capacity: "0,100Gi"
csi:
driver: ebs.csi.aws.com
driver: aws.ebs.csi.driver
fsType: ext4
storageClass:
- gp2


@@ -1,82 +0,0 @@
# Proposal to add include exclude policy to resource policy
This enhancement will allow the user to set include and exclude filters for resources in a resource policy configmap, so that
these filters are reusable and the user will not need to set them each time they create a backup.
## Background
As mentioned in issue [#8610](https://github.com/vmware-tanzu/velero/issues/8610), when there's a long list of resources
to include or exclude in a backup, it can be cumbersome to set them each time a backup is created. There's a requirement to
set these filters in a separate data structure so that they can be reused in multiple backups.
## High-Level Design
We may extend the resource policy data structure to add `includeExcludePolicy`, which includes the same include and exclude filters
as the BackupSpec. When the user creates a backup which references the resource policy config (`velero backup create --resource-policies-configmap <configmap-name>`),
the filters in `includeExcludePolicy` will take effect to filter the resources when velero collects the resources to back up.
## Detailed Design
### Data Structure
The map `includeExcludePolicy` contains four fields: `includedClusterScopedResources`, `excludedClusterScopedResources`,
`includedNamespaceScopedResources`, and `excludedNamespaceScopedResources`. These filters work exactly as the filters with the
same names defined in the BackupSpec. An example of the policy looks like:
```yaml
#omitted other irrelevant fields like 'version', 'volumePolicies'
includeExcludePolicy:
includedClusterScopedResources:
- "cr"
- "crd"
- "pv"
excludedClusterScopedResources:
- "volumegroupsnapshotclass"
- "ingressclass"
includedNamespaceScopedResources:
- "pod"
- "service"
- "deployment"
- "pvc"
excludedNamespaceScopedResources:
- "configmap"
```
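For illustration, a hedged sketch of the Go struct these YAML keys could map onto; the exact struct name and its placement in the `resourcepolicies` package are assumptions.
```go
// Sketch only: field names mirror the YAML keys above; the struct name and tags are illustrative.
type IncludeExcludePolicy struct {
	IncludedClusterScopedResources   []string `yaml:"includedClusterScopedResources"`
	ExcludedClusterScopedResources   []string `yaml:"excludedClusterScopedResources"`
	IncludedNamespaceScopedResources []string `yaml:"includedNamespaceScopedResources"`
	ExcludedNamespaceScopedResources []string `yaml:"excludedNamespaceScopedResources"`
}
```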
These filters are in the form of scoped include/exclude filters, which by design will not work with the "old" resource filters.
Therefore, when a Backup references a resource policy configmap which has `includeExcludePolicy`, and at the same time it has
the "old" resource filters, i.e. `includedResources`, `excludedResources`, `includeClusterResources` set in the BackupSpec, the
Backup will fail with a validation error.
### Priorities
A user may set the include/exclude filters in Backupspec and also in the resource policy configmap. In this case, the filters
in both the Backupspec and the resource policy configmap will take effect. When there's a conflict, the filters in the Backupspec
will take precedence. For example, if resource X is in the list of the `includedNamespaceScopedResources` filter in the Backupspec, but
it's also in the list of `excludedNamespaceScopedResources` in the resource policy configmap, then resource X will be included in the backup.
In this way, users can set the filters in the resource policy configmap to cover most of their use cases, and then override them
in the Backupspec when needed.
### Implementation
In addition to the data structure change, we will need to implement the following changes:
1. A new function `CombineWithPolicy` will be added to the struct `ScopeIncludesExcludes`, which will combine the include/exclude filters
in the resource policy configmap with the include/exclude filters in the Backupspec:
```go
func (ie *ScopeIncludesExcludes) CombineWithPolicy(policy resourcepolicies.IncludeExcludePolicy) {
	mapFunc := scopeResourceMapFunc(ie.helper)
	for _, item := range policy.ExcludedNamespaceScopedResources {
		resolvedItem := mapFunc(item, true)
		if resolvedItem == "" {
			continue
		}
		if !ie.ShouldInclude(resolvedItem) && !ie.ShouldExclude(resolvedItem) {
			// The existing includeExcludes in the struct has higher priority, therefore, we should only add the item to the filter
			// when the struct does not include this item and this item is not yet in the excludes filter.
			ie.namespaceScopedResourceFilter.excludes.Insert(resolvedItem)
		}
	}
	.....
```
This function will be called in the `kubernetesBackupper.BackupWithResolvers` function, to make sure the combined `ScopeIncludesExcludes`
filter will be assigned to the `ResourceIncludesExcludes` filter of the Backup request.
2. Extra validation code will be added to the function `prepareBackupRequest` of `BackupReconciler` to check if there are "old"
Resource filters in the BackupSpec when the Backup references a resource policy configmap which has `includeExcludePolicy`.
## Alternatives Considered
We could put `includeExcludePolicy` in a separate configmap, but that would require adding an extra field to the BackupSpec to reference the configmap,
which is not necessary.


@@ -65,7 +65,7 @@ This page contains a pre-migration checklist for ensuring a repo migration goes
#### Updating Netlify
The settings for Netlify should remain the same, except that it now needs to be installed in the new repo. The instructions on how to install Netlify on the new repo are here: https://www.netlify.com/docs/github-permissions/.
The settings for Netflify should remain the same, except that it now needs to be installed in the new repo. The instructions on how to install Netlify on the new repo are here: https://www.netlify.com/docs/github-permissions/.
#### Communication strategy


@@ -1,122 +0,0 @@
# Multi-arch Build and Windows Build Support
## Background
At present, Velero images can be built for linux-amd64 and linux-arm64. We need to support other platforms, i.e., windows-amd64.
At present, for the linux image build, we leverage Buildkit's `--platform` option to create the image manifest list in one build call. However, it is a limited approach and doesn't fully support all multi-arch scenarios. Specifically, since the build is done in one call with the same parameters, it is impossible to build images with different configurations (e.g., the Windows build requires a different Dockerfile).
At present, Velero by default builds images locally, i.e., no image or manifest is pushed to a registry. However, docker doesn't support multi-arch builds locally. We need to clarify the behavior of local build.
## Goals
- Refactor the `make container` process to fully support multi-arch build
- Add Windows build to the existing build process
- Clarify the behavior of local build with multi-arch build capabilities
- Don't change the pattern of the final image tag to be used by users
## Non-Goals
- There may be some workarounds to make the multi-arch image/manifest fully available locally. These workarounds will not be adopted, so local build always builds single-arch images
## Local Build
For local build, two values of `--output` parameter for `docker buildx build` are supported:
- `docker`: a docker format image is built, but the image is only built for the same platform (`<os>/<arch>`) as the building env. E.g., when building from a linux-amd64 env, a single manifest of linux-amd64 is created regardless of how the input parameters are configured.
- `tar`: one or more images are built as tarballs according to the input platform (`<os>/<arch>`) parameters. Specifically, one tarball is generated for each platform. The build process is the same as the `Build Separate Manifests` step of `Push Build` as detailed below; only the `--output` parameter differs, as `type=tar;dest=<tarball generated path>`. The tarball is generated in the `_output` folder and named with the platform info, e.g., `_output/velero-main-linux-amd64.tar`.
## Push Build
For push build, the `--output` parameter for `docker buildx build` is always `registry`, and the build will go according to the input parameters and create multi-arch manifest lists.
### Step 1: Build Separate Manifests
Instead of specifying multiple platforms (`<os>/<arch>`) in the `--platform` option, we add multiple `container-%` targets in the Makefile, and each target builds one platform respectively.
The goal here is to build multiple manifests through the multiple targets. However, `docker buildx build` by default creates a manifest list even though there is only one element in `--platform`. Therefore, two flags `--provenance=false` and `--sbom=false` will be set additionally to force `docker buildx build` to create manifests.
Each manifest has a unique tag, the OS type and arch is added to the tag, in the pattern `$(REGISTRY)/$(BIN):$(VERSION)-$(OS)-$(ARCH)`. For example, `velero/velero:main-linux-amd64`.
All the created manifests will be pushed to registry so that the all-in-one manifest list could be created.
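As a rough illustration, each `container-%` target could invoke a command along these lines (variable names and paths are simplified, not the exact Makefile content):
```
docker buildx build --output=type=registry \
    --platform linux/amd64 \
    --provenance=false --sbom=false \
    -t $(REGISTRY)/$(BIN):$(VERSION)-linux-amd64 \
    -f Dockerfile .
```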
### Step 2: Create All-In-One Manifest List
The next step is to create a manifest list that includes all the created manifests. This is done by the `docker manifest create` command; the tags created and pushed at Step 1 are passed to this command.
A tag is also created for the manifest list, in the pattern `$(REGISTRY)/$(BIN):$(VERSION)`. For example, `velero/velero:main`.
### Step 3: Push All-In-One Manifest List
The created manifest list will be pushed to the registry by the `docker manifest push` command.
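For example, for a linux-amd64 plus linux-arm64 push build, Steps 2 and 3 could boil down to commands like the following (simplified):
```
docker manifest create $(REGISTRY)/$(BIN):$(VERSION) \
    $(REGISTRY)/$(BIN):$(VERSION)-linux-amd64 \
    $(REGISTRY)/$(BIN):$(VERSION)-linux-arm64
docker manifest push $(REGISTRY)/$(BIN):$(VERSION)
```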
## Input Parameters
Below are the input parameters that can be configured to meet different build purposes during the development and release cycles:
- BUILD_OUTPUT_TYPE: the type of output for the build, i.e., `docker`, `tar`, or `registry`; `docker` and `tar` are for local builds, while `registry` means push build. Default value is `docker`
- BUILD_OS: the OS types to build for. Multiple values are accepted, e.g., `linux,windows`. Default value is `linux`
- BUILD_ARCH: the architectures to build for. Multiple values are accepted, e.g., `amd64,arm64`. Default value is `amd64`
- BUILDX_INSTANCE: an existing buildx instance to be used by the build. Default value is empty, which instructs the build to create a new buildx instance
## Windows Build
Windows container images vary by Windows OS version, e.g., `ltsc2022` for Windows Server 2022 and `1809` for Windows Server 2019. Images for different OS versions should be built separately.
Therefore, separate build targets are added for each OS version, like `container-windows-%`.
For the same reason, a new input parameter is added, `BUILD_WINDOWS_VERSION`. The default value is `ltsc2022`. Windows Server 2022 is the only base image we will deliver officially; Windows Server 2019 is not supported. In the future, we may need to support the Windows Server 2025 base image.
For local build to tar, the Windows OS version is also added to the name of the tarball, e.g., `_output/velero-main-windows-ltsc2022-amd64.tar`.
At present, Windows container images only support `amd64` as the architecture, so `BUILD_ARCH` is ignored for Windows.
The Windows manifests need to be annotated with the OS type, arch, and OS version. This is done through the `docker manifest annotate` command.
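A simplified example of such an annotation for an `ltsc2022` image (the exact OS version string depends on the Windows base image used):
```
docker manifest annotate $(REGISTRY)/$(BIN):$(VERSION) \
    $(REGISTRY)/$(BIN):$(VERSION)-windows-ltsc2022-amd64 \
    --os windows --arch amd64 --os-version <windows-build-number>
```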
## Use Multi-arch Images
In order to use the images, the manifest list's tag should be provided to the `velero install` command or helm; the individual manifests are covered by the manifest list. At launch time, the container engine pulls the right image for the container according to the platform of the running node.
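For example, assuming the manifest list built above, only the all-in-one tag needs to be passed via the image option (other required install flags are omitted here):
```
velero install --image velero/velero:main ...
```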
## Build Samples
**Local build to docker**
```
make container
```
The built image can be listed with `docker image ls`.
**Local build for linux-amd64 and windows-amd64 to tar**
```
BUILD_OUTPUT_TYPE=tar BUILD_OS=linux,windows make container
```
Under the `_output` directory, the following files are generated:
```
velero-main-linux-amd64.tar
velero-main-windows-ltsc2022-amd64.tar
```
**Local build for linux-amd64, linux-arm64 and windows-amd64 to tar**
```
BUILD_OUTPUT_TYPE=tar BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```
Under the `_output` directory, the following files are generated:
```
velero-main-linux-amd64.tar
velero-main-linux-arm64.tar
velero-main-windows-ltsc2022-amd64.tar
```
**Push build for linux-amd64 and windows-amd64**
Prerequisite: log in to the registry, e.g., through `docker login`
```
BUILD_OUTPUT_TYPE=registry REGISTRY=<registry> BUILD_OS=linux,windows make container
```
Nothing is available locally; in the registry, 3 tags are available:
```
velero/velero:main
velero/velero:main-windows-ltsc2022-amd64
velero/velero:main-linux-amd64
```
**Push build for linux-amd64, linux-arm64 and windows-amd64**
Prerequisite: log in to the registry, e.g., through `docker login`
```
BUILD_OUTPUT_TYPE=registry REGISTRY=<registry> BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```
Nothing is available locally; in the registry, 4 tags are available:
```
velero/velero:main
velero/velero:main-windows-ltsc2022-amd64
velero/velero:main-linux-amd64
velero/velero:main-linux-arm64
```

View File

@@ -1,132 +0,0 @@
# Node-agent Load Affinity Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective term for the modules introduced in the [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Exposer**: Exposer is a module introduced in the [Volume Snapshot Data Movement Design][2]. Velero uses this module to expose volume snapshots to Velero node-agent pods or node-agent-associated pods so as to complete the data movement from the snapshots.
## Background
Velero node-agent is a daemonset hosting controllers and VGDP modules to complete the concrete work of backups/restores, i.e., PodVolume backup/restore, Volume Snapshot Data Movement backup/restore.
Specifically, node-agent runs DataUpload controllers to watch DataUpload CRs for Volume Snapshot Data Movement backups, so there is one controller instance in each node. A controller instance takes a DataUpload CR and then launches a VGDP instance, which initializes an uploader instance and the backup repository connection, to finish the data transfer. The VGDP instance runs inside a node-agent pod or in a pod associated with the node-agent pod on the same node.
Depending on the data size, data complexity, and resource availability, VGDP may take a long time and consume significant resources (CPU, memory, network bandwidth, etc.).
Technically, VGDP instances are able to run in any node that allows pod scheduling. On the other hand, users may want to constrain the nodes where VGDP instances run for various reasons; below are some examples:
- Prevent VGDP instances from running in specific nodes because users have more critical workloads in the nodes
- Constrain VGDP instances to run in specific nodes because these nodes have more resources than others
- Constrain VGDP instances to run in specific nodes because the storage allows volume/snapshot provisions in these nodes only
Therefore, in order to improve compatibility, it is worthwhile to allow configuring the node affinity of VGDP instances, especially for backups, for which VGDP instances run frequently and centrally.
## Goals
- Define the behaviors of node affinity of VGDP instances in node-agent for volume snapshot data movement backups
- Create a mechanism for users to specify the node affinity of VGDP instances for volume snapshot data movement backups
## Non-Goals
- It would also be beneficial to support VGDP instance affinity for PodVolume backup/restore; however, it is not possible since VGDP instances for PodVolume backup/restore must always run in the node where the source/target pods are created.
- It would also be beneficial to support VGDP instance affinity for data movement restores; however, it is not possible in some cases. For example, when the `volumeBindingMode` in the StorageClass is `WaitForFirstConsumer`, the restore volume must be mounted in the node where the target pod is scheduled, so the VGDP instance must run in the same node. On the other hand, considering that restores may not run frequently or centrally, we will not support data movement restores.
- As elaborated in the [Volume Snapshot Data Movement Design][2], the Exposer may expose snapshots in different ways, i.e., through backup pods (the only way supported at present). The implementation section below only considers this approach; if a new expose method is introduced in the future, the definition of the affinity configurations and behaviors should still work, but we may need a new implementation.
## Solution
We will use the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` to host the node affinity configurations.
This configMap is not created by Velero; users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace, which applies to the node-agent in that namespace only.
Node-agent server checks these configurations at startup time and uses them to initialize the related VGDP modules. Therefore, users can edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
Inside the ConfigMap, we will add one new kind of configuration as the data in the configMap; its name is ```loadAffinity```.
Users may want to set different LoadAffinity configurations according to different conditions (i.e., for different storages represented by StorageClass, CSI driver, etc.), so we define ```loadAffinity``` as an array. This is for extensibility; at present, we don't implement support for multiple configurations, so if there are multiple entries, we always take the first one in the array.
The data structure is as below:
```go
type Configs struct {
	// LoadConcurrency is the config for load concurrency per node.
	LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`
	// LoadAffinity is the config for data path load affinity.
	LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`
}

type LoadAffinity struct {
	// NodeSelector specifies the label selector to match nodes
	NodeSelector metav1.LabelSelector `json:"nodeSelector"`
}
```
### Affinity
Affinity configuration means allowing VGDP instances to run only in the specified nodes. There are two ways to define it:
- It could be defined by `MatchLabels` of `metav1.LabelSelector`. The labels defined in `MatchLabels` imply a `LabelSelectorOpIn` operation by default, so in the current context they are treated as affinity rules.
- It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpIn` or `LabelSelectorOpExists`.
### Anti-affinity
Anti-affinity configuration means preventing VGDP instances from running in the specified nodes. Below is the way to define it:
- It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpNotIn` or `LabelSelectorOpDoesNotExist`.
### Sample
A sample of the ConfigMap is as below:
```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B4ms"
                },
                "matchExpressions": [
                    {
                        "key": "kubernetes.io/hostname",
                        "values": [
                            "node-1",
                            "node-2",
                            "node-3"
                        ],
                        "operator": "In"
                    },
                    {
                        "key": "xxx/critical-workload",
                        "operator": "DoesNotExist"
                    }
                ]
            }
        }
    ]
}
```
This sample showcases two affinity configurations:
- matchLabels: VGDP instances will run only in nodes with label key `beta.kubernetes.io/instance-type` and value `Standard_B4ms`
- matchExpressions: VGDP instances will run only in nodes `node-1`, `node-2`, and `node-3` (selected by the `kubernetes.io/hostname` label)
This sample showcases one anti-affinity configuration:
- matchExpressions: VGDP instances will not run in nodes with the label key `xxx/critical-workload`
To create the configMap, users need to save something like the above sample to a json file and then run the command below:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
### Implementation
As mentioned in the [Volume Snapshot Data Movement Design][2], the exposer decides where to launch the VGDP instances. At present, for volume snapshot data movement backups, the exposer creates backupPods, and the VGDP instances will be initiated in the nodes where the backupPods are scheduled. So the loadAffinity will be translated (from `metav1.LabelSelector` to `corev1.Affinity`) and set on the backupPods.
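A simplified sketch of that translation, assuming the standard `metav1`/`corev1` Kubernetes API types (this is not the exact Velero implementation):
```go
// Hypothetical sketch: convert a metav1.LabelSelector into node affinity for the backupPod.
func toNodeAffinity(selector metav1.LabelSelector) *corev1.Affinity {
	reqs := []corev1.NodeSelectorRequirement{}

	// MatchLabels entries are treated as "In" requirements with a single value.
	for key, value := range selector.MatchLabels {
		reqs = append(reqs, corev1.NodeSelectorRequirement{
			Key:      key,
			Operator: corev1.NodeSelectorOpIn,
			Values:   []string{value},
		})
	}

	// MatchExpressions map directly; the operator strings (In, NotIn, Exists,
	// DoesNotExist) are identical between the two types.
	for _, exp := range selector.MatchExpressions {
		reqs = append(reqs, corev1.NodeSelectorRequirement{
			Key:      exp.Key,
			Operator: corev1.NodeSelectorOperator(exp.Operator),
			Values:   exp.Values,
		})
	}

	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{
					{MatchExpressions: reqs},
				},
			},
		},
	}
}
```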
It is possible that node-agent pods, as a daemonset, don't run on every worker node; users can achieve this by specifying `nodeSelector` or `nodeAffinity` in the node-agent daemonset spec. On the other hand, at present, VGDP instances must be assigned to nodes where node-agent pods are running. Therefore, if there is any node selection for node-agent pods, users must take it into account in this load affinity configuration, so as to guarantee that VGDP instances are always assigned to nodes where node-agent pods are available. This is the users' responsibility; we don't inherit any node selection configuration from the node-agent daemonset, because the daemonset scheduler works differently from the plain pod scheduler, and simply inheriting all the configurations may cause unexpected results for backupPod scheduling.
Otherwise, if a backupPod is scheduled to a node where the node-agent pod is absent, the corresponding DataUpload CR will stay in the `Accepted` phase until the prepare timeout (30 minutes by default).
At present, as part of the expose operations, the exposer creates a volume, represented by a backupPVC, from the snapshot. The backupPVC uses the same storageClass as the source volume. If the `volumeBindingMode` in the storageClass is `Immediate`, the volume is immediately allocated from the underlying storage without waiting for the backupPod. On the other hand, the loadAffinity is set as the backupPod's affinity. If the backupPod is scheduled to a node where the snapshot volume is not accessible, e.g., because of storage topologies, the backupPod won't get into the Running state; consequently, the data movement won't complete.
Once this problem happens, the backupPod stays in the `Pending` phase, and the corresponding DataUpload CR stays in the `Accepted` phase until the prepare timeout (30 minutes by default). Below is an example of the backupPod's status when the problem happens:
```
status:
  conditions:
  - lastProbeTime: null
    message: '0/2 nodes are available: 1 node(s) didn''t match Pod''s node affinity/selector,
      1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available:
      2 Preemption is not helpful for scheduling..'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
```
On the other hand, the backupPod is deleted after the prepare timeout, so there is no way to tell whether the cause was one of the above problems or something else.
To help with troubleshooting, we can add a diagnostic mechanism that captures the status of the backupPod and the node-agent in the same node before deleting the backupPod as a result of the prepare timeout.
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: volume-snapshot-data-movement/volume-snapshot-data-movement.md

View File

@@ -26,11 +26,11 @@ Therefore, in order to gain the optimized performance with the limited resources
## Solution
We introduce a ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` for users to specify the node-agent related configurations. This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
We introduce a configMap named ```node-agent-config``` for users to specify the node-agent related configurations. This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only.
Node-agent server checks these configurations at startup time and use it to initiate the related VGDP modules. Therefore, users could edit this configMap any time, but in order to make the changes effective, node-agent server needs to be restarted.
The ConfigMap may be used for other purpose of configuring node-agent in future, at present, there is only one kind of configuration as the data in the configMap, the name is ```loadConcurrency```.
The ```node-agent-config``` configMap may be used for other purpose of configuring node-agent in future, at present, there is only one kind of configuration as the data in the configMap, the name is ```loadConcurrency```.
The data structure is as below:
The data structure for ```node-agent-config``` is as below:
```go
type Configs struct {
// LoadConcurrency is the config for load concurrency per node.
@@ -82,7 +82,7 @@ At least one node is expected to have a label with the specified ```RuledConfigs
If one node falls into more than one rules, e.g., if node1 also has the label ```beta.kubernetes.io/instance-type=Standard_B4ms```, the smallest number (3) will be used.
### Sample
A sample of the ConfigMap is as below:
A sample of the ```node-agent-config``` configMap is as below:
```json
{
"loadConcurrency": {
@@ -110,7 +110,7 @@ A sample of the ConfigMap is as below:
```
To create the configMap, users need to save something like the above sample to a json file and then run below command:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
kubectl create cm node-agent-config -n velero --from-file=<json file name>
```
### Global data path manager

View File

@@ -1,121 +0,0 @@
# Node-agent Load Soothing Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective term for the modules introduced in the [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
## Background
As mentioned in [node-agent Concurrency design][2], [CSI Snapshot Data Movement design][3], [VGDP Micro Service design][4] and [VGDP Micro Service for fs-backup design][5], all data movement activities for CSI snapshot data movement backups/restores and fs-backup respect the `loadConcurrency` settings configured in the `node-agent-configmap`. Once the number of existing loads exceeds the corresponding `loadConcurrency` setting, the loads will be throttled and some loads will be held until VGDP quotas are available.
However, this throttling only happens after the data mover pod is started and gets to `running`. As a result, when there is a large number of concurrent volume backups, many data mover pods may get created while the VGDP instances inside them are actually on hold because of the VGDP throttling.
This could cause the problems below:
- In some environments, there is a pod limit per node or across the cluster; too many inactive data mover pods may block other pods from running
- In some environments, the system disk of each node is limited; since pods also occupy system disk space, many inactive data mover pods take unnecessary space from the system disk and may cause other critical pods to be evicted
- For CSI snapshot data movement backup, the volume snapshot has already been created before the data mover pod, which means an excessive number of snapshots may also be created and live for a longer time since VGDP won't start until the quota is available. However, in some environments, a large number of snapshots is not allowed or may cause degradation of storage performance
On the other hand, the VGDP throttling mentioned in the [node-agent Concurrency design][2] is an accurate control mechanism; that is, exactly the required number of data mover pods are throttled.
Therefore, another mechanism is required to soothe the creation of the data mover pods and volume snapshots before the VGDP throttling. It doesn't need to accurately control these creations, but it should effectively reduce the excessive number of inactive data mover pods and volume snapshots.
It is not practical to make the control accurate, as it is almost impossible to predict which group of nodes a data mover pod will be scheduled to, given the many complex factors involved, i.e., selected node, affinity, node OS, etc.
## Goals
- Allow users to configure the expected number of loads pending while waiting for the VGDP load concurrency quota
- Create a soothing mechanism to prevent new loads from starting if the number of existing loads exceeds the expected number
## Non-Goals
- Accurately controlling the loads from initiation is not a goal
## Solution
We introduce a new field `prepareQueueLength` in `loadConcurrency` of the `node-agent-configmap` as the allowed number of loads that are under preparing (expose). Specifically, a load is in this situation while its CR is in the `Accepted` or `Prepared` phase. The `prepareQueueLength` should be a positive number; negative numbers will be ignored.
Once the value is set, the soothing mechanism takes effect: on a best-effort basis, only the allowed number of CRs go into the `Accepted` or `Prepared` phase, while the others wait in the `New` state; thereby, only the allowed number of data mover pods and volume snapshots are created.
Otherwise, node-agent keeps the legacy behavior: CRs go to the `Accepted` or `Prepared` state as soon as the controllers process them, and data mover pods and volume snapshots are created without any constraints.
If users want to constrain the excessive number of pending data mover pods and volume snapshots, they could set a value by considering the VGDP load concurrency; otherwise, if they don't see constraints for pods or volume snapshots in their environment, they don't need to use this feature, since preparing in parallel can also be beneficial for increasing concurrency.
Node-agent server checks this configuration at startup time and uses it to initialize the related VGDP modules. Therefore, users can edit this configMap at any time, but in order to make the changes effective, the node-agent server needs to be restarted.
The data structure is as below:
```go
type LoadConcurrency struct {
	// GlobalConfig specifies the concurrency number to all nodes for which per-node config is not specified
	GlobalConfig int `json:"globalConfig,omitempty"`
	// PerNodeConfig specifies the concurrency number to nodes matched by rules
	PerNodeConfig []RuledConfigs `json:"perNodeConfig,omitempty"`
	// PrepareQueueLength specifies the max number of loads that are under expose
	PrepareQueueLength int `json:"prepareQueueLength,omitempty"`
}
```
### Sample
A sample of the ConfigMap is as below:
```json
{
    "loadConcurrency": {
        "globalConfig": 2,
        "perNodeConfig": [
            {
                "nodeSelector": {
                    "matchLabels": {
                        "kubernetes.io/hostname": "node1"
                    }
                },
                "number": 3
            },
            {
                "nodeSelector": {
                    "matchLabels": {
                        "beta.kubernetes.io/instance-type": "Standard_B4ms"
                    }
                },
                "number": 5
            }
        ],
        "prepareQueueLength": 2
    }
}
```
To create the configMap, users need to save something like the above sample to a json file and then run the command below:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
## Detailed Design
Changes apply to the DataUpload Controller, DataDownload Controller, PodVolumeBackup Controller and PodVolumeRestore Controller, as below:
1. The soothing applies to data mover CRs (DataUpload, DataDownload, PodVolumeBackup, or PodVolumeRestore) that are in the `New` state
2. Before starting to process the CR, the corresponding controller counts the existing CRs that are under or pending expose in the cluster, that is, the total number of existing DataUpload, DataDownload, PodVolumeBackup, and PodVolumeRestore CRs that are in either the `Accepted` or `Prepared` state
3. If the total number doesn't exceed the allowed number, the controller sets the CR's phase to `Accepted`
4. Once the total number exceeds the allowed number, the controller gives up processing the CR and has it requeued later; the delay for the requeue is 5 seconds (see the sketch below)
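A simplified sketch of this check from a single controller's perspective, with illustrative names rather than the actual Velero code (`ctrl` is `sigs.k8s.io/controller-runtime`):
```go
// Hypothetical sketch: decide whether a CR in the New phase may move to Accepted.
// inflight is the cached count of DataUpload/DataDownload/PodVolumeBackup/PodVolumeRestore
// CRs currently in the Accepted or Prepared phase.
func soothe(inflight, prepareQueueLength int) (ctrl.Result, bool) {
	if prepareQueueLength > 0 && inflight >= prepareQueueLength {
		// The prepare queue is full: give up for now and requeue after 5 seconds.
		return ctrl.Result{RequeueAfter: 5 * time.Second}, false
	}
	// Quota available: the caller may set the CR's phase to Accepted.
	return ctrl.Result{}, true
}
```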
The count happens for all the controllers in all nodes. To prevent the checks from overwhelming the API server, the count is done against the controller client caches for those CRs. The count result is also cached, so that the count only happens when necessary. Below is how the necessity is judged:
- When one or more CRs' phase change to `Accepted`
- When one or more CRs' phase change from `Accepted` to one of the terminal phases
- When one or more CRs' phase change from `Prepared` to one of the terminal phases
- When one or more CRs' phase change from `Prepared` to `InProgress`
Ideally, steps 2~3 above need to be synchronized among controllers in all nodes. However, this synchronization is not implemented; the considerations are as below:
1. It is impossible to accurately synchronize the count among controllers in different nodes, because the client cache is not coherent among nodes.
2. It is possible to synchronize the count among controllers in the same node. However, this synchronization is too expensive: since steps 2~3 are part of the expose workflow, the synchronization would impact the performance and stability of the existing workflow.
3. Even without the synchronization, the soothing mechanism still works eventually -- when the controllers see all the discharged loads (expected ones and over-discharged ones), they will stop creating new loads until the quota is available again.
4. Steps 2~3, which would need to be synchronized, complete very quickly.
This is why we say this mechanism is not an accurate control. In other words, it is possible that more loads than `prepareQueueLength` are discharged if controllers perform the count and the expose during overlapping time windows (steps 2~3).
For example, when multiple controllers of the same type (DataUpload, DataDownload, PodVolumeBackup or PodVolumeRestore) from different nodes make the count:
```
max number of waiting loads = number defined by `prepareQueueLength` + number of nodes in cluster
```
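For example, with `prepareQueueLength` set to 2 in a 3-node cluster, up to 5 loads of the same type could be in the `Accepted` or `Prepared` phase at the same time in the worst case.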
As another example, when hybrid loads run the count concurrently, e.g., a mix of data mover backups, data mover restores, pod volume backups, or pod volume restores, more loads may be discharged, and the number depends on the number of concurrent hybrid loads.
In either case, because steps 2~3 are short in time, it is unlikely to reach the theoretical worst case.
[1]: unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: node-agent-concurrency.md
[3]: volume-snapshot-data-movement/volume-snapshot-data-movement.md
[4]: vgdp-micro-service/vgdp-micro-service.md
[5]: vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md

View File

@@ -241,7 +241,7 @@ In cases where the methods signatures remain the same, the adaptation layer will
Examples where an adaptation may be safe:
- A method signature is being changed to add a new parameter but the parameter could be optional (for example, adding a context parameter). The adaptation could call through to the method provided in the previous version but omit the parameter.
- A method signature is being changed to remove a parameter, but it is safe to pass a default value to the previous version. The adaptation could call through to the method provided in the previous version but use a default value for the parameter.
- A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to [wait for additional items to be ready](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/wait-for-additional-items.md)). The adaptation would return a value which allows the existing behaviour to be performed.
- A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to [wait for additional items to be ready](https://github.com/vmware-tanzu/velero/blob/main/design/wait-for-additional-items.md)). The adaptation would return a value which allows the existing behaviour to be performed.
- A method is being deleted as it is no longer used. The adaptation would call through to any methods which are still included but would omit the deleted method in the adaptation.
Examples where an adaptation may not be safe:

View File

@@ -1,694 +0,0 @@
# PriorityClass Support Design Proposal
## Abstract
This design document outlines the implementation of priority class name support for Velero components, including the Velero server deployment, node agent daemonset, and maintenance jobs. This feature allows users to specify a priority class name for Velero components, which can be used to influence the scheduling and eviction behavior of these components.
## Background
Kubernetes allows users to define priority classes, which can be used to influence the scheduling and eviction behavior of pods. Priority classes are defined as cluster-wide resources, and pods can reference them by name. When a pod is created, the priority admission controller uses the priority class name to populate the priority value for the pod. The scheduler then uses this priority value to determine the order in which pods are scheduled.
Currently, Velero does not provide a way for users to specify a priority class name for its components. This can be problematic in clusters where resource contention is high, as Velero components may be evicted or not scheduled in a timely manner, potentially impacting backup and restore operations.
## Goals
- Add support for specifying priority class names for Velero components
- Update the Velero CLI to accept priority class name parameters for different components
- Update the Velero deployment, node agent daemonset, maintenance jobs, and data mover pods to use the specified priority class names
## Non Goals
- Creating or managing priority classes
- Automatically determining the appropriate priority class for Velero components
## High-Level Design
The implementation will add new fields to the Velero options struct to store the priority class names for the server deployment and node agent daemonset. The Velero CLI will be updated to accept new flags for these components. For data mover pods and maintenance jobs, priority class names will be configured through existing ConfigMap mechanisms (`node-agent-configmap` for data movers and `repo-maintenance-job-configmap` for maintenance jobs). The Velero deployment, node agent daemonset, maintenance jobs, and data mover pods will be updated to use their respective priority class names.
## Detailed Design
### CLI Changes
New flags will be added to the `velero install` command to specify priority class names for different components:
```go
flags.StringVar(
&o.ServerPriorityClassName,
"server-priority-class-name",
o.ServerPriorityClassName,
"Priority class name for the Velero server deployment. Optional.",
)
flags.StringVar(
&o.NodeAgentPriorityClassName,
"node-agent-priority-class-name",
o.NodeAgentPriorityClassName,
"Priority class name for the node agent daemonset. Optional.",
)
```
Note: Priority class names for data mover pods and maintenance jobs will be configured through their respective ConfigMaps (`--node-agent-configmap` for data movers and `--repo-maintenance-job-configmap` for maintenance jobs).
### Velero Options Changes
The `VeleroOptions` struct in `pkg/install/resources.go` will be updated to include new fields for priority class names:
```go
type VeleroOptions struct {
// ... existing fields ...
ServerPriorityClassName string
NodeAgentPriorityClassName string
}
```
### Deployment Changes
The `podTemplateConfig` struct in `pkg/install/deployment.go` will be updated to include a new field for the priority class name:
```go
type podTemplateConfig struct {
// ... existing fields ...
priorityClassName string
}
```
A new function, `WithPriorityClassName`, will be added to set this field:
```go
func WithPriorityClassName(priorityClassName string) podTemplateOption {
return func(c *podTemplateConfig) {
c.priorityClassName = priorityClassName
}
}
```
The `Deployment` function will be updated to use the priority class name:
```go
deployment := &appsv1api.Deployment{
// ... existing fields ...
Spec: appsv1api.DeploymentSpec{
// ... existing fields ...
Template: corev1api.PodTemplateSpec{
// ... existing fields ...
Spec: corev1api.PodSpec{
// ... existing fields ...
PriorityClassName: c.priorityClassName,
},
},
},
}
```
### DaemonSet Changes
The `DaemonSet` function will use the priority class name passed via the podTemplateConfig (from the CLI flag):
```go
daemonSet := &appsv1api.DaemonSet{
// ... existing fields ...
Spec: appsv1api.DaemonSetSpec{
// ... existing fields ...
Template: corev1api.PodTemplateSpec{
// ... existing fields ...
Spec: corev1api.PodSpec{
// ... existing fields ...
PriorityClassName: c.priorityClassName,
},
},
},
}
```
### Maintenance Job Changes
The `JobConfigs` struct in `pkg/repository/maintenance/maintenance.go` will be updated to include a field for the priority class name:
```go
type JobConfigs struct {
// LoadAffinities is the config for repository maintenance job load affinity.
LoadAffinities []*kube.LoadAffinity `json:"loadAffinity,omitempty"`
// PodResources is the config for the CPU and memory resources setting.
PodResources *kube.PodResources `json:"podResources,omitempty"`
// PriorityClassName is the priority class name for the maintenance job pod
// Note: This is only read from the global configuration, not per-repository
PriorityClassName string `json:"priorityClassName,omitempty"`
}
```
The `buildJob` function will be updated to use the priority class name from the global job configuration:
```go
func buildJob(cli client.Client, ctx context.Context, repo *velerov1api.BackupRepository, bslName string, config *JobConfigs,
podResources kube.PodResources, logLevel logrus.Level, logFormat *logging.FormatFlag) (*batchv1.Job, error) {
// ... existing code ...
// Use the priority class name from the global job configuration if available
// Note: Priority class is only read from global config, not per-repository
priorityClassName := ""
if config != nil && config.PriorityClassName != "" {
priorityClassName = config.PriorityClassName
}
// ... existing code ...
job := &batchv1.Job{
// ... existing fields ...
Spec: batchv1.JobSpec{
// ... existing fields ...
Template: corev1api.PodTemplateSpec{
// ... existing fields ...
Spec: corev1api.PodSpec{
// ... existing fields ...
PriorityClassName: priorityClassName,
},
},
},
}
// ... existing code ...
}
```
Users will be able to configure the priority class name for all maintenance jobs by creating the repository maintenance job ConfigMap before installation. For example:
```bash
# Create the ConfigMap before running velero install
cat <<EOF | kubectl create configmap repo-maintenance-job-config -n velero --from-file=config.json=/dev/stdin
{
"global": {
"priorityClassName": "low-priority",
"podResources": {
"cpuRequest": "100m",
"memoryRequest": "128Mi"
}
}
}
EOF
# Then install Velero referencing this ConfigMap
velero install --provider aws \
--repo-maintenance-job-configmap repo-maintenance-job-config \
# ... other flags
```
The ConfigMap can be updated after installation to change the priority class for future maintenance jobs. Note that only the "global" configuration is used for priority class - all maintenance jobs will use the same priority class regardless of which repository they are maintaining.
### Node Agent ConfigMap Changes
We'll update the `Configs` struct in `pkg/nodeagent/node_agent.go` to include a field for the priority class name in the node-agent-configmap:
```go
type Configs struct {
// ... existing fields ...
// PriorityClassName is the priority class name for the data mover pods
// created by the node agent
PriorityClassName string `json:"priorityClassName,omitempty"`
}
```
This will allow users to configure the priority class name for data mover pods through the node-agent-configmap. Note that the node agent daemonset itself gets its priority class from the `--node-agent-priority-class-name` CLI flag during installation, not from this configmap. For example:
```bash
# Create the ConfigMap before running velero install
cat <<EOF | kubectl create configmap node-agent-config -n velero --from-file=config.json=/dev/stdin
{
"priorityClassName": "low-priority",
"loadAffinity": [
{
"nodeSelector": {
"matchLabels": {
"node-role.kubernetes.io/worker": "true"
}
}
}
]
}
EOF
# Then install Velero referencing this ConfigMap
velero install --provider aws \
--node-agent-configmap node-agent-config \
--use-node-agent \
# ... other flags
```
The `createBackupPod` function in `pkg/exposer/csi_snapshot.go` will be updated to accept and use the priority class name:
```go
func (e *csiSnapshotExposer) createBackupPod(
ctx context.Context,
ownerObject corev1api.ObjectReference,
backupPVC *corev1api.PersistentVolumeClaim,
operationTimeout time.Duration,
label map[string]string,
annotation map[string]string,
affinity *kube.LoadAffinity,
resources corev1api.ResourceRequirements,
backupPVCReadOnly bool,
spcNoRelabeling bool,
nodeOS string,
priorityClassName string, // New parameter
) (*corev1api.Pod, error) {
// ... existing code ...
pod := &corev1api.Pod{
// ... existing fields ...
Spec: corev1api.PodSpec{
// ... existing fields ...
PriorityClassName: priorityClassName,
// ... existing fields ...
},
}
// ... existing code ...
}
```
The call to `createBackupPod` in the `Expose` method will be updated to pass the priority class name retrieved from the node-agent-configmap:
```go
priorityClassName, _ := kube.GetDataMoverPriorityClassName(ctx, namespace, kubeClient, configMapName)
backupPod, err := e.createBackupPod(
ctx,
ownerObject,
backupPVC,
csiExposeParam.OperationTimeout,
csiExposeParam.HostingPodLabels,
csiExposeParam.HostingPodAnnotations,
csiExposeParam.Affinity,
csiExposeParam.Resources,
backupPVCReadOnly,
spcNoRelabeling,
csiExposeParam.NodeOS,
priorityClassName, // Priority class name from node-agent-configmap
)
```
A new function, `GetDataMoverPriorityClassName`, will be added to the `pkg/util/kube` package (in the same file as `ValidatePriorityClass`) to retrieve the priority class name for data mover pods:
```go
// In pkg/util/kube/priority_class.go
// GetDataMoverPriorityClassName retrieves the priority class name for data mover pods from the node-agent-configmap
func GetDataMoverPriorityClassName(ctx context.Context, namespace string, kubeClient kubernetes.Interface, configName string) (string, error) {
// configData is a minimal struct to parse only the priority class name from the ConfigMap
type configData struct {
PriorityClassName string `json:"priorityClassName,omitempty"`
}
// Get the ConfigMap
cm, err := kubeClient.CoreV1().ConfigMaps(namespace).Get(ctx, configName, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
// ConfigMap not found is not an error, just return empty string
return "", nil
}
return "", errors.Wrapf(err, "error getting node agent config map %s", configName)
}
if cm.Data == nil {
// No data in ConfigMap, return empty string
return "", nil
}
// Extract the first value from the ConfigMap data
jsonString := ""
for _, v := range cm.Data {
jsonString = v
break // Use the first value found
}
if jsonString == "" {
// No data to parse, return empty string
return "", nil
}
// Parse the JSON to extract priority class name
var config configData
if err := json.Unmarshal([]byte(jsonString), &config); err != nil {
// Invalid JSON is not a critical error for priority class
// Just return empty string to use default behavior
return "", nil
}
return config.PriorityClassName, nil
}
```
This function will get the priority class name from the node-agent-configmap. If it's not found, it will return an empty string.
### Validation and Logging
To improve observability and help with troubleshooting, the implementation will include:
1. **Optional Priority Class Validation**: A helper function to check if a priority class exists in the cluster. This function will be added to the `pkg/util/kube` package alongside other Kubernetes utility functions:
```go
// In pkg/util/kube/priority_class.go
// ValidatePriorityClass checks if the specified priority class exists in the cluster
// Returns true if the priority class exists or if priorityClassName is empty
// Returns false if the priority class doesn't exist or validation fails
// Logs warnings when the priority class doesn't exist
func ValidatePriorityClass(ctx context.Context, kubeClient kubernetes.Interface, priorityClassName string, logger logrus.FieldLogger) bool {
if priorityClassName == "" {
return true
}
_, err := kubeClient.SchedulingV1().PriorityClasses().Get(ctx, priorityClassName, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
logger.Warnf("Priority class %q not found in cluster. Pod creation may fail if the priority class doesn't exist when pods are scheduled.", priorityClassName)
} else {
logger.WithError(err).Warnf("Failed to validate priority class %q", priorityClassName)
}
return false
}
logger.Infof("Validated priority class %q exists in cluster", priorityClassName)
return true
}
```
2. **Debug Logging**: Add debug logs when priority classes are applied:
```go
// In deployment creation
if c.priorityClassName != "" {
logger.Debugf("Setting priority class %q for Velero server deployment", c.priorityClassName)
}
// In daemonset creation
if c.priorityClassName != "" {
logger.Debugf("Setting priority class %q for node agent daemonset", c.priorityClassName)
}
// In maintenance job creation
if priorityClassName != "" {
logger.Debugf("Setting priority class %q for maintenance job %s", priorityClassName, job.Name)
}
// In data mover pod creation
if priorityClassName != "" {
logger.Debugf("Setting priority class %q for data mover pod %s", priorityClassName, pod.Name)
}
```
These validation and logging features will help administrators:
- Identify configuration issues early (validation warnings)
- Troubleshoot priority class application issues
- Verify that priority classes are being applied as expected
The `ValidatePriorityClass` function should be called at the following points:
1. **During `velero install`**: Validate the priority classes specified via CLI flags:
- After parsing `--server-priority-class-name` flag
- After parsing `--node-agent-priority-class-name` flag
2. **When reading from ConfigMaps**: Validate priority classes when loading configurations:
- In `GetDataMoverPriorityClassName` when reading from node-agent-configmap
- In maintenance job controller when reading from repo-maintenance-job-configmap
3. **During pod/job creation** (optional, for runtime validation):
- Before creating data mover pods (PVB/PVR/CSI snapshot data movement)
- Before creating maintenance jobs
Example usage:
```go
// During velero install
if o.ServerPriorityClassName != "" {
_ = kube.ValidatePriorityClass(ctx, kubeClient, o.ServerPriorityClassName, logger.WithField("component", "server"))
// For install command, we continue even if validation fails (warnings are logged)
}
// When reading from ConfigMap in node-agent server
priorityClassName, err := kube.GetDataMoverPriorityClassName(ctx, namespace, kubeClient, configMapName)
if err == nil && priorityClassName != "" {
// Validate the priority class exists in the cluster
if kube.ValidatePriorityClass(ctx, kubeClient, priorityClassName, logger.WithField("component", "data-mover")) {
dataMovePriorityClass = priorityClassName
logger.WithField("priorityClassName", priorityClassName).Info("Using priority class for data mover pods")
} else {
logger.WithField("priorityClassName", priorityClassName).Warn("Priority class not found in cluster, data mover pods will use default priority")
// Clear the priority class to prevent pod creation failures
priorityClassName = ""
}
}
```
Note: The validation function returns a boolean to allow callers to decide how to handle missing priority classes. For the install command, validation failures are ignored (only warnings are logged) to allow for scenarios where priority classes might be created after Velero installation. For runtime components like the node-agent server, the priority class is cleared if validation fails to prevent pod creation failures.
## Alternatives Considered
1. **Using a single flag for all components**: We could have used a single flag for all components, but this would not allow for different priority classes for different components. Since maintenance jobs and data movers typically require lower priority than the Velero server, separate flags provide more flexibility.
2. **Using a configuration file**: We could have added support for specifying the priority class names in a configuration file. However, this would have required additional changes to the Velero CLI and would have been more complex to implement.
3. **Inheriting priority class from parent components**: We initially considered having maintenance jobs inherit their priority class from the Velero server, and data movers inherit from the node agent. However, this approach doesn't allow for the appropriate prioritization of different components based on their importance and resource requirements.
## Security Considerations
There are no security considerations for this feature.
## Compatibility
This feature is compatible with all Kubernetes versions that support priority classes. The PodPriority feature became stable in Kubernetes 1.14. For more information, see the [Kubernetes documentation on Pod Priority and Preemption](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).
## ConfigMap Update Strategy
### Static ConfigMap Reading at Startup
The node-agent server reads and parses the ConfigMap once during initialization and passes configurations (like `podResources`, `loadAffinity`, and `priorityClassName`) directly to controllers as parameters. This approach ensures:
- Single ConfigMap read to minimize API calls
- Consistent configuration across all controllers
- Validation of priority classes at startup with fallback behavior
- No need for complex update mechanisms or watchers
ConfigMap changes require a restart of the node-agent to take effect.
### Implementation Approach
1. **Data Mover Controllers**: Receive priority class as a string parameter from node-agent server at initialization
2. **Maintenance Job Controller**: Read fresh configuration from repo-maintenance-job-configmap at job creation time
3. ConfigMap changes require restart of components to take effect
4. Priority class validation happens at startup with automatic fallback to prevent failures
## Implementation
The implementation will involve the following steps:
1. Add the priority class name fields for server and node agent to the `VeleroOptions` struct
2. Add the priority class name field to the `podTemplateConfig` struct
3. Add the `WithPriorityClassName` function for the server deployment and daemonset
4. Update the `Deployment` function to use the server priority class name
5. Update the `DaemonSet` function to use the node agent priority class name
6. Update the `JobConfigs` struct to include `PriorityClassName` field
7. Update the `buildJob` function in maintenance job to use the priority class name from JobConfigs (global config only)
8. Update the `Configs` struct in node agent to include `PriorityClassName` field for data mover pods
9. Update the data mover pod creation to use the priority class name from node-agent-configmap
10. Update the PodVolumeBackup controller to retrieve and apply priority class name from node-agent-configmap
11. Update the PodVolumeRestore controller to retrieve and apply priority class name from node-agent-configmap
12. Add the `GetDataMoverPriorityClassName` utility function to retrieve priority class from configmap
13. Add the priority class name flags for server and node agent to the `velero install` command
14. Add unit tests for:
- `WithPriorityClassName` function
- `GetDataMoverPriorityClassName` function
- Priority class application in deployment, daemonset, and job specs
15. Add integration tests to verify:
- Priority class is correctly applied to all component pods
- ConfigMap updates are reflected in new pods
- Empty/missing priority class names are handled gracefully
16. Update user documentation to include:
- How to configure priority classes for each component
- Examples of creating ConfigMaps before installation
- Expected priority class hierarchy recommendations
- Troubleshooting guide for priority class issues
17. Update CLI documentation for new flags (`--server-priority-class-name` and `--node-agent-priority-class-name`)
Note: The server deployment and node agent daemonset will have CLI flags for priority class. Data mover pods and maintenance jobs will use their respective ConfigMaps for priority class configuration.
This approach ensures that different Velero components can use different priority class names based on their importance and resource requirements:
1. The Velero server deployment can use a higher priority class to ensure it continues running even under resource pressure.
2. The node agent daemonset can use a medium priority class.
3. Maintenance jobs can use a lower priority class since they should not run when resources are limited.
4. Data mover pods can use a lower priority class since they should not run when resources are limited.
### Implementation Considerations
Priority class names are configured through different mechanisms:
1. **Server Deployment**: Uses the `--server-priority-class-name` CLI flag during installation.
2. **Node Agent DaemonSet**: Uses the `--node-agent-priority-class-name` CLI flag during installation.
3. **Data Mover Pods**: Will use the node-agent-configmap (specified via the `--node-agent-configmap` flag). This ConfigMap controls priority class for all data mover pods (including PVB and PVR) created by the node agent.
4. **Maintenance Jobs**: Will use the repository maintenance job ConfigMap (specified via the `--repo-maintenance-job-configmap` flag). Users should create this ConfigMap before running `velero install` with the desired priority class configuration. The ConfigMap can be updated after installation to change priority classes for future maintenance jobs. While the ConfigMap structure supports per-repository configuration for resources and affinity, priority class is intentionally only read from the global configuration to ensure all maintenance jobs have the same priority.
#### ConfigMap Pre-Creation Guide
For components that use ConfigMaps for priority class configuration, the ConfigMaps must be created before running `velero install`. Here's the recommended workflow:
```bash
# Step 1: Create priority classes in your cluster (if not already existing)
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: velero-critical
value: 100
globalDefault: false
description: "Critical priority for Velero server"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: velero-standard
value: 50
globalDefault: false
description: "Standard priority for Velero node agent"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: velero-low
value: 10
globalDefault: false
description: "Low priority for Velero data movers and maintenance jobs"
EOF
# Step 2: Create the namespace
kubectl create namespace velero
# Step 3: Create ConfigMaps for data movers and maintenance jobs
kubectl create configmap node-agent-config -n velero --from-file=config.json=/dev/stdin <<EOF
{
"priorityClassName": "velero-low"
}
EOF
kubectl create configmap repo-maintenance-job-config -n velero --from-file=config.json=/dev/stdin <<EOF
{
"global": {
"priorityClassName": "velero-low"
}
}
EOF
# Step 4: Install Velero with priority class configuration
velero install \
--provider aws \
--server-priority-class-name velero-critical \
--node-agent-priority-class-name velero-standard \
--node-agent-configmap node-agent-config \
--repo-maintenance-job-configmap repo-maintenance-job-config \
--use-node-agent
```
#### Recommended Priority Class Hierarchy
When configuring priority classes for Velero components, consider the following hierarchy based on component criticality:
1. **Velero Server (Highest Priority)**:
- Example: `velero-critical` with value 100
- Rationale: The server must remain running to coordinate backup/restore operations
2. **Node Agent DaemonSet (Medium Priority)**:
- Example: `velero-standard` with value 50
- Rationale: Node agents need to be available on nodes but are less critical than the server
3. **Data Mover Pods & Maintenance Jobs (Lower Priority)**:
- Example: `velero-low` with value 10
- Rationale: These are temporary workloads that can be delayed during resource contention
This hierarchy ensures that core Velero components remain operational even under resource pressure, while allowing less critical workloads to be preempted if necessary.
This approach has several advantages:
- Leverages existing configuration mechanisms, minimizing new CLI flags
- Provides a single point of configuration for related components (node agent and its pods)
- Allows dynamic configuration updates without requiring Velero reinstallation
- Maintains backward compatibility with existing installations
- Enables administrators to set up priority classes during initial deployment
- Keeps configuration simple by using the same priority class for all maintenance jobs
The priority class name for data mover pods will be determined by checking the node-agent-configmap. This approach provides a centralized way to configure priority class names for all data mover pods. The same approach will be used for PVB (PodVolumeBackup) and PVR (PodVolumeRestore) pods, which will also retrieve their priority class name from the node-agent-configmap.
For PVB and PVR pods specifically, the implementation follows this approach:
1. **Controller Initialization**: Both PodVolumeBackup and PodVolumeRestore controllers are updated to accept a priority class name as a string parameter. The node-agent server reads the priority class from the node-agent-configmap once at startup:
```go
// In node-agent server startup (pkg/cmd/cli/nodeagent/server.go)
dataMovePriorityClass := ""
if s.config.nodeAgentConfig != "" {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
priorityClass, err := kube.GetDataMoverPriorityClassName(ctx, s.namespace, s.kubeClient, s.config.nodeAgentConfig)
if err != nil {
s.logger.WithError(err).Warn("Failed to get priority class name from node-agent-configmap, using empty value")
} else if priorityClass != "" {
// Validate the priority class exists in the cluster
if kube.ValidatePriorityClass(ctx, s.kubeClient, priorityClass, s.logger.WithField("component", "data-mover")) {
dataMovePriorityClass = priorityClass
s.logger.WithField("priorityClassName", priorityClass).Info("Using priority class for data mover pods")
} else {
s.logger.WithField("priorityClassName", priorityClass).Warn("Priority class not found in cluster, data mover pods will use default priority")
}
}
}
// Pass priority class to controllers
pvbReconciler := controller.NewPodVolumeBackupReconciler(
s.mgr.GetClient(), s.mgr, s.kubeClient, ..., dataMovePriorityClass)
pvrReconciler := controller.NewPodVolumeRestoreReconciler(
s.mgr.GetClient(), s.mgr, s.kubeClient, ..., dataMovePriorityClass)
```
2. **Controller Structure**: Controllers store the priority class name as a field:
```go
type PodVolumeBackupReconciler struct {
// ... existing fields ...
dataMovePriorityClass string
}
```
3. **Pod Creation**: The priority class is included in the pod spec when creating data mover pods.
### VGDP Micro-Service Considerations
With the introduction of VGDP micro-services (as described in the VGDP micro-service design), data mover pods are created as dedicated pods for volume snapshot data movement. These pods will also inherit the priority class configuration from the node-agent-configmap. Since VGDP-MS pods (backupPod/restorePod) inherit their configurations from the node-agent, they will automatically use the priority class name specified in the node-agent-configmap.
This ensures that all pods created by Velero for data movement operations (CSI snapshot data movement, PVB, and PVR) use a consistent approach for priority class name configuration through the node-agent-configmap.
### How Exposers Receive Configuration
CSI Snapshot Exposer and Generic Restore Exposer do not directly watch or read ConfigMaps. Instead, they receive configuration through their parent controllers:
1. **Controller Initialization**: Controllers receive the priority class name as a parameter during initialization from the node-agent server.
2. **Configuration Propagation**: During reconciliation of resources:
- The controller calls `setupExposeParam()` which includes the `dataMovePriorityClass` value
- For CSI operations: `CSISnapshotExposeParam.PriorityClassName` is set
- For generic restore: `GenericRestoreExposeParam.PriorityClassName` is set
- The controller passes these parameters to the exposer's `Expose()` method
3. **Pod Creation**: The exposer creates pods with the priority class name provided by the controller (a minimal sketch follows below).
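A rough sketch of the propagation in step 2, assuming a simplified parameter struct and reconciler; only the `PriorityClassName` field comes from this design, the surrounding names are illustrative:
```go
// csiSnapshotExposeParam is a simplified stand-in for the exposer's parameter
// struct; only PriorityClassName is taken from this design.
type csiSnapshotExposeParam struct {
    SnapshotName      string
    SourceNamespace   string
    PriorityClassName string
}

// dataUploadReconciler is an illustrative controller holding the value read at startup.
type dataUploadReconciler struct {
    dataMovePriorityClass string
}

// setupExposeParam propagates the priority class the controller received at
// initialization into the parameters handed to the exposer's Expose() method.
func (r *dataUploadReconciler) setupExposeParam(snapshotName, sourceNamespace string) csiSnapshotExposeParam {
    return csiSnapshotExposeParam{
        SnapshotName:      snapshotName,
        SourceNamespace:   sourceNamespace,
        PriorityClassName: r.dataMovePriorityClass,
    }
}
```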
This design keeps exposers stateless and ensures:
- Exposers remain simple and focused on pod creation
- All configuration flows through controllers consistently
- No complex state synchronization between components
- Configuration changes require component restart to take effect
## Open Issues
None.

View File

@@ -1,143 +0,0 @@
# Volume information for restore design
## Background
Velero has different ways to handle the data in volumes during restore. Users want more clarity into how volumes are handled
during the restore process, via either the Velero CLI or other downstream products that consume Velero.
## Goals
- Create new metadata to store the information of the restored volume, which will have the same life-cycle as the restore CR.
- Consume the metadata in the Velero CLI so it can display more details for volumes in the output of `velero restore describe --details`
## Non Goals
- Provide finer grained control of the volume restore process. The focus of the design is to enable displaying more details.
- Persist additional metadata like podvolume, datadownloads etc to the restore folder in backup-location.
## Design
### Structure of the restore volume info
The restore volume info will be stored in a file named like `${restore_name}-vol-info.json`. The content of the file will
be a list of volume info objects. Each object maps to a restored volume and contains information such as the names of the
restored PV/PVC, the restore method, and related objects that provide details depending on how the volume was restored.
It will look like this:
```
[
    {
        "pvcName": "nginx-logs-2",
        "pvcNamespace": "nginx-app-restore",
        "pvName": "pvc-e320d75b-a788-41a3-b6ba-267a553efa5e",
        "restoreMethod": "PodVolumeRestore",
        "snapshotDataMoved": false,
        "pvrInfo": {
            "snapshotHandle": "81973157c3a945a5229285c931b02c68",
            "uploaderType": "kopia",
            "volumeName": "nginx-logs",
            "podName": "nginx-deployment-79b56c644b-mjdhp",
            "podNamespace": "nginx-app-restore"
        }
    },
    {
        "pvcName": "nginx-logs-1",
        "pvcNamespace": "nginx-app-restore",
        "pvName": "pvc-98c151f4-df47-4980-ba6d-470842f652cc",
        "restoreMethod": "CSISnapshot",
        "snapshotDataMoved": false,
        "csiSnapshotInfo": {
            "snapshotHandle": "snap-01a3b21a5e9f85528",
            "size": 2147483648,
            "driver": "ebs.csi.aws.com",
            "vscName": "velero-velero-nginx-logs-1-jxmbg-hx9x5"
        }
    }
    ......
]
```
Each field will have the same meaning as the corresponding field in the backup volume info. It will not have the fields
that were introduced to help with the backup process, like `pvInfo`, `dataupload` etc.
### How the restore volume info is generated
Two steps are involved in generating the restore volume info. The first is "collection", which gathers the information about
how the volumes were restored; the second is "generation", which iterates through the data collected in the first step and
generates the volume info list described above.
Unlike backup, the CR objects created during the restore process are not persisted to the backup storage location.
Therefore, to gather the information needed to generate the volume info, we either need to collect the CRs in the middle
of the restore process, or retrieve the objects from the API server based on the `resource-list.json` of the restore.
The information to be collected is:
- **PV/PVC mapping relationship:** It will be collected via the `restore-resource-list.json`, b/c at the time the json is ready, all
PVCs and PVs are already created.
- **Native snapshot information:** It will be collected in the restore workflow when each snapshot is restored.
- **podvolumerestore CRs:** It will be collected in the restore workflow after each pvr is created.
- **volumesnapshot CRs for CSI snapshot:** It will be collected in the step of collecting PVC info, by reading the `dataSource`
field in the spec of the PVC.
- **datadownload CRs:** It will be collected in the phase of collecting PVC info, by querying the API server to list the datadownload
CRs labeled with the restore name.
After the collection step, the generation step is relatively straight-forward, as we have all the information needed in
the data structures.
The whole collection and generation process is done in a best-effort manner: if there are any failures, we only log the error
in the restore log rather than failing the whole restore. We do not put these errors or warnings into the `result.json`,
because they do not impact the restored resources.
Depending on the number of restored PVCs, the "collection" step may involve many API calls, but this is considered acceptable
because at that point the resources are already created, so the actual RTO is not impacted. By using the controller-runtime
client we can make the collection step more efficient via the API server cache. We may consider further improvements, such as
using multiple goroutines in the collection, if we observe performance issues.
### Implementation
Because the restore volume info shares the same data structures with the backup volume info, we will refactor the code in
package `internal/volume` to make the sub-components in backup volume info shared by both backup and restore volume info.
We'll introduce a struct called `RestoreVolumeInfoTracker` which encapsulates the logic of collecting and generating the restore volume info:
```
// RestoreVolumeInfoTracker is used to track the volume information during restore.
// It is used to generate the RestoreVolumeInfo array.
type RestoreVolumeInfoTracker struct {
    *sync.Mutex
    restore *velerov1api.Restore
    log     logrus.FieldLogger
    client  kbclient.Client
    pvPvc   *pvcPvMap

    // map of PV name to the NativeSnapshotInfo from which the PV is restored
    pvNativeSnapshotMap map[string]NativeSnapshotInfo
    // map of PV name to the CSISnapshot object from which the PV is restored
    pvCSISnapshotMap map[string]snapshotv1api.VolumeSnapshot
    datadownloadList *velerov2alpha1.DataDownloadList
    pvrs             []*velerov1api.PodVolumeRestore
}
```
The `RestoreVolumeInfoTracker` will be created when the restore request is initialized, and it will be passed to the `restoreContext`
and carried over the whole restore process.
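For illustration, collection during the restore workflow could be as simple as appending to the tracker under its mutex; the method name `TrackPodVolume` below is hypothetical, not part of this design:
```go
// TrackPodVolume records a newly created PodVolumeRestore so it can be used
// later in the generation step. Sketch only; the method name is hypothetical.
func (t *RestoreVolumeInfoTracker) TrackPodVolume(pvr *velerov1api.PodVolumeRestore) {
    t.Lock()
    defer t.Unlock()
    t.pvrs = append(t.pvrs, pvr)
}
```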
The `client` in this struct is used to query resources in the restored namespaces, while the current client in the restore
reconciler only watches resources in the namespace where Velero is installed. Therefore, we need to introduce the
`CrClient`, which has the same life-cycle as the Velero server and watches all the resources in the cluster, to the restore
reconciler.
In addition to that, we will make small changes in the restore workflow to collect the information needed. We'll keep the
changes unintrusive and make sure not to change the restore logic, to avoid breaking changes or regressions.
We'll also introduce routine changes in the package `pkg/persistence` to persist the restore volume info to the backup storage location.
Last but not least, the `velero restore describe --details` will be updated to display the volume info in the output.
## Alternatives Considered
There was a suggestion that, to provide more details about volumes, we could query the `backup-vol-info.json` with the resource
identifiers in `restore-resource-list.json`. This will not work when resource modifiers are involved in the restore process,
because they may change the metadata of the PVC/PV. In addition, we may add more detailed restore-specific information about
the volumes that is not available in the `backup-vol-info.json`. Therefore, the `restore-vol-info.json` is a better approach.
## Security Considerations
There should be no security impact introduced by this design.
## Compatibility
The restore volume info will be consumed by the Velero CLI and downstream products for displaying details, so backup and
restore functionality will not be impacted for restores created by older versions of Velero that do not have the restore
volume info metadata. The client should properly handle the case when the restore volume info does not exist.
The data structures referenced by the volume info are shared between restore and backup and are not versioned, so in the
future we must make sure there are only incremental changes to the metadata, such that no breaking change is introduced for the client.
## Open Issues
https://github.com/vmware-tanzu/velero/issues/7546
https://github.com/vmware-tanzu/velero/issues/6478

View File

@@ -1,311 +0,0 @@
# Repository maintenance job configuration design
## Abstract
Add this design so the repository maintenance job can read its configuration from a dedicated ConfigMap, and make the Job's necessary parts configurable, e.g. `PodSpec.Affinity` and `PodSpec.Resources`.
## Background
Repository maintenance is split from the Velero server to a k8s Job in v1.14 by design [repository maintenance job](repository-maintenance.md).
The repository maintenance Job configuration was read from `velero server` CLI parameters, and it inherits most of the Velero server Deployment's PodSpec to fill unconfigured fields.
This design introduces a new way to let the user customize the repository maintenance behavior instead of inheriting from the Velero server Deployment or reading from `velero server` CLI parameters.
The configurations added in this design include resource limits and node selection.
New configurations may be introduced in future releases based on this design.
For node selection, the repository maintenance Job previously also inherited from the Velero server deployment, but the Job may last for a while and consume non-negligible resources, especially memory.
Users need to be able to choose which k8s nodes run the maintenance Job.
This design reuses the data structure introduced by the design [Velero Generic Data Path affinity configuration](node-agent-affinity.md) so the repository maintenance job can choose which nodes to run on.
## Goals
- Unify the repository maintenance Job configuration at one place.
- Let the user choose which nodes the repository maintenance Job runs on.
## Non Goals
- There was an [issue](https://github.com/vmware-tanzu/velero/issues/7911) requesting that the whole Job PodSpec be configurable. That's not in the scope of this design.
- Please note this new configuration is dedicated to repository maintenance. The repository's own configuration is not covered.
## Compatibility
v1.14 uses the `velero server` CLI's parameter to pass the repository maintenance job configuration.
In v1.15, those parameters are still kept, including `--maintenance-job-cpu-request`, `--maintenance-job-mem-request`, `--maintenance-job-cpu-limit`, `--maintenance-job-mem-limit`, and `--keep-latest-maintenance-jobs`.
But the parameters read from the ConfigMap specified by `velero server` CLI parameter `--repo-maintenance-job-configmap` introduced by this design have a higher priority.
If `--repo-maintenance-job-configmap` is not specified, the `velero server` parameters are used if provided.
If the `velero server` parameters are not specified either, the default values are used.
* `--keep-latest-maintenance-jobs` default value is 3.
* `--maintenance-job-cpu-request` default value is 0.
* `--maintenance-job-mem-request` default value is 0.
* `--maintenance-job-cpu-limit` default value is 0.
* `--maintenance-job-mem-limit` default value is 0.
## Deprecation
Propose to deprecate the `velero server` parameters `--maintenance-job-cpu-request`, `--maintenance-job-mem-request`, `--maintenance-job-cpu-limit`, `--maintenance-job-mem-limit`, and `--keep-latest-maintenance-jobs` in release-1.15.
That means those parameters will be deleted in release-1.17.
After deletion, those resources-related parameters are replaced by the ConfigMap specified by `velero server` CLI's parameter `--repo-maintenance-job-configmap`.
`--keep-latest-maintenance-jobs` is deleted from `velero server` CLI. It turns into a non-configurable internal parameter, and its value is 3.
Please check [issue 7923](https://github.com/vmware-tanzu/velero/issues/7923) for more information on why this parameter is removed.
## Design
This design introduces a new ConfigMap specified by `velero server` CLI parameter `--repo-maintenance-job-configmap` as the source of the repository maintenance job configuration. The specified ConfigMap is read from the namespace where Velero is installed.
If the ConfigMap doesn't exist, the internal default values are used.
Example of using the parameter `--repo-maintenance-job-configmap`:
```
velero server \
...
--repo-maintenance-job-configmap repo-job-config
...
```
**Notice**
* Velero doesn't own this ConfigMap. If the user wants to customize the repository maintenance job, the user needs to create this ConfigMap.
* Velero reads this ConfigMap content when starting a new repository maintenance job, so a ConfigMap change will not take effect until the next job is created.
### Structure
The data structure is as below:
```go
type Configs struct {
    // LoadAffinity is the config for data path load affinity.
    LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`

    // PodResources is the config for the CPU and memory resources setting.
    PodResources *kube.PodResources `json:"podResources,omitempty"`
}

type LoadAffinity struct {
    // NodeSelector specifies the label selector to match nodes
    NodeSelector metav1.LabelSelector `json:"nodeSelector"`
}

type PodResources struct {
    CPURequest    string `json:"cpuRequest,omitempty"`
    MemoryRequest string `json:"memoryRequest,omitempty"`
    CPULimit      string `json:"cpuLimit,omitempty"`
    MemoryLimit   string `json:"memoryLimit,omitempty"`
}
```
The ConfigMap content is a map.
If there is a key value as `global` in the map, the key's value is applied to all BackupRepositories maintenance jobs that cannot find their own specific configuration in the ConfigMap.
The other keys in the map are the combination of three elements of a BackupRepository:
* The namespace in which BackupRepository backs up volume data.
* The BackupRepository referenced BackupStorageLocation's name.
* The BackupRepository's type. Possible values are `kopia` and `restic`.
Those three keys can identify a [unique BackupRepository](https://github.com/vmware-tanzu/velero/blob/2fc6300f2239f250b40b0488c35feae59520f2d3/pkg/repository/backup_repo_op.go#L32-L37).
If a key matches a BackupRepository, the key's value is applied to that BackupRepository's maintenance jobs.
This way, the user can configure a repository before the BackupRepository is created.
This is especially convenient for administrators configuring things during Velero installation.
For example, the following BackupRepository's key should be `test-default-kopia`.
``` yaml
- apiVersion: velero.io/v1
  kind: BackupRepository
  metadata:
    generateName: test-default-kopia-
    labels:
      velero.io/repository-type: kopia
      velero.io/storage-location: default
      velero.io/volume-namespace: test
    name: test-default-kopia-kgt6n
    namespace: velero
  spec:
    backupStorageLocation: default
    maintenanceFrequency: 1h0m0s
    repositoryType: kopia
    resticIdentifier: gs:jxun:/restic/test
    volumeNamespace: test
```
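As a rough illustration of this key matching, the lookup could derive the key from the BackupRepository spec and fall back to `global`; the helper below is a sketch under that assumption, not the actual implementation:
```go
// getRepoConfig derives the ConfigMap key from the BackupRepository's
// identifying fields and falls back to the "global" entry when no specific
// entry exists. Sketch only; field names follow the BackupRepository spec.
func getRepoConfig(configs map[string]Configs, repo *velerov1api.BackupRepository) (Configs, bool) {
    key := fmt.Sprintf("%s-%s-%s",
        repo.Spec.VolumeNamespace,
        repo.Spec.BackupStorageLocation,
        repo.Spec.RepositoryType,
    )
    if c, ok := configs[key]; ok {
        return c, true
    }
    c, ok := configs["global"]
    return c, ok
}
```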
The `LoadAffinity` structure is reused from design [Velero Generic Data Path affinity configuration](node-agent-affinity.md).
It's possible that users want to run the job on nodes matching condition A or condition B.
For example, the user wants the job to run on nodes of a specific machine type, or on nodes located in the us-central1-x zones.
This can be done by adding multiple entries in the `LoadAffinity` array.
### Affinity Example
A sample of the ConfigMap is as below:
``` bash
cat <<EOF > repo-maintenance-job-config.json
{
    "global": {
        "podResources": {
            "cpuRequest": "100m",
            "cpuLimit": "200m",
            "memoryRequest": "100Mi",
            "memoryLimit": "200Mi"
        },
        "loadAffinity": [
            {
                "nodeSelector": {
                    "matchExpressions": [
                        {
                            "key": "cloud.google.com/machine-family",
                            "operator": "In",
                            "values": [
                                "e2"
                            ]
                        }
                    ]
                }
            },
            {
                "nodeSelector": {
                    "matchExpressions": [
                        {
                            "key": "topology.kubernetes.io/zone",
                            "operator": "In",
                            "values": [
                                "us-central1-a",
                                "us-central1-b",
                                "us-central1-c"
                            ]
                        }
                    ]
                }
            }
        ]
    }
}
EOF
```
This sample showcases two affinity configurations:
- matchExpressions: the maintenance job runs on nodes with label key `cloud.google.com/machine-family` and value `e2`.
- matchExpressions: the maintenance job runs on nodes located in `us-central1-a`, `us-central1-b` and `us-central1-c`.
Nodes matching either of the two conditions are selected.
To create the ConfigMap, users need to save something like the above sample to a JSON file and then run the command below:
```
kubectl create cm repo-maintenance-job-config -n velero --from-file=repo-maintenance-job-config.json
```
### Value assigning rules
If the Velero BackupRepositoryController cannot find the introduced ConfigMap, the following default values are used for repository maintenance job:
``` go
config := Configs {
    // LoadAffinity is the config for data path load affinity.
    LoadAffinity: nil,
    // Resources is the config for the CPU and memory resources setting.
    PodResources: &kube.PodResources{
        // The repository maintenance job CPU request setting
        CPURequest: "0m",
        // The repository maintenance job memory request setting
        MemoryRequest: "0Mi",
        // The repository maintenance job CPU limit setting
        CPULimit: "0m",
        // The repository maintenance job memory limit setting
        MemoryLimit: "0Mi",
    },
}
```
If the Velero BackupRepositoryController finds the introduced ConfigMap with only `global` element, the `global` value is used.
If the Velero BackupRepositoryController finds the introduced ConfigMap with only an element matching the BackupRepository, the matched element's value is used.
If the Velero BackupRepositoryController finds the introduced ConfigMap with both a `global` element and an element matching the BackupRepository, values defined in the matched element overwrite the `global` values, and the `global` values are still used for values the matched element does not define.
For example, the ConfigMap content has two elements.
``` json
{
    "global": {
        "loadAffinity": [
            {
                "nodeSelector": {
                    "matchExpressions": [
                        {
                            "key": "cloud.google.com/machine-family",
                            "operator": "In",
                            "values": [
                                "e2"
                            ]
                        }
                    ]
                }
            }
        ],
        "podResources": {
            "cpuRequest": "100m",
            "cpuLimit": "200m",
            "memoryRequest": "100Mi",
            "memoryLimit": "200Mi"
        }
    },
    "ns1-default-kopia": {
        "podResources": {
            "memoryRequest": "400Mi",
            "memoryLimit": "800Mi"
        }
    }
}
```
The config value used for the BackupRepository backing up volume data in namespace `ns1`, referencing BSL `default`, with type `kopia`:
``` go
config := Configs {
    // LoadAffinity is the config for data path load affinity.
    LoadAffinity: []*kube.LoadAffinity{
        {
            NodeSelector: metav1.LabelSelector{
                MatchExpressions: []metav1.LabelSelectorRequirement{
                    {
                        Key:      "cloud.google.com/machine-family",
                        Operator: metav1.LabelSelectorOpIn,
                        Values:   []string{"e2"},
                    },
                },
            },
        },
    },
    PodResources: &kube.PodResources{
        // The repository maintenance job CPU request setting
        CPURequest: "",
        // The repository maintenance job memory request setting
        MemoryRequest: "400Mi",
        // The repository maintenance job CPU limit setting
        CPULimit: "",
        // The repository maintenance job memory limit setting
        MemoryLimit: "800Mi",
    },
}
```
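A minimal sketch of this overwrite rule is shown below. Note that the overwrite happens at the level of the top-level fields (`loadAffinity`, `podResources`), which is why the CPU values end up empty in the example above; the helper name is illustrative:
```go
// mergeConfigs applies the matched entry on top of the global entry: a
// top-level field defined in the matched entry replaces the global one
// wholesale, undefined fields keep the global value. Sketch only.
func mergeConfigs(global, specific Configs) Configs {
    result := global
    if specific.LoadAffinity != nil {
        result.LoadAffinity = specific.LoadAffinity
    }
    if specific.PodResources != nil {
        result.PodResources = specific.PodResources
    }
    return result
}
```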
### Implementation
When the Velero repository controller starts to maintain a repository, it calls the repository manager's `PruneRepo` function to build the maintenance Job.
The ConfigMap specified by the `velero server` CLI parameter `--repo-maintenance-job-configmap` is read to reinitialize the repository `MaintenanceConfig` setting.
``` go
jobConfig, err := getMaintenanceJobConfig(
    context.Background(),
    m.client,
    m.log,
    m.namespace,
    m.repoMaintenanceJobConfig,
    repo,
)
if err != nil {
    log.Infof("Cannot find the ConfigMap %s with error: %s. Use default value.",
        m.namespace+"/"+m.repoMaintenanceJobConfig,
        err.Error(),
    )
}

log.Info("Start to maintenance repo")

maintenanceJob, err := m.buildMaintenanceJob(
    jobConfig,
    param,
)
if err != nil {
    return errors.Wrap(err, "error to build maintenance job")
}
```
## Alternatives Considered
Another option is creating one ConfigMap per BackupRepository.
This is not ideal for scenarios with a large number of BackupRepositories in the cluster.

View File

@@ -1,318 +0,0 @@
# Design for repository maintenance job
## Abstract
This design proposal aims to decouple repository maintenance from the Velero server by launching a maintenance job when needed, to mitigate the impact on the Velero server during backups.
## Background
During backups, Velero performs periodic maintenance on the repository. This operation may consume significant CPU and memory resources in some cases, leading to potential issues such as the Velero server being killed by OOM. This proposal addresses these challenges by separating repository maintenance from the Velero server.
## Goals
1. **Independent Repository Maintenance**: Decouple maintenance from Velero's main logic to reduce the impact on the Velero server pod.
2. **Configurable Resources Usage**: Make the resources used by the maintenance job configurable.
3. **No API Changes**: Retain existing APIs and workflow in the backup repository controller.
## Non Goals
We have several concerns over parallel maintenance, which would increase the complexity of the current design.
- Non-blocking maintenance job: it may conflict with updating the same `backuprepositories` CR when parallel maintenance.
- Maintenance job concurrency control: there is no one suitable mechanism in Kubernetes to control the concurrency of different jobs.
- Parallel maintenance: Maintaining the same repo by multiple jobs at the same time would have some compatible cases that some providers may not support.
Because of the concerns above, parallel maintenance is unfortunately not a priority right now; improving maintenance efficiency is not the primary focus at this stage.
## High-Level Design
1. **Add Maintenance Subcommand**: Introduce a new Velero server subcommand for repository maintenance.
2. **Create Jobs by Repository Manager**: Modify the backup repository controller to create a maintenance job instead of directly calling the multiple chain calls for Kopia or Restic maintenance.
3. **Update Maintenance Job Result in BackupRepository CR**: Retrieve the result of the maintenance job and update the status of the `BackupRepository` CR accordingly.
4. **Add Setting for Maintenance Job**: Introduce a configuration option to set maintenance jobs, including resource limits (CPU and memory), keeping the latest N maintenance jobs for each repository.
## Detailed Design
### 1. Add Maintenance sub-command
The CLI command will be added to the Velero CLI; the command is designed for use in the pods of maintenance jobs.
Our CLI command is designed as follows:
```shell
$ velero repo-maintenance --repo-name $repo-name --repo-type $repo-type --backup-storage-location $bsl
```
Compared with other CLI commands, the maintenance command runs in the pod of a maintenance job and is not meant for direct user use, and the job should show the result of maintenance after it finishes.
Here we write the error message into one specific file which can be read by the maintenance job.
On the whole, we record two kinds of logs:
- one is the log output of the intermediate maintenance process: this log could be retrieved via the Kubernetes API server, including the error log.
- one is the result of the command which could indicate whether the execution is an error or not: the result could be redirected to a file that the maintenance job itself could read, and the file only contains the error message.
We will write the error message into the `/dev/termination-log` file if the execution fails.
The main maintenance logic would be using the repository provider to do the maintenance.
```golang
func checkError(err error, file *os.File) {
    if err != nil {
        if err != context.Canceled {
            if _, errWrite := file.WriteString(fmt.Sprintf("An error occurred: %v", err)); errWrite != nil {
                fmt.Fprintf(os.Stderr, "Failed to write error to termination log file: %v\n", errWrite)
            }
            file.Close()
            os.Exit(1) // indicate the command executed failed
        }
    }
}

func (o *Options) Run(f veleroCli.Factory) {
    logger := logging.DefaultLogger(o.LogLevelFlag.Parse(), o.FormatFlag.Parse())
    logger.SetOutput(os.Stdout)

    errorFile, err := os.Create("/dev/termination-log")
    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed to create termination log file: %v\n", err)
        return
    }
    defer errorFile.Close()
    ...
    err = o.runRepoPrune(cli, f.Namespace(), logger)
    checkError(err, errorFile)
    ...
}

func (o *Options) runRepoPrune(cli client.Client, namespace string, logger logrus.FieldLogger) error {
    ...
    var repoProvider provider.Provider
    if o.RepoType == velerov1api.BackupRepositoryTypeRestic {
        repoProvider = provider.NewResticRepositoryProvider(credentialFileStore, filesystem.NewFileSystem(), logger)
    } else {
        repoProvider = provider.NewUnifiedRepoProvider(
            credentials.CredentialGetter{
                FromFile:   credentialFileStore,
                FromSecret: credentialSecretStore,
            }, o.RepoType, cli, logger)
    }
    ...
    err = repoProvider.BoostRepoConnect(context.Background(), para)
    if err != nil {
        return errors.Wrap(err, "failed to boost repo connect")
    }

    err = repoProvider.PruneRepo(context.Background(), para)
    if err != nil {
        return errors.Wrap(err, "failed to prune repo")
    }
    return nil
}
```
### 2. Create Jobs by Repository Manager
Currently, the backup repository controller will call the repository manager to do the `PruneRepo`, and Kopia or Restic maintenance is then finally called through multiple chain calls.
We will keep using the `PruneRepo` function in the repository manager, but we cut off the multiple chain calls by creating a maintenance job.
The job definition would be like below:
```yaml
apiVersion: v1
items:
- apiVersion: batch/v1
  kind: Job
  metadata:
    # labels or affinity or topology settings would inherit from the velero deployment
    labels:
      # label the job name for later list jobs by name
      job-name: nginx-example-default-kopia-pqz6c
    name: nginx-example-default-kopia-pqz6c
    namespace: velero
  spec:
    # Not retry it again
    backoffLimit: 1
    # Only have one job one time
    completions: 1
    # Not parallel running job
    parallelism: 1
    template:
      metadata:
        labels:
          job-name: nginx-example-default-kopia-pqz6c
        name: kopia-maintenance-job
      spec:
        containers:
        # arguments for repo maintenance job
        - args:
          - repo-maintenance
          - --repo-name=nginx-example
          - --repo-type=kopia
          - --backup-storage-location=default
          # inherit from Velero server
          - --log-level=debug
          command:
          - /velero
          # inherit environment variables from the velero deployment
          env:
          - name: AZURE_CREDENTIALS_FILE
            value: /credentials/cloud
          # inherit image from the velero deployment
          image: velero/velero:main
          imagePullPolicy: IfNotPresent
          name: kopia-maintenance-container
          # resource limitation set by Velero server configuration
          # if not specified, it would apply best effort resources allocation strategy
          resources: {}
          # error message would be written to /dev/termination-log
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          # inherit volume mounts from the velero deployment
          volumeMounts:
          - mountPath: /credentials
            name: cloud-credentials
        dnsPolicy: ClusterFirst
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        # inherit service account from the velero deployment
        serviceAccount: velero
        serviceAccountName: velero
        volumes:
        # inherit cloud credentials from the velero deployment
        - name: cloud-credentials
          secret:
            defaultMode: 420
            secretName: cloud-credentials
    # ttlSecondsAfterFinished set the job expired seconds
    ttlSecondsAfterFinished: 86400
  status:
    # which contains the result after maintenance
    message: ""
    lastMaintenanceTime: ""
```
Now, the backup repository controller will call the repository manager to create one maintenance job and wait for the job to complete. The Kopia or Restic maintenance multiple chains are called by the job.
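A rough sketch of how the repository manager could wait for the created Job to complete, assuming a controller-runtime client; the helper name and polling interval are illustrative:
```go
import (
    "context"
    "errors"
    "time"

    batchv1 "k8s.io/api/batch/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForJobComplete polls the maintenance Job until it succeeds, fails, or the
// context is cancelled. Sketch only; the real implementation may differ.
func waitForJobComplete(ctx context.Context, cli client.Client, ns, name string) error {
    return wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
        job := &batchv1.Job{}
        if err := cli.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, job); err != nil {
            return false, err
        }
        if job.Status.Succeeded > 0 {
            return true, nil
        }
        if job.Status.Failed > 0 {
            return true, errors.New("maintenance job failed")
        }
        return false, nil
    })
}
```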
### 3. Update the Result of the Maintenance Job into BackupRepository CR
The backup repository controller will update the result of the maintenance job into the backup repository CR.
For how to get the result of the maintenance job we could refer to [here](https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#writing-and-reading-a-termination-message).
After the maintenance job is finished, we could get the result of maintenance by getting the terminated message from the related pod:
```golang
func GetContainerTerminatedMessage(pod *v1.Pod) string {
    ...
    for _, containerStatus := range pod.Status.ContainerStatuses {
        if containerStatus.LastTerminationState.Terminated != nil {
            return containerStatus.LastTerminationState.Terminated.Message
        }
    }
    ...
    return ""
}
```
Then we could update the status of backupRepository CR with the message.
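For illustration, the controller could then patch the CR status roughly as follows, assuming the BackupRepository status exposes `Message` and `LastMaintenanceTime` fields and a controller-runtime client is available:
```go
// Sketch: persist the maintenance result into the BackupRepository status.
original := repo.DeepCopy()
repo.Status.Message = GetContainerTerminatedMessage(pod)
repo.Status.LastMaintenanceTime = &metav1.Time{Time: time.Now()}
if err := cli.Patch(ctx, repo, client.MergeFrom(original)); err != nil {
    return errors.Wrap(err, "error updating backup repository status")
}
```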
### 4. Add Setting for Resource Usage of Maintenance
Add one configuration for setting the resource limit of maintenance jobs as below:
```shell
velero server --maintenance-job-cpu-request $cpu-request --maintenance-job-mem-request $mem-request --maintenance-job-cpu-limit $cpu-limit --maintenance-job-mem-limit $mem-limit
```
Our default value is 0, which means we don't limit the resources, and the resource allocation strategy would be [best effort](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#besteffort).
### 5. Automatic Cleanup for Finished Maintenance Jobs
Add configuration for clean up maintenance jobs:
- keep-latest-maintenance-jobs: the number of latest maintenance jobs to keep for each repository.
```shell
velero server --keep-latest-maintenance-jobs $num
```
We would check and keep the latest N jobs after a new job is finished.
```golang
func deleteOldMaintenanceJobs(cli client.Client, repo string, keep int) error {
    // Get the maintenance job list by label
    jobList := &batchv1.JobList{}
    err := cli.List(context.TODO(), jobList, client.MatchingLabels(map[string]string{RepositoryNameLabel: repo}))
    if err != nil {
        return err
    }

    // Delete old maintenance jobs
    if len(jobList.Items) > keep {
        sort.Slice(jobList.Items, func(i, j int) bool {
            return jobList.Items[i].CreationTimestamp.Before(&jobList.Items[j].CreationTimestamp)
        })
        for i := 0; i < len(jobList.Items)-keep; i++ {
            err = cli.Delete(context.TODO(), &jobList.Items[i], client.PropagationPolicy(metav1.DeletePropagationBackground))
            if err != nil {
                return err
            }
        }
    }

    return nil
}
```
### 6 Velero Install with Maintenance Options
All the above maintenance options should be supported by the Velero install command.
### 7. Observability and Debuggability
Some monitoring metrics are added for backup repository maintenance:
- repo_maintenance_total
- repo_maintenance_success_total
- repo_maintenance_failed_total
- repo_maintenance_duration_seconds
We will keep the latest N maintenance jobs for each repo, and users can get the logs from the job. The job log level is inherited from the Velero server setting.
Also, we would integrate maintenance job logs and `backuprepositories` CRs into `velero debug`.
Roughly, the process is as follows:
1. The backup repository controller will check the BackupRepository request in the queue periodically.
2. If the maintenance period of the repository, checked by `runMaintenanceIfDue` in `Reconcile`, is due, the backup repository controller calls the Repository manager to execute `PruneRepo`.
3. The Repository manager's `PruneRepo` creates one maintenance job; the resource limits, environment variables, service account, image, etc. are inherited from the Velero server pod. A clean-up TTL is also set on the maintenance job.
4. The maintenance job executes the Velero maintenance command, waits for maintenance to finish, and writes the maintenance result into the terminationMessagePath file of the related pod.
5. Kubernetes shows the result in the pod status by reading the termination message of the pod.
6. The backup repository controller waits for the maintenance job to finish and reads the status of the maintenance job, then updates the message field and phase in the status of the `backuprepositories` CR accordingly.
7. Clean up old maintenance jobs, keeping only the latest N for each repository.
### 8. Codes Refinement
Once the `backuprepositories` CR status is modified, the CR is re-queued to be reconciled and the reconcile logic is re-executed shortly, ignoring the re-queue frequency configured by `repoSyncPeriod`.
In one abnormal scenario, if the maintenance job fails, the status of the `backuprepositories` CR is updated and the CR is re-queued immediately; if the new maintenance job still fails, it re-queues again, making the re-queue logic of the `backuprepositories` CR behave like a dead loop.
So we change the Predicates logic in the Controller manager so that it only re-queues when the Spec of the `backuprepositories` CR is changed.
```golang
ctrl.NewControllerManagedBy(mgr).For(&velerov1api.BackupRepository{}, builder.WithPredicates(kube.SpecChangePredicate{}))
```
This change makes the behavior different from before: errors that occur in the maintenance job are retried in the next reconciliation period instead of immediately.
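A minimal sketch of what such a spec-change predicate could look like with controller-runtime's `predicate.Funcs`; the actual `kube.SpecChangePredicate` may differ:
```go
import (
    "reflect"

    velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

// specChangePredicate re-queues the BackupRepository only when its Spec changes;
// status-only updates are ignored. Sketch only.
func specChangePredicate() predicate.Funcs {
    return predicate.Funcs{
        UpdateFunc: func(e event.UpdateEvent) bool {
            oldRepo, okOld := e.ObjectOld.(*velerov1api.BackupRepository)
            newRepo, okNew := e.ObjectNew.(*velerov1api.BackupRepository)
            if !okOld || !okNew {
                return false
            }
            return !reflect.DeepEqual(oldRepo.Spec, newRepo.Spec)
        },
    }
}
```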
## Prospects for Future Work
Future work may focus on improving the efficiency of Velero maintenance through non-blocking parallel modes. Potential areas for enhancement include:
**Non-blocking Mode**: Explore the implementation of a non-blocking mode for parallel maintenance to enhance overall efficiency.
**Concurrency Control**: Investigate mechanisms for better concurrency control of different maintenance jobs.
**Provider Support for Parallel Maintenance**: Evaluate the feasibility of parallel maintenance for different providers and address any compatibility issues.
**Efficiency Improvements**: Investigate strategies to optimize maintenance efficiency without compromising reliability.
By considering these areas, future iterations of Velero may benefit from enhanced parallelization and improved resource utilization during repository maintenance.

View File

@@ -1,113 +0,0 @@
# Allow Object-Level Resource Status Restore in Velero
## Abstract
This design proposes a way to enhance Velero's restore functionality by enabling object-level resource status restoration through annotations.
Currently, Velero allows restoring resource statuses only at the resource type level, which lacks the granularity to restore the status of specific resources.
By introducing an annotation that controllers can set on individual resource objects, this design aims to improve flexibility and autonomy for users/resource-controllers, providing a more granular way
to enable resource status restore.
## Background
Velero provides the `restoreStatus` field in the Restore API to specify resource types for status restoration. However, this feature is limited to resource types as a whole, lacking the granularity needed to restore specific objects of a resource type. Resource controllers, especially those managing custom resources with external dependencies, may need to restore status on a per-object basis based on internal logic and dependencies.
This design adds an annotation-based approach to allow controllers to specify status restoration at the object level, enabling Velero to handle status restores more flexibly.
## Goals
- Provide a mechanism to specify the restoration of a resource's status at the object level.
- Maintain backwards compatibility with existing functionality, allowing gradual adoption of this feature.
- Integrate the new annotation-based object-level status restore with Velero's existing resource-type-level `restoreStatus` configuration.
## Non-Goals
- Alter Velero's existing resource type-level status restoration mechanism for resources without annotations.
## Use-Cases/Scenarios
1. Controller managing specific Resources
- A resource controller identifies that a specific object of a resource should have its status restored due to particular dependencies
- The controller automatically sets the `velero.io/restore-status: true` annotation on the resource.
- During restore, Velero restores the status of this object, while leaving other resources unaffected.
- The status for the annotated object will be restored regardless of its inclusion/exclusion in `restoreStatus.includedResources`
2. A specific object must not have its status restored even if it's included in `restoreStatus.includedResources`
- A user specifies a resource type in the `restoreStatus.includedResources` field within the Restore custom resource.
- A particular object of that resource type is annotated with `velero.io/restore-status: false` by the user.
- The status of the annotated object will not be restored even though it's included in `restoreStatus.includedResources`, because the annotation is `false` and takes precedence.
3. Default Behavior for Objects Without the Annotation
- Objects without the `velero.io/restore-status` annotation behave as they currently do: Velero skips their status restoration unless the resource type is specified in the `restoreStatus.includedResources` field.
## High-Level Design
- Object-Level Status Restore Annotation: We are introducing the `velero.io/restore-status` annotation at the resource object level to mark specific objects for status restoration.
- `true`: Indicates that the status should be restored for this object
- `false`: Skip restoring status for this specific object
- Invalid or missing annotations defer to the meaning of existing resource type-level logic.
- Restore logic precedence:
- Annotations take precedence when they exist with valid values (`true` or `false`).
- Restore spec `restoreStatus.includedResources` is only used when annotations are invalid or missing.
- Velero Restore Logic Update: During a restore operation, Velero will:
- Extend the existing restore logic to parse and prioritize annotations introduced in this design.
- Update resource objects accordingly based on their annotation values or fallback configuration.
## Detailed Design
- Annotation for object-Level Status Restore: The `velero.io/restore-status` annotation will be set on individual resource objects by users/controllers as needed:
```yaml
metadata:
  annotations:
    velero.io/restore-status: "true"
```
- Restore Logic Modifications: During the restore operation, the restore controller will follow these steps:
- Parse the `restoreStatus.includedResources` spec to determine resource types eligible for status restoration.
- For each resource object:
- Check for the `velero.io/restore-status` annotation.
- If the annotation value is:
- `true`: Restore the status of the object
- `false`: Skip restoring the status of the object
- If the annotation is invalid or missing:
- Default to the `restoreStatus.includedResources` configuration
## Implementation
We are targeting the implementation of this design for Velero 1.16 release.
Current restoreStatus logic resides here: https://github.com/vmware-tanzu/velero/blob/32a8c62920ad96c70f1465252c0197b83d5fa6b6/pkg/restore/restore.go#L1652
The modified logic would look somewhat like:
```go
// Determine whether to restore status from resource type configuration
shouldRestoreStatus := ctx.resourceStatusIncludesExcludes != nil && ctx.resourceStatusIncludesExcludes.ShouldInclude(groupResource.String())

// Check for object-level annotation
annotations := obj.GetAnnotations()
objectAnnotation := annotations["velero.io/restore-status"]
annotationValid := objectAnnotation == "true" || objectAnnotation == "false"

// Determine restore behavior based on annotation precedence
shouldRestoreStatus = (annotationValid && objectAnnotation == "true") || (!annotationValid && shouldRestoreStatus)
ctx.log.Debugf("status field for %s: exists: %v, should restore: %v (by annotation: %v)", newGR, statusFieldExists, shouldRestoreStatus, annotationValid)

if shouldRestoreStatus && statusFieldExists {
    if err := unstructured.SetNestedField(obj.Object, objStatus, "status"); err != nil {
        ctx.log.Errorf("Could not set status field %s: %v", kube.NamespaceAndName(obj), err)
        errs.Add(namespace, err)
        return warnings, errs, itemExists
    }
    obj.SetResourceVersion(createdObj.GetResourceVersion())
    updated, err := resourceClient.UpdateStatus(obj, metav1.UpdateOptions{})
    if err != nil {
        ctx.log.Infof("Status field update failed %s: %v", kube.NamespaceAndName(obj), err)
        warnings.Add(namespace, err)
    } else {
        createdObj = updated
    }
}
```

View File

@@ -1,120 +0,0 @@
# Design for Adding Finalization Phase in Restore Workflow
## Abstract
This design proposes adding the finalization phase to the restore workflow. The finalization phase would be entered after all item restoration and plugin operations have been completed, similar to the way the backup process proceeds. Its purpose is to perform any wrap-up work necessary before transitioning the restore process to a terminal phase.
## Background
Currently, the restore process enters a terminal phase once all item restoration and plugin operations have been completed. However, there are some wrap-up works that need to be performed after item restoration and plugin operations have been fully executed. There is no suitable opportunity to perform them at present.
To address this, a new finalization phase should be added to the existing restore workflow. In this phase, all plugin operations and item restoration have been fully completed, which provides a clean opportunity to perform any wrap-up work before termination, improving the overall restore process.
Wrap-up tasks in Velero can serve several purposes:
- Post-restore modification - Velero can modify restored data that was temporarily changed for some purpose but needs to be changed back, or data that was newly created but is missing some information. For example, [issue6435](https://github.com/vmware-tanzu/velero/issues/6435) indicates that some custom settings (like labels, reclaim policy) on restored PVs were lost because those restored PVs were newly dynamically provisioned. Velero can address it by patching the PVs' custom settings back in the finalization phase.
- Clean up unused data - Velero can identify and delete any data that are no longer needed after a successful restore in the finalization phase.
- Post-restore validation - Velero can validate the state of restored data and report any errors to help users locate the issue in the finalization phase.
The uses of wrap-up tasks are not limited to these examples. Additional needs may be addressed as they develop over time.
## Goals
- Add the finalization phase and the corresponding controller to restore workflow.
## Non Goals
- Implement the specific wrap-up work.
## High-Level Design
- The finalization phase will be added to current restore workflow.
- The logic for handling current phase transition in restore and restore operations controller will be modified with the introduction of the finalization phase.
- A new restore finalizer controller will be implemented to handle the finalization phase.
## Detailed Design
### phase transition
Two new phases related to finalization will be added to the restore workflow: `FinalizingPartiallyFailed` and `Finalizing`. The new phase transition will be similar to the backup workflow, proceeding as follows:
![image](restore-phases-transition.png)
### restore finalizer controller
The new restore finalizer controller will be implemented to watch for restores in the `FinalizingPartiallyFailed` and `Finalizing` phases. Any wrap-up work that needs to wait for the completion of item restoration and plugin operations will be executed by this controller, and the phase will be set to either `Completed` or `PartiallyFailed` based on the results of these tasks.
Points worth noting about the new restore finalizer controller:
A new structure `finalizerContext` will be created to facilitate the implementation of any wrap-up tasks. It includes all the dependencies the tasks require as well as a function `execute()` to orderly implement task logic.
```
// finalizerContext includes all the dependencies required by wrap-up tasks
type finalizerContext struct {
    .......
    restore *velerov1api.Restore
    log     logrus.FieldLogger
    .......
}

// execute executes all the wrap-up tasks and return the result
func (ctx *finalizerContext) execute() (results.Result, results.Result) {
    // execute task1
    .......
    // execute task2
    .......
    // the task execution logic will be expanded as new tasks are included
    .......
}

// newFinalizerContext returns a finalizerContext object, the parameters will be added as new tasks are included.
func newFinalizerContext(restore *velerov1api.Restore, log logrus.FieldLogger, ...) *finalizerContext {
    return &finalizerContext{
        .......
        restore: restore,
        log:     log,
        .......
    }
}
```
The finalizer controller is responsible for collecting all dependencies and creating a `finalizerContext` object using those dependencies. It then invokes the `execute` function.
```
func (r *restoreFinalizerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    .......
    // collect all dependencies required by wrap-up tasks
    .......
    // create a finalizerContext object and invoke execute()
    finalizerCtx := newFinalizerContext(restore, log, ...)
    warnings, errs := finalizerCtx.execute()
    .......
}
```
After completing all necessary tasks, the result metadata in object storage will be updated if any errors or warnings occurred during the execution. This behavior breaks the feature of keeping metadata files in object storage immutable. However, we believe the tradeoff is justified because it gives users access to the error/warning details when the wrap-up tasks go wrong.
```
// UpdateResults updates the result metadata in object storage if necessary
func (r *restoreFinalizerReconciler) UpdateResults(restore *api.Restore, newWarnings *results.Result, newErrs *results.Result, backupStore persistence.BackupStore) error {
    originResults, err := backupStore.GetRestoreResults(restore.Name)
    if err != nil {
        return errors.Wrap(err, "error getting restore results")
    }

    warnings := originResults["warnings"]
    errs := originResults["errors"]
    warnings.Merge(newWarnings)
    errs.Merge(newErrs)

    m := map[string]results.Result{
        "warnings": warnings,
        "errors":   errs,
    }
    if err := putResults(restore, m, backupStore); err != nil {
        return errors.Wrap(err, "error putting restore results")
    }
    return nil
}
```
## Compatibility
The new finalization phases are added without modifying the existing phases in the restore workflow. Both new and ongoing restore processes will continue to eventually transition to a terminal phase from any prior phase, ensuring backward compatibility.
## Implementation
This will be implemented during the Velero 1.14 development cycle.

Binary file not shown.


View File

@@ -1,111 +0,0 @@
# Backup Restore Status Patch Retrying Configuration
## Abstract
When a backup/restore completes, we want to ensure that the custom resource progresses to the correct status.
If a patch call fails to update status to completion, it should be retried up to a certain time limit.
This design proposes a way to configure timeout for this retry time limit.
## Background
Original Issue: https://github.com/vmware-tanzu/velero/issues/7207
Velero was performing a restore when the API server was rolling out to a new version.
It had trouble connecting to the API server, but eventually, the restore was successful.
However, since the API server was still in the middle of rolling out, Velero failed to update the restore CR status and gave up.
After the connection was restored, it didn't attempt to update, causing the restore CR to be stuck at "In progress" indefinitely.
This can lead to incorrect decisions for other components that rely on the backup/restore CR status to determine completion.
## Goals
- Make timeout configurable for retry patching by reusing existing [`--resource-timeout` server flag](https://github.com/vmware-tanzu/velero/blob/d9ca14747925630664c9e4f85a682b5fc356806d/pkg/cmd/server/server.go#L245)
## Non Goals
- Create a new timeout flag
- Refactor backup/restore workflow
## High-Level Design
We will add retries with a timeout to existing patch calls that move a backup/restore from InProgress to a different status phase such as
- FailedValidation (final)
- Failed (final)
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
- Finalizing
- FinalizingPartiallyFailed
and from the above non-final phases to
- Completed
- PartiallyFailed
Once a backup/restore is in one of the following phases, it is already reconciled again periodically and does not need an additional retry:
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
## Detailed Design
Relevant reconcilers will have `resourceTimeout time.Duration` added to their structs and to the parameters of the New[Backup|Restore]XReconciler functions.
In pkg/cmd/server/server.go, `func (s *server) runControllers(..) error` will also update the New[Backup|Restore]XReconciler calls with the added duration parameter, using the value from the existing `--resource-timeout` server flag.
Current calls to kube.PatchResource involving a status patch will be replaced with kube.PatchResourceWithRetriesOnErrors added to package `kube` below.
Calls where there is a ...client.Patch() will be wrapped with client.RetryOnErrorMaxBackOff() added to package `client` below.
pkg/util/kube/client.go
```go
// PatchResourceWithRetries patches the original resource with the updated resource, retrying when the provided retriable function returns true.
func PatchResourceWithRetries(maxDuration time.Duration, original, updated client.Object, kbClient client.Client, retriable func(error) bool) error {
    return veleroPkgClient.RetryOnRetriableMaxBackOff(maxDuration, func() error { return PatchResource(original, updated, kbClient) }, retriable)
}

// PatchResourceWithRetriesOnErrors patches the original resource with the updated resource, retrying when the operation returns an error.
func PatchResourceWithRetriesOnErrors(maxDuration time.Duration, original, updated client.Object, kbClient client.Client) error {
    return PatchResourceWithRetries(maxDuration, original, updated, kbClient, func(err error) bool {
        // retry using DefaultBackoff to resolve connection refused error that may occur when the server is under heavy load
        // TODO: consider using a more specific error type to retry, for now, we retry on all errors
        // specific errors:
        // - connection refused: https://pkg.go.dev/syscall#:~:text=Errno(0x67)-,ECONNREFUSED,-%3D%20Errno(0x6f
        return err != nil
    })
}
```
pkg/client/retry.go
```go
// CapBackoff provides a backoff with a set backoff cap
func CapBackoff(cap time.Duration) wait.Backoff {
    if cap < 0 {
        cap = 0
    }
    return wait.Backoff{
        Steps:    math.MaxInt,
        Duration: 10 * time.Millisecond,
        Cap:      cap,
        Factor:   retry.DefaultBackoff.Factor,
        Jitter:   retry.DefaultBackoff.Jitter,
    }
}

// RetryOnRetriableMaxBackOff accepts a patch function param, retrying when the provided retriable function returns true.
func RetryOnRetriableMaxBackOff(maxDuration time.Duration, fn func() error, retriable func(error) bool) error {
    return retry.OnError(CapBackoff(maxDuration), func(err error) bool { return retriable(err) }, fn)
}

// RetryOnErrorMaxBackOff accepts a patch function param, retrying when the error is not nil.
func RetryOnErrorMaxBackOff(maxDuration time.Duration, fn func() error) error {
    return RetryOnRetriableMaxBackOff(maxDuration, fn, func(err error) bool { return err != nil })
}
```
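For illustration, a reconciler holding the `resourceTimeout` value could wrap its completion patch roughly like this; the reconciler fields `r.resourceTimeout` and `r.kbClient` are assumptions for the example:
```go
// Sketch: retry the final status patch until it succeeds or resourceTimeout elapses.
original := backup.DeepCopy()
backup.Status.Phase = velerov1api.BackupPhaseCompleted
if err := kube.PatchResourceWithRetriesOnErrors(r.resourceTimeout, original, backup, r.kbClient); err != nil {
    log.WithError(err).Error("error updating backup's final status")
}
```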
## Alternatives Considered
- Requeuing InProgress backups that the current velero instance does not know to still be in progress, marking them as failed (attempted in [#7863](https://github.com/vmware-tanzu/velero/pull/7863))
  - This was deemed to make the backup/restore flow hard to enhance for future reconciler updates, such as adding cancellation or parallel backups.
## Security Considerations
None
## Compatibility
Retries should only apply to a restore or backup that is already in progress in the current instance and is not patching successfully. Prior InProgress backups/restores will not be re-processed and will remain stuck InProgress until there is another velero server (re)start.
## Implementation
There is a past implementation in [#7845](https://github.com/vmware-tanzu/velero/pull/7845/) on which the implementation for this design will be based.

View File

@@ -71,20 +71,6 @@ type ScheduleSpec struct {
}
```
**Note:** The Velero server automatically patches the `skipImmediately` field back to `false` after it's been used. This is because `skipImmediately` is designed to be a one-time operation rather than a persistent state. When the controller detects that `skipImmediately` is set to `true`, it:
1. Sets the flag back to `false`
2. Records the current time in `schedule.Status.LastSkipped`
This "consume and reset" pattern ensures that after skipping one immediate backup, the schedule returns to normal behavior for subsequent runs. The `LastSkipped` timestamp is then used to determine when the next backup should run.
```go
// From pkg/controller/schedule_controller.go
if schedule.Spec.SkipImmediately != nil && *schedule.Spec.SkipImmediately {
    *schedule.Spec.SkipImmediately = false
    schedule.Status.LastSkipped = &metav1.Time{Time: c.clock.Now()}
}
```
`LastSkipped` will be added to `ScheduleStatus` struct to track the last time a schedule was skipped.
```diff
// ScheduleStatus captures the current state of a Velero schedule
@@ -111,8 +97,6 @@ type ScheduleStatus struct {
}
```
The `LastSkipped` field is crucial for the schedule controller to determine the next run time. When a backup is skipped, this timestamp is used instead of `LastBackup` to calculate when the next backup should occur, ensuring the schedule maintains its intended cadence even after skipping a backup.
When `schedule.spec.SkipImmediately` is `true`, `LastSkipped` will be set to the current time, and `schedule.spec.SkipImmediately` set to nil so it can be used again.
The `getNextRunTime()` function below is updated so that `LastSkipped`, when it is after `LastBackup`, is used to determine the next run time.
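The updated function itself is not shown in this excerpt; a minimal sketch of the described behavior, with assumed names and signature, might look like:
```go
// Sketch of the described behavior: the later of LastBackup and LastSkipped is
// used as the base time for the cron calculation. Names and signature are assumed.
func getNextRunTime(schedule *velerov1api.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) {
    lastRun := schedule.Status.LastBackup
    if schedule.Status.LastSkipped != nil && (lastRun == nil || schedule.Status.LastSkipped.After(lastRun.Time)) {
        lastRun = schedule.Status.LastSkipped
    }
    base := schedule.CreationTimestamp.Time
    if lastRun != nil {
        base = lastRun.Time
    }
    next := cronSchedule.Next(base)
    return asOf.After(next), next
}
```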

View File

@@ -1,84 +0,0 @@
# Adding Support For VolumeAttributes in Resource Policy
## Abstract
Currently [Velero Resource policies](https://velero.io/docs/main/resource-filtering/#creating-resource-policies) only support filtering on the "Driver" field for [CSI volume conditions](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources_validator.go#L28).
If users want to skip certain CSI volumes based on other volume attributes like protocol or SKU, they can't do it with the current Velero resource policies. It would be convenient if Velero resource policies could be extended to filter on volume attributes along with the existing driver filter in the resource policies `conditions`, so that the backup of volumes can be handled by specific volume attribute conditions.
## Background
As of today, Velero resource policies already provide a way to filter volumes based on the `driver` name. But that's not enough to handle volumes based on other volume attributes like protocol, SKU, etc.
## Example:
- Provision Azure NFS: Define the Storage class with `protocol: nfs` under storage class parameters to provision [CSI NFS Azure File Shares](https://learn.microsoft.com/en-us/azure/aks/azure-files-csi#nfs-file-shares).
- User wants to back up AFS (Azure file shares) but only want to backup `SMB` type of file share volumes and not `NFS` file share volumes.
## Goals
- We are only adding support in the resource policy to handle volumes during backup.
- Introducing support for `VolumeAttributes` filter along with `driver` filter in CSI volume conditions to handle volumes.
## Non-Goals
- This currently only handles volumes, and does not support other resources.
## Use-cases/Scenarios
### Skip backup volumes by some volume attributes:
Users want to skip PV with the requirements:
- an option to skip specified PVs based on volume attributes (like protocol being NFS, SMB, etc.)
### Sample Storage Class Used to create such Volumes
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  protocol: nfs
```
## High-Level Design
Modify the existing Resource Policies code for [csiVolumeSource](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources_validator.go#L28C6-L28C22) to add the new `VolumeAttributes` filter for CSI volumes, and add validations in the existing [csiCondition](https://github.com/vmware-tanzu/velero/blob/8e23752a6ea83f101bd94a69dcf17f519a805388/internal/resourcepolicies/volume_resources.go#L150) to match the volume attributes in the conditions from the Resource Policy ConfigMap against the original persistent volume.
## Detailed Design
The volume resource policies contain a list of policies, each a combination of conditions and a related `action`; when a target volume meets the conditions, the related `action` takes effect.
Below is the API Design for the user configuration:
### API Design
```go
type csiVolumeSource struct {
	Driver string `yaml:"driver,omitempty"`
	// [NEW] CSI volume attributes
	VolumeAttributes map[string]string `yaml:"volumeAttributes,omitempty"`
}
```
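To illustrate the intended matching behavior, here is a minimal, hedged sketch (the helper name and simplified types are illustrative, not the actual Velero code): a condition matches a volume only when the driver matches and every configured attribute is present on the volume with the same value.
```go
// Illustrative, simplified condition/volume shape for the sketch below.
type csiSource struct {
	Driver           string
	VolumeAttributes map[string]string
}

// matchCSICondition reports whether a volume's CSI source satisfies the
// condition: the driver must match when set, and every key/value pair in the
// condition's VolumeAttributes must be present on the volume with the same value.
func matchCSICondition(condition, volume *csiSource) bool {
	if condition == nil {
		return true
	}
	if condition.Driver != "" && condition.Driver != volume.Driver {
		return false
	}
	for key, value := range condition.VolumeAttributes {
		if volume.VolumeAttributes[key] != value {
			return false
		}
	}
	return true
}
```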
The policies YAML config file would look like this:
```yaml
version: v1
volumePolicies:
- conditions:
    csi:
      driver: disk.csi.azure.com
  action:
    type: skip
- conditions:
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        protocol: nfs
  action:
    type: skip
```
### New Supported Conditions
#### VolumeAttributes
The existing CSI volume condition can now include `volumeAttributes`, which is a map of key/value pairs.
Specify details for the related volume source (previously only the CSI `driver` filter was supported):
```yaml
csi: # matches volumes provisioned by `file.csi.azure.com` with volumeAttribute protocol set to nfs
  driver: file.csi.azure.com
  volumeAttributes:
    protocol: nfs
```


@@ -177,54 +177,5 @@ Roughly, the process is as follows:
4. Each respective controller for the CRs calls the uploader, and the WriteSparseFiles value from the map in the CRs is passed to the uploader.
5. When the uploader subsequently calls the Kopia API, it can use WriteSparseFiles to set the corresponding Kopia parameter; if the uploader calls the Restic command, it appends the `--sparse` flag to the restore command.
### Parallel Restore
Setting the parallelism of restore operations can improve the efficiency and speed of the restore process, especially when dealing with large amounts of data.
### Velero CLI
The Velero CLI will support a `--parallel-files-download` flag, allowing users to set the parallelism value when creating restores. When no value is specified, it defaults to the number of CPUs of the node that the node-agent pod is running on.
```bash
velero restore create --parallel-files-download $num
```
### UploaderConfig
Below, the sub-option `ParallelFilesDownload` is added to `UploaderConfig`:
```go
type UploaderConfigForRestore struct {
	// ParallelFilesDownload is the number of parallel file downloads during the restore.
	// +optional
	ParallelFilesDownload int `json:"parallelFilesDownload,omitempty"`
}
```
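As a rough sketch of the default behavior described above (the helper name is hypothetical, not an actual Velero function), the effective parallelism could be resolved like this:
```go
import "runtime"

// resolveParallelFilesDownload falls back to the CPU count of the node running
// the node-agent pod when the user did not pass --parallel-files-download.
func resolveParallelFilesDownload(configured int) int {
	if configured > 0 {
		return configured
	}
	return runtime.NumCPU()
}
```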
#### Kopia Parallel Restore Policy
The Velero uploader can set restore options when calling Kopia APIs; Kopia's `restore.Options` exposes a `Parallel` field, which the uploader populates from the uploader config:
```go
// first get the restore concurrency from the uploader config
restoreConcurrency, _ := uploaderutil.GetRestoreConcurrency(uploaderCfg)
// set the restore concurrency into the restore options
restoreOpt := restore.Options{
	Parallel: restoreConcurrency,
}
// do restore with restore option
restore.Entry(..., restoreOpt)
```
#### Restic Parallel Restore Policy
Configurable parallel restore is not supported by Restic, so we return an error if the option is configured.
```go
restoreConcurrency, err := uploaderutil.GetRestoreConcurrency(uploaderCfg)
if err != nil {
	return extraFlags, errors.Wrap(err, "failed to get uploader config")
}
if restoreConcurrency > 0 {
	return extraFlags, errors.New("restic does not support parallel restore")
}
```
## Alternatives Considered
To enhance extensibility further, the option of storing `UploaderConfig` in a Kubernetes ConfigMap can be explored; this approach would allow configuration options to be added and modified without changing the CRD.


@@ -1,257 +0,0 @@
# Velero Generic Data Path Load Affinity Enhancement Design
## Glossary & Abbreviation
**Velero Generic Data Path (VGDP)**: VGDP is the collective set of modules introduced in the [Unified Repository design][1]. Velero uses these modules to transfer data for various purposes (e.g., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Exposer**: Exposer is a module introduced in the [Volume Snapshot Data Movement design][2]. Velero uses this module to expose volume snapshots to Velero node-agent pods or node-agent associated pods so as to complete the data movement from the snapshots.
## Background
The implemented [VGDP LoadAffinity design][3] already defines a `LoadAffinity` structure in the ConfigMap referenced by the `--node-agent-configmap` parameter. That configuration is used to set the affinity of the VGDP backupPod.
There are still some limitations of this design:
* The affinity setting is global. Suppose there are two StorageClasses whose underlying storage can only provision volumes on a subset of the cluster nodes, and the two subsets don't intersect; a single global affinity setting then cannot work for both.
* The old design focuses on the backupPod affinity, but the restorePod also needs the affinity setting.
This design is created to address these limitations.
## Goals
- Enhance the node affinity of VGDP instances for volume snapshot data movement: add per StorageClass node affinity.
- Enhance the node affinity of VGDP instances for volume snapshot data movement: support OR logic between affinity selectors.
- Define the behavior of VGDP instance node affinity in the node-agent for volume snapshot data movement restore when the PVC restore doesn't require delayed binding.
## Non-Goals
- It would also be beneficial to support VGDP instance affinity for PodVolume backup/restore; this will be implemented after the PodVolume micro-service work completes.
## Solution
This design still uses the ConfigMap specified by `velero node-agent` CLI's parameter `--node-agent-configmap` to host the node affinity configurations.
On top of the `[]*LoadAffinity` structure introduced by the implemented [VGDP LoadAffinity design][3], this design adds a new optional field, `StorageClass`.
* If a `LoadAffinity` element's `StorageClass` has no value, the element applies globally, just as in the old design.
* If a `LoadAffinity` element's `StorageClass` has a value, the element applies only to VGDP instances whose PVCs use the specified StorageClass.
* A `LoadAffinity` element whose `StorageClass` has a value takes priority over one whose `StorageClass` is empty.
```go
type Configs struct {
	// LoadConcurrency is the config for load concurrency per node.
	LoadConcurrency *LoadConcurrency `json:"loadConcurrency,omitempty"`

	// LoadAffinity is the config for data path load affinity.
	LoadAffinity []*LoadAffinity `json:"loadAffinity,omitempty"`
}

type LoadAffinity struct {
	// NodeSelector specifies the label selector to match nodes
	NodeSelector metav1.LabelSelector `json:"nodeSelector"`
}
```
The extended structure adds the new optional `StorageClass` field:
```go
type LoadAffinity struct {
	// NodeSelector specifies the label selector to match nodes
	NodeSelector metav1.LabelSelector `json:"nodeSelector"`

	// StorageClass specifies which VGDP instances this LoadAffinity applies to.
	// If StorageClass is empty, it applies to all VGDP instances; otherwise it
	// applies only to VGDP instances whose volumes use this StorageClass.
	StorageClass string `json:"storageClass"`
}
```
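A minimal sketch of the selection rule described above, assuming the `LoadAffinity` type shown here (the helper name is illustrative, not the actual Velero exposer code):
```go
// selectLoadAffinity returns the affinity elements that should apply to a VGDP
// instance: elements whose StorageClass matches the volume take priority over
// global elements with an empty StorageClass.
func selectLoadAffinity(affinities []*LoadAffinity, storageClass string) []*LoadAffinity {
	matched := []*LoadAffinity{}
	global := []*LoadAffinity{}
	for _, a := range affinities {
		switch a.StorageClass {
		case storageClass:
			matched = append(matched, a)
		case "":
			global = append(global, a)
		}
	}
	if len(matched) > 0 {
		return matched // per-StorageClass elements have the highest priority
	}
	return global // fall back to global elements; may be empty (no constraint)
}
```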
### Decision Tree
```mermaid
flowchart TD
A[VGDP Pod Needs Scheduling] --> B{Is this a restore operation?}
B -->|Yes| C{StorageClass has volumeBindingMode: WaitForFirstConsumer?}
B -->|No| D[Backup Operation]
C -->|Yes| E{restorePVC.ignoreDelayBinding = true?}
C -->|No| F[StorageClass binding mode: Immediate]
E -->|No| G[Wait for target Pod scheduling<br/>Use Pod's selected node<br/>⚠️ Affinity rules ignored]
E -->|Yes| H[Apply affinity rules<br/>despite WaitForFirstConsumer]
F --> I{Check StorageClass in loadAffinity by StorageClass field}
H --> I
D --> J{Using backupPVC with different StorageClass?}
J -->|Yes| K[Use final StorageClass<br/>for affinity lookup]
J -->|No| L[Use original PVC StorageClass<br/>for affinity lookup]
K --> I
L --> I
I -->|StorageClass found| N[Filter the LoadAffinity by <br/>the StorageClass<br/>🎯 and apply the LoadAffinity HIGHEST PRIORITY]
I -->|StorageClass not found| O{Check loadAffinity element without StorageClass field}
O -->|No loadAffinity configured| R[No affinity constraints<br/>Schedule on any available node<br/>🌐 DEFAULT]
O -->|Global element found<br/>Apply global affinity| V[Validate node-agent availability<br/>⚠️ Ensure node-agent pods exist on target nodes]
N --> V
V --> W{Node-agent available on selected nodes?}
W -->|Yes| X[✅ VGDP Pod scheduled successfully]
W -->|No| Y[❌ Pod stays in Pending state<br/>Timeout after 30min<br/>Check node-agent DaemonSet coverage]
R --> Z[Schedule on any node<br/>✅ Basic scheduling]
%% Styling
classDef successNode fill:#d4edda,stroke:#155724,color:#155724
classDef warningNode fill:#fff3cd,stroke:#856404,color:#856404
classDef errorNode fill:#f8d7da,stroke:#721c24,color:#721c24
classDef priorityHigh fill:#e7f3ff,stroke:#0066cc,color:#0066cc
classDef priorityMedium fill:#f0f8ff,stroke:#4d94ff,color:#4d94ff
classDef priorityDefault fill:#f8f9fa,stroke:#6c757d,color:#6c757d
class X,Z successNode
class G,V,Y warningNode
class Y errorNode
class N priorityHigh
class R priorityDefault
```
### Examples
#### LoadAffinity interacts with LoadAffinityPerStorageClass
```json
{
  "loadAffinity": [
    {
      "nodeSelector": {
        "matchLabels": {
          "beta.kubernetes.io/instance-type": "Standard_B4ms"
        }
      }
    },
    {
      "nodeSelector": {
        "matchExpressions": [
          {
            "key": "kubernetes.io/os",
            "values": [
              "linux"
            ],
            "operator": "In"
          }
        ]
      },
      "storageClass": "kibishii-storage-class"
    },
    {
      "nodeSelector": {
        "matchLabels": {
          "beta.kubernetes.io/instance-type": "Standard_B8ms"
        }
      },
      "storageClass": "kibishii-storage-class"
    }
  ]
}
```
This sample demonstrates how `loadAffinity` elements with and without the `StorageClass` field work together.
If the volume mounted by a VGDP instance is created from StorageClass `kibishii-storage-class`, its pod will run on Linux nodes or on nodes whose instance type is `Standard_B8ms`.
All other VGDP instances will run on nodes whose instance type is `Standard_B4ms`.
#### LoadAffinity interacts with BackupPVC
```json
{
  "loadAffinity": [
    {
      "nodeSelector": {
        "matchLabels": {
          "beta.kubernetes.io/instance-type": "Standard_B4ms"
        }
      },
      "storageClass": "kibishii-storage-class"
    },
    {
      "nodeSelector": {
        "matchLabels": {
          "beta.kubernetes.io/instance-type": "Standard_B2ms"
        }
      },
      "storageClass": "worker-storagepolicy"
    }
  ],
  "backupPVC": {
    "kibishii-storage-class": {
      "storageClass": "worker-storagepolicy"
    }
  }
}
```
The Velero data mover supports using a different StorageClass to create the backupPVC, per this [design](https://github.com/vmware-tanzu/velero/pull/7982).
In this example, if the backup target PVC's StorageClass is `kibishii-storage-class`, its backupPVC uses StorageClass `worker-storagepolicy`. Because the final StorageClass is `worker-storagepolicy`, the backupPod uses the `loadAffinity` element whose `storageClass` field is set to `worker-storagepolicy`, and the backupPod will be assigned to nodes whose instance type is `Standard_B2ms`.
#### LoadAffinity interacts with RestorePVC
```json
{
  "loadAffinity": [
    {
      "nodeSelector": {
        "matchLabels": {
          "beta.kubernetes.io/instance-type": "Standard_B4ms"
        }
      },
      "storageClass": "kibishii-storage-class"
    }
  ],
  "restorePVC": {
    "ignoreDelayBinding": false
  }
}
```
##### StorageClass's volumeBindingMode is WaitForFirstConsumer
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kibishii-storage-class
parameters:
  svStorageClass: worker-storagepolicy
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
The restorePVC is created from StorageClass `kibishii-storage-class`, whose volumeBindingMode is `WaitForFirstConsumer`.
Although the `loadAffinity` configuration has an element matching the StorageClass, `ignoreDelayBinding` is set to `false`, so the Velero exposer waits until the target Pod is scheduled to a node and returns that node as the SelectedNode for the restorePVC.
As a result, the per-StorageClass load affinity does not take effect.
##### StorageClass's volumeBindingMode is Immediate
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kibishii-storage-class
parameters:
  svStorageClass: worker-storagepolicy
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
Because the StorageClass's volumeBindingMode is `Immediate`, the restorePVC is not bound according to the target Pod even though `ignoreDelayBinding` is set to `false`.
The restorePod will be assigned to nodes whose instance type is `Standard_B4ms`.
[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: Implemented/volume-snapshot-data-movement/volume-snapshot-data-movement.md
[3]: Implemented/node-agent-affinity.md
