Compare commits


25 Commits

Author SHA1 Message Date
Xun Jiang/Bruce Jiang
f2fc105094 Merge pull request #4693 from ywk253100/220223_changelog
Generate changelog for 1.7.2
2022-02-23 15:23:35 +08:00
Wenkai Yin(尹文开)
dd41c75118 Generate changelog for 1.7.2

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-23 15:11:18 +08:00
Wenkai Yin(尹文开)
9bf3aa8600 Merge pull request #4679 from ywk253100/release-test-2
Append "-dev" suffix for the image tag of release branches
2022-02-22 09:35:37 +08:00
Wenkai Yin(尹文开)
ec8c4cf3a5 Append "-dev" suffix for the image tag of release branches
Append "-dev" suffix for the image tag of release branches: release-1.0-dev

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-21 19:25:05 +08:00
Wenkai Yin(尹文开)
3845f205cf Merge pull request #4674 from ywk253100/220221_nil_value
Check for nil before logging DefaultVolumesToRestic value
2022-02-21 17:34:47 +08:00
Wenkai Yin(尹文开)
bc94c8784b Check for nil before logging DefaultVolumesToRestic value

Fixes #4617

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-21 17:17:58 +08:00
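A minimal Go sketch of the nil guard this fix describes, assuming DefaultVolumesToRestic is an optional *bool field on the backup spec; the type and logging below are illustrative stand-ins, not the actual Velero code:

package main

import "fmt"

// BackupSpec mimics the relevant slice of Velero's API type: DefaultVolumesToRestic
// is a *bool, so "unset" (nil) is distinguishable from an explicit false.
type BackupSpec struct {
	DefaultVolumesToRestic *bool
}

func logDefaultVolumesToRestic(spec BackupSpec) {
	// Dereferencing a nil *bool panics, so check for nil before formatting the value.
	if spec.DefaultVolumesToRestic == nil {
		fmt.Println("DefaultVolumesToRestic: <nil>")
		return
	}
	fmt.Printf("DefaultVolumesToRestic: %t\n", *spec.DefaultVolumesToRestic)
}

func main() {
	logDefaultVolumesToRestic(BackupSpec{}) // nil value: must not panic
	t := true
	logDefaultVolumesToRestic(BackupSpec{DefaultVolumesToRestic: &t}) // prints true
}
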
Wenkai Yin(尹文开)
12a8c17137 Merge pull request #4673 from ywk253100/220221_push_patch
Bug fixing, only check whether the tag is the latest version when tag isn't empty
2022-02-21 17:09:15 +08:00
Wenkai Yin(尹文开)
be3e4cc391 Bug fixing, only check whether the tag is the latest version when tag isn't empty

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-21 16:49:41 +08:00
Daniel Jiang
87b84e29ae Merge pull request #4672 from ywk253100/220221_push
Enable building and pushing image for release branches
2022-02-21 15:29:14 +08:00
Wenkai Yin(尹文开)
212550c5a9 Enable building and pushing image for release branches

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-21 14:49:20 +08:00
Daniel Jiang
f9f9c291f2 Merge pull request #4668 from ywk253100/220218_go
Pin the golang version to patch version for the image used by make
2022-02-18 16:30:52 +08:00
Wenkai Yin(尹文开)
b20bbdaa80 Pin the golang version to patch version for the image used by make

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-18 16:10:43 +08:00
Daniel Jiang
1236a38daf Merge pull request #4667 from ywk253100/220218_golang
Bump up golang to 1.17.7
2022-02-18 15:34:47 +08:00
Wenkai Yin(尹文开)
599b686596 Bump up golang to 1.17.7

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2022-02-18 11:33:25 +08:00
Daniel Jiang
4729274d07 Merge pull request #4385 from ywk253100/211122_rc
Add change log for 1.7.1
2021-11-22 17:30:00 +08:00
Wenkai Yin(尹文开)
cdf3acab5a Add change log for 1.7.1
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-11-22 15:36:14 +08:00
Daniel Jiang
80b43f8f40 Merge pull request #4358 from ywk253100/211117_pager
[cherry-pick]fix buggy pager func
2021-11-17 16:05:28 +08:00
Alay Patel
bf10709f98 add 4358 changelog
Signed-off-by: Alay Patel <alay1431@gmail.com>
2021-11-17 15:00:40 +08:00
Alay Patel
8c6ed31528 - fix buggy pager func
fix paging to use the list options passed in by the paging function

The client-go pager sets the Limit options for the list call
to paginate the request[1]. This PR fixes the paging function
to use the options passed by the pager instead of the shadowed options.
This is required for the pagination to work correctly.

- simplify the pager list implementation by using pager.List()
The List() function already implements a lot of the logic that was
needed for paging here, using it simplifies the code.

1. 3f40906dd8/staging/src/k8s.io/client-go/tools/pager/pager.go (L219)

Signed-off-by: Alay Patel <alay1431@gmail.com>
2021-11-17 14:58:13 +08:00
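A hedged Go sketch of the pattern this fix describes, using client-go's pager package: the page function must use the opts argument the pager supplies (which carries Limit and the continue token), not an options value captured and shadowed from an outer scope. The Pods list below is illustrative; the actual Velero code pages a different resource:

// Package pagerfix sketches the fix; it compiles against client-go but is not
// the actual Velero code.
package pagerfix

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/pager"
)

func listPodsPaged(ctx context.Context, client kubernetes.Interface, ns string) error {
	p := pager.New(func(ctx context.Context, opts metav1.ListOptions) (runtime.Object, error) {
		// Buggy shape: returning client.CoreV1().Pods(ns).List(ctx, outerOpts),
		// where outerOpts shadows the pager-supplied opts, drops Limit/Continue.
		// Fixed shape: pass through the opts the pager hands us.
		return client.CoreV1().Pods(ns).List(ctx, opts)
	})
	// pager.List fetches each page and follows the continue token for us.
	_, _, err := p.List(ctx, metav1.ListOptions{})
	return err
}
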
Wenkai Yin(尹文开)
37a712ef2f Fix CVE-2020-29652 and CVE-2020-26160 (#4315)
Bump up restic to v0.12.1 to fix CVE-2020-26160.
Bump up module "github.com/vmware-tanzu/crash-diagnostics" to v0.3.7 to fix CVE-2020-29652.
The "github.com/vmware-tanzu/crash-diagnostics" updates client-go to v0.22.2 which introduces several break changes, this commit updates the related codes as well

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-11-09 17:04:25 -08:00
Frangipani Gold
1da212b0e3 Namespace validation now allows asterisks and empty string (#4316)
Validation allows empty string namespace

Signed-off-by: F. Gold <fgold@vmware.com>
2021-11-08 09:34:05 -08:00
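An illustrative Go sketch of the relaxed validation this commit describes, assuming the rule is simply to accept "" (unset) and "*" (match all) before applying the usual namespace-name checks; validateNamespaceName is a hypothetical helper, not Velero's actual function:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateNamespaceName (hypothetical): "" and "*" pass through; anything else
// must be a valid DNS-1123 label, the rule Kubernetes namespace names follow.
func validateNamespaceName(ns string) error {
	if ns == "" || ns == "*" {
		return nil
	}
	if errs := validation.IsDNS1123Label(ns); len(errs) > 0 {
		return fmt.Errorf("invalid namespace %q: %v", ns, errs)
	}
	return nil
}

func main() {
	for _, ns := range []string{"", "*", "kube-system", "Bad_Name"} {
		fmt.Printf("%q -> %v\n", ns, validateNamespaceName(ns))
	}
}
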
Daniel Jiang
9996dc5ce9 Comment in Dockerfile to explain the digest of base image (#4224)
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2021-10-08 08:57:29 -04:00
Wenkai Yin(尹文开)
9e52260568 Merge pull request #4182 from ywk253100/210922_snapshot_cherrypick
Specify the "--snapshot-volumes=false" option explicitly when running backup with Restic
2021-09-22 22:00:31 +08:00
Wenkai Yin(尹文开)
4863ff4119 Specify the "--snapshot-volumes=false" option explicitly when running backup with Restic
If the "--snapshot-volumes=false" isn't specified explicitly, the vSphere plugin will always take snapshots for the volumes even though the "--default-volumes-to-restic" is specified
This can be removed if the logic of vSphere plugin changes

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-09-22 21:50:54 +08:00
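A small Go sketch of the workaround, under the assumption (implied by the commit) that both flags map to optional *bool fields on the backup spec, so an explicit false is distinguishable from unset; the types here are illustrative:

package main

import "fmt"

func boolPtr(b bool) *bool { return &b }

// BackupSpec mimics the two optional flags from the commit message. Both are
// pointers, so leaving SnapshotVolumes nil means "unset" — which, per the
// commit, the vSphere plugin treats as "take volume snapshots anyway".
type BackupSpec struct {
	SnapshotVolumes        *bool
	DefaultVolumesToRestic *bool
}

func main() {
	// The workaround: when volumes default to restic, also set SnapshotVolumes
	// to an explicit false rather than leaving it unset.
	spec := BackupSpec{
		DefaultVolumesToRestic: boolPtr(true),
		SnapshotVolumes:        boolPtr(false),
	}
	fmt.Printf("snapshotVolumes=%t defaultVolumesToRestic=%t\n",
		*spec.SnapshotVolumes, *spec.DefaultVolumesToRestic)
}
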
Daniel Jiang
3327d209f7 Pin the base image for v1.7 (#4180)
To improve the reproducibility of the images of velero, this commit pins
the golang and distroless images to specific tag and digest.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2021-09-22 07:50:07 -04:00
2244 changed files with 36905 additions and 287162 deletions


@@ -5,18 +5,15 @@ about: Tell us about a problem you are experiencing
---
**What steps did you take and what happened:**
<!--A clear and concise description of what the bug is, and what commands you ran.-->
[A clear and concise description of what the bug is, and what commands you ran.)
**What did you expect to happen:**
**The following information will help us better understand what's going on**:
_If you are using velero v1.7.0+:_
Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle, and attach to this issue, more options please refer to `velero debug --help`
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
_If you are using earlier versions:_
Please provide the output of the following commands (Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
- `kubectl logs deployment/velero -n velero`
- `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
- `velero backup logs <backupname>`
@@ -25,7 +22,7 @@ Please provide the output of the following commands (Pasting long output into a
**Anything else you would like to add:**
<!--Miscellaneous information that will assist in solving the issue.-->
[Miscellaneous information that will assist in solving the issue.]
**Environment:**


@@ -5,15 +5,15 @@ about: Suggest an idea for this project
---
**Describe the problem/challenge you have**
<!--A description of the current limitation/problem/challenge that you are experiencing.-->
[A description of the current limitation/problem/challenge that you are experiencing.]
**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen.-->
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
<!--Miscellaneous information that will assist in solving the issue.-->
[Miscellaneous information that will assist in solving the issue.]
**Environment:**


@@ -9,20 +9,15 @@ reviewers:
groups:
maintainers:
- zubron
- dsu-igeek
- jenting
- sseago
- reasonerjt
- ywk253100
- blackpiglet
- shubham-pampattiwar
- Lyndon-Li
- anshulahuja98
- kaovilai
tech-writer:
- sseago
- reasonerjt
- ywk253100
- Lyndon-Li
- a-mccarthy
files:
'site/**':


@@ -1,21 +0,0 @@
version: 2
updates:
# Dependencies listed in .github/workflows
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
labels:
- "Dependencies"
- "github_actions"
- "kind/changelog-not-required"
# Dependencies listed in go.mod
- package-ecosystem: "gomod"
directory: "/" # Location of package manifests
schedule:
interval: "weekly"
labels:
- "kind/changelog-not-required"
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-major", "version-update:semver-minor", "version-update:semver-patch"]

.github/labeler.yml

@@ -1,33 +0,0 @@
# This file is used by Auto Label PRs action.
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- changed-files:
- any-glob-to-any-file: design/*
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- changed-files:
- any-glob-to-any-file: pkg/plugins/**/*
Dependencies:
- changed-files:
- any-glob-to-any-file: go.mod
Documentation:
- changed-files:
- any-glob-to-any-file: site/content/docs/**/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- all:
- changed-files:
- any-glob-to-any-file: site/**/*
- all-globs-to-all-files: '!site/content/docs/**/*'
has-changelog:
- changed-files:
- any-glob-to-any-file: changelogs/**
has-e2e-2tests:
- changed-files:
- any-glob-to-any-file: test/e2e/**/*
has-unit-tests:
- changed-files:
- any-glob-to-any-file: pkg/**/*_test.go

.github/labels.yaml

@@ -1,43 +0,0 @@
# This file is used by [prow github action](https://github.com/jpmcb/prow-github-actions/) in .github/workflows/prow-action.yml.
# This file only has values for kind and area commands.
area:
- CLI
- CSI
- Cloud/AWS
- Cloud/Azure
- Cloud/DigitalOcean
- Cloud/GCP
- Cloud/vSphere
- Design
- Documentation
- Filters
- Plugins
- Process
- Storage/Minio
- Storage/Cinder
- WindowsSupport
- datamover
- fs-backup
- fs-backup/deletion
- fs-backup/file-selectable
- fs-uploader
- kopia-integration
- migration
- multi-tenancy
- progress-monitoring
- resilience
- schedule
- storage/IBM-ObjectStorage
- upgrade
- volume-snapshot-dm
kind:
- changelog-not-required
- question
- refactor
- requirement
- release-note
- release-blocker
- spike
- tech-debt
- usage-error
- voting

.github/labels.yml

@@ -0,0 +1,41 @@
area:
- "Cloud/AWS"
- "Cloud/GCP"
- "Cloud/Azure"
- "Design"
- "Plugins"
# Labels that can be applied to PRs with the /kind command
kind:
- "changelog-not-required"
- "tech-debt"
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- design/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- any: ["site/**/*", "!site/content/docs/**/*"]
Documentation:
- site/content/docs/**/*
Dependencies:
- go.mod
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- "pkg/plugins/**/*"
has-unit-tests:
- "pkg/**/*_test.go"
has-e2e-2tests:
- "test/e2e/**/*"
has-changelog:
- "changelogs/**"


@@ -9,5 +9,5 @@ Fixes #(issue)
# Please indicate you've done the following:
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file (`make new-changelog`)](https://velero.io/docs/main/code-standards/#adding-a-changelog) or comment `/kind changelog-not-required` on this PR.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required`.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.

.github/stale.yml

@@ -0,0 +1,38 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- Epic
- Area/CLI
- Area/Cloud/AWS
- Area/Cloud/Azure
- Area/Cloud/GCP
- Area/Cloud/vSphere
- Area/CSI
- Area/Design
- Area/Documentation
- Area/Plugins
- Enhancement/User
- kind/tech-debt
- Needs investigation
- P0 - Hair on fire
- P1 - Important
- P2 - Long-term important
- P3 - Wouldn't it be nice if...
- Product Requirements
- Restic - GA
- Restic
- release-blocker
- Security
# Label to use when marking an issue as stale
staleLabel: staled
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Closing the stale issue.


@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set the author of a PR as the assignee
uses: kentaro-m/auto-assign-action@v2.0.0
uses: kentaro-m/auto-assign-action@v1.1.1
with:
configuration-path: ".github/auto-assignees.yml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"


@@ -13,7 +13,7 @@ jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@v3
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/labeler.yml
configuration-path: .github/labels.yml


@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Request a PR review based on files types/paths, and/or groups the author belongs to
uses: necojackarc/auto-request-review@v0.13.0
uses: necojackarc/auto-request-review@v0.7.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
config: .github/auto-assignees.yml

.github/workflows/crds-verify-kind.yaml

@@ -0,0 +1,94 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number, and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
version: "v0.11.1"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -


@@ -6,43 +6,42 @@ on:
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
# Build the Velero CLI and image once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build:
runs-on: ubuntu-latest
needs: get-go-version
outputs:
minio-dockerfile-sha: ${{ steps.minio-version.outputs.dockerfile_sha }}
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version: 1.17
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./_output/bin/linux/amd64/velero
# The cache key a combination of the current PR number and the commit SHA
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built image
id: image-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./velero.tar
# The cache key a combination of the current PR number and the commit SHA
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cli-cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cli-cache.outputs.cache-hit != 'true' || steps.image-cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cli-cache.outputs.cache-hit != 'true'
@@ -52,107 +51,63 @@ jobs:
- name: Build Velero Image
if: steps.image-cache.outputs.cache-hit != 'true'
run: |
IMAGE=velero VERSION=pr-test BUILD_OUTPUT_TYPE=docker make container
docker save velero:pr-test-linux-amd64 -o ./velero.tar
# Check and build MinIO image once for all e2e tests
- name: Check Bitnami MinIO Dockerfile version
id: minio-version
run: |
DOCKERFILE_SHA=$(curl -s https://api.github.com/repos/bitnami/containers/commits?path=bitnami/minio/2025/debian-12/Dockerfile\&per_page=1 | jq -r '.[0].sha')
echo "dockerfile_sha=${DOCKERFILE_SHA}" >> $GITHUB_OUTPUT
- name: Cache MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ steps.minio-version.outputs.dockerfile_sha }}
- name: Build MinIO Image from Bitnami Dockerfile
if: steps.minio-cache.outputs.cache-hit != 'true'
run: |
echo "Building MinIO image from Bitnami Dockerfile..."
git clone --depth 1 https://github.com/bitnami/containers.git /tmp/bitnami-containers
cd /tmp/bitnami-containers/bitnami/minio/2025/debian-12
docker build -t bitnami/minio:local .
docker save bitnami/minio:local > ${{ github.workspace }}/minio-image.tar
# Create json of k8s versions to test
# from guide: https://stackoverflow.com/a/65094398/4590470
setup-test-matrix:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set k8s versions
id: set-matrix
# everything excluding older tags. limits needs to be high enough to cover all latest versions
# and test labels
# grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" filters for v1.25 to v9.99
# and removes older patches of the same minor version
# awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}'
run: |
echo "matrix={\
\"k8s\":$(wget -q -O - "https://hub.docker.com/v2/namespaces/kindest/repositories/node/tags?page_size=50" | grep -o '"name": *"[^"]*' | grep -o '[^"]*$' | grep -v -E "alpha|beta" | grep -E "v[1-9]\.(2[5-9]|[3-9][0-9])" | awk -F. '{if(!a[$1"."$2]++)print $1"."$2"."$NF}' | sort -r | sed s/v//g | jq -R -c -s 'split("\n")[:-1]'),\
\"labels\":[\
\"Basic && (ClusterResource || NodePort || StorageClass)\", \
\"ResourceFiltering && !Restic\", \
\"ResourceModifier || (Backups && BackupsSync) || PrivilegesMgmt || OrderedResources\", \
\"(NamespaceMapping && Single && Restic) || (NamespaceMapping && Multiple && Restic)\"\
]}" >> $GITHUB_OUTPUT
# Run E2E test against all Kubernetes versions on kind
IMAGE=velero VERSION=pr-test make container
docker save velero:pr-test -o ./velero.tar
# Run E2E test against all kubernetes versions on kind
run-e2e-test:
needs:
- build
- setup-test-matrix
- get-go-version
needs: build
runs-on: ubuntu-latest
strategy:
matrix: ${{fromJson(needs.setup-test-matrix.outputs.matrix)}}
matrix:
k8s:
# doesn't cover 1.15 as 1.15 doesn't support "apiextensions.k8s.io/v1" that is needed for the case
#- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
- 1.21.1
- 1.22.0
fail-fast: false
steps:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
# Fetch the pre-built MinIO image from the build job
- name: Fetch built MinIO Image
uses: actions/cache@v4
id: minio-cache
with:
path: ./minio-image.tar
key: minio-bitnami-${{ needs.build.outputs.minio-dockerfile-sha }}
- name: Load MinIO Image
run: |
echo "Loading MinIO image..."
docker load < ./minio-image.tar
uses: actions/checkout@v2
- name: Install MinIO
run: |
docker run -d --rm -p 9000:9000 -e "MINIO_ROOT_USER=minio" -e "MINIO_ROOT_PASSWORD=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:local
- uses: engineerd/setup-kind@v0.6.2
run:
docker run -d --rm -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -e "MINIO_DEFAULT_BUCKETS=bucket,additional-bucket" bitnami/minio:2021.6.17-debian-10-r7
- uses: engineerd/setup-kind@v0.5.0
with:
skipClusterLogsExport: true
version: "v0.27.0"
version: "v0.11.1"
image: "kindest/node:v${{ matrix.k8s }}"
- name: Fetch built CLI
id: cli-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./_output/bin/linux/amd64/velero
key: velero-cli-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Fetch built Image
id: image-cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: ./velero.tar
key: velero-image-${{ github.event.pull_request.number }}-${{ github.sha }}
- name: Load Velero Image
run:
kind load image-archive velero.tar
# always try to fetch the cached go modules as the e2e test needs it either
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Run E2E test
run: |
cat << EOF > /tmp/credential
@@ -160,32 +115,10 @@ jobs:
aws_access_key_id=minio
aws_secret_access_key=minio123
EOF
# Match kubectl version to k8s server version
curl -LO https://dl.k8s.io/release/v${{ matrix.k8s }}/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
git clone https://github.com/vmware-tanzu-experiments/distributed-data-generator.git -b main /tmp/kibishii
GOPATH=~/go \
CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws \
BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential \
BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws \
ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential \
ADDITIONAL_BSL_BUCKET=additional-bucket \
VELERO_IMAGE=velero:pr-test-linux-amd64 \
PLUGINS=velero/velero-plugin-for-aws:latest \
GINKGO_LABELS="${{ matrix.labels }}" \
KIBISHII_DIRECTORY=/tmp/kibishii/kubernetes/yaml/ \
make -C test/ run-e2e
timeout-minutes: 30
- name: Upload debug bundle
if: ${{ failure() }}
uses: actions/upload-artifact@v5
with:
name: DebugBundle-k8s-${{ matrix.k8s }}-job-${{ strategy.job-index }}
path: /home/runner/work/velero/velero/test/e2e/debug-bundle*
GOPATH=~/go CLOUD_PROVIDER=kind \
OBJECT_STORE_PROVIDER=aws BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
CREDS_FILE=/tmp/credential BSL_BUCKET=bucket \
ADDITIONAL_OBJECT_STORE_PROVIDER=aws ADDITIONAL_BSL_CONFIG=region=minio,s3ForcePathStyle="true",s3Url=http://$(hostname -i):9000 \
ADDITIONAL_CREDS_FILE=/tmp/credential ADDITIONAL_BSL_BUCKET=additional-bucket \
GINKGO_FOCUS=Basic VELERO_IMAGE=velero:pr-test \
make -C test/e2e run


@@ -1,33 +0,0 @@
on:
workflow_call:
inputs:
ref:
description: "The target branch's ref"
required: true
type: string
outputs:
version:
description: "The expected Go version"
value: ${{ jobs.extract.outputs.version }}
jobs:
extract:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.pick-version.outputs.version }}
steps:
- name: Check out the code
uses: actions/checkout@v6
- id: pick-version
run: |
if [ "${{ inputs.ref }}" == "main" ]; then
version=$(grep '^go ' go.mod | awk '{print $2}' | cut -d. -f1-2)
else
goDirectiveVersion=$(grep '^go ' go.mod | awk '{print $2}')
toolChainVersion=$(grep '^toolchain ' go.mod | awk '{print $2}')
version=$(printf "%s\n%s\n" "$goDirectiveVersion" "$toolChainVersion" | sort -V | tail -n1)
fi
echo "version=$version"
echo "version=$version" >> $GITHUB_OUTPUT

.github/workflows/milestoned-issues.yml

@@ -0,0 +1,18 @@
name: Add issues with a milestone to the milestone's board
on:
issues:
types: [milestoned]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
# Do NOT add PRs to the board, as that's duplication. Their corresponding issue should be on the board.
if: ${{ !github.event.issue.pull_request }}
project: "${{ github.event.issue.milestone.title }}"
column: "To Do"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -1,36 +0,0 @@
name: Trivy Nightly Scan
on:
schedule:
- cron: '0 2 * * *' # run at 2 AM UTC
jobs:
nightly-scan:
name: Trivy nightly scan
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# maintain the versions of Velero those need security scan
versions: [main]
# list of images that need scan
images: [velero, velero-plugin-for-aws, velero-plugin-for-gcp, velero-plugin-for-microsoft-azure]
permissions:
security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'docker.io/velero/${{ matrix.images }}:${{ matrix.versions }}'
severity: 'CRITICAL,HIGH,MEDIUM'
format: 'template'
template: '@/contrib/sarif.tpl'
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: 'trivy-results.sarif'


@@ -0,0 +1,15 @@
name: Move new issues into Triage
on:
issues:
types: [opened]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
project: "Velero Support Board"
column: "New"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -1,9 +1,5 @@
name: Pull Request Changelog Check
# by setting `on: [pull_request]`, that means action will be trigger when PR is opened, synchronize, reopened.
# Add labeled and unlabeled events too.
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
on: [pull_request]
jobs:
build:
@@ -12,7 +8,7 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v6
uses: actions/checkout@v2
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'kind/changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}


@@ -1,32 +1,23 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run CI
needs: get-go-version
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: ${{ needs.get-go-version.outputs.version }}
go-version: 1.17
id: go
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Make ci
run: make ci
- name: Upload test coverage
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
fail_ci_if_error: true


@@ -8,50 +8,13 @@ jobs:
steps:
- name: Check out the code
uses: actions/checkout@v6
uses: actions/checkout@v2
- name: Codespell
uses: codespell-project/actions-codespell@master
with:
# ignore the config/.../crd.go file as it's generated binary data that is edited elsewhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go,./config/crd/v2alpha1/crds/crds.go,./go.sum,./LICENSE
ignore_words_list: iam,aks,ist,bridget,ue,shouldnot,atleast,notin,sme,optin,sie
# ignore the config/.../crd.go file as it's generated binary data that is edited elswhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/v1beta1/crds/crds.go,./config/crd/v1/crds/crds.go
ignore_words_list: iam,aks,ist,bridget,ue
check_filenames: true
check_hidden: true
- name: Velero.io word list check
shell: bash {0}
run: |
IGNORE_COMMENT="Velero.io word list : ignore"
FILES_TO_CHECK=$(find . -type f \
! -path "./.git/*" \
! -path "./site/content/docs/v*" \
! -path "./changelogs/CHANGELOG-*" \
! -path "./.github/workflows/pr-codespell.yml" \
! -path "./site/static/fonts/Metropolis/Open Font License.md" \
! -regex '.*\.\(png\|jpg\|woff\|ttf\|gif\|ico\|svg\)'
)
function check_word_in_files() {
local word=$1
xargs grep -Iinr "$word" <<< "$FILES_TO_CHECK" | \
grep -v "$IGNORE_COMMENT" | \
grep -i --color=always "$word" && \
EXIT_STATUS=1
}
function check_word_case_sensitive_in_files() {
local word=$1
xargs grep -Inr "$word" <<< "$FILES_TO_CHECK" | \
grep -v "$IGNORE_COMMENT" | \
grep --color=always "$word" && \
EXIT_STATUS=1
}
EXIT_STATUS=0
check_word_case_sensitive_in_files ' kubernetes '
check_word_in_files 'on-premise\b'
check_word_in_files 'back-up'
check_word_in_files 'plug-in'
check_word_in_files 'whitelist'
check_word_in_files 'blacklist'
exit $EXIT_STATUS


@@ -1,37 +0,0 @@
name: build Velero containers on Dockerfile change
on:
pull_request:
branches:
- 'main'
- 'release-**'
paths:
- 'Dockerfile'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
name: Checkout
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
# Although this action also calls docker-push.sh, it is not triggered
# by push, so BRANCH and TAG are empty by default. docker-push.sh will
# only build Velero image without pushing.
- name: Make Velero container without pushing to registry.
if: github.repository == 'vmware-tanzu/velero'
run: |
./hack/docker-push.sh


@@ -1,29 +0,0 @@
name: Verify goreleaser change
on:
pull_request:
branches:
- 'main'
- 'release-**'
paths:
- '.goreleaser.yml'
- 'hack/release-tools/goreleaser.sh'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
name: Checkout
- name: Verify .goreleaser.yml and try a dryrun release.
if: github.repository == 'vmware-tanzu/velero'
run: |
CHANGELOG=$(ls changelogs | sort -V -r | head -n 1)
GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }} \
REGISTRY=velero \
RELEASE_NOTES_FILE=changelogs/$CHANGELOG \
PUBLISH=false \
make release


@@ -1,32 +1,14 @@
name: Pull Request Linter Check
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
- "**/*.md"
on: [pull_request]
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.event.pull_request.base.ref }}
build:
name: Run Linter Check
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Check out the code
uses: actions/checkout@v2
- name: Linter check
uses: golangci/golangci-lint-action@v9
with:
version: v2.5.0
args: --verbose
- name: Linter check
run: make lint


@@ -9,21 +9,12 @@ jobs:
execute:
runs-on: ubuntu-latest
steps:
- uses: jpmcb/prow-github-actions@v1.1.3
- uses: jpmcb/prow-github-actions@v1.1.2
with:
# Only support /kind command for now.
# TODO: before allowing the /lgtm command, see if we can block merging if changelog labels are missing.
prow-commands: |
/approve
/area
/assign
/cc
/close
/hold
prow-commands: "/area
/kind
/milestone
/retitle
/remove
/reopen
/uncc
/unassign
/cc
/uncc"
github-token: "${{ secrets.GITHUB_TOKEN }}"


@@ -12,16 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
with:
# The default value is "1" which fetches only a single commit. If we merge PR without squash or rebase,
# there are at least two commits: the first one is the merge commit and the second one is the real commit
# contains the changes.
# As we use the Dockerfile's commit ID as the tag of the build-image, fetching only 1 commit causes the merge
# commit ID to be the tag.
# While when running make commands locally, as the local git repository usually contains all commits, the Dockerfile's
# commit ID is the second one. This is mismatch with the images in Dockerhub
fetch-depth: 2
- uses: actions/checkout@master
- name: Build
run: make build-image


@@ -9,55 +9,42 @@ on:
- '*'
jobs:
get-go-version:
uses: ./.github/workflows/get-go-version.yaml
with:
ref: ${{ github.ref_name }}
build:
name: Build
runs-on: ubuntu-latest
needs: get-go-version
steps:
- name: Check out the code
uses: actions/checkout@v6
- name: Set up Go version
uses: actions/setup-go@v6
with:
go-version: ${{ needs.get-go-version.outputs.version }}
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
- name: Build
run: |
make local
# Clean go cache to ease the build environment storage pressure.
go clean -modcache -cache
- name: Test
run: make test
- name: Upload test coverage
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
verbose: true
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
sudo swapoff -a
sudo rm -f /mnt/swapfile
docker system prune -a --force
# Build and push Velero image to docker registry
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
./hack/docker-push.sh
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Build
run: make local
- name: Test
run: make test
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
./hack/docker-push.sh


@@ -9,10 +9,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v6
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.8
uses: cirrus-actions/rebase@1.3.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,23 +1,24 @@
name: "Close stale issues and PRs"
on:
schedule:
- cron: "30 1 * * *" # Every day at 1:30 UTC
# First of every month
- cron: "30 1 * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v10.1.1
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 14 days with no activity."
days-before-issue-stale: 60
days-before-issue-close: 14
stale-issue-label: staled
stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 5 days with no activity."
days-before-issue-stale: 30
days-before-issue-close: 5
# Disable stale PRs for now; they can remain open.
days-before-pr-stale: -1
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security,backlog"
# Only make issues stale if they have these labels. Comma separated.
only-labels: "Needs info,Duplicate"

.gitignore

@@ -38,7 +38,6 @@ _testmain.go
# Hugo compiled data
site/public
site/resources
site/.hugo_build.lock
.vs
@@ -47,19 +46,5 @@ _tiltbuild
tilt-resources/tilt-settings.json
tilt-resources/velero_v1_backupstoragelocation.yaml
tilt-resources/deployment.yaml
tilt-resources/node-agent.yaml
tilt-resources/restic.yaml
tilt-resources/cloud
# test generated files
test/e2e/report.xml
coverage.out
__debug_bin*
debug.test*
# make lint cache
.cache/
# Go telemetry directory created when container sets HOME to working directory
# This happens because Makefile uses 'docker run -w /github.com/vmware-tanzu/velero'
# and Go's os.UserConfigDir() falls back to $HOME/.config when XDG_CONFIG_HOME is unset
.config/


@@ -1,438 +0,0 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 0
timeout: 20m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# by default isn't set. If set we pass it to "go list -mod={option}". From "go help modules":
# If invoked with -mod=readonly, the go command is disallowed from the implicit
# automatic updating of go.mod described above. Instead, it fails when any changes
# to go.mod are needed. This setting is most useful to check that go.mod does
# not need updates, such as in a continuous integration and testing system.
# If invoked with -mod=vendor, the go command assumes that the vendor
# directory holds the correct copies of dependencies and ignores
# the dependency descriptions in go.mod.
# modules-download-mode: readonly|release|vendor
modules-download-mode: readonly
# Allow multiple parallel golangci-lint instances running.
# If false (default) - golangci-lint acquires file lock on start.
allow-parallel-runners: false
# output configuration options
output:
formats:
text:
path: stdout
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# Show statistics per linter.
show-stats: false
linters:
# all available settings of specific linters
settings:
depguard:
rules:
main:
deny:
# specify an error message to output when a denylisted package is used
- pkg: github.com/sirupsen/logrus
desc: "logging is allowed only by logutils.Log"
dogsled:
# checks assignments with too many blank identifiers; default is 2
max-blank-identifiers: 2
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: false
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: false
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: false
funlen:
lines: 60
statements: 40
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
nestif:
# minimal complexity of if statements to report, 5 by default
min-complexity: 4
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 5
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
# To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
# By default list of stable checks is used.
settings: # settings passed to gocritic
captLocal: # must be valid enabled check name
paramsOnly: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
godot:
# check all top-level comments, not only declarations
check-all: false
godox:
# report any comments starting with keywords, this is useful for TODO or FIXME comments that
# might be left in the code accidentally and should be resolved before merging
keywords: # default keywords are TODO, BUG, and FIXME, these can be overwritten by this setting
- NOTE
- OPTIMIZE # marks code that should be optimized before merging
- HACK # marks hack-arounds that should be removed before merging
gosec:
excludes:
- G115
govet:
# enable or disable analyzers by name
enable:
- atomicalign
enable-all: false
disable:
- shadow
disable-all: false
importas:
alias:
- alias: appsv1api
pkg: k8s.io/api/apps/v1
- alias: corev1api
pkg: k8s.io/api/core/v1
- alias: rbacv1
pkg: k8s.io/api/rbac/v1
- alias: apierrors
pkg: k8s.io/apimachinery/pkg/api/errors
- alias: apiextv1
pkg: k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1
- alias: metav1
pkg: k8s.io/apimachinery/pkg/apis/meta/v1
- alias: storagev1api
pkg: k8s.io/api/storage/v1
- alias: batchv1api
pkg: k8s.io/api/batch/v1
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
ignore-rules:
- someword
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
nolintlint:
# Enable to ensure that nolint directives are all used. Default is true.
allow-unused: false
# Exclude following linters from requiring an explanation. Default is [].
allow-no-explanation: []
# Enable to require an explanation of nonzero length after each nolint directive. Default is false.
require-explanation: true
# Enable to require nolint directives to mention the specific linter being suppressed. Default is false.
require-specific: true
perfsprint:
strconcat: false
sprintf1: false
errorf: false
int-conversion: true
revive:
rules:
- name: blank-imports
disabled: true
- name: context-as-argument
disabled: true
- name: context-keys-type
- name: dot-imports
disabled: true
- name: early-return
disabled: true
arguments:
- "preserveScope"
- name: empty-block
disabled: true
- name: error-naming
disabled: true
- name: error-return
disabled: true
- name: error-strings
disabled: true
- name: errorf
disabled: true
- name: increment-decrement
- name: indent-error-flow
disabled: true
- name: range
- name: receiver-naming
disabled: true
- name: redefines-builtin-id
disabled: true
- name: superfluous-else
disabled: true
arguments:
- "preserveScope"
- name: time-naming
- name: unexported-return
disabled: true
- name: unnecessary-stmt
- name: unreachable-code
- name: unused-parameter
disabled: true
- name: use-any
- name: var-declaration
- name: var-naming
disabled: true
rowserrcheck:
packages:
- github.com/jmoiron/sqlx
staticcheck:
checks:
- all
- -QF1001 # FIXME
- -QF1003 # FIXME
- -QF1004 # FIXME
- -QF1007 # FIXME
- -QF1008 # FIXME
- -QF1009 # FIXME
- -QF1012 # FIXME
testifylint:
# TODO: enable them all
disable:
- float-compare
- go-require
enable-all: true
testpackage:
# regexp pattern to skip files
skip-regexp: (export|internal)_test\.go
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
usetesting:
os-setenv: false
whitespace:
multi-if: false # Enforces newlines (or comments) after every multi-line if statement
multi-func: false # Enforces newlines (or comments) after every multi-line function signature
wsl:
# If true append is only allowed to be cuddled if appending value is
# matching variables, fields or types on line above. Default is true.
strict-append: true
# Allow calls and assignments to be cuddled as long as the lines have any
# matching variables, fields or types. Default is true.
allow-assign-and-call: true
# Allow multiline assignments to be cuddled. Default is true.
allow-multiline-assign: true
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
# Allow trailing comments in ending of blocks
allow-trailing-comment: false
# Force newlines in end of case at this limit (0 = never).
force-case-trailing-whitespace: 0
# Force cuddling of err checks with err var assignment
force-err-cuddling: false
# Allow leading comments to be separated with empty lines
allow-separated-leading-comment: false
default: none
enable:
- asasalint
- asciicheck
- bidichk
- bodyclose
- copyloopvar
- dogsled
- dupword
- durationcheck
- errcheck
- errchkjson
- exptostd
- ginkgolinter
- goconst
- goheader
- goprintffuncname
- gosec
- govet
- importas
- ineffassign
- misspell
- nakedret
- nilerr
- noctx
- nolintlint
- nosprintfhostport
- perfsprint
- revive
- staticcheck
- testifylint
- thelper
- unconvert
- unparam
- unused
- usestdlibvars
- usetesting
- whitespace
exclusions:
# which dirs to skip: issues from them won't be reported;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but default dirs are skipped independently
# from this option's value (see skip-dirs-use-default).
# "/" will be replaced by current OS file path separator to properly work
# on Windows.
paths:
- pkg/plugin/generated/*
- third_party
rules:
- linters:
- staticcheck
text: "DefaultVolumesToRestic" # No need to report deprecate for DefaultVolumesToRestic.
- path: ".*_test.go$"
linters:
- errcheck
- goconst
- gosec
- govet
- staticcheck
- unparam
- unused
- path: test/
linters:
- errcheck
- goconst
- gosec
- nilerr
- staticcheck
- unparam
- unused
- path: ".*data_upload_controller_test.go$"
linters:
- dupword
text: "type"
- path: ".*config_test.go$"
linters:
- dupword
text: "bucket"
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
issues:
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
# make issues output unique by line, default is true
uniq-by-line: true
# This file contains all available configuration options
# with their default values.
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- pkg/plugin/generated/*
- third_party
settings:
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
goimports:
local-prefixes:
- github.com/vmware-tanzu/velero
severity:
default: error
# Default value is empty list.
# When a list of severity rules are provided, severity information will be added to lint
# issues. Severity rules have the same filtering capability as exclude rules except you
# are allowed to specify one matcher per severity rule.
# Only affects out formats that support setting severity information.
rules:
- linters:
- dupl
severity: info
version: "2"


@@ -14,7 +14,7 @@
dist: _output
builds:
- main: ./cmd/velero/velero.go
- main: ./cmd/velero/main.go
env:
- CGO_ENABLED=0
goos:
@@ -26,23 +26,20 @@ builds:
- arm
- arm64
- ppc64le
- s390x
ignore:
# don't build arm for darwin and arm/arm64 for windows
# don't build arm/arm64 for darwin or windows
- goos: darwin
goarch: arm
- goos: darwin
goarch: ppc64le
goarch: arm64
- goos: darwin
goarch: s390x
goarch: ppc64le
- goos: windows
goarch: arm
- goos: windows
goarch: arm64
- goos: windows
goarch: ppc64le
- goos: windows
goarch: s390x
ldflags:
- -X "github.com/vmware-tanzu/velero/pkg/buildinfo.Version={{ .Tag }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.GitSHA={{ .FullCommit }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.GitTreeState={{ .Env.GIT_TREE_STATE }}" -X "github.com/vmware-tanzu/velero/pkg/buildinfo.ImageRegistry={{ .Env.REGISTRY }}"
archives:
@@ -59,10 +56,3 @@ release:
name: velero
draft: true
prerelease: auto
git:
# What should be used to sort tags when gathering the current and previous
# tags if there are more than one tag in the same commit.
#
# Default: `-version:refname`
tag_sort: -version:creatordate


@@ -3,7 +3,6 @@
If you're using Velero and want to add your organization to this list,
[follow these directions][1]!
<a href="https://www.pitsdatarecovery.net/" border="0" target="_blank"><img alt="pitsdatarecovery.net" src="site/static/img/adopters/PITSGlobalDataRecoveryServices.svg" height="50"></a>
<a href="https://www.bitgo.com" border="0" target="_blank"><img alt="bitgo.com" src="site/static/img/adopters/BitGo.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.nirmata.com" border="0" target="_blank"><img alt="nirmata.com" src="site/static/img/adopters/nirmata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://kyma-project.io/" border="0" target="_blank"><img alt="kyma-project.io" src="site/static/img/adopters/kyma.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
@@ -15,18 +14,17 @@ If you're using Velero and want to add your organization to this list,
<a href="https://sighup.io/" border="0" target="_blank"><img alt="sighup.io" src="site/static/img/adopters/sighup.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/static/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
## Success Stories
Below is a list of adopters of Velero in **production environments** that have
publicly shared the details of how they use it.
**[BitGo][20]**
BitGo uses Velero backup and restore capabilities to seamlessly provision and scale fullnode statefulsets on the fly as well as having it serve an integral piece for our Kubernetes disaster-recovery story.
BitGo uses Velero backup and restore capabilities to seamlessly provision and scale fullnode statefulsets on the fly as well as having it serve an integral piece for our kubernetes disaster-recovery story.
**[Bugsnag][30]**
We use Velero for managing backups of an internal instance of our on-premise clustered solution. We also recommend our users of [on-premise Bugsnag installations](https://www.bugsnag.com/on-premise) use Velero for [managing their own backups](https://docs.bugsnag.com/on-premise/clustered/backup-restore/). <!-- Velero.io word list : ignore -->
We use Velero for managing backups of an internal instance of our on-premise clustered solution. We also recommend our users of [on-premise Bugsnag installations][31] use Velero for [managing their own backups][32].
**[Banzai Cloud][60]**
[Banzai Cloud Pipeline][61] is a Kubernetes-based microservices platform that integrates services needed for Day-1 and Day-2 operations along with first-class support both for on-prem and hybrid multi-cloud deployments. We use Velero to periodically [backup and restore these clusters in case of disasters][62].
@@ -42,9 +40,7 @@ We have integrated our [solution with Velero][11] to provide our customers with
Kyma [integrates with Velero][41] to effortlessly back up and restore Kyma clusters with all its resources. Velero capabilities allow Kyma users to define and run manual and scheduled backups in order to successfully handle a disaster-recovery scenario.
**[Red Hat][50]**
Red Hat has developed 2 operators for the OpenShift platform:
- [Migration Toolkit for Containers][51] (Crane): This operator uses [Velero and Restic][52] to drive the migration of applications between OpenShift clusters.
- [OADP (OpenShift API for Data Protection) Operator][53]: This operator sets up and installs Velero on the OpenShift platform, allowing users to backup and restore applications.
Red Hat has developed the [Cluster Application Migration Tool][51] which uses [Velero and Restic][52] to drive the migration of applications between OpenShift clusters.
**[Dell EMC][70]**
For Kubernetes environments, [PowerProtect Data Manager][71] leverages the Container Storage Interface (CSI) framework to take snapshots to back up the persistent data or the data that the application creates e.g. databases. [Dell EMC leverages Velero][72] to backup the namespace configuration files (also known as Namespace meta data) for enterprise grade data protection.
@@ -60,14 +56,8 @@ MayaData is a large user of Velero as well as a contributor. MayaData offers a D
Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to periodically backup and restore our clusters for disaster recovery. Velero is also a core software building block to provide namespace cloning capabilities, a feature that allows our users cloning staging environments into their personal development namespace for providing production-like development environments.
**[Replicated][100]**<br>
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to back up Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br>
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a full-featured, scalable, cloud-native solution providing Kubernetes data protection, disaster recovery, and migration as a service. An option to manage existing Velero instances and an enterprise self-hosted option are also available.<br>
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to back up Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.svg). See this for an example [PR][4].
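A minimal sketch of those steps (the logo file name and branch name are placeholders):

```bash
# Add your logo and open a pull request (names are placeholders)
cp ~/Downloads/acme.svg site/static/img/adopters/
git checkout -b add-acme-logo
git add site/static/img/adopters/acme.svg
git commit -s -m "Add Acme logo to adopters"
# then open a pull request against vmware-tanzu/velero
```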
@@ -87,6 +77,8 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[20]: https://bitgo.com
[30]: https://bugsnag.com
[31]: https://www.bugsnag.com/on-premise
[32]: https://docs.bugsnag.com/on-premise/clustered/backup-restore/
[40]: https://kyma-project.io
[41]: https://kyma-project.io/docs/components/backup/#overview-overview
@@ -94,7 +86,6 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[50]: https://redhat.com
[51]: https://github.com/fusor/mig-operator
[52]: https://github.com/fusor/mig-operator/blob/master/docs/usage/2.md
[53]: https://github.com/openshift/oadp-operator
[60]: https://banzaicloud.com
[61]: https://banzaicloud.com/products/pipeline/
@@ -119,9 +110,3 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[100]: https://www.replicated.com
[101]: https://kots.io
[102]: https://kots.io/kotsadm/snapshots/overview/
[103]: https://cloudcasa.io/
[104]: https://www.catalogicsoftware.com/
[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview


@@ -1,15 +1,7 @@
## Current release:
* [CHANGELOG-1.15.md][25]
* [CHANGELOG-1.7.md][17]
## Older releases:
* [CHANGELOG-1.14.md][24]
* [CHANGELOG-1.13.md][23]
* [CHANGELOG-1.12.md][22]
* [CHANGELOG-1.11.md][21]
* [CHANGELOG-1.10.md][20]
* [CHANGELOG-1.9.md][19]
* [CHANGELOG-1.8.md][18]
* [CHANGELOG-1.7.md][17]
* [CHANGELOG-1.6.md][16]
* [CHANGELOG-1.5.md][15]
* [CHANGELOG-1.4.md][14]
@@ -28,14 +20,6 @@
* [CHANGELOG-0.3.md][1]
[25]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.15.md
[24]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.14.md
[23]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.13.md
[22]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.12.md
[21]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.11.md
[20]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.10.md
[19]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.9.md
[18]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.8.md
[17]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.7.md
[16]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.6.md
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md


@@ -5,7 +5,7 @@
We as members, contributors, and leaders pledge to make participation in the Velero project and our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socioeconomic status,
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.


@@ -11,75 +11,51 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.17.7 as builder-env
ARG GOPROXY
ARG BIN
ARG PKG
ARG VERSION
ARG REGISTRY
ARG GIT_SHA
ARG GIT_TREE_STATE
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG REGISTRY
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE} -X ${PKG}/pkg/buildinfo.ImageRegistry=${REGISTRY}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-restore-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-restore-helper && \
go build -o /output/velero-helper \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
RUN apt-get update && apt-get install -y bzip2
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM builder-env as builder
ARG GOPROXY
ARG BIN
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG PKG
ARG BIN
ARG RESTIC_VERSION
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
ENV GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT}
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$(echo "${GOARM}" | cut -c2-) && \
/go/src/github.com/vmware-tanzu/velero/hack/build-restic.sh && \
go clean -modcache -cache
bash ./hack/download-restic.sh && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN}
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
# The digest of tag "nonroot" at the time of v1.7.0
FROM gcr.io/distroless/base-debian10@sha256:a74f307185001c69bc362a40dbab7b67d410a872678132b187774fa21718fa13
LABEL maintainer="Xun Jiang <jxun@vmware.com>"
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
COPY --from=velero-builder /output /
COPY --from=builder /output /
COPY --from=restic-builder /output /
USER cnb:cnb
USER nonroot:nonroot


@@ -1,57 +0,0 @@
# Copyright the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG OS_VERSION=1809
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
ARG PKG
ARG VERSION
ARG REGISTRY
ARG GIT_SHA
ARG GIT_TREE_STATE
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE} -X ${PKG}/pkg/buildinfo.ImageRegistry=${REGISTRY}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN mkdir -p /output/usr/bin && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN}.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN} && \
go build -o /output/velero-restore-helper.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-restore-helper && \
go build -o /output/velero-helper.exe \
-ldflags "${LDFLAGS}" ${PKG}/cmd/velero-helper && \
go clean -modcache -cache
# Velero image packing section
FROM mcr.microsoft.com/windows/nanoserver:${OS_VERSION}
COPY --from=velero-builder /output /
USER ContainerUser


@@ -107,29 +107,6 @@ Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Deprecation Policy
### Deprecation Process
Any contributor may introduce a request to deprecate a feature or an option of a feature by opening a feature request issue in the vmware-tanzu/velero GitHub project. The issue should describe why the feature is no longer needed or has become detrimental to Velero, as well as whether and how it has been superseded. The submitter should give as much detail as possible.
Once the issue is filed, a one-month discussion period begins. Discussions take place within the issue itself as well as in the community meetings. The person who opens the issue, or a maintainer, should add the date and time marking the end of the discussion period in a comment on the issue as soon as possible after it is opened. A decision on the issue needs to be made within this one-month period.
The feature will be deprecated by a supermajority vote of 50% plus one of the project maintainers at the time of the vote tallying, which takes place 72 hours after the end of the community meeting that closes the comment period. (Maintainers are permitted to vote in advance of the deadline, but should hold their votes until as close to the deadline as possible so that all discussion can be heard.) Votes will be tallied in comments on the issue.
Non-maintainers may add non-binding votes in comments to the issue as well; these are opinions to be taken into consideration by maintainers, but they do not count as votes.
If the vote passes, the deprecation window takes effect in the subsequent release, and the removal follows the schedule.
### Schedule
If a deprecation proposal passes by supermajority vote, the feature is deprecated in the next minor release and can be removed completely after two minor versions (or the equivalent across a major version boundary), e.g., if a feature is deprecated in the Nth minor version, it can be removed after the N+2 minor version or its equivalent if the major version number changes.
### Deprecation Window
The deprecation window is the period from the release in which the deprecation takes effect through the release in which the feature is removed. During this period, only critical security vulnerabilities and catastrophic bugs should be fixed.
**Note:** If a backup relies on a deprecated feature, then backups made with the last Velero release before this feature is removed must still be restorable in version `n+2`. For instance, for something like restic feature support, that might mean restic is removed from the list of supported uploader types in version `n`, but the underlying implementation required to restore from a restic backup won't be removed until release `n+2`.
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.


@@ -4,16 +4,14 @@
## Maintainers
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
| Maintainer | GitHub ID | Affiliation |
| --------------- | --------- | ----------- |
| Bridget McErlean | [zubron](https://github.com/zubron) | [VMware](https://www.github.com/vmware/) |
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [VMware](https://www.github.com/vmware/) |
| JenTing Hsiao | [jenting](https://github.com/jenting) | [SUSE](https://github.com/SUSE/) |
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -23,18 +21,13 @@
* Nolan Brubaker ([nrb](https://github.com/nrb))
* Ashish Amarnath ([ashish-amarnath](https://github.com/ashish-amarnath))
* Carlisia Thompson ([carlisia](https://github.com/carlisia))
* Bridget McErlean ([zubron](https://github.com/zubron))
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |
| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |
| Feature Area | Lead |
| ----------------------------- | :---------------------: |
| Technical Lead | Dave Smith-Uchida (dsu-igeek) |
| Kubernetes CSI Liaison | |
| Deployment | JenTing Hsiao (jenting) |
| Community Management | Jonas Rosland (jonasrosland) |
| Product Management | Eleanor Millman (eleanor-millman) |

Makefile

@@ -22,18 +22,6 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
# In order to push images to an insecure registry, follow the two steps:
# 1. Set "INSECURE_REGISTRY=true"
# 2. Provide your own buildx builder instance by setting "BUILDX_INSTANCE=your-own-builder-instance"
# The builder can be created with the following command:
# cat << EOF > buildkitd.toml
# [registry."insecure-registry-ip:port"]
# http = true
# insecure = true
# EOF
# docker buildx create --name=velero-builder --driver=docker-container --bootstrap --use --config ./buildkitd.toml
# Refer to https://github.com/docker/buildx/issues/1370#issuecomment-1288516840 for more details
INSECURE_REGISTRY ?= false
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
@@ -41,7 +29,6 @@ IMAGE ?= $(REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
VELERO_DOCKERFILE_WINDOWS ?= Dockerfile-Windows
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
@@ -65,7 +52,7 @@ endif
BUILDER_IMAGE := $(REGISTRY)/build-image:$(BUILDER_IMAGE_TAG)
BUILDER_IMAGE_CACHED := $(shell docker images -q ${BUILDER_IMAGE} 2>/dev/null )
HUGO_IMAGE := ghcr.io/gohugoio/hugo
HUGO_IMAGE := hugo-builder
# Which architecture to build - see $(ALL_ARCH) for options.
# if the 'local' rule is being run, detect the ARCH from 'go env'
@@ -83,17 +70,7 @@ else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
endif
# check buildx is enabled only if docker is in path
# macOS/Windows docker cli without Docker Desktop license: https://github.com/abiosoft/colima
# To add buildx to docker cli: https://github.com/abiosoft/colima/discussions/273#discussioncomment-2684502
ifeq ($(shell which docker 2>/dev/null 1>&2 && docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
# if emulated docker cli from podman, assume enabled
# emulated docker cli from podman: https://podman-desktop.io/docs/migrating-from-docker/emulating-docker-cli-with-podman
# podman known issues:
# - on remote podman, such as on macOS,
# --output issue: https://github.com/containers/podman/issues/15922
else ifeq ($(shell which docker 2>/dev/null 1>&2 && cat $(shell which docker) | grep -c "exec podman"), 1)
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
else
BUILDX_ENABLED ?= false
@@ -103,32 +80,13 @@ define BUILDX_ERROR
buildx not enabled, refusing to run this recipe
see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
endef
# comma cannot be escaped and can only be used in Make function arguments by putting into variable
comma=,
# The version of restic binary to be downloaded
RESTIC_VERSION ?= 0.15.0
RESTIC_VERSION ?= 0.12.1
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 darwin-arm64 windows-amd64 linux-ppc64le linux-s390x
BUILD_OUTPUT_TYPE ?= docker
BUILD_OS ?= linux
BUILD_ARCH ?= amd64
BUILD_WINDOWS_VERSION ?= ltsc2022
ifeq ($(BUILD_OUTPUT_TYPE), docker)
ALL_OS = linux
ALL_ARCH.linux = $(word 2, $(subst -, ,$(shell go env GOOS)-$(shell go env GOARCH)))
else
ALL_OS = $(subst $(comma), ,$(BUILD_OS))
ALL_ARCH.linux = $(subst $(comma), ,$(BUILD_ARCH))
endif
ALL_ARCH.windows = $(if $(filter windows,$(ALL_OS)),amd64,)
ALL_OSVERSIONS.windows = $(if $(filter windows,$(ALL_OS)),$(BUILD_WINDOWS_VERSION),)
ALL_OS_ARCH.linux = $(foreach os, $(filter linux,$(ALL_OS)), $(foreach arch, ${ALL_ARCH.linux}, ${os}-$(arch)))
ALL_OS_ARCH.windows = $(foreach os, $(filter windows,$(ALL_OS)), $(foreach arch, $(ALL_ARCH.windows), $(foreach osversion, ${ALL_OSVERSIONS.windows}, ${os}-${osversion}-${arch})))
ALL_OS_ARCH = $(ALL_OS_ARCH.linux)$(ALL_OS_ARCH.windows)
ALL_IMAGE_TAGS = $(IMAGE_TAGS)
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 linux-ppc64le
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)
@@ -138,6 +96,9 @@ else
GIT_TREE_STATE ?= clean
endif
# The default linters used by lint and local-lint
LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
###
### These variables should not need tweaking.
###
@@ -146,26 +107,26 @@ platform_temp = $(subst -, ,$(ARCH))
GOOS = $(word 1, $(platform_temp))
GOARCH = $(word 2, $(platform_temp))
GOPROXY ?= https://proxy.golang.org
GOBIN=$$(pwd)/.go/bin
# If you want to build all binaries, see the 'all-build' rule.
# If you want to build all containers, see the 'all-containers' rule.
all:
@$(MAKE) build
@$(MAKE) build BIN=velero-restic-restore-helper
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restic-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers:
all-containers: container-builder-env
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restic-restore-helper
local: build-dirs
# Add DEBUG=1 to enable debug locally
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -182,7 +143,6 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
$(MAKE) shell CMD="-c '\
GOOS=$(GOOS) \
GOARCH=$(GOARCH) \
GOBIN=$(GOBIN) \
VERSION=$(VERSION) \
REGISTRY=$(REGISTRY) \
PKG=$(PKG) \
@@ -202,7 +162,6 @@ shell: build-dirs build-env
@# under $GOPATH).
@docker run \
-e GOFLAGS \
-e GOPROXY \
-i $(TTY) \
--rm \
-u $$(id -u):$$(id -g) \
@@ -217,43 +176,28 @@ shell: build-dirs build-env
$(BUILDER_IMAGE) \
/bin/sh $(CMD)
container-builder-env:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build \
--target=builder-env \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
-f $(VELERO_DOCKERFILE) .
container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
ifeq ($(BUILDX_INSTANCE),)
@echo creating a buildx instance
-docker buildx rm velero-builder || true
@docker buildx create --use --name=velero-builder
else
@echo using a specified buildx instance $(BUILDX_INSTANCE)
@docker buildx use $(BUILDX_INSTANCE)
endif
@mkdir -p _output
@for osarch in $(ALL_OS_ARCH); do \
$(MAKE) container-$${osarch}; \
done
ifeq ($(BUILD_OUTPUT_TYPE), registry)
@for tag in $(ALL_IMAGE_TAGS); do \
IMAGE_TAG=$${tag} $(MAKE) push-manifest; \
done
endif
container-linux-%:
@BUILDX_ARCH=$* $(MAKE) container-linux
container-linux:
@echo "building container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-linux-$(BUILDX_ARCH).tar,)" \
--platform="linux/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-linux-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--build-arg=GOPROXY=$(GOPROXY) \
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
@@ -261,54 +205,8 @@ container-linux:
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE) .
@echo "built container: $(IMAGE):$(VERSION)-linux-$(BUILDX_ARCH)"
container-windows-%:
@BUILDX_OSVERSION=$(firstword $(subst -, ,$*)) BUILDX_ARCH=$(lastword $(subst -, ,$*)) $(MAKE) container-windows
container-windows:
@echo "building container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
@docker buildx build --pull \
--output="type=$(BUILD_OUTPUT_TYPE)$(if $(findstring tar, $(BUILD_OUTPUT_TYPE)),$(comma)dest=_output/$(BIN)-$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH).tar,)" \
--platform="windows/$(BUILDX_ARCH)" \
$(addprefix -t , $(addsuffix "-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)",$(ALL_IMAGE_TAGS))) \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
--build-arg=OS_VERSION=$(BUILDX_OSVERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=REGISTRY=$(REGISTRY) \
--provenance=false \
--sbom=false \
-f $(VELERO_DOCKERFILE_WINDOWS) .
@echo "built container: $(IMAGE):$(VERSION)-windows-$(BUILDX_OSVERSION)-$(BUILDX_ARCH)"
push-manifest:
@echo "building manifest: $(IMAGE_TAG) for $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})"
@docker manifest create --amend --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG) $(foreach osarch, $(ALL_OS_ARCH), $(IMAGE_TAG)-${osarch})
@set -x; \
for arch in $(ALL_ARCH.windows); do \
for osversion in $(ALL_OSVERSIONS.windows); do \
BASEIMAGE=mcr.microsoft.com/windows/nanoserver:$${osversion}; \
full_version=`docker manifest inspect --insecure=$(INSECURE_REGISTRY) $${BASEIMAGE} | jq -r '.manifests[0].platform["os.version"]'`; \
docker manifest annotate --os windows --arch $${arch} --os-version $${full_version} $(IMAGE_TAG) $(IMAGE_TAG)-windows-$${osversion}-$${arch}; \
done; \
done
@echo "pushing manifest $(IMAGE_TAG)"
@docker manifest push --purge --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
@echo "pushed manifest $(IMAGE_TAG):"
@docker manifest inspect --insecure=$(INSECURE_REGISTRY) $(IMAGE_TAG)
@echo "container: $(IMAGE):$(VERSION)"
SKIP_TESTS ?=
test: build-dirs
@@ -328,21 +226,27 @@ endif
lint:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh'"
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS)'"
endif
local-lint:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh
@hack/lint.sh $(LINTERS)
endif
lint-all:
ifneq ($(SKIP_TESTS), 1)
@$(MAKE) shell CMD="-c 'hack/lint.sh $(LINTERS) true'"
endif
local-lint-all:
ifneq ($(SKIP_TESTS), 1)
@hack/lint.sh $(LINTERS) true
endif
update:
@$(MAKE) shell CMD="-c 'hack/update-all.sh'"
# update-crd is for development purposes only; it is faster than update, so it is a shortcut when you want to generate CRD changes only
update-crd:
@$(MAKE) shell CMD="-c 'hack/update-3generated-crd-code.sh'"
build-dirs:
@mkdir -p _output/bin/$(GOOS)/$(GOARCH)
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build .go/golangci-lint
@@ -434,9 +338,9 @@ changelog:
# PUBLISH=false \
# make release
#
# To run the release, which will publish a *DRAFT* GitHub release in github.com/vmware-tanzu/velero
# (you still need to review/publish the GitHub release manually):
# GITHUB_TOKEN=your-github-token \
# RELEASE_NOTES_FILE=changelogs/CHANGELOG-1.2.md \
# PUBLISH=true \
# make release
@@ -451,40 +355,15 @@ release:
serve-docs: build-image-hugo
docker run \
--rm \
-v "$$(pwd)/site:/project" \
-v "$$(pwd)/site:/srv/hugo" \
-it -p 1313:1313 \
$(HUGO_IMAGE) \
server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
hugo server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
# Please read the documentation in the script for instructions on how to use it.
gen-docs:
@hack/release-tools/gen-docs.sh
.PHONY: test-e2e
test-e2e: local
$(MAKE) -e VERSION=$(VERSION) -C test/ run-e2e
.PHONY: test-perf
test-perf: local
$(MAKE) -e VERSION=$(VERSION) -C test/ run-perf
go-generate:
go generate ./pkg/...
# requires an authenticated gh cli
# gh: https://cli.github.com/
# First create a PR
# gh pr create --title 'Title name' --body 'PR body'
# by default uses PR title as changelog body but can be overwritten like so
# make new-changelog CHANGELOG_BODY="Changes you have made"
new-changelog: GH_LOGIN ?= $(shell gh pr view --json author --jq .author.login 2> /dev/null)
new-changelog: GH_PR_NUMBER ?= $(shell gh pr view --json number --jq .number 2> /dev/null)
new-changelog: CHANGELOG_BODY ?= '$(shell gh pr view --json title --jq .title)'
new-changelog:
@if [ "$(GH_LOGIN)" = "" ]; then \
echo "branch does not have PR or cli not logged in, try 'gh auth login' or 'gh pr create'"; \
exit 1; \
fi
@mkdir -p ./changelogs/unreleased/ && \
echo $(CHANGELOG_BODY) > ./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN) && \
echo \"$(CHANGELOG_BODY)\" added to "./changelogs/unreleased/$(GH_PR_NUMBER)-$(GH_LOGIN)"
$(MAKE) -e VERSION=$(VERSION) -C test/e2e run

OWNERS

@@ -1,24 +0,0 @@
# This file is used by the [PROW action](https://github.com/jpmcb/prow-github-actions) to approve and merge PRs.
# The file's format follows the [OWNERS SPEC](https://www.kubernetes.dev/docs/guide/owners/#owners-spec).
# List of usernames who may use /lgtm
reviewers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100
# List of usernames who may use /approve
approvers:
- @Lyndon-Li
- @anshulahuja98
- @blackpiglet
- @qiuming-best
- @reasonerjt
- @shubham-pampattiwar
- @sseago
- @ywk253100

PROJECT (new file)

@@ -0,0 +1,7 @@
domain: io
repo: github.com/vmware-tanzu/velero
resources:
- group: velero
kind: BackupStorageLocation
version: v1
version: "2"


@@ -1,13 +1,11 @@
![100]
[![Build Status][1]][2] [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3811/badge)](https://bestpractices.coreinfrastructure.org/projects/3811)
![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/vmware-tanzu/velero)
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises.
Velero lets you:
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
@@ -20,7 +18,7 @@ Velero consists of:
## Documentation
[The documentation][29] provides a getting started guide and information about building from source, architecture, extending Velero and more.
[The documentation][29] provides a getting started guide and information about building from source, architecture, extending Velero, and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
@@ -36,28 +34,6 @@ If you are ready to jump in and test, add code, or help with documentation, foll
See [the list of releases][6] to find out about feature changes.
### Velero compatibility matrix
The following is a list of the supported Kubernetes versions for each Velero version.
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
Velero supports IPv4, IPv6, and dual-stack environments. Support for this was tested against Velero v1.8.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version.
If you are interested in using a different version of Kubernetes with a given Velero version, we'd recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.
For each release, Velero maintainers run tests to ensure the upgrade path from the n-2 minor release. For example, before the release of v1.10.x, the tests verify that backups created by v1.9.x and v1.8.x can be restored using the build to be tagged as v1.10.x.
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[4]: https://github.com/vmware-tanzu/velero/issues


@@ -1 +1,47 @@
# Please go to the [Velero Wiki](https://github.com/vmware-tanzu/velero/wiki/) to see our latest roadmap, archived roadmaps and roadmap guidance.
## Velero Roadmap
### About this document
This document provides a link to the [Velero Project boards](https://github.com/vmware-tanzu/velero/projects) that serve as the up-to-date description of items that are in the release pipeline. The release boards have separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could conflict with a longer-term plan.
### How to help?
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/velero/issues) or in [community meetings](https://velero.io/community/). Please open and comment on an issue if you want to provide suggestions, use cases, and feedback on an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
One of the most important aspects of any open source community is the concept of proposals. Large changes to the codebase and/or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#proposal-process) in our repo.
For smaller enhancements, you can open an issue to track that initiative or feature request.
We work with and rely on community feedback to focus our efforts to improve Velero and maintain a healthy roadmap.
### Current Roadmap
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: July 2021`
#### 1.7.0 Roadmap (to be delivered early fall)
The release roadmap is split into Core items that are required for the release and desired items that may slip the release.
##### Core items
The top priority of 1.7 is to increase the technical health of Velero and be more efficient with Velero developer time by streamlining the release process and automating and expanding the E2E test suite.
|Issue|Description|
|---|---|
||Streamline release process|
||Automate the running of the E2E tests|
||Convert pre-release manual tests to automated E2E tests|
|[3493](https://github.com/vmware-tanzu/velero/issues/3493)|[Carvel](https://github.com/vmware-tanzu/velero/issues/3493) based installation (in addition to the existing *velero install* CLI).|
|[675](https://github.com/vmware-tanzu/velero/issues/675)|Velero command to generate debugging information. Will integrate with [Crashd - Crash Diagnostics](https://github.com/vmware-tanzu/velero/issues/675)|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|IPV6 support|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|
|[3500](https://github.com/vmware-tanzu/velero/issues/3500)|Use distroless containers as a base|
##### Items formerly in 1.7 that will slip due to staffing changes
|Issue|Description|
|---|---|
|[3536](https://github.com/vmware-tanzu/velero/issues/3536)|Manifest for backup/restore|
|[2066](https://github.com/vmware-tanzu/velero/issues/2066)|CSI Snapshots GA|
|[3535](https://github.com/vmware-tanzu/velero/issues/3535)|Design doc for multiple cluster support|
|[2922](https://github.com/vmware-tanzu/velero/issues/2922)|Plugin timeouts|
|[3531](https://github.com/vmware-tanzu/velero/issues/3531)|Test plan for Velero|


@@ -12,13 +12,13 @@ The Velero project maintains the following [governance document](https://github.
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the Security Team (velero-security.pdl@broadcom.com).
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the Security Team at the email address above with the details of the vulnerability. The email will be fielded by the Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
To report a vulnerability or a security-related issue, please contact the VMware Security Team at the email address above with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
@@ -29,7 +29,7 @@ Provide a descriptive subject line and in the body of the email include the foll
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the Security Team can reproduce it.
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
@@ -49,7 +49,7 @@ Provide a descriptive subject line and in the body of the email include the foll
## Patch, Release, and Disclosure
The Security Team will respond to vulnerability reports as follows:
The VMware Security Team will respond to vulnerability reports as follows:
@@ -62,7 +62,7 @@ The Security Team will respond to vulnerability reports as follows:
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the report date to the public disclosure date to be on the order of 14 business days. The Security Team holds the final say when setting a public disclosure date.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the report date to the public disclosure date to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
@@ -79,7 +79,7 @@ The Security Team will also publish any mitigating steps users can take until th
* Use velero-security.pdl@broadcom.com to report security concerns to the Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
@@ -107,11 +107,11 @@ To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the Security Team (velero-security.pdl@broadcom.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
@@ -123,6 +123,6 @@ Send new membership requests to projectvelero-distributors@googlegroups.com. In
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.


@@ -7,19 +7,17 @@ k8s_yaml([
'config/crd/v1/bases/velero.io_downloadrequests.yaml',
'config/crd/v1/bases/velero.io_podvolumebackups.yaml',
'config/crd/v1/bases/velero.io_podvolumerestores.yaml',
'config/crd/v1/bases/velero.io_backuprepositories.yaml',
'config/crd/v1/bases/velero.io_resticrepositories.yaml',
'config/crd/v1/bases/velero.io_restores.yaml',
'config/crd/v1/bases/velero.io_schedules.yaml',
'config/crd/v1/bases/velero.io_serverstatusrequests.yaml',
'config/crd/v1/bases/velero.io_volumesnapshotlocations.yaml',
'config/crd/v2alpha1/bases/velero.io_datauploads.yaml',
'config/crd/v2alpha1/bases/velero.io_datadownloads.yaml',
])
# default values
settings = {
"default_registry": "docker.io/velero",
"use_node_agent": False,
"default_registry": "",
"enable_restic": False,
"enable_debug": False,
"debug_continue_on_start": True, # Continue the velero process by default when in debug mode
"create_backup_locations": False,
@@ -36,9 +34,9 @@ k8s_yaml(kustomize('tilt-resources'))
k8s_yaml('tilt-resources/deployment.yaml')
if settings.get("enable_debug"):
k8s_resource('velero', port_forwards = '2345')
# TODO: Need to figure out how to apply port forwards for all node-agent pods
if settings.get("use_node_agent"):
k8s_yaml('tilt-resources/node-agent.yaml')
# TODO: Need to figure out how to apply port forwards for all restic pods
if settings.get("enable_restic"):
k8s_yaml('tilt-resources/restic.yaml')
if settings.get("create_backup_locations"):
k8s_yaml('tilt-resources/velero_v1_backupstoragelocation.yaml')
if settings.get("setup-minio"):
@@ -52,7 +50,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25 as tilt-helper
FROM golang:1.16.6 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
@@ -62,9 +60,9 @@ RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com
additional_docker_helper_commands = """
# Install delve to allow debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
RUN go get github.com/go-delve/delve/cmd/dlv
RUN wget -qO- https://dl.k8s.io/v1.25.2/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://dl.k8s.io/v1.19.2/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://get.docker.com | sh
"""
@@ -92,25 +90,25 @@ def get_debug_flag():
# Set up a local_resource build of the Velero binary. The binary is written to _tiltbuild/velero.
local_resource(
"velero_server_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["cmd", "internal", "pkg"],
ignore = ["pkg/cmd"],
)
local_resource(
"velero_local_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["internal", "pkg/cmd"],
)
local_resource(
"restic_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=linux GOARCH=amd64 GOARM="" RESTIC_VERSION=0.13.1 OUTPUT_DIR=_tiltbuild/restic ./hack/build-restic.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 RESTIC_VERSION=0.12.0 OUTPUT_DIR=_tiltbuild/restic ./hack/download-restic.sh',
)
# Note: we need a distro with a bash shell to exec into the Velero container
tilt_dockerfile_header = """
FROM ubuntu:22.04 as tilt
FROM ubuntu:focal as tilt
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -qq -y ca-certificates tzdata && rm -rf /var/lib/apt/lists/*
@@ -218,7 +216,7 @@ def enable_provider(provider):
# Note: we need a distro with a shell to do a copy of the plugin binary
tilt_dockerfile_header = """
FROM ubuntu:22.04 as tilt
FROM ubuntu:focal as tilt
WORKDIR /
COPY --from=tilt-helper /start.sh .
COPY --from=tilt-helper /restart.sh .


@@ -1,6 +1,6 @@
# Velero Assets
This folder contains logo images for Velero in gray (for light backgrounds) and white (for dark backgrounds like black t-shirts or dark mode!) horizontal and stacked… in .eps and .svg.
This folder contains logo images for Velero in gray (for light backgrounds) and white (for dark backgrounds like black tshirts or dark mode!) horizontal and stacked… in .eps and .svg.
## Some general guidelines for usage


@@ -154,7 +154,7 @@
* Skip completed jobs and pods when restoring (#463, @nrb)
* Set namespace correctly when syncing backups from object storage (#472, @skriss)
* When building on macOS, bind-mount volumes with delegated config (#478, @skriss)
* Add replica sets and daemonsets to cohabiting resources so they're not backed up twice (#482 #485, @skriss)
* Add replica sets and daemonsets to cohabitating resources so they're not backed up twice (#482 #485, @skriss)
* Shut down the Ark server gracefully on SIGINT/SIGTERM (#483, @skriss)
* Only back up resources that support GET and DELETE in addition to LIST and CREATE (#486, @nrb)
* Show a better error message when trying to get an incomplete restore's logs (#496, @nrb)


@@ -1,190 +0,0 @@
## v1.10.0
### 2022-11-23
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.10.0
### Container Image
`velero/velero:v1.10.0`
### Documentation
https://velero.io/docs/v1.10/
### Upgrading
https://velero.io/docs/v1.10/upgrade-to-1.10/
### Highlights
#### Unified Repository and Kopia integration
In this release, we introduced the Unified Repository architecture to build a data path where data movers and the backup repository are decoupled and a unified backup repository can serve various data movement activities.
In this release, we also deeply integrated Velero with Kopia: specifically, Kopia's uploader modules are isolated as a generic file system uploader, and Kopia's repository modules are encapsulated as the unified backup repository.
For more information, refer to the [design document](https://github.com/vmware-tanzu/velero/blob/v1.10.0/design/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md).
#### File system backup refactor
Velero's file system backup (a.k.a. pod volume backup, or formerly restic backup) is refactored as the first user of the Unified Repository architecture. Specifically, we added a new path, the Kopia path, besides the existing Restic path. While the Restic path is still available and set as the default, you can opt in to the Kopia path by specifying the `uploader-type` parameter at installation time. Meanwhile, you are free to restore from existing backups under either path; Velero dynamically switches to the correct path to process the restore.
Because of the new path, we renamed some modules and parameters; refer to the Breaking Changes section for more details.
For more information, visit the [file system backup document](https://velero.io/docs/v1.10/file-system-backup/) and [v1.10 upgrade guide document](https://velero.io/docs/v1.10/upgrade-to-1.10/).
Meanwhile, we've created a performance guide for both the Restic path and the Kopia path, which helps you choose between the two paths and provides best practices for configuring them under different scenarios. Please note that the results in the guide are based on our testing environments; you may get different results when testing in your own. For more information, visit the [performance guide document](https://velero.io/docs/v1.10/performance-guidance/).
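As a concrete illustration, opting in to the Kopia path at installation time is a one-flag change (a minimal sketch; it assumes the `uploader-type` parameter mentioned above is exposed on the CLI as `--uploader-type`, and the provider values are placeholders):

```bash
# Sketch: select the Kopia path for file system backup at install time.
# Omitting --uploader-type keeps the default Restic path.
velero install \
  --uploader-type=kopia \
  --use-node-agent \
  --provider aws \
  --bucket velero-backups   # provider and bucket are placeholders
```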
#### Plugin versioning V1 refactor
In this release, Velero moves the BackupItemAction, RestoreItemAction and VolumeSnapshotterAction plugins to version v1. This allows future plugin changes that do not support backward compatibility, and so is a preparation for various complex tasks, for example, data movement tasks.
For more information, refer to the [plugin versioning design document](https://github.com/vmware-tanzu/velero/blob/v1.10.0/design/plugin-versioning.md).
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Add credentials to volume snapshot locations
In this release, we enabled dedicated credential options for volume snapshot locations so that you can specify credentials per volume snapshot location, the same as for backup storage locations.
For more information, please visit the [locations document](https://velero.io/docs/v1.10/locations/).
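For instance, a hedged sketch of attaching a dedicated credential to a volume snapshot location, assuming the `--credential` flag mirrors the one available for backup storage locations; the secret name, key, and location values here are hypothetical:

```bash
# Create a Kubernetes secret holding the alternate credential, then
# reference it from the volume snapshot location.
kubectl -n velero create secret generic vsl-credentials \
  --from-file=aws-profile=./credentials-velero
velero snapshot-location create us-west-1 \
  --provider aws \
  --credential=vsl-credentials=aws-profile
```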
#### CSI snapshot enhancements
In this release we added several changes to enhance the robustness of CSI snapshot procedures, for example, some protection code for error handling, and a mechanism to skip exclusion checks so that CSI snapshot works with various backup resource filters.
#### Backup schedule pause/unpause
In this release, Velero supports pausing/unpausing a backup schedule during or after its creation. Specifically:
At creation time, you can specify the `paused` flag to the `velero schedule create` command; if so, you will create a paused schedule that will not run until it is unpaused.
After creation, you can run the `velero schedule pause` or `velero schedule unpause` command to pause/unpause a schedule.
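For example (a sketch; the schedule name and cron expression are placeholders, and the creation-time flag is assumed to be spelled `--paused`):

```bash
# Create a schedule that stays paused until explicitly unpaused
velero schedule create nightly --schedule="0 2 * * *" --paused

# Pause or resume an existing schedule
velero schedule pause nightly
velero schedule unpause nightly
```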
#### Runtime and dependencies
In order to fix CVEs, we changed Velero's runtime and dependencies as follows:
* Bump the Go runtime to v1.18.8.
* Bump some core dependent libraries to newer versions.
* Compile Restic (v0.13.1) with Go 1.18.8 instead of packaging the official binary.
#### Breaking changes
Due to the file system backup refactor, the following modules and parameters have been renamed in this release (see the command sketch after this list):
* The `restic` daemonset is renamed to `node-agent`
* The `resticRepository` CR is renamed to `backupRepository`
* The `velero restic repo` command is renamed to `velero repo`
* The `velero-restic-credentials` secret is renamed to `velero-repo-credentials`
* The `default-volumes-to-restic` parameter is renamed to `default-volumes-to-fs-backup`
* The `restic-timeout` parameter is renamed to `fs-backup-timeout`
* The `default-restic-prune-frequency` parameter is renamed to `default-repo-maintain-frequency`
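For example, the repository subcommand rename looks like this on the CLI:

```bash
# Pre-v1.10:
#   velero restic repo get
# v1.10 and later:
velero repo get
```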
#### Upgrade
Due to the major changes in file system backup, the old upgrade steps are no longer suitable. For the new upgrade steps, visit the [v1.10 upgrade guide document](https://velero.io/docs/v1.10/upgrade-to-1.10/).
#### Limitations/Known issues
In this release, the Kopia backup repository (and hence the Kopia path of file system backup) doesn't support self-signed certificates for S3-compatible storage. To track this problem, refer to this [Velero issue](https://github.com/vmware-tanzu/velero/issues/5123) or [Kopia issue](https://github.com/kopia/kopia/issues/1443).
Due to the code changes in Velero, a corresponding code change is required in the vSphere plugin, without which its functionality may be impacted. Therefore, if you are using the vSphere plugin in your workflow, please hold off on upgrading until issue [#485](https://github.com/vmware-tanzu/velero-plugin-for-vsphere/issues/485) is fixed in the vSphere plugin.
### All changes
* Restore ClusterBootstrap before Cluster otherwise a new default ClusterBootstrap object is created for the cluster (#5616, @ywk253100)
* Add compile restic binary for CVE fix (#5574, @qiuming-best)
* Fix controller problematic log output (#5572, @qiuming-best)
* Enhance the restore priorities list to support specifying low-prioritized resources that need to be restored last (#5535, @ywk253100)
* fix restic backup progress error (#5534, @qiuming-best)
* fix restic backup failure with self-signed certificate backend storage (#5526, @qiuming-best)
* Add credential store in backup deletion controller to support VSL credential. (#5521, @blackpiglet)
* Fix issue 5505: the pod volume backups/restores except the first one fail under the kopia path if "AZURE_CLOUD_NAME" is specified (#5512, @Lyndon-Li)
* After Pod Volume Backup/Restore refactor, remove all the unreasonable appearance of "restic" word from documents (#5499, @Lyndon-Li)
* Refactor Pod Volume Backup/Restore doc to match the new behavior (#5484, @Lyndon-Li)
* Remove redundancy code block left by #5388. (#5483, @blackpiglet)
* Issue fix 5477: create the common way to support S3 compatible object storages that work for both Restic and Kopia; Keep the resticRepoPrefix parameter for compatibility (#5478, @Lyndon-Li)
* Update the k8s.io dependencies to 0.24.0.
This also required an update to github.com/bombsimon/logrusr/v3.
Removed the `WithClusterName` method
as it is a "legacy field that was
always cleared by the system and never used" as per upstream k8s
https://github.com/kubernetes/apimachinery/blob/release-1.24/pkg/apis/meta/v1/types.go#L257-L259 (#5471, @kcboyle)
* Add v1.10 velero upgrade doc (#5468, @qiuming-best)
* Upgrade velero docker image to use go 1.18 and upgrade golangci-lint to 1.45.0 (#5459, @Lyndon-Li)
* Add VolumeSnapshot client back. (#5449, @blackpiglet)
* Change subcommand `velero restic repo` to `velero repo` (#5446, @allenxu404)
* Remove irrational "Restic" names in Velero code after the PVBR refactor (#5444, @Lyndon-Li)
* moved RIA execute input/output structs back to velero package (#5441, @sseago)
* Rename Velero pod volume restore init helper from "velero-restic-restore-helper" to "velero-restore-helper" (#5432, @Lyndon-Li)
* Skip the exclusion check for additional resources returned by BIA (#5429, @reasonerjt)
* Change B/R describe CLI to support Kopia (#5412, @allenxu404)
* Add nil check before execution of csi snapshot delete (#5401, @shubham-pampattiwar)
* update velero using klog to version v2.9.0 (#5396, @blackpiglet)
* Fix Test_prepareBackupRequest_BackupStorageLocation UT failure. (#5394, @blackpiglet)
* Rename Velero daemonset from "restic" to "node-agent" (#5390, @Lyndon-Li)
* Add some corner cases checking for CSI snapshot in backup controller. (#5388, @blackpiglet)
* Fix issue 5386: Velero provides a full URL as the S3Url while the underlying minio client only accepts the host part of the URL as the endpoint and the schema should be specified separately. (#5387, @Lyndon-Li)
* Fix restore error with flag namespace-mappings (#5377, @qiuming-best)
* Pod Volume Backup/Restore Refactor: Rename parameters in CRDs and commands to remove "Restic" word (#5370, @Lyndon-Li)
* Added backupController's UT to test the prepareBackupRequest() method BackupStorageLocation processing logic (#5362, @niulechuan)
* Fix a repoEnsurer problem introduced by the refactor - The repoEnsurer didn't check "" state of BackupRepository, as a result, the function GetBackupRepository always returns without an error even though ensureReady is specified. (#5359, @Lyndon-Li)
* Add E2E test for schedule backup (#5355, @danfengliu)
* Add useOwnerReferencesInBackup field doc for schedule. (#5353, @cleverhu)
* Clarify the help message for the default value of parameter --snapshot-volumes, when it's not set. (#5350, @blackpiglet)
* Fix restore cmd extraflag overwrite bug (#5347, @qiuming-best)
* Resolve gopkg.in/yaml.v3 vulnerabilities by upgrading gopkg.in/yaml.v3 to v3.0.1 (#5344, @kaovilai)
* Increase ensure restic repository timeout to 5m (#5335, @shubham-pampattiwar)
* Add opt-in and opt-out PersistentVolume backup to E2E tests (#5331, @danfengliu)
* Cancel downloadRequest when timeout without downloadURL (#5329, @kaovilai)
* Fix PVB finds wrong parent snapshot (#5322, @qiuming-best)
* Fix issue 4874 and 4752: check the daemonset pod is running in the node where the workload pod resides before running the PVB for the pod (#5319, @Lyndon-Li)
* plugin versioning v1 refactor for VolumeSnapshotter (#5318, @sseago)
* Change the status of restore to completed from partially failed when restore empty backup (#5314, @allenxu404)
* RestoreItemAction v1 refactoring for plugin api versioning (#5312, @sseago)
* Refactor the repoEnsurer code to use controller runtime client and wrap some common BackupRepository operations to share with other modules (#5308, @Lyndon-Li)
* Remove snapshot related lister, informer and client from backup controller. (#5299, @jxun)
* Remove github.com/apex/log logger. (#5297, @blackpiglet)
* change CSISnapshotTimeout from pointer to normal variables. (#5294, @cleverhu)
* Optimize code for restore exists resources. (#5293, @cleverhu)
* Add more detailed comments for labels columns. (#5291, @cleverhu)
* Add backup status checking in schedule controller. (#5283, @blackpiglet)
* Add changes for problems/enhancements found during smoking test for Kopia pod volume backup/restore (#5282, @Lyndon-Li)
* Support pause/unpause schedules (#5279, @ywk253100)
* plugin/clientmgmt refactoring for BackupItemAction v1 (#5271, @sseago)
* Don't move velero v1 plugins to new proto dir (#5263, @sseago)
* Fill gaps for Kopia path of PVBR: integrate Repo Manager with Unified Repo; pass UploaderType to PVBR backupper and restorer; pass RepositoryType to BackupRepository controller and Repo Ensurer (#5259, @Lyndon-Li)
* Add csiSnapshotTimeout for describe backup (#5252, @cleverhu)
* equip gc controller with configurable frequency (#5248, @allenxu404)
* Fix nil pointer panic when restoring StatefulSets (#5247, @divolgin)
* Controller refactor code modifications. (#5241, @jxun)
* Fix edge cases for already exists resources (#5239, @shubham-pampattiwar)
* Check for empty ns list before checking nslist[0] (#5236, @sseago)
* Remove reference to non-existent doc (#5234, @reasonerjt)
* Add changes for Kopia Integration: Kopia Lib - method implementation. Add changes to write Kopia Repository logs to Velero log (#5233, @Lyndon-Li)
* Add changes for Kopia Integration: Kopia Lib - initialize Kopia repo (#5231, @Lyndon-Li)
* Uploader Implementation: Kopia backup and restore (#5221, @qiuming-best)
* Migrate backup sync controller from code-generator to kubebuilder. (#5218, @jxun)
* check vsc null pointer (#5217, @lilongfeng0902)
* Refactor GCController with kubebuilder (#5215, @allenxu404)
* Uploader Implementation: Restic backup and restore (#5214, @qiuming-best)
* Add parameter "uploader-type" to velero server (#5212, @reasonerjt)
* Add annotation "pv.kubernetes.io/migrated-to" for CSI checking. (#5181, @jxun)
* Add changes for Kopia Integration: Unified Repository Provider - method implementation (#5179, @Lyndon-Li)
* Treat namespaces with exclude label as excludedNamespaces
Related issue: #2413 (#5178, @allenxu404)
* Reduce CRD size. (#5174, @jxun)
* Fix restic backups to multiple backup storage locations bug (#5172, @qiuming-best)
* Add changes for Kopia Integration: Unified Repository Provider - Repo Password (#5167, @Lyndon-Li)
* Skip registering "crd-remap-version" plugin when feature flag "EnableAPIGroupVersions" is set (#5165, @reasonerjt)
* Kopia uploader integration on shim progress uploader module (#5163, @qiuming-best)
* Add labeled and unlabeled events for PR changelog check action. (#5157, @jxun)
* VolumeSnapshotLocation refactor with kubebuilder. (#5148, @jxun)
* Delay CA file deletion in PVB controller. (#5145, @jxun)
* This commit splits the pkg/restic package into several packages to support Kopia integration works (#5143, @ywk253100)
* Kopia Integration: Add the Unified Repository Interface definition. Kopia Integration: Add the changes for Unified Repository storage config. Related Issues; #5076, #5080 (#5142, @Lyndon-Li)
* Update the CRD for kopia integration (#5135, @reasonerjt)
* Let "make shell xxx" respect GOPROXY (#5128, @reasonerjt)
* Modify BackupStoreGetter to avoid BSL spec changes (#5122, @sseago)
* Dump stack trace when the plugin server handles panic (#5110, @reasonerjt)
* Make CSI snapshot creation timeout configurable. (#5104, @jxun)
* Fix bsl validation bug: the BSL is validated continually and doesn't respect the validation period configured (#5101, @ywk253100)
* Exclude "csinodes.storage.k8s.io" and "volumeattachments.storage.k8s.io" from restore by default. (#5064, @jxun)
* Move 'velero.io/exclude-from-backup' label string to const (#5053, @niulechuan)
* Modify Github actions. (#5052, @jxun)
* Fix typo in doc, in https://velero.io/docs/main/restore-reference/ "Restore order" section, "Mamespace" should be "Namespace". (#5051, @niulechuan)
* Delete opened issues triage action. (#5041, @jxun)
* When spec.RestoreStatus is empty, don't restore status (#5008, @sseago)
* Added DownloadTargetKindCSIBackupVolumeSnapshots for retrieving the signed URL to download only the `<backup name>`-csi-volumesnapshots.json.gz and DownloadTargetKindCSIBackupVolumeSnapshotContents to download only `<backup name>`-csi-volumesnapshotcontents.json.gz in the DownloadRequest CR structure. These files are already present in the backup layout. (#4980, @anshulahuja98)
* Refactor BackupItemAction proto and related code to backupitemaction/v1 package. This is part of implementation of the plugin version design https://github.com/vmware-tanzu/velero/blob/main/design/plugin-versioning.md (#4943, @phuongatemc)
* Unified Repository Design (#4926, @Lyndon-Li)
* Add credentials to volume snapshot locations (#4864, @sseago)

## v1.11
### 2023-04-07
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.11.0
### Container Image
`velero/velero:v1.11.0`
### Documentation
https://velero.io/docs/v1.11/
### Upgrading
https://velero.io/docs/v1.11/upgrade-to-1.11/
### Highlights
#### BackupItemAction v2
This feature implements BackupItemAction v2. BIA v2 has two new methods, Progress() and Cancel(), and modifies the Execute() return value.
The API change is needed to facilitate long-running BackupItemAction plugin actions that may not be complete when the Execute() method returns. This allows long-running BackupItemAction plugin actions to continue in the background while Velero moves on to the next plugin or the next item.
#### RestoreItemAction v2
This feature implements RestoreItemAction v2. RIA v2 has three new methods, Progress(), Cancel(), and AreAdditionalItemsReady(), and it modifies the RestoreItemActionExecuteOutput() structure in the RIA return value.
The Progress() and Cancel() methods are needed to facilitate long-running RestoreItemAction plugin actions that may not be complete when the Execute() method returns. This allows long-running RestoreItemAction plugin actions to continue in the background while Velero moves on to the next plugin or the next item. The AreAdditionalItemsReady() method is needed to allow plugins to tell Velero to wait until the returned additional items have been restored and are ready for use in the cluster before restoring the current item.
#### Plugin Progress Monitoring
This is intended as a replacement for the previously-approved Upload Progress Monitoring design ([Upload Progress Monitoring](https://github.com/vmware-tanzu/velero/blob/main/design/upload-progress.md)) to expand the supported use cases beyond snapshot upload to include what was previously called Async Backup/Restore Item Actions.
#### Flexible resource policy that can filter volumes to skip in the backup
This feature provides a flexible policy to filter volumes out of a backup without requiring patching any labels or annotations onto the pods or volumes. This policy is configured as a k8s ConfigMap maintained by the users themselves, and it can be extended to more scenarios in the future. For now, the policy can rule volumes out of a backup based on the CSI driver, NFS settings, volume size, and StorageClass settings. Please refer to the [policy API design](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/handle-backup-of-volumes-by-resources-filters.md#api-design) for the policy's ConfigMap format. It is not guaranteed to work with unofficial third-party plugins, as they may not follow the existing backup workflow code logic of Velero.
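A minimal sketch of such a policy, assuming the `version`/`volumePolicies` layout and the `skip` action described in the design doc linked above; the ConfigMap name, condition values, and the `--resource-policies-configmap` flag spelling are assumptions to verify against the docs:

```bash
# Sketch: skip small volumes and NFS volumes during backup
cat <<'EOF' >policy.yaml
version: v1
volumePolicies:
- conditions:
    capacity: "0,1Gi"   # volumes between 0 and 1Gi
  action:
    type: skip
- conditions:
    nfs: {}             # any NFS volume
  action:
    type: skip
EOF
kubectl create configmap -n velero volume-filter-policy --from-file=policy.yaml

velero backup create my-backup --resource-policies-configmap volume-filter-policy
```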
#### Resource Filters that can distinguish cluster scope and namespace scope resources
This feature adds four new resource filters for backup. The new filters are separated into cluster scope and namespace scope. Before this feature, Velero could not filter cluster-scoped resources precisely. This feature provides that ability and refactors the existing resource filter parameters.
#### Add a parameter for setting the Velero server connection with the k8s API server's timeout
In Velero, some code paths need to communicate with the k8s API server. Before v1.11, these code paths used hard-coded timeout settings. This feature adds a resource-timeout parameter to the velero server binary to make it configurable.
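Assuming the flag is spelled `--resource-timeout` (the 15m value below is illustrative), a sketch:

```bash
# Raise the timeout for Velero server <-> k8s API server interactions;
# this flag is normally set on the server container's args at install time.
velero server --resource-timeout=15m
```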
#### Add resource list in the output of the restore describe command
Before this feature, Velero restore didn't provide a restored resources list the way Velero backup does, so it was not convenient for users to learn what was restored. This feature adds the resource list along with the handling result for each resource (created, updated, failed, or skipped).
#### Refactor controllers with controller-runtime
In v1.11, the Backup controller and Restore controller are refactored with controller-runtime. As of v1.11, all Velero controllers use the controller-runtime framework.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.19.8.
* Bump several dependent libraries to new versions.
* Compile Restic (v0.15.0) with Golang v1.19.8 instead of packaging the official binary.
### Breaking changes
* The Velero CSI plugin now determines whether to restore a volume's data from snapshots based on the restore's restorePVs setting. Before v1.11, the CSI plugin didn't check the restorePVs parameter setting.
### Limitations/Known issues
* The Flexible resource policy that can filter volumes to skip in the backup is not guaranteed to work on unofficial third-party plugins because the plugins may not follow the existing backup workflow code logic of Velero. The ConfigMap used as the policy is supposed to be maintained by users.
### All Changes
* Modify new scope resource filters name. (#6089, @blackpiglet)
* Make Velero not exits when EnableCSI is on and CSI snapshot not installed (#6062, @blackpiglet)
* Restore Services before Clusters (#6057, @ywk253100)
* Fixed backup deletion bug related to async operations (#6041, @sseago)
* Update Golang version to v1.19 for branch main. (#6039, @blackpiglet)
* Fix issue #5972, don't assume errorField as error type when dealing with logger.WithError (#6028, @Lyndon-Li)
* distinguish between New and InProgress operations (#6012, @sseago)
* Modify golangci.yaml file. Resolve found lint issues. (#6008, @blackpiglet)
* Remove Reference of itemsnapshotter (#5997, @reasonerjt)
* minor fixes for backup_operations_controller (#5996, @sseago)
* RIAv2 async operations controller work (#5993, @sseago)
* Follow-on fixes for BIAv2 controller work (#5971, @sseago)
* Refactor backup controller based on the controller-runtime framework. (#5969, @qiuming-best)
* Fix client wait problem after async operation change, velero backup/restore --wait should check a full list of the terminal status (#5964, @Lyndon-Li)
* Fix issue #5935, refactor the logics for backup/restore persistent log, so as to remove the contest to gzip writer (#5956, @Lyndon-Li)
* Switch the base image to distroless/base-nossl-debian11 to reduce the CVE triage efforts (#5939, @ywk253100)
* Wait for additional items to be ready before restoring current item (#5933, @sseago)
* Add configurable server setting for default timeouts (#5926, @eemcmullan)
* Add warning/error result to cmd `velero backup describe` (#5916, @allenxu404)
* Fix Dependabot alerts. Use 1.18 and 1.19 golang instead of patch image in dockerfile. Add release-1.10 and release-1.9 in Trivy daily scan. (#5911, @blackpiglet)
* Update client-go to v0.25.6 (#5907, @kaovilai)
* Limit the concurrent number for backup's VolumeSnapshot operation. (#5900, @blackpiglet)
* Fix goreleaser issue for resolving tags and updated it's version. (#5899, @anshulahuja98)
* This is to fix issue 5881, enhance the PVB tracker in two modes, Track and Taken (#5894, @Lyndon-Li)
* Add labels for velero installed namespace to support PSA. (#5873, @blackpiglet)
* Add restored resource list in the restore describe command (#5867, @ywk253100)
* Add a json output to cmd velero backup describe (#5865, @allenxu404)
* Make restore controller adopting the controller-runtime framework. (#5864, @blackpiglet)
* Replace k8s.io/apimachinery/pkg/util/clock with k8s.io/utils/clock (#5859, @hezhizhen)
* Restore finalizer and managedFields of metadata during the restoration (#5853, @ywk253100)
* BIAv2 async operations controller work (#5849, @sseago)
* Add secret restore item action to handle service account token secret (#5843, @ywk253100)
* Add new resource filters can separate cluster and namespace scope resources. (#5838, @blackpiglet)
* Correct PVB/PVR Failed Phase patching during startup (#5828, @kaovilai)
* bump up golang net to fix CVE-2022-41721 (#5812, @Lyndon-Li)
* Update CRD descriptions for SnapshotVolumes and restorePVs (#5807, @shubham-pampattiwar)
* Add mapped selected-node existence check (#5806, @blackpiglet)
* Add option "--service-account-name" to install cmd (#5802, @reasonerjt)
* Enable staticcheck linter. (#5788, @blackpiglet)
* Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best)
* Bump up Restic version to 0.15.0 (#5784, @qiuming-best)
* Add File system backup related metrics to Grafana dashboard
- Add metrics backup_warning_total for record of total warnings
- Add metrics backup_last_status for record of last status of the backup (#5779, @allenxu404)
* Design for Handling backup of volumes by resources filters (#5773, @qiuming-best)
* Add PR container build action, which will not push image. Add GOARM parameter. (#5771, @blackpiglet)
* Fix issue 5458, track pod volume backup until the CR is submitted in case it is skipped half way (#5769, @Lyndon-Li)
* Fix issue 5226, invalidate the related backup repositories whenever the backup storage info change in BSL (#5768, @Lyndon-Li)
* Add Restic builder in Dockerfile, and keep the used built Golang image version in accordance with upstream Restic. (#5764, @blackpiglet)
* Fix issue 5043, after the restore pod is scheduled, check if the node-agent pod is running in the same node. (#5760, @Lyndon-Li)
* Remove restore controller's redundant client. (#5759, @blackpiglet)
* Define itemoperations.json format and update DownloadRequest API (#5752, @sseago)
* Add Trivy nightly scan. (#5740, @jxun)
* Fix issue 5696, check if the repo is still openable before running the prune and forget operation, if not, try to reconnect the repo (#5715, @Lyndon-Li)
* Fix error with Restic backup empty volumes (#5713, @qiuming-best)
* new backup and restore phases to support async plugin operations:
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed (#5710, @sseago)
* Prevent nil panic on exec restore hooks (#5675, @dymurray)
* Fix CVEs scanned by trivy (#5653, @qiuming-best)
* Publish backupresults json to enhance error info during backups. (#5576, @anshulahuja98)
* RestoreItemAction v2 API implementation (#5569, @sseago)
* add new RestoreItemAction of "velero.io/change-image-name" to handle the issue mentioned at #5519 (#5540, @wenterjoy)
* BackupItemAction v2 API implementation (#5442, @sseago)
* Proposal to separate resource filter into cluster scope and namespace scope (#5333, @blackpiglet)

## v1.12
### 2023-08-18
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.12.0
### Container Image
`velero/velero:v1.12.0`
### Documentation
https://velero.io/docs/v1.12/
### Upgrading
https://velero.io/docs/v1.12/upgrade-to-1.12/
### Highlights
#### CSI Snapshot Data Movement
CSI Snapshot Data Movement refers to backing up CSI snapshot data from the volatile and limited production environment into durable, heterogeneous, and scalable backup storage in a consistent manner, and restoring the data to volumes in the original or an alternative environment.
CSI Snapshot Data Movement is useful in the following scenarios:
* For on-premises users, the storage usually doesn't support durable snapshots, so it is impossible/less efficient/cost-ineffective to keep volume snapshots in the storage. This feature helps move the snapshot data to storage with lower cost and larger scale for long-term preservation.
* For public cloud users, this feature helps users fulfill a multi-cloud strategy. It allows users to back up volume snapshots from one cloud provider and preserve or restore the data to another cloud provider. Users are then free to flow their business data across cloud providers based on Velero backup and restore.
CSI Snapshot Data Movement is built according to the Volume Snapshot Data Movement design ([Volume Snapshot Data Movement](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md)). More details can be found in the design.
#### Resource Modifiers
In many use cases, customers often need to substitute specific values in Kubernetes resources during the restoration process like changing the namespace, changing the storage class, etc.
To address this need, Resource Modifiers (also known as JSON Substitutions) offer a generic solution in the restore workflow. It allows the user to define filters for specific resources and then specify a JSON patch (operator, path, value) to apply to the resource. This feature simplifies the process of making substitutions without requiring the implementation of a new RestoreItemAction plugin. More details can be found in Volume Snapshot Resource Modifiers design ([Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/json-substitution-action-design.md)).
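A minimal sketch of such a ConfigMap and of referencing it on a restore; the rule layout follows the design doc linked above as of v1.12 (note that a later release renamed `groupKind` to `groupResource`), and all names, regexes, and the `--resource-modifier-configmap` flag spelling are assumptions to verify:

```bash
# Sketch: rewrite the storage class of matching PVCs during restore
cat <<'EOF' >modifiers.yaml
version: v1
resourceModifierRules:
- conditions:
    groupKind: persistentvolumeclaims
    resourceNameRegex: "^mysql.*$"
    namespaces: ["ns1"]
  patches:
  - operation: replace
    path: "/spec/storageClassName"
    value: "premium"
EOF
kubectl create configmap -n velero restore-modifiers --from-file=modifiers.yaml

velero restore create --from-backup my-backup \
  --resource-modifier-configmap restore-modifiers
```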
#### Multiple VolumeSnapshotClasses
Prior to version 1.12, the Velero CSI plugin would choose the VolumeSnapshotClass in the cluster based on matching driver names and the presence of the "velero.io/csi-volumesnapshot-class" label. However, this approach proved inadequate for many user scenarios.
With the introduction of version 1.12, Velero now offers support for multiple VolumeSnapshotClasses in the CSI Plugin, enabling users to select a specific class for a particular backup. More details can be found in Multiple VolumeSnapshotClasses design ([Multiple VolumeSnapshotClasses](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/multiple-csi-volumesnapshotclass-support.md)).
#### Restore Finalizer
Before v1.12, the restore controller would only delete restore resources but wouldn't delete restore data from the backup storage location when the command `velero restore delete` was executed. The only time Velero deleted restore data from the backup storage location was when the associated backup was deleted.
In this version, Velero introduces a finalizer that ensures the cleanup of all associated data for restores when running the command `velero restore delete`.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.20.7.
* Bump several dependent libraries to new versions.
* Bump Kopia to v0.13.
### Breaking changes
* Prior to v1.12, the parameter `uploader-type` for Velero installation had a default value of "restic". However, starting from this version, the default value has been changed to "kopia". This means that Velero will now use Kopia as the default path for file system backup.
* The ways of setting the CSI snapshot time have changed in v1.12. First, the synchronous waiting time for creating a snapshot handle in the CSI plugin is changed from a fixed 10 minutes to backup.Spec.CSISnapshotTimeout. Second, the asynchronous waiting time for the VolumeSnapshot and VolumeSnapshotContent status to turn `ReadyToUse` now uses the operation's timeout, whose default value is 4 hours.
* As of [Velero helm chart v4.0.0](https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-4.0.0), the chart supports multiple BSLs and VSLs, and the BSL and VSL values have changed from a map into a slice; [this breaking change](https://github.com/vmware-tanzu/helm-charts/pull/413) is not backward compatible, so it is best to change the BSL and VSL configuration into slices before upgrading.
### Limitations/Known issues
* The Azure plugin supports the Azure AD Workload Identity approach, but it only works for Velero native snapshots; it cannot support filesystem backup and snapshot data mover scenarios.
### All Changes
* Fixes #6498. Get resource client again after restore actions in case resource's gv is changed. This is an improvement of pr #6499, to support group changes. A group change usually happens in a restore plugin which is used for resource conversion: convert a resource from a not supported gv to a supported gv (#6634, @27149chen)
* Add API support for volMode block, only error for now. (#6608, @shawn-hurley)
* Fix how the AWS credentials are obtained from configuration (#6598, @aws_creds)
* Add performance E2E test (#6569, @qiuming-best)
* Non default s3 credential profiles work on Unified Repository Provider (kopia) (#6558, @kaovilai)
* Fix issue #6571, fix the problem for restore item operation to set the errors correctly so that they can be recorded by Velero restore and then reflect the correct status for Velero restore. (#6594, @Lyndon-Li)
* Fix issue 6575, flush the repo after deleting the snapshot; otherwise, the changes (deleting the repo snapshot) cannot be committed to the repo. (#6587, @Lyndon-Li)
* Delete moved snapshots when the backup is deleted (#6547, @reasonerjt)
* check if restore crd exist before operating restores (#6544, @allenxu404)
* Remove PVC's selector in backup's PVC action. (#6481, @blackpiglet)
* Delete the expired deletebackuprequests that are stuck in "InProgress" (#6476, @reasonerjt)
* Fix issue #6534, reset PVB CR's StorageLocation to the latest one during backup sync as same as the backup CR. Also fix similar problem with DataUploadResult for data mover restore. (#6533, @Lyndon-Li)
* Fix issue #6519. Restrict the client manager of node-agent server to include only Velero resources from the server's namespace, otherwise, the controllers will try to reconcile CRs from all the installed Velero namespaces. (#6523, @Lyndon-Li)
* Track the skipped PVC and print the summary in backup log (#6496, @reasonerjt)
* Add restore finalizer to clean up external resources (#6479, @allenxu404)
* fix: Typos and add more spell checking rules to CI (#6415, @mateusoliveira43)
* Add missing CompletionTimestamp and metrics when restore moved into terminal phase in restoreOperationsReconciler (#6397, @Nutrymaco)
* Add support for resource Modifications in the restore flow. Also known as JSON Substitutions. (#6452, @anshulahuja98)
* Remove dependency of the legacy client code from pkg/cmd directory part 2 (#6497, @blackpiglet)
* Add data upload and download metrics (#6493, @allenxu404)
* Fix issue 6490, If a backup/restore has multiple async operations and one operation fails while others are still in-progress, when all the operations finish, the backup/restore will be set as Completed falsely (#6491, @Lyndon-Li)
* Velero Plugins no longer need kopia indirect dependency in their go.mod (#6484, @kaovilai)
* Remove dependency of the legacy client code from pkg/cmd directory (#6469, @blackpiglet)
* Add support for OpenStack CSI drivers topology keys (#6464, @openstack-csi-topology-keys)
* Add exit code log and possible memory shortage warning log for Restic command failure. (#6459, @blackpiglet)
* Modify DownloadRequest controller logic (#6433, @blackpiglet)
* Add data download controller for data mover (#6436, @qiuming-best)
* Fix hook filter display issue for backup describer (#6434, @allenxu404)
* Retrieve DataUpload into backup result ConfigMap during volume snapshot restore. (#6410, @blackpiglet)
* Design to add support for Multiple VolumeSnapshotClasses in CSI Plugin. (#5774, @anshulahuja98)
* Clarify the deletion frequency for gc controller (#6414, @allenxu404)
* Add unit tests for pkg/archive (#6396, @allenxu404)
* Add UT for pkg/discovery (#6394, @qiuming-best)
* Add UT for pkg/util (#6368, @Lyndon-Li)
* Add the code for data mover restore expose (#6357, @Lyndon-Li)
* Restore Endpoints before Services (#6315, @ywk253100)
* Add warning message for volume snapshotter in data mover case. (#6377, @blackpiglet)
* Add unit test for pkg/uploader (#6374, @qiuming-best)
* Change kopia as the default path of PVB (#6370, @Lyndon-Li)
* Do not persist VolumeSnapshot and VolumeSnapshotContent for snapshot DataMover case. (#6366, @blackpiglet)
* Add data mover related options in CLI (#6365, @ywk253100)
* Add dataupload controller (#6337, @qiuming-best)
* Add UT cases for pkg/podvolume (#6336, @Lyndon-Li)
* Remove Wait VolumeSnapshot to ReadyToUse logic. (#6327, @blackpiglet)
* Enhance the code because of #6297, the return value of GetBucketRegion is not recorded, as a result, when it fails, we have no way to get the cause (#6326, @Lyndon-Li)
* Skip updating status when CRDs are restored (#6325, @reasonerjt)
* Include namespaces needed by namespaced-scope resources in backup. (#6320, @blackpiglet)
* Update metrics when backup failed with validation error (#6318, @ywk253100)
* Add the code for data mover backup expose (#6308, @Lyndon-Li)
* Fix a PVR issue for generic data path -- the namespace remap was not honored, and enhance the code for better error handling (#6303, @Lyndon-Li)
* Add default values for defaultItemOperationTimeout and itemOperationSyncFrequency in velero CLI (#6298, @shubham-pampattiwar)
* Add UT cases for pkg/repository (#6296, @Lyndon-Li)
* Fix issue #5875. Since Kopia has supported IAM, Velero should not require static credentials all the time (#6283, @Lyndon-Li)
* Fixed a bug where status.progress is not getting updated for backups. (#6276, @kkothule)
* Add code change for async generic data path that is used by both PVB/PVR and data mover (#6226, @Lyndon-Li)
* Add data mover CRD under v2alpha1, include DataUpload CRD and DataDownload CRD (#6176, @Lyndon-Li)
* Remove any dataSource or dataSourceRef fields from PVCs in PVC BIA for cases of
prior PVC restores with CSI (#6111, @eemcmullan)
* Add the design for Volume Snapshot Data Movement (#5968, @Lyndon-Li)
* Fix issue #5123, Kopia repository supports self-cert CA for S3 compatible storage. (#6268, @Lyndon-Li)
* Bump up Kopia to v0.13 (#6248, @Lyndon-Li)
* log volumes to backup to help debug why `IsPodRunning` is called. (#6232, @kaovilai)
* Enable errcheck linter and resolve found issues (#6208, @blackpiglet)
* Enable more linters, and remove mal-functioned milestoned issue action. (#6194, @blackpiglet)
* Enable stylecheck linter and resolve found issues. (#6185, @blackpiglet)
* Fix issue #6182. If pod is not running, don't treat it as an error, let it go and leave a warning. (#6184, @Lyndon-Li)
* Enable staticcheck and resolve found issues (#6183, @blackpiglet)
* Enable linter revive and resolve found errors: part 2 (#6177, @blackpiglet)
* Enable linter revive and resolve found errors: part 1 (#6173, @blackpiglet)
* Fix usestdlibvars and whitespace linters issues. (#6162, @blackpiglet)
* Update Golang to v1.20 for main. (#6158, @blackpiglet)
* Make GetPluginConfig accessible from other packages. (#6151, @tkaovila)
* Ignore not found error during patching managedFields (#6136, @ywk253100)
* Fix the goreleaser issues and add a new goreleaser action (#6109, @blackpiglet)

## v1.13
### 2024-01-10
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.0
### Container Image
`velero/velero:v1.13.0`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### Highlights
#### Resource Modifier Enhancement
Velero introduced the Resource Modifiers in v1.12.0. This feature allows users to specify a ConfigMap with a set of rules to modify the resources during restoration. However, only the JSON Patch is supported when creating the rules, and JSON Patch has some limitations, which cannot cover all use cases. In v1.13.0, Velero adds new support for JSON Merge Patch and Strategic Merge Patch, which provide more power and flexibility and allow users to use the same ConfigMap to apply patches on the resources. More design details can be found in [Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/merge-patch-and-strategic-in-resource-modifier.md) design. For instructions on how to use the feature, please refer to the [Resource Modifiers](https://velero.io/docs/v1.13/restore-resource-modifiers/) doc.
#### Node-Agent Concurrency
Velero data movement activities from fs-backups and CSI snapshot data movements run in the Velero node-agent, so they may be hosted by every node in the cluster and consume resources (i.e. CPU, memory, network bandwidth) there. With v1.13, users can configure how many data movement activities (a.k.a. loads) run on each node, globally or per node, so that they can better balance the performance of Velero data movement activities against resource consumption in the cluster. For more information, check the [Node-Agent Concurrency](https://velero.io/docs/v1.13/node-agent-concurrency/) document.
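A sketch of such a configuration, assuming the ConfigMap is named `node-agent-config` in the Velero namespace and uses a `loadConcurrency` key (the config name mentioned in the changelog below); the JSON layout and node name are placeholders to check against the linked document:

```bash
# Sketch: cap data movement loads at 2 per node, with a higher cap on one node
cat <<'EOF' >node-agent-config.json
{
    "loadConcurrency": {
        "globalConfig": 2,
        "perNodeConfig": [
            {
                "nodeSelector": {
                    "matchLabels": {"kubernetes.io/hostname": "node-1"}
                },
                "number": 4
            }
        ]
    }
}
EOF
kubectl create configmap -n velero node-agent-config --from-file=node-agent-config.json
```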
#### Parallel Files Upload Options
Velero now supports configurable options for parallel file uploads when using the Kopia uploader for fs-backups or CSI snapshot data movements, which makes it possible to speed up backups.
For more information, please check [Here](https://velero.io/docs/v1.13/backup-reference/#parallel-files-upload).
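A sketch, assuming the backup flag is spelled `--parallel-files-upload` (the value is illustrative):

```bash
# Upload files with a parallelism of 10 for this backup's Kopia uploads
velero backup create fast-backup --parallel-files-upload 10
```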
#### Write Sparse Files Options
When using fs-restore or CSI snapshot data movement, Velero now supports writing sparse files during restore. For more information, please check [Here](https://velero.io/docs/v1.13/restore-reference/#write-sparse-files).
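A sketch, assuming the restore flag is spelled `--write-sparse-files`:

```bash
# Restore volume data, writing sparse files where possible
velero restore create --from-backup my-backup --write-sparse-files
```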
#### Backup Describe
In v1.13, the Backup Volumes section is added to the velero backup describe command output. This section describes information for all the volumes included in the backup across the various backup types, i.e. native snapshot, fs-backup, CSI snapshot, and CSI snapshot data movement. In particular, velero backup describe now supports showing information about CSI snapshot data movements, which was not supported in v1.12.
Additionally, the backup describe command no longer checks the EnableCSI feature gate on the client side, so if a backup has volumes with CSI snapshots or CSI snapshot data movement, the command always shows the corresponding information in its output.
#### Backup's new VolumeInfo metadata
A new metadata file is created in the backup repository's backup-name sub-directory to store information about the PVCs and PVs included in the backup. The information includes the method used to back up the PVC and PV data, snapshot information, and status. The VolumeInfo metadata file determines how the PV resource should be restored. Velero downstream software can also use this metadata file to get a summary of the backup's volume data.
#### Enhancement for CSI Snapshot Data Movements when Velero Pod Restart
Enhancements have been implemented so that when the Velero server or node-agent pods restart due to exceptional circumstances during backup and restore operations, the in-progress backup or restore is not left stuck or interrupted.
#### New status fields added to show hook execution details
Hook execution status is now included in the backup/restore CR status and displayed in the backup/restore describe command output. Specifically, it shows the number of hooks that attempted to execute under the HooksAttempted field and the number of hooks that failed to execute under the HooksFailed field.
#### AWS SDK Bump Up
Bump up AWS SDK for Go to version 2, which offers significant performance improvements in CPU and memory utilization over version 1.
#### Azure AD/Workload Identity Support
Azure AD/Workload Identity is the recommended approach for authenticating with Azure services/AKS. Velero introduced support for Azure AD/Workload Identity on the Velero Azure plugin side in previous releases, and in v1.13.0 Velero adds support for Kopia operations (file system backup/data mover/etc.) with Azure AD/Workload Identity.
#### Runtime and dependencies
To fix CVEs and keep pace with Golang, Velero made changes as follows:
* Bump Golang runtime to v1.21.6.
* Bump several dependent libraries to new versions.
* Bump Kopia to v0.15.0.
### Breaking changes
* Backup describe command: due to the backup describe output enhancement, some existing information (i.e. the output for native snapshots, CSI snapshots, and fs-backup) has been moved to the Backup Volumes section with some format changes.
* API type changes: changes the field [DataMoverConfig](https://github.com/vmware-tanzu/velero/blob/v1.13.0/pkg/apis/velero/v2alpha1/data_upload_types.go#L54) in DataUploadSpec from `*map[string]string` to `map[string]string`
* Velero install command: due to issue [#7264](https://github.com/vmware-tanzu/velero/issues/7264), v1.13.0 introduces a breaking change that makes the informer cache enabled by default, to keep the actual behavior consistent with the help message (the informer cache was disabled by default before the change).
### Limitations/Known issues
* The backup's VolumeInfo metadata doesn't have its information updated by the async operations. This function may be supported in the v1.14 release.
### Note
* Velero introduces the informer cache, which is enabled by default. The informer cache improves restore performance but may cause higher memory consumption. If you get OOM errors, increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero.
### Deprecation announcement
* The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They live in the Velero repository's pkg/generated directory. According to the n+2 support policy, the deprecated code is kept for two more releases; the pkg/generated directory is expected to be deleted in the v1.15 release.
* After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support backups generated by older versions of Velero, the old logic is also kept. Support for backups without the VolumeInfo metadata file will be kept for two releases; the supporting logic will be deleted in the v1.15 release.
### All Changes
* Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
* Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)
* Fix issue #7244. By the end of the upload, check the outstanding incomplete snapshots and delete them by calling ApplyRetentionPolicy (#7245, @Lyndon-Li)
* Adjust the newline output of resource list in restore describer (#7238, @allenxu404)
* Remove the redundant newline in backup describe output (#7229, @allenxu404)
* Fix issue #7189, data mover generic restore - don't assume the first volume as the restore volume (#7201, @Lyndon-Li)
* Update CSIVolumeSnapshotsCompleted in backup's status and the metric
during backup finalize stage according to async operations content. (#7184, @blackpiglet)
* Refactor DownloadRequest Stream function (#7175, @blackpiglet)
* Add `--skip-immediately` flag to schedule commands; `--schedule-skip-immediately` server and install (#7169, @kaovilai)
* Add node-agent concurrency doc and change the config name from dataPathConcurrency to loadConcurrency (#7161, @Lyndon-Li)
* Enhance hooks tracker by adding a returned error to record function (#7153, @allenxu404)
* Track the skipped PV when SnapshotVolumes set as false (#7152, @reasonerjt)
* Add more linters part 2. (#7151, @blackpiglet)
* Fix issue #7135, check pod status before checking node-agent pod status (#7150, @Lyndon-Li)
* Treat namespace as a regular restorable item (#7143, @reasonerjt)
* Allow sparse option for Kopia & Restic restore (#7141, @qiuming-best)
* Use VolumeInfo to help restore the PV. (#7138, @blackpiglet)
* Node agent restart enhancement (#7130, @qiuming-best)
* Fix issue #6695, add describe for data mover backups (#7125, @Lyndon-Li)
* Add hooks status to backup/restore CR (#7117, @allenxu404)
* Include plugin name in the error message by operations (#7115, @reasonerjt)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7102, @Lyndon-Li)
* Generate VolumeInfo for backup. (#7100, @blackpiglet)
* Fix issue #7094, fallback to full backup if previous snapshot is not found (#7096, @Lyndon-Li)
* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7095, @Lyndon-Li)
* Skip syncing the backup which doesn't contain backup metadata (#7081, @ywk253100)
* Fix issue #6693, partially fail restore if CSI snapshot is involved but CSI feature is not ready, i.e., CSI feature gate is not enabled or CSI plugin is not installed. (#7077, @Lyndon-Li)
* Truncate the credential file to avoid the change of secret content messing it up (#7072, @ywk253100)
* Add VolumeInfo metadata structures. (#7070, @blackpiglet)
* improve discoveryHelper.Refresh() in restore (#7069, @27149chen)
* Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7061, @blackpiglet)
* Add the implementation for design #6950, configurable data path concurrency (#7059, @Lyndon-Li)
* Make data mover fail early (#7052, @qiuming-best)
* Remove dependency of generated client part 3. (#7051, @blackpiglet)
* Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7046, @kaovilai)
* Remove the Velero generated client. (#7041, @blackpiglet)
* Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7038, @Lyndon-Li)
* Read information from the credential specified by BSL (#7034, @ywk253100)
* Fix #6857. Added check for matching Owner References when synchronizing backups, removing references that are not found/have mismatched uid. (#7032, @deefdragon)
* Add description markers for dataupload and datadownload CRDs (#7028, @shubham-pampattiwar)
* Add HealthCheckNodePort deletion logic for Service restore. (#7026, @blackpiglet)
* Fix inconsistent behavior of Backup and Restore hook execution (#7022, @allenxu404)
* Fix #6964. Don't use csiSnapshotTimeout (10 min) for waiting snapshot to readyToUse for data mover, so as to make the behavior complied with CSI snapshot backup (#7011, @Lyndon-Li)
* restore: Use warning when Create IsAlreadyExist and Get error (#7004, @kaovilai)
* Bump kopia to 0.15.0 (#7001, @Lyndon-Li)
* Make Kopia file parallelism configurable (#7000, @qiuming-best)
* Fix unified repository (kopia) s3 credentials profile selection (#6995, @kaovilai)
* Fix #6988, always get region from BSL if it is not empty (#6990, @Lyndon-Li)
* Limit PVC block mode logic to non-Windows platform. (#6989, @blackpiglet)
* It is a valid case that the Status.RestoreSize field in VolumeSnapshot is not set, if so, get the volume size from the source PVC to create the backup PVC (#6976, @Lyndon-Li)
* Check whether the action is a CSI action and whether CSI feature is enabled, before executing the action. (#6968, @blackpiglet)
* Add the PV backup information design document. (#6962, @blackpiglet)
* Change controller-runtime List option from MatchingFields to ListOptions (#6958, @blackpiglet)
* Add the design for node-agent concurrency (#6950, @Lyndon-Li)
* Import auth provider plugins (#6947, @0x113)
* Fix #6668, add a limitation for file system restore parallelism with other types of restores (CSI snapshot restore, CSI snapshot movement restore) (#6946, @Lyndon-Li)
* Add MSI Support for Azure plugin. (#6938, @yanggangtony)
* Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6926, @Lyndon-Li)
* Bump up aws sdk to aws-sdk-go-v2 (#6923, @reasonerjt)
* Optional check if targeted container is ready before executing a hook (#6918, @Ripolin)
* Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6917, @27149chen)
* Fix issue 6913: Velero Built-in Datamover: Backup stucks in phase WaitingForPluginOperations when Node Agent pod gets restarted (#6914, @shubham-pampattiwar)
* Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6885, @Lyndon-Li)
* Replace the base image with paketobuildpacks image (#6883, @ywk253100)
* Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies to unnecessary repository packages like kopia, azure, etc. (#6875, @Lyndon-Li)
* Fix #6861. Only Restic path requires repoIdentifier, so for non-restic path, set the repoIdentifier fields as empty in PVB and PVR and also remove the RepoIdentifier column in the get output of PVBs and PVRs (#6872, @Lyndon-Li)
* Add volume types filter in resource policies (#6863, @qiuming-best)
* change the metrics backup_attempt_total default value to 1. (#6838, @yanggangtony)
* Bump kopia to v0.14 (#6833, @Lyndon-Li)
* Retry failed create when using generateName (#6830, @sseago)
* Fix issue #6786, always delete VSC regardless of the deletion policy (#6827, @Lyndon-Li)
* Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797, @27149chen)
* Fix the node-agent missing metrics-address defines. (#6784, @yanggangtony)
* Fix default BSL setting not work (#6771, @qiuming-best)
* Update restore controller logic for restore deletion (#6770, @ywk253100)
* Fix #6752: add namespace exclude check. (#6760, @blackpiglet)
* Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6757, @Lyndon-Li)
* Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6751, @Lyndon-Li)
* Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen)
* Perf improvements for existing resource restore (#6723, @sseago)
* Remove schedule-related metrics on schedule delete (#6715, @nilesh-akhade)
* Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid are deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6712, @kaovilai)
* This pr made some improvements in Resource Modifiers: 1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen)
* Make Kopia support Azure AD (#6686, @ywk253100)
* Add support for block volumes with Kopia (#6680, @dzaninovic)
* Delete PartiallyFailed orphaned backups as well as Completed ones (#6649, @sseago)
* Add CSI snapshot data movement doc (#6637, @Lyndon-Li)
* Fixes #6636, skip subresource in resource discovery (#6635, @27149chen)
* Add `orLabelSelectors` for backup, restore commands (#6475, @nilesh-akhade)
* fix run preHook and postHook on completed pods (#5211, @cleverhu)

## v1.14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.14.0
### Container Image
`velero/velero:v1.14.0`
### Documentation
https://velero.io/docs/v1.14/
### Upgrading
https://velero.io/docs/v1.14/upgrade-to-1.14/
### Highlights
#### The maintenance work for kopia/restic backup repositories is run in jobs
Since Velero started using Kopia as the approach for filesystem-level backup/restore, we've noticed an issue: when Velero connects to the Kopia backup repositories and performs maintenance, it sometimes consumes excessive memory, which can cause the Velero pod to get OOM-killed. To mitigate this issue, the maintenance work has been moved out of the Velero pod into separate Kubernetes jobs, and users can specify the resource requests in "velero install".
#### Volume Policies are extended to support more actions to handle volumes
In an earlier release, a flexible volume policy was introduced to skip certain volumes from a backup. In v1.14 we've enhanced this policy to let users set how volumes should be backed up: users can set "fs-backup" or "snapshot" as the value of "action" in the policy and Velero will back up the volumes accordingly. This enhancement allows users to achieve fine-grained control like opt-in/opt-out without having to update the target workload. For more details please refer to https://velero.io/docs/v1.14/resource-filtering/#supported-volumepolicy-actions
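A sketch of the extended policy, reusing the volume policy ConfigMap format from earlier releases with the new `fs-backup`/`snapshot` actions named above; condition values, names, and the flag spelling are placeholders to verify:

```bash
# Sketch: use fs-backup for gp2 volumes and snapshots for one CSI driver
cat <<'EOF' >volume-policy.yaml
version: v1
volumePolicies:
- conditions:
    storageClass:
    - gp2
  action:
    type: fs-backup
- conditions:
    csi:
      driver: ebs.csi.aws.com
  action:
    type: snapshot
EOF
kubectl create configmap -n velero backup-volume-policy --from-file=volume-policy.yaml

velero backup create my-backup --resource-policies-configmap backup-volume-policy
```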
#### Node Selection for Data Movement Backup
In Velero the data movement flow relies on datamover pods, and these pods may take substantial resources and keep running for a long time. In v1.14, users can create a ConfigMap to define the eligible nodes on which the datamover pods are launched. For more details refer to https://velero.io/docs/v1.14/data-movement-backup-node-selection/
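A sketch of such a ConfigMap, assuming the `node-agent-config` name and a `loadAffinity` key per the linked document; the label below is a placeholder:

```bash
# Sketch: only launch datamover pods on nodes carrying a dedicated label
cat <<'EOF' >node-agent-config.json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {"velero.io/data-mover": "true"}
            }
        }
    ]
}
EOF
kubectl create configmap -n velero node-agent-config --from-file=node-agent-config.json
```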
#### VolumeInfo metadata for restored volumes
In v1.13, we introduced VolumeInfo metadata for backups to help the Velero CLI and downstream adopters understand how Velero handles each volume during a backup. In v1.14, similar metadata is persisted for each restore. The Velero CLI is also updated to include more info in the output of "velero restore describe".
#### "Finalizing" phase is introduced to restores
The "Finalizing" phase is added to the state transition flow to restore, which helps us fix several issues: The labels added to PVs will be restored after the data in the PV is restored via volumesnapshotter. The post restore hook will be executed after datamovement is finished.
#### Certificate-based authentication support for Azure
Besides service principal with secret (password)-based authentication, Velero introduces new support for service principal with certificate-based authentication in v1.14.0. This approach enables you to adopt phishing-resistant authentication by using conditional access policies, which better protects Azure resources and is the approach recommended by Azure.
### Runtime and dependencies
* Golang runtime: v1.22.2
* kopia: v0.17.0
### Limitations/Known issues
* For external BackupItemAction plugins that take snapshots for PVs, such as the vSphere plugin: if the plugin checks the value of the "snapshotVolumes" field in the backup spec as a criterion for snapshotting, the settings in the volume policy will not take effect. For example, if "snapshotVolumes" is set to false in the backup spec but a volume meets the volume policy's condition for the "snapshot" action, the plugin will not take a snapshot of the volume because it does not check the volume policy settings. For more details please refer to #7818
### Breaking changes
* The CSI plugin has been merged into the velero repo in the v1.14 release. It is installed by default as an internal plugin and should no longer be installed via the "--plugins" parameter of the "velero install" command.
* The default resource requests and limits for the node agent are removed in v1.14, so that the node-agent pods have the "BestEffort" QoS class; for more details please refer to #7391
* There's a change in namespace filtering behavior during backup: in v1.14, when the includedNamespaces/excludedNamespaces fields are not set and labelSelector/orLabelSelectors are set in the backup spec, the backup only includes the namespaces that contain resources matching the label selectors, whereas in previous releases all namespaces were included in the backup with such settings. For more details refer to #7105
* Patching the PV in the "Finalizing" state may cause the restore to be in the "PartiallyFailed" state when the PV is blocked in the "Pending" state, whereas in the previous release the restore may have ended up in the "Complete" state. For more details refer to #7866
### All Changes
* Fix backup log to show error string, not index (#7805, @piny940)
* Modify the volume helper logic. (#7794, @blackpiglet)
* Add documentation for extension of volume policy feature (#7779, @shubham-pampattiwar)
* Surface errors when waiting for backupRepository and timeout occurs (#7762, @kaovilai)
* Add existingResourcePolicy restore CR validation to controller (#7757, @kaovilai)
* Fix condition matching in resource modifier when there are multiple rules (#7715, @27149chen)
* Bump up the version of KinD and k8s in github actions (#7702, @reasonerjt)
* Implementation for Extending VolumePolicies to support more actions (#7664, @shubham-pampattiwar)
* Migrate from `github.com/Azure/azure-storage-blob-go` to `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` (#7598, @mmorel-35)
* When Included/ExcludedNamespaces are omitted, and LabelSelector or OrLabelSelector is used, namespaces without selected items are excluded from backup. (#7697, @blackpiglet)
* Display CSI snapshot restores in restore describe (#7687, @reasonerjt)
* Use specific credential rather than the credential chain for Azure (#7680, @ywk253100)
* Modify hook docs for clarity on displaying hook execution results (#7679, @allenxu404)
* Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase (#7619, @allenxu404)
* migrating to `sdk/resourcemanager/**/arm**` from `services/**/mgmt/**` (#7596, @mmorel-35)
* Bump up to go1.22 (#7666, @reasonerjt)
* Fix issue #7648. Adjust the exposing logic to avoid exposing failure and snapshot leak when expose fails (#7662, @Lyndon-Li)
* Track and persist restore volume info (#7630, @reasonerjt)
* Check the existence of the namespaces provided in the "--include-namespaces" option (#7569, @ywk253100)
* Add the finalization phase to the restore workflow (#7377, @allenxu404)
* Upgrade the version of go plugin related libs/tools (#7373, @ywk253100)
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck. (#7322, @kaovilai)
* Merge CSI plugin code into Velero. (#7609, @blackpiglet)
* Fix issue #7391, remove the default constraint for node-agent pods (#7488, @Lyndon-Li)
* Fix DataDownload fails during restore for empty PVC workload (#7521, @qiuming-best)
* Add repository maintenance job (#7451, @qiuming-best)
* Check whether the VolumeSnapshot's source PVC is nil before using it; skip populating VolumeInfo for data-moved PVs when CSI is not enabled. (#7515, @blackpiglet)
* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7458, @Lyndon-Li)
* Patch newly dynamically provisioned PV with volume info to restore custom setting of PV (#7504, @allenxu404)
* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
* dependabot: support github-actions updates (#7594, @mmorel-35)
* Include the design for adding the finalization phase to the restore workflow (#7317, @allenxu404)
* Fix issue #7211. Enable advanced feature capability and add support to concatenate objects for unified repo. (#7452, @Lyndon-Li)
* Add design to introduce restore volume info (#7610, @reasonerjt)
* Increase the k8s client QPS/burst to avoid throttling request errors (#7311, @ywk253100)
* Support update the backup VolumeInfos by the Async ops result. (#7554, @blackpiglet)
* FS backup created a PodVolumeBackup even when the backup excluded the PVC; added logic to skip the PVC volume type when the PVC is not included in the resources to be backed up. (#7472, @sbahar619)
* Respect and use `credentialsFile` specified in BSL.spec.config when IRSA is configured over Velero Pod Environment credentials (#7374, @reasonerjt)
* Move the native snapshot definition code into internal directory (#7544, @blackpiglet)
* Fix issue #7036. Add the implementation of node selection for data mover backups (#7437, @Lyndon-Li)
* Fix issue #7535, add the MustHave resource check during item collection and item filter for restore (#7585, @Lyndon-Li)
* build(deps): bump json-patch to v5.8.0 (#7584, @mmorel-35)
* Add confirm flag to velero plugin add (#7566, @kaovilai)
* do not skip unknown gvr at the beginning and get new gr when kind is changed (#7523, @27149chen)
* Fix snapshot leak for backup (#7558, @qiuming-best)
* For issue #7036, add the document for data mover node selection (#7640, @Lyndon-Li)
* Add design for Extending VolumePolicies to support more actions (#6956, @shubham-pampattiwar)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380, @kaovilai)
* Improve the concurrency for PVBs in different pods (#7571, @ywk253100)
* Bump up Kopia to v0.16.0 and open kopia repo with no index change (#7559, @Lyndon-Li)
* Bump up the versions of several Kubernetes-related libs (#7489, @ywk253100)
* Make parallel restore configurable (#7512, @qiuming-best)
* Support certificate-based authentication for Azure (#7549, @ywk253100)
* Fix issue #7281, batch delete snapshots in the same repo (#7438, @Lyndon-Li)
* Add CRD name to error message when it is not ready to use (#7295, @josemarevalo)
* Add the design for node selection for data mover backup (#7383, @Lyndon-Li)
* Bump up aws-sdk to latest version to leverage Pod Identity credentials. (#7307, @guikcd)
* Fix issue #7246. Document the behavior for repo snapshot deletion (#7622, @Lyndon-Li)
* Fix issue #7583, set backupName optional for Restore CRD (#7617, @Lyndon-Li)


@@ -1,145 +0,0 @@
## v1.15
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0
### Container Image
`velero/velero:v1.15.0`
### Documentation
https://velero.io/docs/v1.15/
### Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
### Highlights
#### Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits:
- It avoids accessing volume data through the host path; host path access is privileged and may involve security escalations, which concern users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
#### Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to support multi-threaded backups. Specifically, correlated resources are categorized into the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction (IBA) plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and also supports custom IBAs for any resources.
In v1.15, Velero doesn't yet process item blocks in multiple threads, though the item block concepts and IBA plugins are fully supported. Multi-threaded support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
#### Node selection for repository maintenance job
Repository maintenance is a resource-consuming task. Velero now allows you to configure the nodes that run repository maintenance jobs, so you can run them on idle nodes or keep them off nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
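As a hedged sketch, such a configMap could constrain maintenance jobs to a labeled group of nodes via a node selector; the configMap name, data key, and `loadAffinity` layout here are illustrative, so check the linked document for the authoritative schema:
```
apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name; referenced through Velero server configuration
  name: repo-maintenance-job-config
  namespace: velero
data:
  config.json: |
    {
      "loadAffinity": [
        {
          "nodeSelector": {
            "matchLabels": {
              "workload": "idle"
            }
          }
        }
      ]
    }
```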
#### Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. This can significantly accelerate the data mover expose process for some storages (i.e., ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
#### Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. The provisioning of backupPVCs then doesn't need to follow the same pattern as workload PVCs; e.g., a backupPVC needs only one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
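A hedged sketch of the backup PVC configuration configMap covering both settings above, assuming entries keyed by the source volume's storage class with `storageClass` and `readOnly` fields (names are illustrative; see the linked document for the authoritative schema):
```
apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name; consumed as the node-agent configuration configMap
  name: node-agent-config
  namespace: velero
data:
  node-agent-config.json: |
    {
      "backupPVC": {
        "source-storage-class": {
          "storageClass": "backup-storage-class",
          "readOnly": true
        }
      }
    }
```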
#### Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes space in the root file system of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. This way, if a pod doesn't have enough space in its root file system, it won't be evicted for running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
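A hedged sketch of the backup repository configuration configMap, assuming entries keyed by repository type and a `cacheLimitMB` field (the field name follows the cacheLimit support added in #8093 and is illustrative):
```
apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name; referenced through Velero server configuration
  name: backup-repository-config
  namespace: velero
data:
  kopia: |
    {
      "cacheLimitMB": 2048
    }
```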
#### Performance improvements
In 1.15, several performance-related issues/enhancements are included, bringing significant performance improvements in specific scenarios:
- There was a memory leak in the Velero server after plugin calls; it is now fixed, see issue https://github.com/vmware-tanzu/velero/issues/7925
- The `client-burst/client-qps` parameters are now automatically inherited by plugins, so the same Velero server parameters accelerate plugin executions when a large number of API server calls happen, see issue https://github.com/vmware-tanzu/velero/issues/7806
- Maintenance of a Kopia repository consumed huge amounts of memory in scenarios where a huge number of files had been backed up; Velero 1.15 includes the Kopia upstream enhancement fixing the problem, see issue https://github.com/vmware-tanzu/velero/issues/7510
### Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
### Limitations/Known issues
#### Read-only backup PVC may not work on SELinux environments
Due to a Kubernetes upstream issue, if a volume is mounted read-only in SELinux environments, the read privilege is not granted to any user, so the data mover backup will fail. On the other hand, the backupPVC must be mounted read-only in order to accelerate the data mover expose process.
Therefore, a user option is added to the same backup PVC configuration configMap; once the option is enabled, the backupPod container runs as a super-privileged container with SELinux access control disabled. If you have concerns about this super-privileged container, or you have configured [pod security admissions](https://kubernetes.io/docs/concepts/security/pod-security-admission/) and don't allow super-privileged containers, you will not be able to use this read-only backupPVC feature and will lose its acceleration of the data mover expose process.
### Breaking changes
#### Deprecation of Restic
The Restic path for fs-backup enters the deprecation process starting from 1.15. According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy), in 1.15 fs-backup backups/restores using the Restic path are still created and still succeed, but you will see warnings in the below scenarios:
- When `--uploader-type=restic` is used in Velero installation
- When Restic path is used to create backup/restore of fs-backup
#### node-agent configuration name is configurable
Previously, node-agent looked for a node-agent configuration configMap with a fixed name. In 1.15, Velero allows you to customize the name of this configMap; the name must be specified via the node-agent server parameter `node-agent-configmap`.
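As a hedged illustration, the parameter could be appended to the node-agent daemonset args (the configMap name `my-node-agent-config` is a placeholder):
```
kubectl -n velero patch daemonset node-agent --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--node-agent-configmap=my-node-agent-config"}
]'
```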
#### Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the below Velero server parameters for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as-is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places:
```
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
```
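A hedged sketch of the corresponding configMap (the key names are illustrative mappings of the flags above; see the repository maintenance document for the authoritative schema):
```
apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name; referenced through Velero server configuration
  name: repo-maintenance-job-config
  namespace: velero
data:
  config.json: |
    {
      "keepLatestMaintenanceJobs": 3,
      "podResources": {
        "cpuRequest": "100m",
        "memRequest": "128Mi",
        "cpuLimit": "500m",
        "memLimit": "512Mi"
      }
    }
```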
#### Changing PVC selected-node feature is deprecated
In 1.15, the [Changing PVC selected-node feature](https://velero.io/docs/v1.15/restore-reference/#changing-pvc-selected-node) enters the deprecation process and will be removed in future releases according to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy). Using this feature for any purpose is not recommended.
### All Changes
* add no-relabeling option to backupPVC configmap (#8288, @sseago)
* only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
* Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
* Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
* Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
* Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
* Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
* Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
* Pass Velero server command args to the plugins (#8166, @ywk253100)
* Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
* Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
* Add document for data mover micro service (#8144, @Lyndon-Li)
* Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
* Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
* Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
* Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
* Modify E2E and perf test report generated directory (#8129, @blackpiglet)
* Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
* Delete generated k8s client and informer. (#8114, @blackpiglet)
* Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
* ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
* Fix issue #8032, make node-agent configMap name configurable (#8097, @Lyndon-Li)
* Fix issue #8072, add the warning messages for restic deprecation (#8096, @Lyndon-Li)
* Fix issue #7620, add backup repository configuration implementation and support cacheLimit configuration for Kopia repo (#8093, @Lyndon-Li)
* Patch dbr's status when error happens (#8086, @reasonerjt)
* According to design #7576, after node-agent restarts, if a DU/DD is in InProgress status, re-capture the data mover ms pod and continue the execution (#8085, @Lyndon-Li)
* Updates to IBM COS documentation to match current version (#8082, @gjanders)
* Data mover micro service DUCR/DDCR controller refactor according to design #7576 (#8074, @Lyndon-Li)
* add retries with timeout to existing patch calls that moves a backup/restore from InProgress/Finalizing to a final status phase. (#8068, @kaovilai)
* Data mover micro service restore according to design #7576 (#8061, @Lyndon-Li)
* Internal ItemBlockAction plugins (#8054, @sseago)
* Data mover micro service backup according to design #7576 (#8046, @Lyndon-Li)
* Avoid wrapping failed PVB status with empty message. (#8028, @mrnold)
* Created new ItemBlockAction (IBA) plugin type (#8026, @sseago)
* Make PVPatchMaximumDuration timeout configurable (#8021, @shubham-pampattiwar)
* Reuse existing plugin manager for get/put volume info (#8012, @sseago)
* Data mover ms watcher according to design #7576 (#7999, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7988, @Lyndon-Li)
* For issue #7700 and #7747, add the design for backup PVC configurations (#7982, @Lyndon-Li)
* Only get VolumeSnapshotClass when DataUpload exists. (#7974, @blackpiglet)
* Fix issue #7972, sync the backupPVC deletion in expose clean up (#7973, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7969, @blackpiglet)
* Check whether the volume's source is PVC before fetching its PV. (#7967, @blackpiglet)
* Check whether the namespaces specified in namespace filter exist. (#7965, @blackpiglet)
* Add design for backup repository configurations for issue #7620, #7301 (#7963, @Lyndon-Li)
* New data path for data mover ms according to design #7576 (#7955, @Lyndon-Li)
* Skip PV patch step in Restore workflow for WaitForFirstConsumer VolumeBindingMode Pending state PVCs (#7953, @shubham-pampattiwar)
* Fix issue #7904, add the deprecation and limitation clarification for change PVC selected-node feature (#7948, @Lyndon-Li)
* Expose the VolumeHelper to third-party plugins. (#7944, @blackpiglet)
* Don't consider unschedulable pods unrecoverable (#7899, @sseago)
* Upgrade to robfig/cron/v3 to support time zone specification. (#7793, @kaovilai)
* Add the result in the backup's VolumeInfo. (#7775, @blackpiglet)
* Migrate from github.com/golang/protobuf to google.golang.org/protobuf (#7593, @mmorel-35)
* Add the design for data mover micro service (#7576, @Lyndon-Li)
* Descriptive restore error when restoring into a terminating namespace. (#7424, @kaovilai)
* Ignore missing path error in conditional match (#7410, @seanblong)
* Propose a deprecation process for velero (#5532, @shubham-pampattiwar)


@@ -1,156 +0,0 @@
## v1.16
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0
### Container Image
`velero/velero:v1.16.0`
### Documentation
https://velero.io/docs/v1.16/
### Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
### Highlights
#### Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, either stateful or stateless:
* Hybrid build and all-in-one image: the build process is enhanced to produce an all-in-one image for hybrid CPU architectures and hybrid platforms. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
* Deployment in Windows clusters: Velero node-agent, data mover pods, and maintenance jobs now support running on both Linux and Windows nodes
* Data mover backup/restore of Windows workloads: Velero's built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue https://github.com/vmware-tanzu/velero/issues/8289 for more information.
#### Parallel Item Block backup
v1.16 supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves backup throughput, especially with a large scale of resources.
Pre/post hooks also belong to item blocks, so they run in parallel along with the item blocks.
Users can configure the parallelism through the `--item-block-worker-count` Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8334.
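For example, a hedged sketch of enabling four workers at install time (the flag name follows #8380; the value is illustrative):
```
# the flag is also available on the velero server deployment
velero install --item-block-worker-count 4 [other install flags...]
```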
#### Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore was only allowed to happen on the node to which the volume is attached. This severely degrades parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restores (https://github.com/vmware-tanzu/velero/issues/8044).
In v1.16, users can configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new flag, `ignoreDelayBinding`, in the node-agent configuration (https://github.com/vmware-tanzu/velero/issues/8242).
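A hedged sketch of the node-agent configuration enabling this (`ignoreDelayBinding` comes from the release note; the `restorePVC` wrapper key and configMap name are assumptions):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config
  namespace: velero
data:
  node-agent-config.json: |
    {
      "restorePVC": {
        "ignoreDelayBinding": true
      }
    }
```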
#### Data mover enhancements in observability
In 1.16, some observability enhancements are added:
* Output various statuses of intermediate objects for failures of data mover backup/restore (https://github.com/vmware-tanzu/velero/issues/8267)
* Output the errors when Velero fails to delete intermediate objects during cleanup (https://github.com/vmware-tanzu/velero/issues/8125)
The outputs go to the node-agent log and are enabled automatically.
#### CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location. During restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8725.
#### Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements of backup repository maintenance are added to improve the observability and resiliency:
* A new backup repository maintenance history section, called `RecentMaintenance`, is added to the BackupRepository CR. Specifically, for each BackupRepository, it records the start/completion time, completion status, and error message of recent maintenance runs. (https://github.com/vmware-tanzu/velero/issues/7810)
* Running maintenance jobs are now recaptured after Velero server restarts. (https://github.com/vmware-tanzu/velero/issues/7753)
* The maintenance job will not be launched for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8238)
* The backup repository will not try to initialize a new repository for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8091)
* Users are now allowed to configure the interval of an effective maintenance as `normalGC`, `fastGC`, or `eagerGC`, through the `fullMaintenanceInterval` parameter in the backupRepository configuration; see the sketch below. (https://github.com/vmware-tanzu/velero/issues/8364)
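A hedged sketch of setting `fullMaintenanceInterval` in the backupRepository configuration configMap (the interval value comes from the release note; the configMap layout is illustrative):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config
  namespace: velero
data:
  kopia: |
    {
      "fullMaintenanceInterval": "fastGC"
    }
```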
#### Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (https://github.com/vmware-tanzu/velero/issues/8256).
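A hedged sketch of a volume policy using PVC labels as a condition (the `pvcLabels` key follows the labels-as-criteria work in this release and is illustrative):
```
version: v1
volumePolicies:
- conditions:
    pvcLabels:
      environment: production
  action:
    type: snapshot
```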
#### Resource Status restore per object
In v1.16, users are allowed to define whether to restore resource status per object through an annotation `velero.io/restore-status` set on the object. (https://github.com/vmware-tanzu/velero/issues/8204).
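For example, a hedged sketch (the annotation name comes from the release note; the resource and the `true` value are assumptions):
```
kubectl -n my-app annotate deployment/my-workload velero.io/restore-status=true
```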
#### Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper, and velero-restore-helper, are all included in the single Velero image. (https://github.com/vmware-tanzu/velero/issues/8484).
### Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
### Limitations/Known issues
#### Limitations of Windows support
* fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
* Backup/restore of NTFS extended attributes/advanced features is not supported, i.e., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
### All Changes
* Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
* Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
* Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
* ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
* host_pods should not be mandatory to node-agent (#8774, @mpryc)
* Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
* Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
* Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
* Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
* Add doc for maintenance history (#8747, @Lyndon-Li)
* Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
* Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
* Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
* Return directly if no pod volume backups are tracked (#8728, @ywk253100)
* Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
* Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
* Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
* Support pushing images to an insecure registry (#8703, @ywk253100)
* Modify golangci configuration to make it work. (#8695, @blackpiglet)
* Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
* Add docs for object level status restore (#8693, @shubham-pampattiwar)
* Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
* Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
* Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
* Fixes issue #8214, validate `--from-schedule` flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
* Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
* Clean up leaked CSI snapshot for incomplete backup (#8637, @reasonerjt)
* Handle update conflict when restoring the status (#8630, @ywk253100)
* Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
* Always create DataUpload configmap in restore namespace (#8621, @sseago)
* Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
* Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
* Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
* Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
* Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
* Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
* Data mover restore for Windows (#8594, @Lyndon-Li)
* Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
* Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
* Configurable Kopia maintenance interval: the backup-repository configMap adds an option for a configurable `fullMaintenanceInterval`, with fastGC (12 hours) and eagerGC (6 hours) allowing faster removal of deleted Velero backups from the kopia repo. (#8581, @kaovilai)
* Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
* Clear validation errors when schedule is valid (#8575, @ywk253100)
* Merge restore helper image into Velero server image (#8574, @ywk253100)
* Don't include excluded items in ItemBlocks (#8572, @sseago)
* fs uploader and block uploader support Windows nodes (#8569, @Lyndon-Li)
* Fix issue #8418, support data mover backup for Windows nodes (#8555, @Lyndon-Li)
* Fix issue #8044, allow users to ignore delay binding the restorePVC of data mover when it is in WaitForFirstConsumer mode (#8550, @Lyndon-Li)
* Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8538, @Lyndon-Li)
* Fix issue #7810, add maintenance history for backupRepository CRs (#8532, @Lyndon-Li)
* Make fs-backup work on linux nodes with the new Velero deployment and disable fs-backup if the source/target pod is running in non-linux node (#8424) (#8518, @Lyndon-Li)
* Fix issue: backup schedule pause/unpause doesn't work (#8512, @ywk253100)
* Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8509, @ywk253100)
* Fix issue #8267, enhance the error message when expose fails (#8508, @Lyndon-Li)
* Fix issue #8416, #8417, deploy Velero server and node-agent in linux/Windows hybrid env (#8504, @Lyndon-Li)
* Design to add label selector as a criteria for volume policy (#8503, @shubham-pampattiwar)
* Related to issue #8485, move the acceptedByNode and acceptedTimestamp to Status of DU/DD CRD (#8498, @Lyndon-Li)
* Add SecurityContext to restore-helper (#8491, @reasonerjt)
* Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8487, @Lyndon-Li)
* Fix issue #8485, add an accepted time so as to count the prepare timeout (#8486, @Lyndon-Li)
* Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8482, @Lyndon-Li)
* Fix issue #8415, implement multi-arch build and Windows build (#8476, @Lyndon-Li)
* Pin kopia to 0.18.2 (#8472, @Lyndon-Li)
* Add nil check for updating DataUpload VolumeInfo in finalizing phase (#8471, @blackpiglet)
* Allowing Object-Level Resource Status Restore (#8464, @shubham-pampattiwar)
* For issue #8429. Add the design for multi-arch build and windows build (#8459, @Lyndon-Li)
* Upgrade go.mod k8s.io/ go.mod to v0.31.3 and implemented proper logger configuration for both client-go and controller-runtime libraries. This change ensures that logging format and level settings are properly applied throughout the codebase. The update improves logging consistency and control across the Velero system. (#8450, @kaovilai)
* Add Design for Allowing Object-Level Resource Status Restore (#8403, @shubham-pampattiwar)
* Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8396, @Lyndon-Li)
* Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8395, @Lyndon-Li)
* Adding support in velero Resource Policies for filtering PVs based on additional VolumeAttributes properties under CSI PVs (#8383, @mayankagg9722)
* Add --item-block-worker-count flag to velero install and server (#8380, @sseago)
* Make BackedUpItems thread safe (#8366, @sseago)
* Include --annotations flag in backup and restore create commands (#8354, @alromeros)
* Use aggregated discovery API to discovery API groups and resources (#8353, @ywk253100)
* Copy "envFrom" from Velero server when creating maintenance jobs (#8343, @evhan)
* Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8297, @kaovilai)
* Bump up version of client-go and controller-runtime (#8275, @ywk253100)
* fix(pkg/repository/maintenance): don't panic when there's no container statuses (#8271, @mcluseau)
* Add Backup warning for inclusion of NS managed by ArgoCD (#8257, @shubham-pampattiwar)
* Added tracking for deleted namespace status check in restore flow. (#8233, @sangitaray2021)


@@ -1,143 +0,0 @@
## v1.17
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.17.0
### Container Image
`velero/velero:v1.17.0`
### Documentation
https://velero.io/docs/v1.17/
### Upgrading
https://velero.io/docs/v1.17/upgrade-to-1.17/
### Highlights
#### Modernized fs-backup
In v1.17, Velero fs-backup is modernized to the micro-service architecture, which brings the benefits below:
- Many features that were absent from fs-backup are now available, i.e., load concurrency control, cancel, resume on restart, etc.
- fs-backup is more robust: a running backup/restore can survive node-agent restarts, and resource allocation is more granular, so the failure of one backup/restore won't impact others.
- The resource usage of node-agent is steady; in particular, node-agent pods won't request huge amounts of memory and hold them for a long time.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### fs-backup support Windows cluster
In v1.17, Velero fs-backup supports backing up/restoring Windows workloads. By leveraging the new micro-service architecture for fs-backup, data mover pods can run on Windows nodes and back up/restore Windows volumes. Together with CSI snapshot data movement for Windows, delivered in 1.16, Velero now supports Windows workload backup/restore in all scenarios.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### Volume group snapshot support
In v1.17, Velero supports [volume group snapshots](https://kubernetes.io/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/), a beta feature in Kubernetes upstream, for both CSI snapshot backup and CSI snapshot data movement. This allows snapshots of multiple volumes to be taken at the same point in time to achieve write-order consistency, which helps achieve better data consistency when the volumes being backed up are correlated.
Check the document https://velero.io/docs/main/volume-group-snapshots/ for more details.
#### Priority class support
In v1.17, [Kubernetes priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) is supported for all modules across Velero. Specifically, users can configure priority classes for the Velero server, node-agent, data mover pods, and backup repository maintenance jobs separately.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/priority-class-name-support_design.md for more details.
#### Scalability and Resiliency improvements of data movers
##### Reduce excessive number of data mover pods in Pending state
In v1.17, Velero allows users to set a `PrepareQueueLength` in the node-agent configuration; data mover pods and volumes beyond this number won't be created until data path quota is available, so an excessive number of cluster resources isn't consumed unnecessarily, which is particularly helpful in large-scale environments. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/node-agent-load-soothing.md for more details.
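A hedged sketch of the node-agent configuration (the JSON key casing is an assumption based on the `PrepareQueueLength` name above; see the linked design for the authoritative schema):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config
  namespace: velero
data:
  node-agent-config.json: |
    {
      "prepareQueueLength": 10
    }
```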
##### Enhancement on node-agent restart handling for data movements
In v1.17, data movements in all phases can survive node-agent restarts and resume themselves; when a data movement is orphaned in special cases, e.g., a cluster node going absent, it can also be canceled appropriately after the restart. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check issue https://github.com/vmware-tanzu/velero/issues/8534 for more details.
##### CSI snapshot data movement restore node-selection and node-selection by storage class
In v1.17, CSI snapshot data movement restore acquires the same node-selection capability as backup; that is, users can now specify which nodes can/cannot run data mover pods for both backup and restore. Users can also configure node selection per storage class, which is particularly helpful in environments where a storage class is not usable by all cluster nodes.
Check issue https://github.com/vmware-tanzu/velero/issues/8186 and https://github.com/vmware-tanzu/velero/issues/8223 for more details.
#### Include/exclude policy support for resource policy
In v1.17, Velero resource policy supports `includeExcludePolicy` besides the existing `volumePolicy`. This allows users to set include/exclude filters for resources in a resource policy configmap, so that these filters are reusable among multiple backups.
Check the document https://velero.io/docs/main/resource-filtering/#creating-resource-policies:~:text=resources%3D%22*%22-,Resource%20policies,-Velero%20provides%20resource for more details.
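A hedged sketch of a resource policy combining both policy kinds (the `includeExcludePolicy` key comes from the release note; its nested field names are illustrative, while the `volumePolicies` portion follows the existing format):
```
version: v1
includeExcludePolicy:
  includedNamespaces:
  - app-ns-1
  excludedResources:
  - events
volumePolicies:
- conditions:
    capacity: "0,10Gi"
  action:
    type: snapshot
```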
### Runtime and dependencies
Golang runtime: 1.24.6
kopia: 0.21.1
### Limitations/Known issues
### Breaking changes
#### Deprecation of Restic
According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), fs-backup backup via the Restic path is removed in v1.17, so `--uploader-type=restic` is no longer a valid installation configuration. This means you cannot create a backup via the Restic path, but you can still restore from previous Restic-path backups until v1.19.
#### Repository maintenance job configurations are removed from Velero server parameter
Since the repository maintenance job configurations were moved to the repository maintenance job configMap, the below Velero server parameters are removed in v1.17:
- --keep-latest-maintenance-jobs
- --maintenance-job-cpu-request
- --maintenance-job-mem-request
- --maintenance-job-cpu-limit
- --maintenance-job-mem-limit
### All Changes
* Add ConfigMap parameters validation for install CLI and server start. (#9200, @blackpiglet)
* Add priorityclasses to high priority restore list (#9175, @kaovilai)
* Introduced context-based logger for backend implementations (Azure, GCS, S3, and Filesystem) (#9168, @priyansh17)
* Fix issue #9140, add os=windows:NoSchedule toleration for Windows pods (#9165, @Lyndon-Li)
* Remove the repository maintenance job parameters from velero server. (#9147, @blackpiglet)
* Add include/exclude policy to resources policy (#9145, @reasonerjt)
* Add ConfigMap support for keepLatestMaintenanceJobs with CLI parameter fallback (#9135, @shubham-pampattiwar)
* Fix the dd and du's node affinity issue. (#9130, @blackpiglet)
* Remove the WaitUntilVSCHandleIsReady from vs BIA. (#9124, @blackpiglet)
* Add comprehensive Volume Group Snapshots documentation with workflow diagrams and examples (#9123, @shubham-pampattiwar)
* Fix issue #9065, add doc for node-agent prepare queue length (#9118, @Lyndon-Li)
* Fix issue #9095, update restore doc for PVC selected-node (#9117, @Lyndon-Li)
* Update CSI Snapshot Data Movement doc for issue #8534, #8185 (#9113, @Lyndon-Li)
* Fix issue #8986, refactor fs-backup doc after VGDP Micro Service for fs-backup (#9112, @Lyndon-Li)
* Return error if timeout when checking server version (#9111, @ywk253100)
* Update "Default Volumes to Fs Backup" to "File System Backup (Default)" (#9105, @shubham-pampattiwar)
* Fix issue #9077, don't block backup deletion on list VS error (#9100, @Lyndon-Li)
* Bump up Kopia to v0.21.1 (#9098, @Lyndon-Li)
* Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9096, @blackpiglet)
* Avoid checking the VS and VSC status in the backup finalizing phase. (#9092, @blackpiglet)
* Fix issue #9053, Always remove selected-node annotation during PVC restore when no node mapping exists. Breaking change: Previously, the annotation was preserved if the node existed. (#9076, @Lyndon-Li)
* Enable parameterized kubelet mount path during node-agent installation (#9074, @longxiucai)
* Fix issue #8857, support third party tolerations for data mover pods (#9072, @Lyndon-Li)
* Fix issue #8813, remove restic from the valid uploader type (#9069, @Lyndon-Li)
* Fix issue #8185, allow users to disable pod volume host path mount for node-agent (#9068, @Lyndon-Li)
* Fix #8344, add the design for a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9067, @Lyndon-Li)
* Fix #8344, add a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9064, @Lyndon-Li)
* Add Gauge metric for BSL availability (#9059, @reasonerjt)
* Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056, @shubham-pampattiwar)
* Allow for proper tracking of multiple hooks per container (#9048, @sseago)
* Make the backup repository controller not invalidate the BSL on restart (#9046, @blackpiglet)
* Removed username/password credential handling from newConfigCredential as azidentity.UsernamePasswordCredentialOptions is reported as deprecated. (#9041, @priyansh17)
* Remove dependency with VolumeSnapshotClass in DataUpload. (#9040, @blackpiglet)
* Fix issue #8961, cancel PVB/PVR on Velero server restart (#9031, @Lyndon-Li)
* Fix issue #8962, resume PVB/PVR during node-agent restarts (#9030, @Lyndon-Li)
* Bump kopia v0.20.1 (#9027, @Lyndon-Li)
* Fix issue #8965, support PVB/PVR's cancel state in the backup/restore (#9026, @Lyndon-Li)
* Fix Issue 8816 When specifying LabelSelector on restore, related items such as PVC and VolumeSnapshot are not included (#9024, @amastbau)
* Fix issue #8963, add legacy PVR controller for Restic path (#9022, @Lyndon-Li)
* Fix issue #8964, add Windows support for VGDP MS for fs-backup (#9021, @Lyndon-Li)
* Accommodate VGS workflows in PVC CSI plugin (#9019, @shubham-pampattiwar)
* Fix issue #8958, add VGDP MS PVB controller (#9015, @Lyndon-Li)
* Fix issue #8959, add VGDP MS PVR controller (#9014, @Lyndon-Li)
* Fix issue #8988, add data path for VGDP ms PVR (#9005, @Lyndon-Li)
* Fix issue #8988, add data path for VGDP ms pvb (#8998, @Lyndon-Li)
* Skip VS and VSC not created by backup. (#8990, @blackpiglet)
* Make ResticIdentifier optional for kopia BackupRepositories (#8987, @kaovilai)
* Fix issue #8960, implement PodVolume exposer for PVB/PVR (#8985, @Lyndon-Li)
* fix: update mc command in minio-deployment example (#8982, @vishal-chdhry)
* Fix issue #8957, add design for VGDP MS for fs-backup (#8979, @Lyndon-Li)
* Add BSL status check for backup/restore operations. (#8976, @blackpiglet)
* Mark BackupRepository not ready when BSL changed (#8975, @ywk253100)
* Add support for [distributed snapshotting](https://github.com/kubernetes-csi/external-snapshotter/tree/4cedb3f45790ac593ebfa3324c490abedf739477?tab=readme-ov-file#distributed-snapshotting) (#8969, @flx5)
* Fix issue #8534, refactor dm controllers to tolerate cancel request in more cases, e.g., node restart, node drain (#8952, @Lyndon-Li)
* The backup and restore VGDP affinity enhancement implementation. (#8949, @blackpiglet)
* Remove CSI VS and VSC metadata from backup. (#8946, @blackpiglet)
* Extend PVCAction itemblock plugin to support grouping PVCs under VGS label key (#8944, @shubham-pampattiwar)
* Copy security context from origin pod (#8943, @farodin91)
* Add support for configuring VGS label key (#8938, @shubham-pampattiwar)
* Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8924, @blackpiglet)
* Mounted cloud credentials should not be world-readable (#8919, @sseago)
* Warn for not found error in patching managed fields (#8902, @sseago)
* Fix issue 8878, relief node os deduction error checks (#8891, @Lyndon-Li)
* Skip namespace in terminating state in backup resource collection. (#8890, @blackpiglet)
* Implement PriorityClass Support (#8883, @kaovilai)
* Fix Velero adding restore-wait init container when not needed. (#8880, @kaovilai)
* Pass the logger in kopia related operations. (#8875, @hu-keyu)
* Inherit the dnsPolicy and dnsConfig from the node agent pod. This is done so that the kopia task uses the same configuration. (#8845, @flx5)
* Add design for VolumeGroupSnapshot support (#8778, @shubham-pampattiwar)
* Inherit k8s default volumeSnapshotClass. (#8719, @hu-keyu)
* CLI automatically discovers and uses cacert from BSL for download requests (#8557, @kaovilai)
* This PR aims to add s390x support to Velero binary. (#7505, @pandurangkhandeparker)


@@ -1,3 +1,44 @@
## v1.7.2
### 2022-02-23
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.7.2
### Container Image
`velero/velero:v1.7.2`
### Documentation
https://velero.io/docs/v1.7/
### Upgrading
https://velero.io/docs/v1.7/upgrade-to-1.7/
### All changes
* Bump up golang to 1.17.7 (#4667, @ywk253100)
* Check for nil before logging DefaultVolumesToRestic value (#4674, @ywk253100)
## v1.7.1
### 2021-11-22
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.7.1
### Container Image
`velero/velero:v1.7.1`
### Documentation
https://velero.io/docs/v1.7/
### Upgrading
https://velero.io/docs/v1.7/upgrade-to-1.7/
### All changes
* fix buggy pager func (#4358, @alaypatel07)
* Fix CVE-2020-29652 and CVE-2020-26160 (#4315, @ywk253100)
## v1.7.0
### 2021-09-07

View File

@@ -1,110 +0,0 @@
## v1.8.0
### 2022-01-14
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.8.0
### Container Image
`velero/velero:v1.8.0`
### Documentation
https://velero.io/docs/v1.8
### Upgrading
https://velero.io/docs/v1.8/upgrade-to-1.8/
### Highlights
#### Velero plugins now support handling volumes created by the CSI drivers of cloud providers
Versions 1.4 of the Velero plugins for AWS, Azure, and GCP now support snapshotting and restoring persistent volumes provisioned by a CSI driver via the APIs of the cloud providers. With this enhancement, users can back up and restore persistent volumes on these cloud providers without using the Velero CSI plugin. The CSI plugin remains beta, and the feature flag `EnableCSI` stays disabled by default.
For the versions of the plugins and the CSI drivers they respectively support, see the table:
| Plugin | Version | CSI Driver |
| --- | ----------- | ---------- |
| velero-plugin-for-aws | v1.4.0 | ebs.csi.aws.com |
| velero-plugin-for-microsoft-azure | v1.4.0 | disk.csi.azure.com |
| velero-plugin-for-gcp | v1.4.0 | pd.csi.storage.gke.io |
#### IPv6 dual stack support
We've verified the functionality of Velero on IPv6 dual stack by successfully running the E2E tests in an IPv6 dual-stack environment.
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Enhancements to E2E test cases
More test cases have been added to the E2E test suite to improve the release health.
#### Respect the cron setting of scheduled backup
The creation time is now taken into account when calculating the next run of a scheduled backup.
#### Deleting BSLs also cleans up related resources
When a Backup Storage Location (BSL) is deleted, the associated backup and Restic repository resources are also deleted.
#### Breaking changes
Starting in v1.8, Velero will only support Kubernetes v1 CRD meaning that Velero v1.8+ will only run on Kubernetes v1.16+. Before upgrading, make sure you are running a supported Kubernetes version. For more information, see our [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix).
#### Upload Progress Monitoring and Item Snapshotter
The Item Snapshotter plugin API was merged. This will support both Upload Progress monitoring and the planned Data Mover. Upload Progress monitoring PRs are in progress for 1.9.
### All changes
* E2E test on ssr object with controller namespace mix-ups (#4521, @mqiu)
* Check whether the volume is provisioned by CSI driver or not by the annotation as well (#4513, @ywk253100)
* Initialize the labels field of `velero backup-location create` option to avoid #4484 (#4491, @ywk253100)
* Fix e2e 2500 namespaces scale test timeout problem (#4480, @mqiu)
* Add backup deletion e2e test (#4401, @danfengliu)
* Return the error when getting backup store in backup deletion controller (#4465, @reasonerjt)
* Ignore the provided port is already allocated error when restoring the LoadBalancer service (#4462, @ywk253100)
* Revert #4423 migrate backup sync controller to kubebuilder. (#4457, @jxun)
* Add rbac and annotation test cases (#4455, @mqiu)
* remove --crds-version in velero install command. (#4446, @jxun)
* Upgrade e2e test vsphere plugin (#4440, @mqiu)
* Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu)
* Limit backup namespaces on test resource filtering cases (#4437, @mqiu)
* Bump up Go to 1.17 (#4431, @reasonerjt)
* Added `<backup name>`-itemsnapshots.json.gz to the backup format. This file exists when item snapshots are taken and contains an array of volume.Itemsnapshots with the information about the snapshots. It is not used unless upload progress monitoring and item snapshots are enabled and an ItemSnapshot plugin is used to take snapshots. Also added DownloadTargetKindBackupItemSnapshots for retrieving the signed URL to download only the `<backup name>`-itemsnapshots.json.gz part of a backup, for use by `velero backup describe`. (#4429, @dsmithuchida)
* Migrate backup sync controller from code-generator to kubebuilder. (#4423, @jxun)
* Added UploadProgressFeature flag to enable Upload Progress Monitoring and Item
Snapshotters. (#4416, @dsmithuchida)
* Added BackupWithResolvers and RestoreWithResolvers calls. Will eventually replace Backup and Restore methods.
Adds ItemSnapshotters to Backup and Restore workflows. (#4410, @dsu)
* Build for darwin-arm64 (#4409, @epk)
* Add resource filtering test cases (#4404, @mqiu)
* Fix the issue that the backup cannot be deleted after the application uninstalled (#4398, @ywk253100)
* Add restoreactionitem plugin to handle admission webhook configurations (#4397, @reasonerjt)
* Keep the annotation "pv.kubernetes.io/provisioned-by" when restoring PVs (#4391, @ywk253100)
* Adjust structure of e2e test codes (#4386, @mqiu)
* feat: migrate velero controller from kubebuilder v2 to v3. From Velero v1.8, apiextensions.k8s.io/v1beta1 is no longer supported, which means only CRDs of apiextensions.k8s.io/v1 are supported, and the supported Kubernetes version is updated to v1.16 and later. (#4382, @jxun)
* Delete backups and Restic repos associated with deleted BSL(s) (#4377, @codegold79)
* Add the key for GKE zone for AZ collection (#4376, @reasonerjt)
* Fix statefulsets volumeClaimTemplates storageClassName when use Changing PV/PVC Storage Classes (#4375, @Box-Cube)
* Fix snapshot e2e test issue of jsonpath (#4372, @danfengliu)
* Modify the timestamp in the name of a backup generated from schedule to use UTC. (#4353, @jxun)
* Read Availability zone from nodeAffinity requirements (#4350, @reasonerjt)
* Use factory.Namespace() to replace hardcoded velero namespace (#4346, @half-life666)
* Return the error if velero failed to detect S3 region for restic repo (#4343, @reasonerjt)
* Add init log option for velero controller-runtime manager. (#4341, @jxun)
* Ignore the `provided port is already allocated` error when restoring the `NodePort` service (#4336, @ywk253100)
* Fixed an issue with the `backup-location create` command where the BSL Credential field would be set to an invalid empty SecretKeySelector when no credential details were provided. (#4322, @zubron)
* fix buggy pager func (#4306, @alaypatel07)
* Don't create a backup immediately after creating a schedule (#4281, @ywk253100)
* Fix CVE-2020-29652 and CVE-2020-26160 (#4274, @ywk253100)
* Refine tag-release.sh to align with change in release process (#4185, @reasonerjt)
* Fix plugins incompatible issue in upgrade test (#4141, @danfengliu)
* Verify group before treating resource as cohabiting (#4126, @sseago)
* Added ItemSnapshotter plugin definition and plugin framework - addresses #3533. Part of the Upload Progress enhancement (#3533) (#4077, @dsmithuchida)
* Add upgrade test in E2E test (#4058, @danfengliu)
* Handle namespace mapping for PVs without snapshots on restore (#3708, @sseago)


@@ -1,104 +0,0 @@
## v1.9.0
### 2022-06-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.9.0
### Container Image
`velero/velero:v1.9.0`
### Documentation
https://velero.io/docs/v1.9/
### Upgrading
https://velero.io/docs/v1.9/upgrade-to-1.9/
### Highlights
#### Improvement to the CSI plugin
- Bump up to the CSI volume snapshot v1 API
- No VolumeSnapshot will be left in the source namespace of the workload
- Report metrics for CSI snapshots
For more improvements, please refer to [CSI plugin improvement](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+label%3A%22CSI+plugin+-+GA+-+phase1%22+is%3Aclosed)
With these improvements we'll provide official support for CSI snapshots on AKS/EKS clusters (with CSI plugin v0.3.0).
#### Refactor the controllers using Kubebuilder v3
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
#### Optionally restore status on selected resources
Options are added to the CLI and Restore spec to control the group of resources whose status will be restored.
#### ExistingResourcePolicy in the restore API
Users can choose to overwrite or patch the existing resources during restore by setting this policy.
#### Upgrade integrated Restic version and add skip TLS validation in Restic command
Upgrade integrated Restic version, which will resolve some of the CVEs, and support skip TLS validation in Restic backup/restore.
#### Breaking changes
With the API bumped up to v1 in the CSI plugin, the v0.3.0 CSI plugin will only work on Kubernetes v1.20+
### All changes
* restic: add full support for setting SecurityContext for restore init container from configMap. (#4084, @MatthieuFin)
* Add metrics backup_items_total and backup_items_errors (#4296, @tobiasgiese)
* Convert PodVolumeBackup controller to the Kubebuilder framework (#4436, @fgold)
* Skip not mounted volumes when backing up (#4497, @dkeven)
* Update doc for v1.8 (#4517, @reasonerjt)
* Fix bug to make the restic prune frequency configurable (#4518, @ywk253100)
* Add E2E test of backups sync from BSL (#4545, @mqiu)
* Fix: OrderedResources in Schedules (#4550, @dbrekau)
* Skip volumes of non-running pods when backing up (#4584, @bynare)
* E2E SSR test add retry mechanism and logs (#4591, @mqiu)
* Add pushing image to GCR in github workflow to facilitate some environments that have rate limitation to docker hub, e.g. vSphere. (#4623, @jxun)
* Add existingResourcePolicy to Restore API (#4628, @shubham-pampattiwar)
* Fix E2E backup namespaces test (#4634, @qiuming-best)
* Update image used by E2E test to gcr.io (#4639, @jxun)
* Add multiple label selector support to Velero Backup and Restore APIs (#4650, @shubham-pampattiwar)
* Convert Pod Volume Restore resource/controller to the Kubebuilder framework (#4655, @ywk253100)
* Update --use-owner-references-in-backup description in velero command line. (#4660, @jxun)
* Avoid overwritten hook's exec.container parameter when running pod command executor. (#4661, @jxun)
* Support regional pv for GKE (#4680, @jxun)
* Bypass the remap CRD version plugin when v1beta1 CRD is not supported (#4686, @reasonerjt)
* Add GINKGO_SKIP to support skip specific case in e2e test. (#4692, @jxun)
* Add --pod-labels flag to velero install (#4694, @j4m3s-s)
* Enable coverage in test.sh and upload to codecov (#4704, @reasonerjt)
* Mark the BSL as "Unavailable" when gets any error and add a new field "Message" to the status to record the error message (#4719, @ywk253100)
* Support multiple skip option for E2E test (#4725, @jxun)
* Add PriorityClass to the AdditionalItems of Backup's PodAction and Restore's PodAction plugin to backup and restore PriorityClass if it is used by a Pod. (#4740, @phuongatemc)
* Insert all restore errors and warnings into restore log. (#4743, @sseago)
* Refactor schedule controller with kubebuilder (#4748, @ywk253100)
* Garbage collector now adds labels to backups that failed to delete for BSLNotFound, BSLCannotGet, BSLReadOnly reasons. (#4757, @kaovilai)
* Skip podvolumerestore creation when restore excludes pv/pvc (#4769, @half-life666)
* Add parameter for e2e test to support modify kibishii install path. (#4778, @jxun)
* Ensure the restore hook applied to new namespace based on the mapping (#4779, @reasonerjt)
* Add ability to restore status on selected resources (#4785, @RafaeLeal)
* Do not take snapshot for PV to avoid duplicated snapshotting, when CSI feature is enabled. (#4797, @jxun)
* Bump up to v1 API for CSI snapshot (#4800, @reasonerjt)
* fix: delete empty backups (#4817, @yuvalman)
* Add CSI VolumeSnapshot related metrics. (#4818, @jxun)
* Fix default-backup-ttl not work (#4831, @qiuming-best)
* Make the vsc created by backup sync controller deletable (#4832, @reasonerjt)
* Make in-progress backup/restore as failed when doing the reconcile to avoid hanging in in-progress status (#4833, @ywk253100)
* Use controller-gen to generate the deep copy methods for objects (#4838, @ywk253100)
* Update integrated Restic version and add insecureSkipTLSVerify for Restic CLI. (#4839, @jxun)
* Modify CSI VolumeSnapshot metric related code. (#4854, @jxun)
* Refactor backup deletion controller based on kubebuilder (#4855, @reasonerjt)
* Remove VolumeSnapshots created during backup when CSI feature is enabled. (#4858, @jxun)
* Convert Restic Repository resource/controller to the Kubebuilder framework (#4859, @qiuming-best)
* Add ClusterClasses to the restore priority list (#4866, @reasonerjt)
* Cleanup the .velero folder after restic done (#4872, @big-appled)
* Delete orphan CSI snapshots in backup sync controller (#4887, @reasonerjt)
* Make waiting VolumeSnapshot to ready process parallel. (#4889, @jxun)
* continue rather than return for non-matching restore action label (#4890, @sseago)
* Make in-progress PVB/PVR as failed when restic controller restarts to avoid hanging backup/restore (#4893, @ywk253100)
* Refactor BSL controller with periodical enqueue source (#4894, @jxun)
* Make garbage collection for expired backups configurable (#4897, @ywk253100)
* Bump up the version of distroless to base-debian11 (#4898, @ywk253100)
* Add schedule ordered resources E2E test (#4913, @qiuming-best)
* Make the `velero completion zsh` command output usable with the `source` command. (#4914, @jxun)
* Enhance the map flag to support parsing input values that contain entry delimiters (see the sketch after this list) (#4920, @ywk253100)
* Fix E2E test [Backups][Deletion][Restic] on GCP. (#4968, @jxun)
* Disable status as a subresource in CRDs (#4972, @ywk253100)
* Add more information when failing to get the path or snapshot in restic backup and restore. (#4988, @jxun)
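
For the map-flag enhancement above (#4920), here is a minimal Go sketch of one plausible parsing strategy. It assumes (this is an assumption for illustration, not Velero's documented behavior) that a segment without `=` continues the previous entry's value; the function name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// parseMapFlag is a hypothetical sketch of parsing "k1=v1,k2=v2" map flags
// where a value may itself contain the ',' entry delimiter. A segment with
// no '=' is assumed to continue the previous entry's value.
func parseMapFlag(input string) (map[string]string, error) {
	result := map[string]string{}
	lastKey := ""
	for _, seg := range strings.Split(input, ",") {
		if key, value, found := strings.Cut(seg, "="); found {
			result[key] = value
			lastKey = key
		} else if lastKey != "" {
			result[lastKey] += "," + seg // re-attach the swallowed delimiter
		} else {
			return nil, fmt.Errorf("invalid map entry %q", seg)
		}
	}
	return result, nil
}

func main() {
	m, err := parseMapFlag("region=us-east-1,tags=a,b,c,owner=velero")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // map[owner:velero region:us-east-1 tags:a,b,c]
}
```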

@@ -1 +0,0 @@
Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs

@@ -1 +0,0 @@
feat: Enhance BackupStorageLocation with Secret-based CA certificate support

@@ -1 +0,0 @@
Fix issue #7725, add design for backup repo cache configuration

@@ -1 +0,0 @@
Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs

@@ -1 +0,0 @@
feat: Permit specifying annotations for the BackupPVC

@@ -1 +0,0 @@
Remove labels associated with previous backups

@@ -1 +0,0 @@
Get the pod list once per namespace in the PVC IBA

@@ -1 +0,0 @@
Fix issue #9229, don't attach backupPVC to the source node

@@ -1 +0,0 @@
Update AzureAD Microsoft Authentication Library to v1.5.0

@@ -1 +0,0 @@
Protect VolumeSnapshot field from race condition during multi-thread backup

@@ -1,10 +0,0 @@
Implement wildcard namespace pattern expansion for backup namespace includes/excludes.
This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations.
When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.
Key features:
- Wildcard patterns in namespace includes/excludes are automatically detected and expanded
- Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
- Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
- Exact namespace names and "*" continue to work as before (no expansion needed)
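
A minimal Go sketch of the expansion just described, assuming glob matching via the standard library's path.Match (this is illustrative, not Velero's actual implementation; the function name is hypothetical and brace patterns like {a,b,c} would need a pre-expansion step that is omitted here):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// expandNamespacePatterns expands wildcard includes/excludes against the
// cluster's active namespaces. Exact names and "*" pass through unchanged.
func expandNamespacePatterns(patterns, activeNamespaces []string) ([]string, error) {
	var expanded []string
	for _, p := range patterns {
		// Exact names and the match-all pattern need no expansion.
		if p == "*" || !strings.ContainsAny(p, "*?[") {
			expanded = append(expanded, p)
			continue
		}
		// Consecutive asterisks are rejected as unsupported.
		if strings.Contains(p, "**") {
			return nil, fmt.Errorf("unsupported pattern %q", p)
		}
		for _, ns := range activeNamespaces {
			matched, err := path.Match(p, ns)
			if err != nil {
				return nil, fmt.Errorf("invalid pattern %q: %w", p, err)
			}
			if matched {
				expanded = append(expanded, ns)
			}
		}
		// A pattern matching no namespace contributes nothing, so an
		// include list of just "invalid*" yields an empty backup.
	}
	return expanded, nil
}

func main() {
	active := []string{"app-prod", "app-dev", "kube-system"}
	out, err := expandNamespacePatterns([]string{"app-*"}, active)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // [app-prod app-dev]
}
```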

@@ -1 +0,0 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment

@@ -1 +0,0 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases

@@ -1 +0,0 @@
Fix issue #7904, remove the code and doc for PVC node selection

@@ -1 +0,0 @@
Implement concurrency control for the cache of the native VolumeSnapshotter plugin.

@@ -1 +0,0 @@
Fix issue #9193, don't connect repo in repo controller

@@ -1 +0,0 @@
Add option for privileged fs-backup pod

@@ -1 +0,0 @@
Fix issue #9267, add events to data mover prepare diagnostic

@@ -1 +0,0 @@
VerifyJSONConfigs verifies every element in Data.

@@ -1 +0,0 @@
Concurrent backup processing

@@ -1 +0,0 @@
Sanitize Azure HTTP responses in BSL status messages

@@ -1 +0,0 @@
Fix typos in documentation

@@ -1 +0,0 @@
Fix issue #9332, add bytesDone for cache files

@@ -1 +0,0 @@
Add cache configuration to VGDP

@@ -1 +0,0 @@
Fix the Job build error when the BackupRepository name is longer than 63 characters.

@@ -1 +0,0 @@
Add cache dir configuration for udmrepo

@@ -1 +0,0 @@
Add snapshotSize for DataDownload, PodVolumeRestore

@@ -1 +0,0 @@
Add incrementalSize to DU/PVB for reporting new/changed size

@@ -1 +0,0 @@
Support cache volume for generic restore exposer and pod volume exposer

@@ -1 +0,0 @@
Use hookIndex for recording multiple restore exec hooks.

@@ -1 +0,0 @@
Fix managed fields patch for resources using GenerateName

@@ -1 +0,0 @@
Track actual resource names for GenerateName in restore status

@@ -1 +0,0 @@
Add cache volume configuration

@@ -1 +0,0 @@
Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR

@@ -1 +0,0 @@
Refactor repo provider interface for static configuration

@@ -1 +0,0 @@
Don't copy the securityContext from the first container if a configmap is found

@@ -1 +0,0 @@
Cache volume support for DataDownload

@@ -1 +0,0 @@
Cache volume for PVR

@@ -1 +0,0 @@
Fix issue #9400, connect the repo for the first time after creation so that init params can be written

@@ -1 +0,0 @@
Add Prometheus metrics for maintenance jobs

@@ -1 +0,0 @@
Fix issue #9276, add doc for cache volume support

Some files were not shown because too many files have changed in this diff.